DPDK patches and discussions
* Re: [dpdk-dev] [dpdk-techboard] [PATCH v6 2/8] eal: fix error attribute use for clang
  2021-01-28 16:46  0%               ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
@ 2021-01-28 17:36  0%                 ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2021-01-28 17:36 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: David Marchand, techboard, dev

On Thu, Jan 28, 2021 at 05:46:16PM +0100, Thomas Monjalon wrote:
> 28/01/2021 16:16, Bruce Richardson:
> > On Thu, Jan 28, 2021 at 02:16:10PM +0000, Bruce Richardson wrote:
> > > On Thu, Jan 28, 2021 at 02:36:25PM +0100, David Marchand wrote:
> > > > On Thu, Jan 28, 2021 at 12:20 PM Bruce Richardson
> > > > <bruce.richardson@intel.com> wrote:
> > > > > > If the compiler has neither error or diagnose_if support, an internal
> > > > > > API can be called without ALLOW_INTERNAL_API.
> > > > > > I prefer a build error complaining on an unknown attribute rather than
> > > > > > silence a check.
> > > > > >
> > > > > > I.e.
> > > > > >
> > > > > > #ifndef ALLOW_INTERNAL_API
> > > > > >
> > > > > > #if __has_attribute(diagnose_if) /* For clang */
> > > > > > #define __rte_internal \
> > > > > > __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> > > > > > section(".text.internal")))
> > > > > >
> > > > > > #else
> > > > > > #define __rte_internal \
> > > > > > __attribute__((error("Symbol is not public ABI"), \
> > > > > > section(".text.internal")))
> > > > > >
> > > > > > #endif
> > > > > >
> > > > > > #else /* ALLOW_INTERNAL_API */
> > > > > >
> > > > > > #define __rte_internal \
> > > > > > __attribute__((section(".text.internal")))
> > > > > >
> > > > > > #endif
> > > > > >
> > > > >
> > > > > Would this not mean that if someone was using a compiler that supported
> > > > > neither that they could never include any header file which contained any
> > > > > internal functions? I'd rather err on the side of allowing this, on the
> > > > > basis that the symbol should be already documented as internal and this is
> > > > > only an additional sanity check.
> > > > 
> > > > - Still not a fan.
> > > > We will never know about those compilers behavior, like how we caught
> > > > the clang issue and found an alternative.
> > > > 
> > > 
> > > So I understand, but I'm still concerned about breaking something that was
> > > previously working. It's one thing a DPDK developer catching issues with
> > > clang, quite another a user catching it when trying to build their own
> > > application.
> > > 
> > > We probably need some other opinions on this one.
> > > 
> > Adding Tech-board to see if we can get some more thoughts on this before I do
> > another revision of this set.
> 
> What are the alternatives?
> 

The basic problem is what to do when a compiler is used which does not support
the attributes required to flag an error for use of an internal-only function.
For example, we discovered this because clang does not support the "error"
function attribute.

In those cases, as I see it, we really have two choices:

1. ignore flagging the error and silently allow possible use of the internal
   function.
2. have the compiler flag an error for the unknown attribute

The problem that I have with #2 is that, without knowing the attribute, the
compiler will likely error out any time a user app includes any header with
an internal function, even if the function is unused.

On the other hand, the likelihood of impact if we choose #2 and do error out is
quite small, since modern clang versions support the attribute checks we need,
and just about any gcc version we care about is going to support the "error"
attribute.

For #1, the downside is that we will miss error checks on some older
versions of gcc, e.g. RHEL 7, and the user may inadvertently use an internal
function without knowing it.
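
To make the trade-off concrete, option #1 is roughly what the patch under
discussion already does: the final #else catches both ALLOW_INTERNAL_API
builds and compilers supporting neither attribute. A sketch of that shape
(illustrative only, and assuming __has_attribute itself is usable -- the
guarding of __has_attribute is discussed separately in this thread):

#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */

#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))

#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */

#define __rte_internal \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal")))

#else /* ALLOW_INTERNAL_API defined, or neither attribute supported */

#define __rte_internal \
__attribute__((section(".text.internal")))

#endif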

David, anything else to add here?

/Bruce


* Re: [dpdk-dev] [dpdk-techboard] [PATCH v6 2/8] eal: fix error attribute use for clang
  2021-01-28 15:16  0%             ` Bruce Richardson
@ 2021-01-28 16:46  0%               ` Thomas Monjalon
  2021-01-28 17:36  0%                 ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-28 16:46 UTC (permalink / raw)
  To: David Marchand, Bruce Richardson; +Cc: techboard, dev

28/01/2021 16:16, Bruce Richardson:
> On Thu, Jan 28, 2021 at 02:16:10PM +0000, Bruce Richardson wrote:
> > On Thu, Jan 28, 2021 at 02:36:25PM +0100, David Marchand wrote:
> > > On Thu, Jan 28, 2021 at 12:20 PM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > > > If the compiler has neither error or diagnose_if support, an internal
> > > > > API can be called without ALLOW_INTERNAL_API.
> > > > > I prefer a build error complaining on an unknown attribute rather than
> > > > > silence a check.
> > > > >
> > > > > I.e.
> > > > >
> > > > > #ifndef ALLOW_INTERNAL_API
> > > > >
> > > > > #if __has_attribute(diagnose_if) /* For clang */
> > > > > #define __rte_internal \
> > > > > __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> > > > > section(".text.internal")))
> > > > >
> > > > > #else
> > > > > #define __rte_internal \
> > > > > __attribute__((error("Symbol is not public ABI"), \
> > > > > section(".text.internal")))
> > > > >
> > > > > #endif
> > > > >
> > > > > #else /* ALLOW_INTERNAL_API */
> > > > >
> > > > > #define __rte_internal \
> > > > > __attribute__((section(".text.internal")))
> > > > >
> > > > > #endif
> > > > >
> > > >
> > > > Would this not mean that if someone was using a compiler that supported
> > > > neither that they could never include any header file which contained any
> > > > internal functions? I'd rather err on the side of allowing this, on the
> > > > basis that the symbol should be already documented as internal and this is
> > > > only an additional sanity check.
> > > 
> > > - Still not a fan.
> > > We will never know about those compilers behavior, like how we caught
> > > the clang issue and found an alternative.
> > > 
> > 
> > So I understand, but I'm still concerned about breaking something that was
> > previously working. It's one thing a DPDK developer catching issues with
> > clang, quite another a user catching it when trying to build their own
> > application.
> > 
> > We probably need some other opinions on this one.
> > 
> Adding Tech-board to see if we can get some more thoughts on this before I do
> another revision of this set.

What are the alternatives?




* Re: [dpdk-dev] [PATCH v6 2/8] eal: fix error attribute use for clang
  2021-01-28 14:16  0%           ` Bruce Richardson
@ 2021-01-28 15:16  0%             ` Bruce Richardson
  2021-01-28 16:46  0%               ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-01-28 15:16 UTC (permalink / raw)
  To: David Marchand, techboard; +Cc: dev

On Thu, Jan 28, 2021 at 02:16:10PM +0000, Bruce Richardson wrote:
> On Thu, Jan 28, 2021 at 02:36:25PM +0100, David Marchand wrote:
> > On Thu, Jan 28, 2021 at 12:20 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > > > If the compiler has neither error or diagnose_if support, an internal
> > > > API can be called without ALLOW_INTERNAL_API.
> > > > I prefer a build error complaining on an unknown attribute rather than
> > > > silence a check.
> > > >
> > > > I.e.
> > > >
> > > > #ifndef ALLOW_INTERNAL_API
> > > >
> > > > #if __has_attribute(diagnose_if) /* For clang */
> > > > #define __rte_internal \
> > > > __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> > > > section(".text.internal")))
> > > >
> > > > #else
> > > > #define __rte_internal \
> > > > __attribute__((error("Symbol is not public ABI"), \
> > > > section(".text.internal")))
> > > >
> > > > #endif
> > > >
> > > > #else /* ALLOW_INTERNAL_API */
> > > >
> > > > #define __rte_internal \
> > > > __attribute__((section(".text.internal")))
> > > >
> > > > #endif
> > > >
> > >
> > > Would this not mean that if someone was using a compiler that supported
> > > neither that they could never include any header file which contained any
> > > internal functions? I'd rather err on the side of allowing this, on the
> > > basis that the symbol should be already documented as internal and this is
> > > only an additional sanity check.
> > 
> > - Still not a fan.
> > We will never know about those compilers behavior, like how we caught
> > the clang issue and found an alternative.
> > 
> 
> So I understand, but I'm still concerned about breaking something that was
> previously working. It's one thing a DPDK developer catching issues with
> clang, quite another a user catching it when trying to build their own
> application.
> 
> We probably need some other opinions on this one.
> 
Adding Tech-board to see if we can get some more thoughts on this before I do
another revision of this set.


* Re: [dpdk-dev] [PATCH v6 2/8] eal: fix error attribute use for clang
  2021-01-28 13:36  0%         ` David Marchand
@ 2021-01-28 14:16  0%           ` Bruce Richardson
  2021-01-28 15:16  0%             ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-01-28 14:16 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Wang, Haiyue, Ray Kinsella, Neil Horman

On Thu, Jan 28, 2021 at 02:36:25PM +0100, David Marchand wrote:
> On Thu, Jan 28, 2021 at 12:20 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> > > If the compiler has neither error or diagnose_if support, an internal
> > > API can be called without ALLOW_INTERNAL_API.
> > > I prefer a build error complaining on an unknown attribute rather than
> > > silence a check.
> > >
> > > I.e.
> > >
> > > #ifndef ALLOW_INTERNAL_API
> > >
> > > #if __has_attribute(diagnose_if) /* For clang */
> > > #define __rte_internal \
> > > __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> > > section(".text.internal")))
> > >
> > > #else
> > > #define __rte_internal \
> > > __attribute__((error("Symbol is not public ABI"), \
> > > section(".text.internal")))
> > >
> > > #endif
> > >
> > > #else /* ALLOW_INTERNAL_API */
> > >
> > > #define __rte_internal \
> > > __attribute__((section(".text.internal")))
> > >
> > > #endif
> > >
> >
> > Would this not mean that if someone was using a compiler that supported
> > neither that they could never include any header file which contained any
> > internal functions? I'd rather err on the side of allowing this, on the
> > basis that the symbol should be already documented as internal and this is
> > only an additional sanity check.
> 
> - Still not a fan.
> We will never know about those compilers behavior, like how we caught
> the clang issue and found an alternative.
> 

So I understand, but I'm still concerned about breaking something that was
previously working. It's one thing a DPDK developer catching issues with
clang, quite another a user catching it when trying to build their own
application.

We probably need some other opinions on this one.

> 
> - I just caught a build error with RHEL7 gcc:
> 
> [1/2127] Compiling C object
> lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o
> FAILED: lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o
> ccache cc -Ilib/librte_eal.a.p -Ilib -I../lib -I. -I.. -Iconfig
> -I../config -Ilib/librte_eal/include -I../lib/librte_eal/include
> -Ilib/librte_eal/linux/include -I../lib/librte_eal/linux/include
> -Ilib/librte_eal/x86/include -I../lib/librte_eal/x86/include
> -Ilib/librte_eal/common -I../lib/librte_eal/common -Ilib/librte_eal
> -I../lib/librte_eal -Ilib/librte_kvargs -I../lib/librte_kvargs
> -Ilib/librte_metrics -I../lib/librte_metrics -Ilib/librte_telemetry
> -I../lib/librte_telemetry -pipe -D_FILE_OFFSET_BITS=64 -Wall
> -Winvalid-pch -O3 -include rte_config.h -Wextra -Wcast-qual
> -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security
> -Wmissing-declarations -Wmissing-prototypes -Wnested-externs
> -Wold-style-definition -Wpointer-arith -Wsign-compare
> -Wstrict-prototypes -Wundef -Wwrite-strings
> -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native
> -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API '-DABI_VERSION="21.1"'
> -MD -MQ lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o -MF
> lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o.d -o
> lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o -c
> ../lib/librte_eal/common/eal_common_config.c
> In file included from ../lib/librte_eal/include/rte_dev.h:24:0,
>                  from ../lib/librte_eal/common/eal_private.h:12,
>                  from ../lib/librte_eal/common/eal_common_config.c:9:
> ../lib/librte_eal/include/rte_compat.h:22:51: error: missing binary
> operator before token "("
>  #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
>                                                    ^
> ../lib/librte_eal/include/rte_compat.h:28:53: error: missing binary
> operator before token "("
>  #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /*
> For clang */
>                                                      ^
> 
> I can see that gcc doc recommends checking for __has_attribute availability.
> Pasting from google cache, since the gcc.gnu.org doc website seems unavailable.
> 
> """
> 4.2.6 __has_attribute
> 
> The special operator __has_attribute (operand) may be used in ‘#if’
> and ‘#elif’ expressions to test whether the attribute referenced by
> its operand is recognized by GCC. Using the operator in other contexts
> is not valid. In C code, if compiling for strict conformance to
> standards before C2x, operand must be a valid identifier. Otherwise,
> operand may be optionally introduced by the attribute-scope:: prefix.
> The attribute-scope prefix identifies the “namespace” within which the
> attribute is recognized. The scope of GCC attributes is ‘gnu’ or
> ‘__gnu__’. The __has_attribute operator by itself, without any operand
> or parentheses, acts as a predefined macro so that support for it can
> be tested in portable code. Thus, the recommended use of the operator
> is as follows:
> 
> #if defined __has_attribute
> #  if __has_attribute (nonnull)
> #    define ATTR_NONNULL __attribute__ ((nonnull))
> #  endif
> #endif
> 
> The first ‘#if’ test succeeds only when the operator is supported by
> the version of GCC (or another compiler) being used. Only when that
> test succeeds is it valid to use __has_attribute as a preprocessor
> operator. As a result, combining the two tests into a single
> expression as shown below would only be valid with a compiler that
> supports the operator but not with others that don’t.
> 
> #if defined __has_attribute && __has_attribute (nonnull)   /* not portable */
> …
> #endif
> """
> 
I really wish other tools would do as meson does and document, per feature, the
version in which it was introduced! Anyway, this is something I'll fix in the
next version, though again we need to decide whether, in the case of
__has_attribute not being supported, we fall back to erroring out. Again that
runs the risk of users not being able to include a header which has an internal
function, so I'd prefer us to skip the check if the appropriate attributes are
unsupported.
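
One common way to handle that (just a sketch of the idiom, not necessarily
what the next revision will use) is to give __has_attribute a no-op fallback
before any of the checks, so that compilers without it -- like the RHEL 7 gcc
above -- simply fall through to the permissive case:

/* near the top of rte_compat.h, before any use of __has_attribute */
#ifndef __has_attribute
/* compiler lacks the operator, treat every attribute as unsupported */
#define __has_attribute(x) 0
#endif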

Again, other opinions probably needed.

/Bruce


* Re: [dpdk-dev] [PATCH v6 2/8] eal: fix error attribute use for clang
  2021-01-28 11:20  0%       ` Bruce Richardson
@ 2021-01-28 13:36  0%         ` David Marchand
  2021-01-28 14:16  0%           ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-28 13:36 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, Wang, Haiyue, Ray Kinsella, Neil Horman

On Thu, Jan 28, 2021 at 12:20 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
> > If the compiler has neither error or diagnose_if support, an internal
> > API can be called without ALLOW_INTERNAL_API.
> > I prefer a build error complaining on an unknown attribute rather than
> > silence a check.
> >
> > I.e.
> >
> > #ifndef ALLOW_INTERNAL_API
> >
> > #if __has_attribute(diagnose_if) /* For clang */
> > #define __rte_internal \
> > __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> > section(".text.internal")))
> >
> > #else
> > #define __rte_internal \
> > __attribute__((error("Symbol is not public ABI"), \
> > section(".text.internal")))
> >
> > #endif
> >
> > #else /* ALLOW_INTERNAL_API */
> >
> > #define __rte_internal \
> > __attribute__((section(".text.internal")))
> >
> > #endif
> >
>
> Would this not mean that if someone was using a compiler that supported
> neither that they could never include any header file which contained any
> internal functions? I'd rather err on the side of allowing this, on the
> basis that the symbol should be already documented as internal and this is
> only an additional sanity check.

- Still not a fan.
We will never know about those compilers behavior, like how we caught
the clang issue and found an alternative.


- I just caught a build error with RHEL7 gcc:

[1/2127] Compiling C object
lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o
FAILED: lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o
ccache cc -Ilib/librte_eal.a.p -Ilib -I../lib -I. -I.. -Iconfig
-I../config -Ilib/librte_eal/include -I../lib/librte_eal/include
-Ilib/librte_eal/linux/include -I../lib/librte_eal/linux/include
-Ilib/librte_eal/x86/include -I../lib/librte_eal/x86/include
-Ilib/librte_eal/common -I../lib/librte_eal/common -Ilib/librte_eal
-I../lib/librte_eal -Ilib/librte_kvargs -I../lib/librte_kvargs
-Ilib/librte_metrics -I../lib/librte_metrics -Ilib/librte_telemetry
-I../lib/librte_telemetry -pipe -D_FILE_OFFSET_BITS=64 -Wall
-Winvalid-pch -O3 -include rte_config.h -Wextra -Wcast-qual
-Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security
-Wmissing-declarations -Wmissing-prototypes -Wnested-externs
-Wold-style-definition -Wpointer-arith -Wsign-compare
-Wstrict-prototypes -Wundef -Wwrite-strings
-Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native
-DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API '-DABI_VERSION="21.1"'
-MD -MQ lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o -MF
lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o.d -o
lib/librte_eal.a.p/librte_eal_common_eal_common_config.c.o -c
../lib/librte_eal/common/eal_common_config.c
In file included from ../lib/librte_eal/include/rte_dev.h:24:0,
                 from ../lib/librte_eal/common/eal_private.h:12,
                 from ../lib/librte_eal/common/eal_common_config.c:9:
../lib/librte_eal/include/rte_compat.h:22:51: error: missing binary
operator before token "("
 #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
                                                   ^
../lib/librte_eal/include/rte_compat.h:28:53: error: missing binary
operator before token "("
 #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /*
For clang */
                                                     ^

I can see that gcc doc recommends checking for __has_attribute availability.
Pasting from google cache, since the gcc.gnu.org doc website seems unavailable.

"""
4.2.6 __has_attribute

The special operator __has_attribute (operand) may be used in ‘#if’
and ‘#elif’ expressions to test whether the attribute referenced by
its operand is recognized by GCC. Using the operator in other contexts
is not valid. In C code, if compiling for strict conformance to
standards before C2x, operand must be a valid identifier. Otherwise,
operand may be optionally introduced by the attribute-scope:: prefix.
The attribute-scope prefix identifies the “namespace” within which the
attribute is recognized. The scope of GCC attributes is ‘gnu’ or
‘__gnu__’. The __has_attribute operator by itself, without any operand
or parentheses, acts as a predefined macro so that support for it can
be tested in portable code. Thus, the recommended use of the operator
is as follows:

#if defined __has_attribute
#  if __has_attribute (nonnull)
#    define ATTR_NONNULL __attribute__ ((nonnull))
#  endif
#endif

The first ‘#if’ test succeeds only when the operator is supported by
the version of GCC (or another compiler) being used. Only when that
test succeeds is it valid to use __has_attribute as a preprocessor
operator. As a result, combining the two tests into a single
expression as shown below would only be valid with a compiler that
supports the operator but not with others that don’t.

#if defined __has_attribute && __has_attribute (nonnull)   /* not portable */
…
#endif
"""



-- 
David Marchand



* Re: [dpdk-dev] [PATCH v3 1/5] lpm: add sve support for lookup on Arm platform
  2021-01-28  8:03  0%         ` David Marchand
@ 2021-01-28 12:24  3%           ` Honnappa Nagarahalli
  0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2021-01-28 12:24 UTC (permalink / raw)
  To: David Marchand
  Cc: Ruifeng Wang, jerinj, Jan Viktorin, Bruce Richardson,
	Vladimir Medvedkin, dev, Pavan Nikhilesh, hemant.agrawal, nd,
	Honnappa Nagarahalli, nd

<snip>

> 
> On Wed, Jan 27, 2021 at 10:03 PM Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com> wrote:
> >
> > <snip>
> >
> > >
> > > On Tue, Jan 12, 2021 at 3:57 AM Ruifeng Wang <ruifeng.wang@arm.com>
> > > wrote:
> > > > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> > > > index 1afe55cdc..28b57683b 100644
> > > > --- a/lib/librte_lpm/rte_lpm.h
> > > > +++ b/lib/librte_lpm/rte_lpm.h
> > > > @@ -402,7 +402,11 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm,
> > > xmm_t ip, uint32_t hop[4],
> > > >         uint32_t defv);
> > > >
> > > >  #if defined(RTE_ARCH_ARM)
> > > > +#ifdef __ARM_FEATURE_SVE
> > > > +#include "rte_lpm_sve.h"
> > > > +#else
> > > >  #include "rte_lpm_neon.h"
> > > > +#endif
> > > >  #elif defined(RTE_ARCH_PPC_64)
> > > >  #include "rte_lpm_altivec.h"
> > > >  #else
> > > > diff --git a/lib/librte_lpm/rte_lpm_sve.h
> > > > b/lib/librte_lpm/rte_lpm_sve.h new file mode 100644 index
> > > > 000000000..2e319373e
> > > > --- /dev/null
> > > > +++ b/lib/librte_lpm/rte_lpm_sve.h
> > > > @@ -0,0 +1,83 @@
> > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > + * Copyright(c) 2020 Arm Limited
> > > > + */
> > > > +
> > > > +#ifndef _RTE_LPM_SVE_H_
> > > > +#define _RTE_LPM_SVE_H_
> > > > +
> > > > +#include <rte_vect.h>
> > > > +
> > > > +#ifdef __cplusplus
> > > > +extern "C" {
> > > > +#endif
> > > > +
> > > > +__rte_internal
> > > > +static void
> > >
> > > I was looking into use of the __rte_internal tag in the tree.
> > >
> > > This helper is called from an inlined API used by applications, so
> > > out of the DPDK build.
> > > It looks like the compiler is not complaining when compiling
> > > examples (I hacked my env to cross compile with gcc 10 + SVE
> > > enabled) but this seems incorrect to me.
> > >
> > > Is there really a need for this helper?
> > > It is only used below afaics.
> > I do not think it is required.
> >
> > At the same time the commit log when '__rte_internal' was introduced is
> confusing.
> > It says "Introduce the __rte_internal tag to mark internal ABI function which is
> used only by the drivers or other libraries". Why would an internal function have
> an ABI?
> 
> It happens that drivers/libraries in DPDK offer some interface for other parts of
> the DPDK to use.
> But we might want to keep them hidden from final applications, because this
> is purely internal and/or we don't want to guarantee compatibility in later
> versions.
> For such cases, a function can be marked __rte_internal.
> 
> 
> This tag has two impacts:
> - a marked symbol is versioned as INTERNAL when exported (so this does not
> apply to inlines),
> - if an application tries to use a marked API, an error is triggered at build time to
> prevent use of such an API.
Thanks David, it makes sense now. The word 'internal ABI' in the commit log caused the confusion.
Is this required because all the header files (header files meant for the application and the DPDK internal header files) are in the same directory?

From the above definition, we do not need the internal tag for this function as it is very much internal to LPM library.

> 
> 
> --
> David Marchand



* Re: [dpdk-dev] [PATCH v6 0/8] add checking of header includes
  2021-01-28 10:55  3%   ` [dpdk-dev] [PATCH v6 0/8] add checking of header includes David Marchand
@ 2021-01-28 11:47  0%     ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2021-01-28 11:47 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Thomas Monjalon, Ray Kinsella

On Thu, Jan 28, 2021 at 11:55:34AM +0100, David Marchand wrote:
> On Wed, Jan 27, 2021 at 6:33 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > As a general principle, each header file should include any other
> > headers it needs to provide data type definitions or macros. For
> > example, any header using the uintX_t types in structures or function
> > prototypes should include "stdint.h" to provide those type definitions.
> >
> > In practice, while many, but not all, headers in DPDK did include all
> > necessary headers, it was never actually checked that each header could
> > be included in a C file and compiled without having any compiler errors
> > about missing definitions.  The script "check-includes.sh" could be used
> > for this job, but it was not called out in the documentation, so many
> > contributors may not have been aware of its existence. It also was
> > difficult to run from a source-code directory, as the script did not
> > automatically allow finding of headers from one DPDK library directory
> > to another [this was probably based on running it on a build created by
> > the "make" build system, where all headers were in a single directory].
> > To attempt to have a build-system integrated replacement, this patchset
> > adds a "chkincs" app in the buildtools directory to verify this on an
> > ongoing basis.
> >
> > This chkincs app does nothing when run, and is not installed as part of
> > a DPDK "ninja install", it's for build-time checking only. Its source
> > code consists of one C file per public DPDK header, where that C file
> > contains nothing except an include for that header.  Therefore, if any
> > header is added to the lib folder which fails to compile when included
> > alone, the build of chkincs will fail with a suitable error message.
> > Since this compile checking is not needed on most builds of DPDK, the
> > building of chkincs is disabled by default, but can be enabled by the
> > "test_includes" meson option. To catch errors with patch submissions,
> > the final patch of this series enables it for a single build in
> > test-meson-builds script.
> >
> > Future work could involve doing similar checks on headers for C++
> > compatibility, which was something done by the check-includes.sh script
> > but which is missing here.
> >
> > V6:
> > * Added release notes updates for:
> >    - renamed, no-longer-installed header files
> >    - new "check_includes" build option
> >    - removal of old check_includes script
> > * Included acks from previous versions
> 
> I have some comments, see replies on patches.
> I can address them if you are ok, and I would take this series for
> -rc2 without needing a respin.
> 

Thomas has some feedback too now, and I think there are one or two patches
where we might want to wait for consensus. However, if you are happy to
take these as they are and do any fixups yourself feel free.

> Sidenote: I like how we are hiding API by simply not exporting headers.
> We need more cleanups like this.
> Ethdev has been cleaned; this will probably remove the need for the
> ABI exception on eth_dev_ops.
> Eventdev, other driver classes and bus drivers will probably be the
> next to look at.
> 

Yes, there is plenty more cleanup work still needed.

* The chkincs support added here integrates nicely into the build but does
  not fully support everything that the old script did, so investigation is
  needed especially for c++ checking support.

* Beyond that we should also look to do cleanup based on IWYU to remove excess
  headers. This work gives us a nice safety-net for that as it should flag to
  us if we ever remove too much from a public header.


* Re: [dpdk-dev] [PATCH v6 2/8] eal: fix error attribute use for clang
  2021-01-28 11:00  4%     ` David Marchand
@ 2021-01-28 11:20  0%       ` Bruce Richardson
  2021-01-28 13:36  0%         ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-01-28 11:20 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Wang, Haiyue, Ray Kinsella, Neil Horman

On Thu, Jan 28, 2021 at 12:00:46PM +0100, David Marchand wrote:
> On Wed, Jan 27, 2021 at 6:33 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > Clang does not have an "error" attribute for functions, so for marking
> > internal functions we need to check for the error attribute, and provide
> > a fallback if it is not present. For clang, we can use "diagnose_if"
> > attribute, similarly checking for its presence before use.
> >
> > Fixes: fba5af82adc8 ("eal: add internal ABI tag definition")
> > Cc: haiyue.wang@intel.com
> 
> Cc: stable@dpdk.org
> 
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >  lib/librte_eal/include/rte_compat.h | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/librte_eal/include/rte_compat.h b/lib/librte_eal/include/rte_compat.h
> > index 4cd8f68d68..c30f072aa3 100644
> > --- a/lib/librte_eal/include/rte_compat.h
> > +++ b/lib/librte_eal/include/rte_compat.h
> > @@ -19,12 +19,18 @@ __attribute__((section(".text.experimental")))
> >
> >  #endif
> >
> > -#ifndef ALLOW_INTERNAL_API
> > +#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
> >
> >  #define __rte_internal \
> >  __attribute__((error("Symbol is not public ABI"), \
> >  section(".text.internal")))
> >
> > +#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
> > +
> > +#define __rte_internal \
> > +__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> > +section(".text.internal")))
> > +
> 
> If the compiler has neither error or diagnose_if support, an internal
> API can be called without ALLOW_INTERNAL_API.
> I prefer a build error complaining on an unknown attribute rather than
> silence a check.
> 
> I.e.
> 
> #ifndef ALLOW_INTERNAL_API
> 
> #if __has_attribute(diagnose_if) /* For clang */
> #define __rte_internal \
> __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> section(".text.internal")))
> 
> #else
> #define __rte_internal \
> __attribute__((error("Symbol is not public ABI"), \
> section(".text.internal")))
> 
> #endif
> 
> #else /* ALLOW_INTERNAL_API */
> 
> #define __rte_internal \
> __attribute__((section(".text.internal")))
> 
> #endif
> 

Would this not mean that if someone was using a compiler that supported
neither that they could never include any header file which contained any
internal functions? I'd rather err on the side of allowing this, on the
basis that the symbol should be already documented as internal and this is
only an additional sanity check.

/Bruce


* Re: [dpdk-dev] [PATCH v6 2/8] eal: fix error attribute use for clang
  2021-01-27 17:33 13%   ` [dpdk-dev] [PATCH v6 2/8] eal: fix error attribute use for clang Bruce Richardson
@ 2021-01-28 11:00  4%     ` David Marchand
  2021-01-28 11:20  0%       ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-28 11:00 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, Wang, Haiyue, Ray Kinsella, Neil Horman

On Wed, Jan 27, 2021 at 6:33 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> Clang does not have an "error" attribute for functions, so for marking
> internal functions we need to check for the error attribute, and provide
> a fallback if it is not present. For clang, we can use "diagnose_if"
> attribute, similarly checking for its presence before use.
>
> Fixes: fba5af82adc8 ("eal: add internal ABI tag definition")
> Cc: haiyue.wang@intel.com

Cc: stable@dpdk.org

>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  lib/librte_eal/include/rte_compat.h | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_eal/include/rte_compat.h b/lib/librte_eal/include/rte_compat.h
> index 4cd8f68d68..c30f072aa3 100644
> --- a/lib/librte_eal/include/rte_compat.h
> +++ b/lib/librte_eal/include/rte_compat.h
> @@ -19,12 +19,18 @@ __attribute__((section(".text.experimental")))
>
>  #endif
>
> -#ifndef ALLOW_INTERNAL_API
> +#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
>
>  #define __rte_internal \
>  __attribute__((error("Symbol is not public ABI"), \
>  section(".text.internal")))
>
> +#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
> +
> +#define __rte_internal \
> +__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> +section(".text.internal")))
> +

If the compiler has neither error or diagnose_if support, an internal
API can be called without ALLOW_INTERNAL_API.
I prefer a build error complaining on an unknown attribute rather than
silence a check.

I.e.

#ifndef ALLOW_INTERNAL_API

#if __has_attribute(diagnose_if) /* For clang */
#define __rte_internal \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal")))

#else
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))

#endif

#else /* ALLOW_INTERNAL_API */

#define __rte_internal \
section(".text.internal")))

#endif

-- 
David Marchand



* Re: [dpdk-dev] [PATCH v6 0/8] add checking of header includes
    2021-01-27 17:33 13%   ` [dpdk-dev] [PATCH v6 2/8] eal: fix error attribute use for clang Bruce Richardson
@ 2021-01-28 10:55  3%   ` David Marchand
  2021-01-28 11:47  0%     ` Bruce Richardson
  1 sibling, 1 reply; 200+ results
From: David Marchand @ 2021-01-28 10:55 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, Thomas Monjalon, Ray Kinsella

On Wed, Jan 27, 2021 at 6:33 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> As a general principle, each header file should include any other
> headers it needs to provide data type definitions or macros. For
> example, any header using the uintX_t types in structures or function
> prototypes should include "stdint.h" to provide those type definitions.
>
> In practice, while many, but not all, headers in DPDK did include all
> necessary headers, it was never actually checked that each header could
> be included in a C file and compiled without having any compiler errors
> about missing definitions.  The script "check-includes.sh" could be used
> for this job, but it was not called out in the documentation, so many
> contributors may not have been aware of its existence. It also was
> difficult to run from a source-code directory, as the script did not
> automatically allow finding of headers from one DPDK library directory
> to another [this was probably based on running it on a build created by
> the "make" build system, where all headers were in a single directory].
> To attempt to have a build-system integrated replacement, this patchset
> adds a "chkincs" app in the buildtools directory to verify this on an
> ongoing basis.
>
> This chkincs app does nothing when run, and is not installed as part of
> a DPDK "ninja install", it's for build-time checking only. Its source
> code consists of one C file per public DPDK header, where that C file
> contains nothing except an include for that header.  Therefore, if any
> header is added to the lib folder which fails to compile when included
> alone, the build of chkincs will fail with a suitable error message.
> Since this compile checking is not needed on most builds of DPDK, the
> building of chkincs is disabled by default, but can be enabled by the
> "test_includes" meson option. To catch errors with patch submissions,
> the final patch of this series enables it for a single build in
> test-meson-builds script.
>
> Future work could involve doing similar checks on headers for C++
> compatibility, which was something done by the check-includes.sh script
> but which is missing here.
>
> V6:
> * Added release notes updates for:
>    - renamed, no-longer-installed header files
>    - new "check_includes" build option
>    - removal of old check_includes script
> * Included acks from previous versions

I have some comments, see replies on patches.
I can address them if you are ok, and I would take this series for
-rc2 without needing a respin.

Sidenote: I like how we are hiding API by simply not exporting headers.
We need more cleanups like this.
Ethdev has been cleaned; this will probably remove the need for the
ABI exception on eth_dev_ops.
Eventdev, other driver classes and bus drivers will probably be the
next to look at.


-- 
David Marchand



* Re: [dpdk-dev] [PATCH v3 1/5] lpm: add sve support for lookup on Arm platform
  2021-01-27 21:03  4%       ` Honnappa Nagarahalli
@ 2021-01-28  8:03  0%         ` David Marchand
  2021-01-28 12:24  3%           ` Honnappa Nagarahalli
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-28  8:03 UTC (permalink / raw)
  To: Honnappa Nagarahalli
  Cc: Ruifeng Wang, jerinj, Jan Viktorin, Bruce Richardson,
	Vladimir Medvedkin, dev, Pavan Nikhilesh, hemant.agrawal, nd

On Wed, Jan 27, 2021 at 10:03 PM Honnappa Nagarahalli
<Honnappa.Nagarahalli@arm.com> wrote:
>
> <snip>
>
> >
> > On Tue, Jan 12, 2021 at 3:57 AM Ruifeng Wang <ruifeng.wang@arm.com>
> > wrote:
> > > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h index
> > > 1afe55cdc..28b57683b 100644
> > > --- a/lib/librte_lpm/rte_lpm.h
> > > +++ b/lib/librte_lpm/rte_lpm.h
> > > @@ -402,7 +402,11 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm,
> > xmm_t ip, uint32_t hop[4],
> > >         uint32_t defv);
> > >
> > >  #if defined(RTE_ARCH_ARM)
> > > +#ifdef __ARM_FEATURE_SVE
> > > +#include "rte_lpm_sve.h"
> > > +#else
> > >  #include "rte_lpm_neon.h"
> > > +#endif
> > >  #elif defined(RTE_ARCH_PPC_64)
> > >  #include "rte_lpm_altivec.h"
> > >  #else
> > > diff --git a/lib/librte_lpm/rte_lpm_sve.h
> > > b/lib/librte_lpm/rte_lpm_sve.h new file mode 100644 index
> > > 000000000..2e319373e
> > > --- /dev/null
> > > +++ b/lib/librte_lpm/rte_lpm_sve.h
> > > @@ -0,0 +1,83 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + * Copyright(c) 2020 Arm Limited
> > > + */
> > > +
> > > +#ifndef _RTE_LPM_SVE_H_
> > > +#define _RTE_LPM_SVE_H_
> > > +
> > > +#include <rte_vect.h>
> > > +
> > > +#ifdef __cplusplus
> > > +extern "C" {
> > > +#endif
> > > +
> > > +__rte_internal
> > > +static void
> >
> > I was looking into use of the __rte_internal tag in the tree.
> >
> > This helper is called from an inlined API used by applications, so out of the
> > DPDK build.
> > It looks like the compiler is not complaining when compiling examples (I
> > hacked my env to cross compile with gcc 10 + SVE enabled) but this seems
> > incorrect to me.
> >
> > Is there really a need for this helper?
> > It is only used below afaics.
> I do not think it is required.
>
> At the same time the commit log when '__rte_internal' was introduced is confusing.
> It says "Introduce the __rte_internal tag to mark internal ABI function which is used only by the drivers or other libraries". Why would an internal function have an ABI?

It happens that drivers/libraries in DPDK offer some interface for
other parts of the DPDK to use.
But we might want to keep them hidden from final applications,
because this is purely internal and/or we don't want to guarantee
compatibility in later versions.
For such cases, a function can be marked __rte_internal.


This tag has two impacts:
- a marked symbol is versioned as INTERNAL when exported (so this
does not apply to inlines),
- if an application tries to use a marked API, an error is triggered
at build time to prevent use of such an API.
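
As a rough illustration of the second point (hypothetical application code
and symbol name, not taken from the tree), an application built without
-DALLOW_INTERNAL_API hits the error at the call site:

/* app.c, compiled against installed DPDK headers */
#include <rte_compat.h>

/* hypothetical internal symbol, tagged as in the DPDK headers */
__rte_internal int rte_dummy_internal_op(void);

int main(void)
{
	/* gcc reports something like: "call to 'rte_dummy_internal_op'
	 * declared with attribute error: Symbol is not public ABI" */
	return rte_dummy_internal_op();
}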


-- 
David Marchand



* Re: [dpdk-dev] [PATCH v3 1/5] lpm: add sve support for lookup on Arm platform
  @ 2021-01-27 21:03  4%       ` Honnappa Nagarahalli
  2021-01-28  8:03  0%         ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Honnappa Nagarahalli @ 2021-01-27 21:03 UTC (permalink / raw)
  To: David Marchand, Ruifeng Wang
  Cc: jerinj, Jan Viktorin, Bruce Richardson, Vladimir Medvedkin, dev,
	Pavan Nikhilesh, hemant.agrawal, nd, Honnappa Nagarahalli, nd

<snip>

> 
> On Tue, Jan 12, 2021 at 3:57 AM Ruifeng Wang <ruifeng.wang@arm.com>
> wrote:
> > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h index
> > 1afe55cdc..28b57683b 100644
> > --- a/lib/librte_lpm/rte_lpm.h
> > +++ b/lib/librte_lpm/rte_lpm.h
> > @@ -402,7 +402,11 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm,
> xmm_t ip, uint32_t hop[4],
> >         uint32_t defv);
> >
> >  #if defined(RTE_ARCH_ARM)
> > +#ifdef __ARM_FEATURE_SVE
> > +#include "rte_lpm_sve.h"
> > +#else
> >  #include "rte_lpm_neon.h"
> > +#endif
> >  #elif defined(RTE_ARCH_PPC_64)
> >  #include "rte_lpm_altivec.h"
> >  #else
> > diff --git a/lib/librte_lpm/rte_lpm_sve.h
> > b/lib/librte_lpm/rte_lpm_sve.h new file mode 100644 index
> > 000000000..2e319373e
> > --- /dev/null
> > +++ b/lib/librte_lpm/rte_lpm_sve.h
> > @@ -0,0 +1,83 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2020 Arm Limited
> > + */
> > +
> > +#ifndef _RTE_LPM_SVE_H_
> > +#define _RTE_LPM_SVE_H_
> > +
> > +#include <rte_vect.h>
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +__rte_internal
> > +static void
> 
> I was looking into use of the __rte_internal tag in the tree.
> 
> This helper is called from an inlined API used by applications, so out of the
> DPDK build.
> It looks like the compiler is not complaining when compiling examples (I
> hacked my env to cross compile with gcc 10 + SVE enabled) but this seems
> incorrect to me.
> 
> Is there really a need for this helper?
> It is only used below afaics.
I do not think it is required.

At the same time the commit log when '__rte_internal' was introduced is confusing.
It says "Introduce the __rte_internal tag to mark internal ABI function which is used only by the drivers or other libraries". Why would an internal function have an ABI?

> 
> 
> > +__rte_lpm_lookup_vec(const struct rte_lpm *lpm, const uint32_t *ips,
> > +               uint32_t *__rte_restrict next_hops, const uint32_t n)
> > +{
> 
> [snip]
> 
> 
> > +}
> > +
> > +static inline void
> > +rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
> > +               uint32_t defv)
> > +{
> > +       uint32_t i, ips[4];
> > +
> > +       vst1q_s32((int32_t *)ips, ip);
> > +       for (i = 0; i < 4; i++)
> > +               hop[i] = defv;
> > +
> > +       __rte_lpm_lookup_vec(lpm, ips, hop, 4); }
> 
> 
> --
> David Marchand



* [dpdk-dev] [PATCH v6 2/8] eal: fix error attribute use for clang
  @ 2021-01-27 17:33 13%   ` Bruce Richardson
  2021-01-28 11:00  4%     ` David Marchand
  2021-01-28 10:55  3%   ` [dpdk-dev] [PATCH v6 0/8] add checking of header includes David Marchand
  1 sibling, 1 reply; 200+ results
From: Bruce Richardson @ 2021-01-27 17:33 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, Bruce Richardson, haiyue.wang, Ray Kinsella, Neil Horman

Clang does not have an "error" attribute for functions, so for marking
internal functions we need to check for the error attribute, and provide
a fallback if it is not present. For clang, we can use "diagnose_if"
attribute, similarly checking for its presence before use.

Fixes: fba5af82adc8 ("eal: add internal ABI tag definition")
Cc: haiyue.wang@intel.com

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_eal/include/rte_compat.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/include/rte_compat.h b/lib/librte_eal/include/rte_compat.h
index 4cd8f68d68..c30f072aa3 100644
--- a/lib/librte_eal/include/rte_compat.h
+++ b/lib/librte_eal/include/rte_compat.h
@@ -19,12 +19,18 @@ __attribute__((section(".text.experimental")))
 
 #endif
 
-#ifndef ALLOW_INTERNAL_API
+#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
 
+#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+
+#define __rte_internal \
+__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
+section(".text.internal")))
+
 #else
 
 #define __rte_internal \
-- 
2.27.0



* Re: [dpdk-dev] [PATCH v4 00/44] net/virtio: Virtio PMD rework
  2021-01-26 10:15  3% [dpdk-dev] [PATCH v4 00/44] net/virtio: Virtio PMD rework Maxime Coquelin
  2021-01-26 10:15  7% ` [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement Maxime Coquelin
@ 2021-01-27 11:59  0% ` Maxime Coquelin
  1 sibling, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-01-27 11:59 UTC (permalink / raw)
  To: dev, chenbo.xia, olivier.matz, amorenoz, david.marchand



On 1/26/21 11:15 AM, Maxime Coquelin wrote:
> This V3 fixes comments from Chenbo on patch 44 and
> implements the ABI exception in patch 2.
> 
> This series significantly reworks the Virtio PMD to improve
> the Virtio-user PMD and its backends integration.
> 
> First part of the series removes the dependency of
> Virtio-user ethdev on Virtio PCI, by creating generic
> files, adding per-bus meta data, ...
> 
> Main (if not single) functional change of this first
> part is to remove the hack for Virtio-user to work in
> IOVA as PA mode, this hack being very fragile.
> 
> Second part of the series reworks Virtio-user internal,
> by reworking the requests handling so that vDPA and Kernel
> backends no more hack into being Vhost-user backend. It
> implies implementing new ops for all the request types.
> Also, all the backend specific actions are moved from the
> virtio_user_dev.c and virtio_user_ethdev.c to their
> backend files.
> 
> Only functional change in this second part is making the
> Vhost-user server mode blocking at init time, as long as
> a client is not connected. The goal of this change is to
> make the Vhost-user support much more robust, as without
> blocking, the driver has to assume features that are going
> to be supported by the client, which is very fragile and
> error prone. As a side-effect, it also simplifies the
> logic nin several place of the virtio-user PMD.
> 
> Main changes in v4:
> - Add ABI exception (David)
> - Close FDs only up to max_queue_pairs
> - virtio_user_dev_uninit_notify() to return void
> 
> Main changes in v3:
> - Rename .intr_event to .intr_detect
> - Rework last patch, properly clean allocated resources
>   on failure.
> - Rebase on top of latest net-next/main
> - Minor typo fixes in comments and log improvements
> 
> Main changes in v2:
> ===================
> - Introduce vdev driver flag for drivers to require IOVA VA mode
> - Rebase on top of -rc1 changes
> - Fix regressions introduced in V1 (vhost-kernel broken, vhost-user reconnect...)
> - Various minor issues & typos fixed
> - Fix status feature issue introduced in v20.11, only reproducible now that server
>   mode is made blocking
> - Improve failure handling in Virtio-user
> - Improve logging
> 
> Testing coverage (All passed)
> =============================
> - Virtio-pci PMD
>  * Virtio PMD in guest with Vhost-user backend in host
>  * Virtio PMD in guest with Vhost-kernel backend in host
> - Virtio-user PMD with Vhost-user backend
>  * Vhost-user PMD server <-> Virtio-user client PMD IO loopback
>  * Vhost-user PMD client <-> Virtio-user server PMD IO loopback
>  * Vhost-user PMD client <-> Virtio-user server PMD reconnect
> - Virtio-user PMD with Vhost-kernel backend
>  * iperf test case
>  * Txonly testpmd
> - Virtio-user PMD with Vhost-vDPA backend
>  * vdpa-sim (IO loopback)
>  * CX-6 DX Kernel vDPA (Tx only)
> 
> Maxime Coquelin (44):
>   bus/vdev: add helper to get vdev from ethdev
>   bus/vdev: add driver IOVA VA mode requirement
>   net/virtio: fix getting old status on reconnect
>   net/virtio: introduce Virtio bus type
>   net/virtio: refactor virtio-user device
>   net/virtio: introduce PCI device metadata
>   net/virtio: move PCI device init in dedicated file
>   net/virtio: move PCI specific dev init to PCI ethdev init
>   net/virtio: move MSIX detection to PCI ethdev
>   net/virtio: force IOVA as VA mode for Virtio-user
>   net/virtio: store PCI type in Virtio device metadata
>   net/virtio: add callback for device closing
>   net/virtio: validate features at bus level
>   net/virtio: remove bus type enum
>   net/virtio: move PCI-specific fields to PCI device
>   net/virtio: pack virtio HW struct
>   net/virtio: move legacy IO to Virtio PCI
>   net/virtio: introduce generic virtio header
>   net/virtio: move features definition to generic header
>   net/virtio: move virtqueue defines in generic header
>   net/virtio: move config definitions to generic header
>   net/virtio: make interrupt handling more generic
>   net/virtio: move vring alignment to generic header
>   net/virtio: remove last PCI refs in non-PCI code
>   net/virtio: make Vhost-user request sender consistent
>   net/virtio: add Virtio-user ops to set owner
>   net/virtio: add Virtio-user features ops
>   net/virtio: add Virtio-user protocol features ops
>   net/virtio: add Virtio-user memory tables ops
>   net/virtio: add Virtio-user vring setting ops
>   net/virtio: add Virtio-user vring file ops
>   net/virtio: add Virtio-user vring address ops
>   net/virtio: add Virtio-user status ops
>   net/virtio: remove useless request ops
>   net/virtio: improve Virtio-user errors handling
>   net/virtio: move Vhost-user requests to Vhost-user backend
>   net/virtio: make server mode blocking
>   net/virtio: move protocol features to Vhost-user
>   net/virtio: introduce backend data
>   net/virtio: move Vhost-user specifics to its backend
>   net/virtio: move Vhost-kernel data to its backend
>   net/virtio: move Vhost-vDPA data to its backend
>   net/virtio: improve Vhost-user error logging
>   net/virtio: handle Virtio-user setup failure properly
> 
>  devtools/libabigail.abignore                  |   2 +
>  drivers/bus/vdev/rte_bus_vdev.h               |   6 +
>  drivers/bus/vdev/vdev.c                       |  29 +
>  drivers/net/virtio/meson.build                |   6 +-
>  drivers/net/virtio/virtio.c                   |  71 ++
>  drivers/net/virtio/virtio.h                   | 246 +++++
>  drivers/net/virtio/virtio_ethdev.c            | 457 +++------
>  drivers/net/virtio/virtio_ethdev.h            |   6 +-
>  drivers/net/virtio/virtio_pci.c               | 448 +++++----
>  drivers/net/virtio/virtio_pci.h               | 286 +-----
>  drivers/net/virtio/virtio_pci_ethdev.c        | 226 +++++
>  drivers/net/virtio/virtio_ring.h              |   2 +-
>  drivers/net/virtio/virtio_rxtx.c              |  90 +-
>  drivers/net/virtio/virtio_rxtx_packed.h       |  10 +-
>  drivers/net/virtio/virtio_rxtx_packed_avx.h   |  10 +-
>  drivers/net/virtio/virtio_rxtx_packed_neon.h  |  10 +-
>  drivers/net/virtio/virtio_rxtx_simple.h       |   3 +-
>  drivers/net/virtio/virtio_user/vhost.h        |  79 +-
>  drivers/net/virtio/virtio_user/vhost_kernel.c | 461 ++++++---
>  .../net/virtio/virtio_user/vhost_kernel_tap.c |  25 +-
>  .../net/virtio/virtio_user/vhost_kernel_tap.h |   1 +
>  drivers/net/virtio/virtio_user/vhost_user.c   | 898 ++++++++++++++----
>  drivers/net/virtio/virtio_user/vhost_vdpa.c   | 323 +++++--
>  .../net/virtio/virtio_user/virtio_user_dev.c  | 573 ++++++-----
>  .../net/virtio/virtio_user/virtio_user_dev.h  |  21 +-
>  drivers/net/virtio/virtio_user_ethdev.c       | 301 +-----
>  drivers/net/virtio/virtqueue.c                |   6 +-
>  drivers/net/virtio/virtqueue.h                |  45 +-
>  28 files changed, 2742 insertions(+), 1899 deletions(-)
>  create mode 100644 drivers/net/virtio/virtio.c
>  create mode 100644 drivers/net/virtio/virtio.h
>  create mode 100644 drivers/net/virtio/virtio_pci_ethdev.c
> 

Applied to dpdk-next-virtio/main.

Thanks,
Maxime



* Re: [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement
  2021-01-27  8:23  0%   ` David Marchand
@ 2021-01-27  8:25  0%     ` Maxime Coquelin
  0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-01-27  8:25 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata, Ray Kinsella



On 1/27/21 9:23 AM, David Marchand wrote:
> On Tue, Jan 26, 2021 at 11:16 AM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>>
>> This patch adds driver flag in vdev bus driver so that
>> vdev drivers can require VA IOVA mode to be used, which
>> for example the case of Virtio-user PMD.
>>
>> The patch implements the .get_iommu_class() callback, that
>> is called before devices probing to determine the IOVA mode
>> to be used, and adds a check right before the device is
>> probed to ensure compatible IOVA mode has been selected.
>>
>> It also adds a ABI exception rule to accommodate with an
>> update on the driver registration API
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Signed-off-by: David Marchand <david.marchand@redhat.com>
> 
> I only suggested some changes.
> This patch looks good to me.
> Can you change this Sob to an Acked-by?

Sure, I can do that.

Thanks!
Maxime

> Thanks Maxime.
> 



* Re: [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement
  2021-01-26 10:15  7% ` [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement Maxime Coquelin
  2021-01-26 11:50  0%   ` Xia, Chenbo
  2021-01-26 12:50  0%   ` David Marchand
@ 2021-01-27  8:23  0%   ` David Marchand
  2021-01-27  8:25  0%     ` Maxime Coquelin
  2 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-27  8:23 UTC (permalink / raw)
  To: Maxime Coquelin
  Cc: dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata, Ray Kinsella

On Tue, Jan 26, 2021 at 11:16 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch adds driver flag in vdev bus driver so that
> vdev drivers can require VA IOVA mode to be used, which
> for example the case of Virtio-user PMD.
>
> The patch implements the .get_iommu_class() callback, that
> is called before devices probing to determine the IOVA mode
> to be used, and adds a check right before the device is
> probed to ensure compatible IOVA mode has been selected.
>
> It also adds a ABI exception rule to accommodate with an
> update on the driver registration API
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> Signed-off-by: David Marchand <david.marchand@redhat.com>

I only suggested some changes.
This patch looks good to me.
Can you change this Sob to an Acked-by?

Thanks Maxime.

-- 
David Marchand



* [dpdk-dev] [PATCH v5 2/8] eal: fix error attribute use for clang
  @ 2021-01-26 21:38 13%   ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2021-01-26 21:38 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, Bruce Richardson, haiyue.wang, Ray Kinsella, Neil Horman

Clang does not have an "error" attribute for functions, so for marking
internal functions we need to check for the error attribute, and provide
a fallback if it is not present. For clang, we can use "diagnose_if"
attribute, similarly checking for its presence before use.

Fixes: fba5af82adc8 ("eal: add internal ABI tag definition")
Cc: haiyue.wang@intel.com

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_eal/include/rte_compat.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/include/rte_compat.h b/lib/librte_eal/include/rte_compat.h
index 4cd8f68d68..c30f072aa3 100644
--- a/lib/librte_eal/include/rte_compat.h
+++ b/lib/librte_eal/include/rte_compat.h
@@ -19,12 +19,18 @@ __attribute__((section(".text.experimental")))
 
 #endif
 
-#ifndef ALLOW_INTERNAL_API
+#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
 
+#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+
+#define __rte_internal \
+__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
+section(".text.internal")))
+
 #else
 
 #define __rte_internal \
-- 
2.27.0
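
For context, a minimal sketch of how the tag behaves once either attribute is
available; the function name is hypothetical and not a real DPDK symbol:

  #include <rte_compat.h>

  /* Hypothetical driver-only helper, not part of the public ABI. */
  __rte_internal
  int dummy_internal_helper(int x);

  int
  caller(void)
  {
  	/*
  	 * Without -DALLOW_INTERNAL_API this call fails at build time:
  	 * GCC rejects it through the error attribute, clang through
  	 * diagnose_if. With the define, only the
  	 * section(".text.internal") placement remains.
  	 */
  	return dummy_internal_helper(1);
  }

In-tree drivers build with ALLOW_INTERNAL_API defined, so in practice only
external code including such headers hits the error.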


^ permalink raw reply	[relevance 13%]

* Re: [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement
  2021-01-26 14:40  4%       ` David Marchand
@ 2021-01-26 15:28  0%         ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-01-26 15:28 UTC (permalink / raw)
  To: David Marchand
  Cc: Maxime Coquelin, dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata



On 26/01/2021 14:40, David Marchand wrote:
> On Tue, Jan 26, 2021 at 2:23 PM Kinsella, Ray <mdr@ashroe.eu> wrote:
>>>> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
>>>> index 1dc84fa74b..170304c876 100644
>>>> --- a/devtools/libabigail.abignore
>>>> +++ b/devtools/libabigail.abignore
>>>> @@ -11,6 +11,8 @@
>>>>  ; Explicit ignore for driver-only ABI
>>>>  [suppress_type]
>>>>          name = eth_dev_ops
>>>> +[suppress_function]
>>>> +        name_regexp = rte_vdev_(|un)register
>>>>
>>>>  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
>>>>  [suppress_type]
>>>
>>> Ray,
>>> Are you okay with this exception?
>>
>> To ask a perhaps silly question:
>> shouldn't rte_vdev_register & rte_vdev_unregister have been INTERNAL in any case?
> 
> I discussed with Thomas earlier.
> 
> The INTERNAL exception rule we have suppresses changes on symbols
> already versioned INTERNAL.
> If we mark these two symbols INTERNAL now, they are part of the stable
> v21 ABI in any case.
> libabigail will still complain about them disappearing.
> 
> $ abidiff --suppr
> /home/dmarchan/dpdk/devtools/../devtools/libabigail.abignore
> --no-added-syms --headers-dir1
> /home/dmarchan/abi/v20.11/build-gcc-shared/usr/local/include
> --headers-dir2 /home/dmarchan/builds/build-gcc-shared/install/usr/local/include
> /home/dmarchan/abi/v20.11/build-gcc-shared/dump/librte_bus_vdev.dump
> /home/dmarchan/builds/build-gcc-shared/install/dump/librte_bus_vdev.dump
> Functions changes summary: 2 Removed, 0 Changed, 0 Added functions
> Variables changes summary: 0 Removed, 0 Changed, 0 Added variable
> 
> 2 Removed functions:
> 
>   [D] 'function void rte_vdev_register(rte_vdev_driver*)'
> {rte_vdev_register@@DPDK_21}
>   [D] 'function void rte_vdev_unregister(rte_vdev_driver*)'
> {rte_vdev_unregister@@DPDK_21}
> 
> We will need an exception in any case for them.
> 

Agreed, I didn't miss that we are still going to need the exception.

If we agree that everything that is in rte_vdev_bus should be internal,
we can also fix that while we are aware of it.

The rule above gets my +1 and I will fix rte_vdev_bus.
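
For illustration, "fixing" the bus here would amount to tagging the two
driver-only registration functions in rte_bus_vdev.h and moving their symbols
to an INTERNAL version node. A sketch of the header side only; the prototypes
match the abidiff output quoted above:

  __rte_internal
  void rte_vdev_register(struct rte_vdev_driver *driver);

  __rte_internal
  void rte_vdev_unregister(struct rte_vdev_driver *driver);

The libabigail exception is still needed for the release in which the symbols
leave the DPDK_21 version node, as quoted above.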

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement
  2021-01-26 13:23  0%     ` Kinsella, Ray
@ 2021-01-26 14:40  4%       ` David Marchand
  2021-01-26 15:28  0%         ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-26 14:40 UTC (permalink / raw)
  To: Kinsella, Ray
  Cc: Maxime Coquelin, dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata

On Tue, Jan 26, 2021 at 2:23 PM Kinsella, Ray <mdr@ashroe.eu> wrote:
> >> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> >> index 1dc84fa74b..170304c876 100644
> >> --- a/devtools/libabigail.abignore
> >> +++ b/devtools/libabigail.abignore
> >> @@ -11,6 +11,8 @@
> >>  ; Explicit ignore for driver-only ABI
> >>  [suppress_type]
> >>          name = eth_dev_ops
> >> +[suppress_function]
> >> +        name_regexp = rte_vdev_(|un)register
> >>
> >>  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
> >>  [suppress_type]
> >
> > Ray,
> > Are you okay with this exception?
>
> To ask a perhaps silly question:
> shouldn't rte_vdev_register & rte_vdev_unregister have been INTERNAL in any case?

I discussed with Thomas earlier.

The INTERNAL exception rule we have suppresses changes on symbols
already versioned INTERNAL.
If we mark these two symbols INTERNAL now, they are part of the stable
v21 ABI in any case.
libabigail will still complain about them disappearing.

$ abidiff --suppr
/home/dmarchan/dpdk/devtools/../devtools/libabigail.abignore
--no-added-syms --headers-dir1
/home/dmarchan/abi/v20.11/build-gcc-shared/usr/local/include
--headers-dir2 /home/dmarchan/builds/build-gcc-shared/install/usr/local/include
/home/dmarchan/abi/v20.11/build-gcc-shared/dump/librte_bus_vdev.dump
/home/dmarchan/builds/build-gcc-shared/install/dump/librte_bus_vdev.dump
Functions changes summary: 2 Removed, 0 Changed, 0 Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable

2 Removed functions:

  [D] 'function void rte_vdev_register(rte_vdev_driver*)'
{rte_vdev_register@@DPDK_21}
  [D] 'function void rte_vdev_unregister(rte_vdev_driver*)'
{rte_vdev_unregister@@DPDK_21}

We will need an exception in any case for them.


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3 0/4] add checking of header includes
  2021-01-26 14:24  0%         ` Bruce Richardson
@ 2021-01-26 14:39  0%           ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2021-01-26 14:39 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Thomas Monjalon, ferruh.yigit

On Tue, Jan 26, 2021 at 02:24:02PM +0000, Bruce Richardson wrote:
> On Tue, Jan 26, 2021 at 03:04:25PM +0100, David Marchand wrote:
> > On Tue, Jan 26, 2021 at 12:15 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Mon, Jan 25, 2021 at 04:51:19PM +0100, David Marchand wrote:
> > > > On Mon, Jan 25, 2021 at 3:11 PM Bruce Richardson
> > > > <bruce.richardson@intel.com> wrote:
> > > > >
> > > > > As a general principle, each header file should include any other
> > > > > headers it needs to provide data type definitions or macros. For
> > > > > example, any header using the uintX_t types in structures or function
> > > > > prototypes should include "stdint.h" to provide those type definitions.
> > > > >
> > > > > In practice, while many, but not all, headers in DPDK did include all
> > > > > necessary headers, it was never actually checked that each header could
> > > > > be included in a C file and compiled without having any compiler errors
> > > > > about missing definitions.  The script "check-includes.sh" could be used
> > > > > for this job, but it was not called out in the documentation, so many
> > > > > contributors may not have been aware of its existence. It also was
> > > > > difficult to run from a source-code directory, as the script did not
> > > > > automatically allow finding of headers from one DPDK library directory
> > > > > to another [this was probably based on running it on a build created by
> > > > > the "make" build system, where all headers were in a single directory].
> > > > > To attempt to have a build-system integrated replacement, this patchset
> > > > > adds a "chkincs" app in the buildtools directory to verify this on an
> > > > > ongoing basis.
> > > > >
> > > > > This chkincs app does nothing when run, and is not installed as part of
> > > > > a DPDK "ninja install", it's for build-time checking only. Its source
> > > > > code consists of one C file per public DPDK header, where that C file
> > > > > contains nothing except an include for that header.  Therefore, if any
> > > > > header is added to the lib folder which fails to compile when included
> > > > > alone, the build of chkincs will fail with a suitable error message.
> > > > > Since this compile checking is not needed on most builds of DPDK, the
> > > > > building of chkincs is disabled by default, but can be enabled by the
> > > > > "test_includes" meson option. To catch errors with patch submissions,
> > > > > the final patch of this series enables it for a single build in
> > > > > test-meson-builds script.
> > > > >
> > > > > Future work could involve doing similar checks on headers for C++
> > > > > compatibility, which was something done by the check-includes.sh script
> > > > > but which is missing here..
> > > > >
> > > > > V3:
> > > > > * Shrunk patchset as most header fixes already applied
> > > > > * Moved chkincs from "apps" to the "buildtools" directory, which is a
> > > > >   better location for something not for installation for end-user use.
> > > > > * Added patch to drop check-includes script.
> > > > >
> > > > > V2:
> > > > > * Add maintainers file entry for new app
> > > > > * Drop patch for c11 ring header
> > > > > * Use build variable "headers_no_chkincs" for tracking exceptions
> > > > >
> > > > > Bruce Richardson (4):
> > > > >   eal: add missing include to mcslock
> > > > >   build: separate out headers for include checking
> > > > >   buildtools/chkincs: add app to verify header includes
> > > > >   devtools: remove check-includes script
> > > > >
> > > > >  MAINTAINERS                                  |   5 +-
> > > > >  buildtools/chkincs/gen_c_file_for_header.py  |  12 +
> > > > >  buildtools/chkincs/main.c                    |   4 +
> > > > >  buildtools/chkincs/meson.build               |  40 +++
> > > > >  devtools/check-includes.sh                   | 259 -------------------
> > > > >  devtools/test-meson-builds.sh                |   2 +-
> > > > >  doc/guides/contributing/coding_style.rst     |  12 +
> > > > >  lib/librte_eal/include/generic/rte_mcslock.h |   1 +
> > > > >  lib/librte_eal/include/meson.build           |   2 +-
> > > > >  lib/librte_eal/x86/include/meson.build       |  14 +-
> > > > >  lib/librte_ethdev/meson.build                |   4 +-
> > > > >  lib/librte_hash/meson.build                  |   4 +-
> > > > >  lib/librte_ipsec/meson.build                 |   3 +-
> > > > >  lib/librte_lpm/meson.build                   |   2 +-
> > > > >  lib/librte_regexdev/meson.build              |   2 +-
> > > > >  lib/librte_ring/meson.build                  |   4 +-
> > > > >  lib/librte_stack/meson.build                 |   4 +-
> > > > >  lib/librte_table/meson.build                 |   7 +-
> > > > >  lib/meson.build                              |   3 +
> > > > >  meson.build                                  |   6 +
> > > > >  meson_options.txt                            |   2 +
> > > > >  21 files changed, 112 insertions(+), 280 deletions(-)
> > > > >  create mode 100755 buildtools/chkincs/gen_c_file_for_header.py
> > > > >  create mode 100644 buildtools/chkincs/main.c
> > > > >  create mode 100644 buildtools/chkincs/meson.build
> > > > >  delete mode 100755 devtools/check-includes.sh
> > > >
> > > > - clang is not happy when enabling the check:
> > > > $ meson configure $HOME/builds/build-clang-static -Dcheck_includes=true
> > > > $ devtools/test-meson-builds.sh
> > > > ...
> > > > [362/464] Compiling C object
> > > > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> > > > FAILED: buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> > > > clang -Ibuildtools/chkincs/chkincs.p -Ibuildtools/chkincs
> > > > -I../../dpdk/buildtools/chkincs -Idrivers/bus/pci
> > > > -I../../dpdk/drivers/bus/pci -Idrivers/bus/vdev
> > > > -I../../dpdk/drivers/bus/vdev -I. -I../../dpdk -Iconfig
> > > > -I../../dpdk/config -Ilib/librte_eal/include
> > > > -I../../dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include
> > > > -I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
> > > > -I../../dpdk/lib/librte_eal/x86/include -Ilib/librte_kvargs
> > > > -I../../dpdk/lib/librte_kvargs -Ilib/librte_metrics
> > > > -I../../dpdk/lib/librte_metrics -Ilib/librte_telemetry
> > > > -I../../dpdk/lib/librte_telemetry -Ilib/librte_eal/common
> > > > -I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
> > > > -I../../dpdk/lib/librte_eal -Ilib/librte_ring
> > > > -I../../dpdk/lib/librte_ring -Ilib/librte_rcu
> > > > -I../../dpdk/lib/librte_rcu -Ilib/librte_mempool
> > > > -I../../dpdk/lib/librte_mempool -Ilib/librte_mbuf
> > > > -I../../dpdk/lib/librte_mbuf -Ilib/librte_net
> > > > -I../../dpdk/lib/librte_net -Ilib/librte_meter
> > > > -I../../dpdk/lib/librte_meter -Ilib/librte_ethdev
> > > > -I../../dpdk/lib/librte_ethdev -Ilib/librte_pci
> > > > -I../../dpdk/lib/librte_pci -Ilib/librte_cmdline
> > > > -I../../dpdk/lib/librte_cmdline -Ilib/librte_hash
> > > > -I../../dpdk/lib/librte_hash -Ilib/librte_timer
> > > > -I../../dpdk/lib/librte_timer -Ilib/librte_acl
> > > > -I../../dpdk/lib/librte_acl -Ilib/librte_bbdev
> > > > -I../../dpdk/lib/librte_bbdev -Ilib/librte_bitratestats
> > > > -I../../dpdk/lib/librte_bitratestats -Ilib/librte_cfgfile
> > > > -I../../dpdk/lib/librte_cfgfile -Ilib/librte_compressdev
> > > > -I../../dpdk/lib/librte_compressdev -Ilib/librte_cryptodev
> > > > -I../../dpdk/lib/librte_cryptodev -Ilib/librte_distributor
> > > > -I../../dpdk/lib/librte_distributor -Ilib/librte_efd
> > > > -I../../dpdk/lib/librte_efd -Ilib/librte_eventdev
> > > > -I../../dpdk/lib/librte_eventdev -Ilib/librte_gro
> > > > -I../../dpdk/lib/librte_gro -Ilib/librte_gso
> > > > -I../../dpdk/lib/librte_gso -Ilib/librte_ip_frag
> > > > -I../../dpdk/lib/librte_ip_frag -Ilib/librte_jobstats
> > > > -I../../dpdk/lib/librte_jobstats -Ilib/librte_kni
> > > > -I../../dpdk/lib/librte_kni -Ilib/librte_latencystats
> > > > -I../../dpdk/lib/librte_latencystats -Ilib/librte_lpm
> > > > -I../../dpdk/lib/librte_lpm -Ilib/librte_member
> > > > -I../../dpdk/lib/librte_member -Ilib/librte_power
> > > > -I../../dpdk/lib/librte_power -Ilib/librte_pdump
> > > > -I../../dpdk/lib/librte_pdump -Ilib/librte_rawdev
> > > > -I../../dpdk/lib/librte_rawdev -Ilib/librte_regexdev
> > > > -I../../dpdk/lib/librte_regexdev -Ilib/librte_rib
> > > > -I../../dpdk/lib/librte_rib -Ilib/librte_reorder
> > > > -I../../dpdk/lib/librte_reorder -Ilib/librte_sched
> > > > -I../../dpdk/lib/librte_sched -Ilib/librte_security
> > > > -I../../dpdk/lib/librte_security -Ilib/librte_stack
> > > > -I../../dpdk/lib/librte_stack -Ilib/librte_vhost
> > > > -I../../dpdk/lib/librte_vhost -Ilib/librte_ipsec
> > > > -I../../dpdk/lib/librte_ipsec -Ilib/librte_fib
> > > > -I../../dpdk/lib/librte_fib -Ilib/librte_port
> > > > -I../../dpdk/lib/librte_port -Ilib/librte_table
> > > > -I../../dpdk/lib/librte_table -Ilib/librte_pipeline
> > > > -I../../dpdk/lib/librte_pipeline -Ilib/librte_flow_classify
> > > > -I../../dpdk/lib/librte_flow_classify -Ilib/librte_bpf
> > > > -I../../dpdk/lib/librte_bpf -Ilib/librte_graph
> > > > -I../../dpdk/lib/librte_graph -Ilib/librte_node
> > > > -I../../dpdk/lib/librte_node
> > > > -I/home/dmarchan/intel-ipsec-mb/install/include -Xclang
> > > > -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
> > > > -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
> > > > -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
> > > > -Wmissing-prototypes -Wnested-externs -Wold-style-definition
> > > > -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
> > > > -Wwrite-strings -Wno-address-of-packed-member
> > > > -Wno-missing-field-initializers -D_GNU_SOURCE -march=native
> > > > -Wno-unused-function -DALLOW_EXPERIMENTAL_API -MD -MQ
> > > > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -MF
> > > > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o.d -o
> > > > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -c
> > > > buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c
> > > > In file included from buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c:1:
> > > > In file included from
> > > > /home/dmarchan/dpdk/lib/librte_ethdev/rte_ethdev_vdev.h:12:
> > > > ../../dpdk/lib/librte_ethdev/rte_ethdev_driver.h:964:1: error: unknown
> > > > attribute 'error' ignored [-Werror,-Wunknown-attributes]
> > > > __rte_internal
> > > > ^
> > > > ../../dpdk/lib/librte_eal/include/rte_compat.h:25:16: note: expanded
> > > > from macro '__rte_internal'
> > > > __attribute__((error("Symbol is not public ABI"), \
> > > >                ^
> > > >
> > >
> > > This looks to be a real issue with our header file - clang does not have an
> > > "error" attribute. The closest equivalent I can see is "diagnose_if".
> > 
> > Indeed, it does trigger a build error, so it works as expected ;-).
> > 
> > 
> > On the header check itself, even if we find a way to properly tag
> > those symbols with the macro in rte_compat.h, the next issue is that
> > clang complains about such marked symbols without the
> > ALLOW_INTERNAL_API build flag.
> > 
> > FAILED: buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o
> > clang -Ibuildtools/chkincs/chkincs.p -Ibuildtools/chkincs
> > -I../../dpdk/buildtools/chkincs -Idrivers/bus/pci
> > -I../../dpdk/drivers/bus/pci -Idrivers/bus/vdev
> > -I../../dpdk/drivers/bus/vdev -I. -I../../dpdk -Iconfig
> > -I../../dpdk/config -Ilib/librte_eal/include
> > -I../../dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include
> > -I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
> > -I../../dpdk/lib/librte_eal/x86/include -Ilib/librte_kvargs
> > -I../../dpdk/lib/librte_kvargs -Ilib/librte_metrics
> > -I../../dpdk/lib/librte_metrics -Ilib/librte_telemetry
> > -I../../dpdk/lib/librte_telemetry -Ilib/librte_eal/common
> > -I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
> > -I../../dpdk/lib/librte_eal -Ilib/librte_ring
> > -I../../dpdk/lib/librte_ring -Ilib/librte_rcu
> > -I../../dpdk/lib/librte_rcu -Ilib/librte_mempool
> > -I../../dpdk/lib/librte_mempool -Ilib/librte_mbuf
> > -I../../dpdk/lib/librte_mbuf -Ilib/librte_net
> > -I../../dpdk/lib/librte_net -Ilib/librte_meter
> > -I../../dpdk/lib/librte_meter -Ilib/librte_ethdev
> > -I../../dpdk/lib/librte_ethdev -Ilib/librte_pci
> > -I../../dpdk/lib/librte_pci -Ilib/librte_cmdline
> > -I../../dpdk/lib/librte_cmdline -Ilib/librte_hash
> > -I../../dpdk/lib/librte_hash -Ilib/librte_timer
> > -I../../dpdk/lib/librte_timer -Ilib/librte_acl
> > -I../../dpdk/lib/librte_acl -Ilib/librte_bbdev
> > -I../../dpdk/lib/librte_bbdev -Ilib/librte_bitratestats
> > -I../../dpdk/lib/librte_bitratestats -Ilib/librte_cfgfile
> > -I../../dpdk/lib/librte_cfgfile -Ilib/librte_compressdev
> > -I../../dpdk/lib/librte_compressdev -Ilib/librte_cryptodev
> > -I../../dpdk/lib/librte_cryptodev -Ilib/librte_distributor
> > -I../../dpdk/lib/librte_distributor -Ilib/librte_efd
> > -I../../dpdk/lib/librte_efd -Ilib/librte_eventdev
> > -I../../dpdk/lib/librte_eventdev -Ilib/librte_gro
> > -I../../dpdk/lib/librte_gro -Ilib/librte_gso
> > -I../../dpdk/lib/librte_gso -Ilib/librte_ip_frag
> > -I../../dpdk/lib/librte_ip_frag -Ilib/librte_jobstats
> > -I../../dpdk/lib/librte_jobstats -Ilib/librte_kni
> > -I../../dpdk/lib/librte_kni -Ilib/librte_latencystats
> > -I../../dpdk/lib/librte_latencystats -Ilib/librte_lpm
> > -I../../dpdk/lib/librte_lpm -Ilib/librte_member
> > -I../../dpdk/lib/librte_member -Ilib/librte_power
> > -I../../dpdk/lib/librte_power -Ilib/librte_pdump
> > -I../../dpdk/lib/librte_pdump -Ilib/librte_rawdev
> > -I../../dpdk/lib/librte_rawdev -Ilib/librte_regexdev
> > -I../../dpdk/lib/librte_regexdev -Ilib/librte_rib
> > -I../../dpdk/lib/librte_rib -Ilib/librte_reorder
> > -I../../dpdk/lib/librte_reorder -Ilib/librte_sched
> > -I../../dpdk/lib/librte_sched -Ilib/librte_security
> > -I../../dpdk/lib/librte_security -Ilib/librte_stack
> > -I../../dpdk/lib/librte_stack -Ilib/librte_vhost
> > -I../../dpdk/lib/librte_vhost -Ilib/librte_ipsec
> > -I../../dpdk/lib/librte_ipsec -Ilib/librte_fib
> > -I../../dpdk/lib/librte_fib -Ilib/librte_port
> > -I../../dpdk/lib/librte_port -Ilib/librte_table
> > -I../../dpdk/lib/librte_table -Ilib/librte_pipeline
> > -I../../dpdk/lib/librte_pipeline -Ilib/librte_flow_classify
> > -I../../dpdk/lib/librte_flow_classify -Ilib/librte_bpf
> > -I../../dpdk/lib/librte_bpf -Ilib/librte_graph
> > -I../../dpdk/lib/librte_graph -Ilib/librte_node
> > -I../../dpdk/lib/librte_node
> > -I/home/dmarchan/intel-ipsec-mb/install/include -Xclang
> > -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
> > -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
> > -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
> > -Wmissing-prototypes -Wnested-externs -Wold-style-definition
> > -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
> > -Wwrite-strings -Wno-address-of-packed-member
> > -Wno-missing-field-initializers -D_GNU_SOURCE -march=native
> > -Wno-unused-function -DALLOW_EXPERIMENTAL_API -MD -MQ
> > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o -MF
> > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o.d -o
> > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o -c
> > buildtools/chkincs/chkincs.p/rte_ethdev_pci.c
> > In file included from buildtools/chkincs/chkincs.p/rte_ethdev_pci.c:1:
> > /home/dmarchan/dpdk/lib/librte_ethdev/rte_ethdev_pci.h:86:13: error:
> > Symbol is not public ABI
> >                 eth_dev = rte_eth_dev_allocate(name);
> >                           ^
> > ../../dpdk/lib/librte_ethdev/rte_ethdev_driver.h:1003:1: note: from
> > 'diagnose_if' attribute on 'rte_eth_dev_allocate':
> > __rte_internal
> > ^~~~~~~~~~~~~~
> > ../../dpdk/lib/librte_eal/include/rte_compat.h:30:16: note: expanded
> > from macro '__rte_internal'
> > __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> >                ^           ~
> > [...]
> > 
> > 
> > gcc seems more lenient about this.
> > 
> > 
> > 
> > > Therefore, I'd suggest we need to change compat.h to be something like:
> > >
> > >   #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
> > >
> > >   #define __rte_internal \
> > >   __attribute__((error("Symbol is not public ABI"), \
> > >   section(".text.internal")))
> > >
> > >   #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
> > >
> > >   #define __rte_internal \
> > >   __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> > >   section(".text.internal")))
> > >
> > >   #else
> > >
> > >   #define __rte_internal \
> > >   __attribute__((section(".text.internal")))
> > >
> > >   #endif
> > >
> > > Any thoughts or suggestions for better alternatives here?
> > 
> > I'd rather leave a build error on an unknown attribute than silence
> > this check (which happens in your snippet, where it falls back to the
> > #else part).
> > 
> > Did you consider using deprecated(), like for the experimental tag?
> > 
> 
> I've added the ALLOW_INTERNAL_API define to the build of these headers in
> v4.

+Ferruh, Thomas

Removing the ALLOW_INTERNAL_API define is probably a good idea, but it does
indeed throw up errors with clang - though not with gcc, which is strange.
The offending headers seem to be (initially):

* rte_ethdev_vdev.h
* rte_ethdev_pci.h

Are these public header files, or should they skip header checking - and
installation - as internal-only?

/Bruce
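
For context on why these two headers trip the check: chkincs compiles, for
every exported header, a generated C file that does nothing but include it,
so each header must build stand-alone. A rough sketch of one such generated
file, matching the buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c seen in the
build log above (contents approximate):

  /* Generated by chkincs for one public header. */
  #include <rte_ethdev_vdev.h>

Since these headers contain static inline helpers calling __rte_internal
functions (the rte_eth_dev_allocate() call shown in the clang error above),
compiling them in isolation needs ALLOW_INTERNAL_API, or the headers must be
treated as driver-only and skipped, which is the question raised here.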

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 0/4] add checking of header includes
  2021-01-26 14:04  3%       ` David Marchand
@ 2021-01-26 14:24  0%         ` Bruce Richardson
  2021-01-26 14:39  0%           ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-01-26 14:24 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Thomas Monjalon

On Tue, Jan 26, 2021 at 03:04:25PM +0100, David Marchand wrote:
> On Tue, Jan 26, 2021 at 12:15 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Mon, Jan 25, 2021 at 04:51:19PM +0100, David Marchand wrote:
> > > On Mon, Jan 25, 2021 at 3:11 PM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > >
> > > > As a general principle, each header file should include any other
> > > > headers it needs to provide data type definitions or macros. For
> > > > example, any header using the uintX_t types in structures or function
> > > > prototypes should include "stdint.h" to provide those type definitions.
> > > >
> > > > In practice, while many, but not all, headers in DPDK did include all
> > > > necessary headers, it was never actually checked that each header could
> > > > be included in a C file and compiled without having any compiler errors
> > > > about missing definitions.  The script "check-includes.sh" could be used
> > > > for this job, but it was not called out in the documentation, so many
> > > > contributors may not have been aware of its existence. It also was
> > > > difficult to run from a source-code directory, as the script did not
> > > > automatically allow finding of headers from one DPDK library directory
> > > > to another [this was probably based on running it on a build created by
> > > > the "make" build system, where all headers were in a single directory].
> > > > To attempt to have a build-system integrated replacement, this patchset
> > > > adds a "chkincs" app in the buildtools directory to verify this on an
> > > > ongoing basis.
> > > >
> > > > This chkincs app does nothing when run, and is not installed as part of
> > > > a DPDK "ninja install", it's for build-time checking only. Its source
> > > > code consists of one C file per public DPDK header, where that C file
> > > > contains nothing except an include for that header.  Therefore, if any
> > > > header is added to the lib folder which fails to compile when included
> > > > alone, the build of chkincs will fail with a suitable error message.
> > > > Since this compile checking is not needed on most builds of DPDK, the
> > > > building of chkincs is disabled by default, but can be enabled by the
> > > > "test_includes" meson option. To catch errors with patch submissions,
> > > > the final patch of this series enables it for a single build in
> > > > test-meson-builds script.
> > > >
> > > > Future work could involve doing similar checks on headers for C++
> > > > compatibility, which was something done by the check-includes.sh script
> > > > but which is missing here..
> > > >
> > > > V3:
> > > > * Shrunk patchset as most header fixes already applied
> > > > * Moved chkincs from "apps" to the "buildtools" directory, which is a
> > > >   better location for something not for installation for end-user use.
> > > > * Added patch to drop check-includes script.
> > > >
> > > > V2:
> > > > * Add maintainers file entry for new app
> > > > * Drop patch for c11 ring header
> > > > * Use build variable "headers_no_chkincs" for tracking exceptions
> > > >
> > > > Bruce Richardson (4):
> > > >   eal: add missing include to mcslock
> > > >   build: separate out headers for include checking
> > > >   buildtools/chkincs: add app to verify header includes
> > > >   devtools: remove check-includes script
> > > >
> > > >  MAINTAINERS                                  |   5 +-
> > > >  buildtools/chkincs/gen_c_file_for_header.py  |  12 +
> > > >  buildtools/chkincs/main.c                    |   4 +
> > > >  buildtools/chkincs/meson.build               |  40 +++
> > > >  devtools/check-includes.sh                   | 259 -------------------
> > > >  devtools/test-meson-builds.sh                |   2 +-
> > > >  doc/guides/contributing/coding_style.rst     |  12 +
> > > >  lib/librte_eal/include/generic/rte_mcslock.h |   1 +
> > > >  lib/librte_eal/include/meson.build           |   2 +-
> > > >  lib/librte_eal/x86/include/meson.build       |  14 +-
> > > >  lib/librte_ethdev/meson.build                |   4 +-
> > > >  lib/librte_hash/meson.build                  |   4 +-
> > > >  lib/librte_ipsec/meson.build                 |   3 +-
> > > >  lib/librte_lpm/meson.build                   |   2 +-
> > > >  lib/librte_regexdev/meson.build              |   2 +-
> > > >  lib/librte_ring/meson.build                  |   4 +-
> > > >  lib/librte_stack/meson.build                 |   4 +-
> > > >  lib/librte_table/meson.build                 |   7 +-
> > > >  lib/meson.build                              |   3 +
> > > >  meson.build                                  |   6 +
> > > >  meson_options.txt                            |   2 +
> > > >  21 files changed, 112 insertions(+), 280 deletions(-)
> > > >  create mode 100755 buildtools/chkincs/gen_c_file_for_header.py
> > > >  create mode 100644 buildtools/chkincs/main.c
> > > >  create mode 100644 buildtools/chkincs/meson.build
> > > >  delete mode 100755 devtools/check-includes.sh
> > >
> > > - clang is not happy when enabling the check:
> > > $ meson configure $HOME/builds/build-clang-static -Dcheck_includes=true
> > > $ devtools/test-meson-builds.sh
> > > ...
> > > [362/464] Compiling C object
> > > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> > > FAILED: buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> > > clang -Ibuildtools/chkincs/chkincs.p -Ibuildtools/chkincs
> > > -I../../dpdk/buildtools/chkincs -Idrivers/bus/pci
> > > -I../../dpdk/drivers/bus/pci -Idrivers/bus/vdev
> > > -I../../dpdk/drivers/bus/vdev -I. -I../../dpdk -Iconfig
> > > -I../../dpdk/config -Ilib/librte_eal/include
> > > -I../../dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include
> > > -I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
> > > -I../../dpdk/lib/librte_eal/x86/include -Ilib/librte_kvargs
> > > -I../../dpdk/lib/librte_kvargs -Ilib/librte_metrics
> > > -I../../dpdk/lib/librte_metrics -Ilib/librte_telemetry
> > > -I../../dpdk/lib/librte_telemetry -Ilib/librte_eal/common
> > > -I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
> > > -I../../dpdk/lib/librte_eal -Ilib/librte_ring
> > > -I../../dpdk/lib/librte_ring -Ilib/librte_rcu
> > > -I../../dpdk/lib/librte_rcu -Ilib/librte_mempool
> > > -I../../dpdk/lib/librte_mempool -Ilib/librte_mbuf
> > > -I../../dpdk/lib/librte_mbuf -Ilib/librte_net
> > > -I../../dpdk/lib/librte_net -Ilib/librte_meter
> > > -I../../dpdk/lib/librte_meter -Ilib/librte_ethdev
> > > -I../../dpdk/lib/librte_ethdev -Ilib/librte_pci
> > > -I../../dpdk/lib/librte_pci -Ilib/librte_cmdline
> > > -I../../dpdk/lib/librte_cmdline -Ilib/librte_hash
> > > -I../../dpdk/lib/librte_hash -Ilib/librte_timer
> > > -I../../dpdk/lib/librte_timer -Ilib/librte_acl
> > > -I../../dpdk/lib/librte_acl -Ilib/librte_bbdev
> > > -I../../dpdk/lib/librte_bbdev -Ilib/librte_bitratestats
> > > -I../../dpdk/lib/librte_bitratestats -Ilib/librte_cfgfile
> > > -I../../dpdk/lib/librte_cfgfile -Ilib/librte_compressdev
> > > -I../../dpdk/lib/librte_compressdev -Ilib/librte_cryptodev
> > > -I../../dpdk/lib/librte_cryptodev -Ilib/librte_distributor
> > > -I../../dpdk/lib/librte_distributor -Ilib/librte_efd
> > > -I../../dpdk/lib/librte_efd -Ilib/librte_eventdev
> > > -I../../dpdk/lib/librte_eventdev -Ilib/librte_gro
> > > -I../../dpdk/lib/librte_gro -Ilib/librte_gso
> > > -I../../dpdk/lib/librte_gso -Ilib/librte_ip_frag
> > > -I../../dpdk/lib/librte_ip_frag -Ilib/librte_jobstats
> > > -I../../dpdk/lib/librte_jobstats -Ilib/librte_kni
> > > -I../../dpdk/lib/librte_kni -Ilib/librte_latencystats
> > > -I../../dpdk/lib/librte_latencystats -Ilib/librte_lpm
> > > -I../../dpdk/lib/librte_lpm -Ilib/librte_member
> > > -I../../dpdk/lib/librte_member -Ilib/librte_power
> > > -I../../dpdk/lib/librte_power -Ilib/librte_pdump
> > > -I../../dpdk/lib/librte_pdump -Ilib/librte_rawdev
> > > -I../../dpdk/lib/librte_rawdev -Ilib/librte_regexdev
> > > -I../../dpdk/lib/librte_regexdev -Ilib/librte_rib
> > > -I../../dpdk/lib/librte_rib -Ilib/librte_reorder
> > > -I../../dpdk/lib/librte_reorder -Ilib/librte_sched
> > > -I../../dpdk/lib/librte_sched -Ilib/librte_security
> > > -I../../dpdk/lib/librte_security -Ilib/librte_stack
> > > -I../../dpdk/lib/librte_stack -Ilib/librte_vhost
> > > -I../../dpdk/lib/librte_vhost -Ilib/librte_ipsec
> > > -I../../dpdk/lib/librte_ipsec -Ilib/librte_fib
> > > -I../../dpdk/lib/librte_fib -Ilib/librte_port
> > > -I../../dpdk/lib/librte_port -Ilib/librte_table
> > > -I../../dpdk/lib/librte_table -Ilib/librte_pipeline
> > > -I../../dpdk/lib/librte_pipeline -Ilib/librte_flow_classify
> > > -I../../dpdk/lib/librte_flow_classify -Ilib/librte_bpf
> > > -I../../dpdk/lib/librte_bpf -Ilib/librte_graph
> > > -I../../dpdk/lib/librte_graph -Ilib/librte_node
> > > -I../../dpdk/lib/librte_node
> > > -I/home/dmarchan/intel-ipsec-mb/install/include -Xclang
> > > -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
> > > -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
> > > -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
> > > -Wmissing-prototypes -Wnested-externs -Wold-style-definition
> > > -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
> > > -Wwrite-strings -Wno-address-of-packed-member
> > > -Wno-missing-field-initializers -D_GNU_SOURCE -march=native
> > > -Wno-unused-function -DALLOW_EXPERIMENTAL_API -MD -MQ
> > > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -MF
> > > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o.d -o
> > > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -c
> > > buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c
> > > In file included from buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c:1:
> > > In file included from
> > > /home/dmarchan/dpdk/lib/librte_ethdev/rte_ethdev_vdev.h:12:
> > > ../../dpdk/lib/librte_ethdev/rte_ethdev_driver.h:964:1: error: unknown
> > > attribute 'error' ignored [-Werror,-Wunknown-attributes]
> > > __rte_internal
> > > ^
> > > ../../dpdk/lib/librte_eal/include/rte_compat.h:25:16: note: expanded
> > > from macro '__rte_internal'
> > > __attribute__((error("Symbol is not public ABI"), \
> > >                ^
> > >
> >
> > This looks to be a real issue with our header file - clang does not have an
> > "error" attribute. The closest equivalent I can see is "diagnose_if".
> 
> Indeed, it does trigger a build error, so it works as expected ;-).
> 
> 
> On the header check itself, even if we find a way to properly tag
> those symbols with the macro in rte_compat.h, the next issue is that
> clang complains about such marked symbols without the
> ALLOW_INTERNAL_API build flag.
> 
> FAILED: buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o
> clang -Ibuildtools/chkincs/chkincs.p -Ibuildtools/chkincs
> -I../../dpdk/buildtools/chkincs -Idrivers/bus/pci
> -I../../dpdk/drivers/bus/pci -Idrivers/bus/vdev
> -I../../dpdk/drivers/bus/vdev -I. -I../../dpdk -Iconfig
> -I../../dpdk/config -Ilib/librte_eal/include
> -I../../dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include
> -I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
> -I../../dpdk/lib/librte_eal/x86/include -Ilib/librte_kvargs
> -I../../dpdk/lib/librte_kvargs -Ilib/librte_metrics
> -I../../dpdk/lib/librte_metrics -Ilib/librte_telemetry
> -I../../dpdk/lib/librte_telemetry -Ilib/librte_eal/common
> -I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
> -I../../dpdk/lib/librte_eal -Ilib/librte_ring
> -I../../dpdk/lib/librte_ring -Ilib/librte_rcu
> -I../../dpdk/lib/librte_rcu -Ilib/librte_mempool
> -I../../dpdk/lib/librte_mempool -Ilib/librte_mbuf
> -I../../dpdk/lib/librte_mbuf -Ilib/librte_net
> -I../../dpdk/lib/librte_net -Ilib/librte_meter
> -I../../dpdk/lib/librte_meter -Ilib/librte_ethdev
> -I../../dpdk/lib/librte_ethdev -Ilib/librte_pci
> -I../../dpdk/lib/librte_pci -Ilib/librte_cmdline
> -I../../dpdk/lib/librte_cmdline -Ilib/librte_hash
> -I../../dpdk/lib/librte_hash -Ilib/librte_timer
> -I../../dpdk/lib/librte_timer -Ilib/librte_acl
> -I../../dpdk/lib/librte_acl -Ilib/librte_bbdev
> -I../../dpdk/lib/librte_bbdev -Ilib/librte_bitratestats
> -I../../dpdk/lib/librte_bitratestats -Ilib/librte_cfgfile
> -I../../dpdk/lib/librte_cfgfile -Ilib/librte_compressdev
> -I../../dpdk/lib/librte_compressdev -Ilib/librte_cryptodev
> -I../../dpdk/lib/librte_cryptodev -Ilib/librte_distributor
> -I../../dpdk/lib/librte_distributor -Ilib/librte_efd
> -I../../dpdk/lib/librte_efd -Ilib/librte_eventdev
> -I../../dpdk/lib/librte_eventdev -Ilib/librte_gro
> -I../../dpdk/lib/librte_gro -Ilib/librte_gso
> -I../../dpdk/lib/librte_gso -Ilib/librte_ip_frag
> -I../../dpdk/lib/librte_ip_frag -Ilib/librte_jobstats
> -I../../dpdk/lib/librte_jobstats -Ilib/librte_kni
> -I../../dpdk/lib/librte_kni -Ilib/librte_latencystats
> -I../../dpdk/lib/librte_latencystats -Ilib/librte_lpm
> -I../../dpdk/lib/librte_lpm -Ilib/librte_member
> -I../../dpdk/lib/librte_member -Ilib/librte_power
> -I../../dpdk/lib/librte_power -Ilib/librte_pdump
> -I../../dpdk/lib/librte_pdump -Ilib/librte_rawdev
> -I../../dpdk/lib/librte_rawdev -Ilib/librte_regexdev
> -I../../dpdk/lib/librte_regexdev -Ilib/librte_rib
> -I../../dpdk/lib/librte_rib -Ilib/librte_reorder
> -I../../dpdk/lib/librte_reorder -Ilib/librte_sched
> -I../../dpdk/lib/librte_sched -Ilib/librte_security
> -I../../dpdk/lib/librte_security -Ilib/librte_stack
> -I../../dpdk/lib/librte_stack -Ilib/librte_vhost
> -I../../dpdk/lib/librte_vhost -Ilib/librte_ipsec
> -I../../dpdk/lib/librte_ipsec -Ilib/librte_fib
> -I../../dpdk/lib/librte_fib -Ilib/librte_port
> -I../../dpdk/lib/librte_port -Ilib/librte_table
> -I../../dpdk/lib/librte_table -Ilib/librte_pipeline
> -I../../dpdk/lib/librte_pipeline -Ilib/librte_flow_classify
> -I../../dpdk/lib/librte_flow_classify -Ilib/librte_bpf
> -I../../dpdk/lib/librte_bpf -Ilib/librte_graph
> -I../../dpdk/lib/librte_graph -Ilib/librte_node
> -I../../dpdk/lib/librte_node
> -I/home/dmarchan/intel-ipsec-mb/install/include -Xclang
> -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
> -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
> -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
> -Wmissing-prototypes -Wnested-externs -Wold-style-definition
> -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
> -Wwrite-strings -Wno-address-of-packed-member
> -Wno-missing-field-initializers -D_GNU_SOURCE -march=native
> -Wno-unused-function -DALLOW_EXPERIMENTAL_API -MD -MQ
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o -MF
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o.d -o
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o -c
> buildtools/chkincs/chkincs.p/rte_ethdev_pci.c
> In file included from buildtools/chkincs/chkincs.p/rte_ethdev_pci.c:1:
> /home/dmarchan/dpdk/lib/librte_ethdev/rte_ethdev_pci.h:86:13: error:
> Symbol is not public ABI
>                 eth_dev = rte_eth_dev_allocate(name);
>                           ^
> ../../dpdk/lib/librte_ethdev/rte_ethdev_driver.h:1003:1: note: from
> 'diagnose_if' attribute on 'rte_eth_dev_allocate':
> __rte_internal
> ^~~~~~~~~~~~~~
> ../../dpdk/lib/librte_eal/include/rte_compat.h:30:16: note: expanded
> from macro '__rte_internal'
> __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
>                ^           ~
> [...]
> 
> 
> gcc seems more lenient about this.
> 
> 
> 
> > Therefore, I'd suggest we need to change compat.h to be something like:
> >
> >   #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
> >
> >   #define __rte_internal \
> >   __attribute__((error("Symbol is not public ABI"), \
> >   section(".text.internal")))
> >
> >   #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
> >
> >   #define __rte_internal \
> >   __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> >   section(".text.internal")))
> >
> >   #else
> >
> >   #define __rte_internal \
> >   __attribute__((section(".text.internal")))
> >
> >   #endif
> >
> > Any thoughts or suggestions for better alternatives here?
> 
> I'd rather leave a build error on an unknown attribute than silence
> this check (which happens in your snippet, where it falls back to the
> #else part).
> 
> Did you consider using deprecated(), like for the experimental tag?
> 

I've added the ALLOW_INTERNAL_API define to the build of these headers in
v4.
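
As an aside, the self-containment principle from the cover letter quoted
above boils down to each header including what its own declarations need; a
minimal sketch with a hypothetical header:

  /* rte_dummy.h - hypothetical self-contained header. */
  #ifndef RTE_DUMMY_H
  #define RTE_DUMMY_H

  #include <stdint.h>  /* pulled in here, not left to the including file */

  struct rte_dummy_stats {
  	uint32_t pkts;
  };

  uint32_t rte_dummy_get_pkts(const struct rte_dummy_stats *stats);

  #endif /* RTE_DUMMY_H */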

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4 2/7] eal: fix error attribute use for clang
  @ 2021-01-26 14:18 13%   ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2021-01-26 14:18 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, Bruce Richardson, haiyue.wang, Ray Kinsella, Neil Horman

Clang does not have an "error" attribute for functions, so for marking
internal functions we need to check for the error attribute and provide
a fallback if it is not present. For clang, we can use the "diagnose_if"
attribute, similarly checking for its presence before use.

Fixes: fba5af82adc8 ("eal: add internal ABI tag definition")
Cc: haiyue.wang@intel.com

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_eal/include/rte_compat.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/include/rte_compat.h b/lib/librte_eal/include/rte_compat.h
index 4cd8f68d68..c30f072aa3 100644
--- a/lib/librte_eal/include/rte_compat.h
+++ b/lib/librte_eal/include/rte_compat.h
@@ -19,12 +19,18 @@ __attribute__((section(".text.experimental")))
 
 #endif
 
-#ifndef ALLOW_INTERNAL_API
+#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
 
+#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+
+#define __rte_internal \
+__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
+section(".text.internal")))
+
 #else
 
 #define __rte_internal \
-- 
2.27.0


^ permalink raw reply	[relevance 13%]

* Re: [dpdk-dev] [PATCH v3 0/4] add checking of header includes
  2021-01-26 11:15  4%     ` Bruce Richardson
@ 2021-01-26 14:04  3%       ` David Marchand
  2021-01-26 14:24  0%         ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-26 14:04 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, Thomas Monjalon

On Tue, Jan 26, 2021 at 12:15 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Mon, Jan 25, 2021 at 04:51:19PM +0100, David Marchand wrote:
> > On Mon, Jan 25, 2021 at 3:11 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > As a general principle, each header file should include any other
> > > headers it needs to provide data type definitions or macros. For
> > > example, any header using the uintX_t types in structures or function
> > > prototypes should include "stdint.h" to provide those type definitions.
> > >
> > > In practice, while many, but not all, headers in DPDK did include all
> > > necessary headers, it was never actually checked that each header could
> > > be included in a C file and compiled without having any compiler errors
> > > about missing definitions.  The script "check-includes.sh" could be used
> > > for this job, but it was not called out in the documentation, so many
> > > contributors may not have been aware of its existence. It also was
> > > difficult to run from a source-code directory, as the script did not
> > > automatically allow finding of headers from one DPDK library directory
> > > to another [this was probably based on running it on a build created by
> > > the "make" build system, where all headers were in a single directory].
> > > To attempt to have a build-system integrated replacement, this patchset
> > > adds a "chkincs" app in the buildtools directory to verify this on an
> > > ongoing basis.
> > >
> > > This chkincs app does nothing when run, and is not installed as part of
> > > a DPDK "ninja install", it's for build-time checking only. Its source
> > > code consists of one C file per public DPDK header, where that C file
> > > contains nothing except an include for that header.  Therefore, if any
> > > header is added to the lib folder which fails to compile when included
> > > alone, the build of chkincs will fail with a suitable error message.
> > > Since this compile checking is not needed on most builds of DPDK, the
> > > building of chkincs is disabled by default, but can be enabled by the
> > > "test_includes" meson option. To catch errors with patch submissions,
> > > the final patch of this series enables it for a single build in
> > > test-meson-builds script.
> > >
> > > Future work could involve doing similar checks on headers for C++
> > > compatibility, which was something done by the check-includes.sh script
> > > but which is missing here..
> > >
> > > V3:
> > > * Shrunk patchset as most header fixes already applied
> > > * Moved chkincs from "apps" to the "buildtools" directory, which is a
> > >   better location for something not for installation for end-user use.
> > > * Added patch to drop check-includes script.
> > >
> > > V2:
> > > * Add maintainers file entry for new app
> > > * Drop patch for c11 ring header
> > > * Use build variable "headers_no_chkincs" for tracking exceptions
> > >
> > > Bruce Richardson (4):
> > >   eal: add missing include to mcslock
> > >   build: separate out headers for include checking
> > >   buildtools/chkincs: add app to verify header includes
> > >   devtools: remove check-includes script
> > >
> > >  MAINTAINERS                                  |   5 +-
> > >  buildtools/chkincs/gen_c_file_for_header.py  |  12 +
> > >  buildtools/chkincs/main.c                    |   4 +
> > >  buildtools/chkincs/meson.build               |  40 +++
> > >  devtools/check-includes.sh                   | 259 -------------------
> > >  devtools/test-meson-builds.sh                |   2 +-
> > >  doc/guides/contributing/coding_style.rst     |  12 +
> > >  lib/librte_eal/include/generic/rte_mcslock.h |   1 +
> > >  lib/librte_eal/include/meson.build           |   2 +-
> > >  lib/librte_eal/x86/include/meson.build       |  14 +-
> > >  lib/librte_ethdev/meson.build                |   4 +-
> > >  lib/librte_hash/meson.build                  |   4 +-
> > >  lib/librte_ipsec/meson.build                 |   3 +-
> > >  lib/librte_lpm/meson.build                   |   2 +-
> > >  lib/librte_regexdev/meson.build              |   2 +-
> > >  lib/librte_ring/meson.build                  |   4 +-
> > >  lib/librte_stack/meson.build                 |   4 +-
> > >  lib/librte_table/meson.build                 |   7 +-
> > >  lib/meson.build                              |   3 +
> > >  meson.build                                  |   6 +
> > >  meson_options.txt                            |   2 +
> > >  21 files changed, 112 insertions(+), 280 deletions(-)
> > >  create mode 100755 buildtools/chkincs/gen_c_file_for_header.py
> > >  create mode 100644 buildtools/chkincs/main.c
> > >  create mode 100644 buildtools/chkincs/meson.build
> > >  delete mode 100755 devtools/check-includes.sh
> >
> > - clang is not happy when enabling the check:
> > $ meson configure $HOME/builds/build-clang-static -Dcheck_includes=true
> > $ devtools/test-meson-builds.sh
> > ...
> > [362/464] Compiling C object
> > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> > FAILED: buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> > clang -Ibuildtools/chkincs/chkincs.p -Ibuildtools/chkincs
> > -I../../dpdk/buildtools/chkincs -Idrivers/bus/pci
> > -I../../dpdk/drivers/bus/pci -Idrivers/bus/vdev
> > -I../../dpdk/drivers/bus/vdev -I. -I../../dpdk -Iconfig
> > -I../../dpdk/config -Ilib/librte_eal/include
> > -I../../dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include
> > -I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
> > -I../../dpdk/lib/librte_eal/x86/include -Ilib/librte_kvargs
> > -I../../dpdk/lib/librte_kvargs -Ilib/librte_metrics
> > -I../../dpdk/lib/librte_metrics -Ilib/librte_telemetry
> > -I../../dpdk/lib/librte_telemetry -Ilib/librte_eal/common
> > -I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
> > -I../../dpdk/lib/librte_eal -Ilib/librte_ring
> > -I../../dpdk/lib/librte_ring -Ilib/librte_rcu
> > -I../../dpdk/lib/librte_rcu -Ilib/librte_mempool
> > -I../../dpdk/lib/librte_mempool -Ilib/librte_mbuf
> > -I../../dpdk/lib/librte_mbuf -Ilib/librte_net
> > -I../../dpdk/lib/librte_net -Ilib/librte_meter
> > -I../../dpdk/lib/librte_meter -Ilib/librte_ethdev
> > -I../../dpdk/lib/librte_ethdev -Ilib/librte_pci
> > -I../../dpdk/lib/librte_pci -Ilib/librte_cmdline
> > -I../../dpdk/lib/librte_cmdline -Ilib/librte_hash
> > -I../../dpdk/lib/librte_hash -Ilib/librte_timer
> > -I../../dpdk/lib/librte_timer -Ilib/librte_acl
> > -I../../dpdk/lib/librte_acl -Ilib/librte_bbdev
> > -I../../dpdk/lib/librte_bbdev -Ilib/librte_bitratestats
> > -I../../dpdk/lib/librte_bitratestats -Ilib/librte_cfgfile
> > -I../../dpdk/lib/librte_cfgfile -Ilib/librte_compressdev
> > -I../../dpdk/lib/librte_compressdev -Ilib/librte_cryptodev
> > -I../../dpdk/lib/librte_cryptodev -Ilib/librte_distributor
> > -I../../dpdk/lib/librte_distributor -Ilib/librte_efd
> > -I../../dpdk/lib/librte_efd -Ilib/librte_eventdev
> > -I../../dpdk/lib/librte_eventdev -Ilib/librte_gro
> > -I../../dpdk/lib/librte_gro -Ilib/librte_gso
> > -I../../dpdk/lib/librte_gso -Ilib/librte_ip_frag
> > -I../../dpdk/lib/librte_ip_frag -Ilib/librte_jobstats
> > -I../../dpdk/lib/librte_jobstats -Ilib/librte_kni
> > -I../../dpdk/lib/librte_kni -Ilib/librte_latencystats
> > -I../../dpdk/lib/librte_latencystats -Ilib/librte_lpm
> > -I../../dpdk/lib/librte_lpm -Ilib/librte_member
> > -I../../dpdk/lib/librte_member -Ilib/librte_power
> > -I../../dpdk/lib/librte_power -Ilib/librte_pdump
> > -I../../dpdk/lib/librte_pdump -Ilib/librte_rawdev
> > -I../../dpdk/lib/librte_rawdev -Ilib/librte_regexdev
> > -I../../dpdk/lib/librte_regexdev -Ilib/librte_rib
> > -I../../dpdk/lib/librte_rib -Ilib/librte_reorder
> > -I../../dpdk/lib/librte_reorder -Ilib/librte_sched
> > -I../../dpdk/lib/librte_sched -Ilib/librte_security
> > -I../../dpdk/lib/librte_security -Ilib/librte_stack
> > -I../../dpdk/lib/librte_stack -Ilib/librte_vhost
> > -I../../dpdk/lib/librte_vhost -Ilib/librte_ipsec
> > -I../../dpdk/lib/librte_ipsec -Ilib/librte_fib
> > -I../../dpdk/lib/librte_fib -Ilib/librte_port
> > -I../../dpdk/lib/librte_port -Ilib/librte_table
> > -I../../dpdk/lib/librte_table -Ilib/librte_pipeline
> > -I../../dpdk/lib/librte_pipeline -Ilib/librte_flow_classify
> > -I../../dpdk/lib/librte_flow_classify -Ilib/librte_bpf
> > -I../../dpdk/lib/librte_bpf -Ilib/librte_graph
> > -I../../dpdk/lib/librte_graph -Ilib/librte_node
> > -I../../dpdk/lib/librte_node
> > -I/home/dmarchan/intel-ipsec-mb/install/include -Xclang
> > -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
> > -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
> > -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
> > -Wmissing-prototypes -Wnested-externs -Wold-style-definition
> > -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
> > -Wwrite-strings -Wno-address-of-packed-member
> > -Wno-missing-field-initializers -D_GNU_SOURCE -march=native
> > -Wno-unused-function -DALLOW_EXPERIMENTAL_API -MD -MQ
> > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -MF
> > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o.d -o
> > buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -c
> > buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c
> > In file included from buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c:1:
> > In file included from
> > /home/dmarchan/dpdk/lib/librte_ethdev/rte_ethdev_vdev.h:12:
> > ../../dpdk/lib/librte_ethdev/rte_ethdev_driver.h:964:1: error: unknown
> > attribute 'error' ignored [-Werror,-Wunknown-attributes]
> > __rte_internal
> > ^
> > ../../dpdk/lib/librte_eal/include/rte_compat.h:25:16: note: expanded
> > from macro '__rte_internal'
> > __attribute__((error("Symbol is not public ABI"), \
> >                ^
> >
>
> This looks to be a real issue with our header file - clang does not have an
> "error" attribute. The closest equivalent I can see is "diagnose_if".

Indeed, it does trigger a build error, so it works as expected ;-).


On the header check itself, even if we find a way to properly tag
those symbols with the macro in rte_compat.h, the next issue is that
clang complains about such marked symbols without the
ALLOW_INTERNAL_API build flag.

FAILED: buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o
clang -Ibuildtools/chkincs/chkincs.p -Ibuildtools/chkincs
-I../../dpdk/buildtools/chkincs -Idrivers/bus/pci
-I../../dpdk/drivers/bus/pci -Idrivers/bus/vdev
-I../../dpdk/drivers/bus/vdev -I. -I../../dpdk -Iconfig
-I../../dpdk/config -Ilib/librte_eal/include
-I../../dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include
-I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
-I../../dpdk/lib/librte_eal/x86/include -Ilib/librte_kvargs
-I../../dpdk/lib/librte_kvargs -Ilib/librte_metrics
-I../../dpdk/lib/librte_metrics -Ilib/librte_telemetry
-I../../dpdk/lib/librte_telemetry -Ilib/librte_eal/common
-I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
-I../../dpdk/lib/librte_eal -Ilib/librte_ring
-I../../dpdk/lib/librte_ring -Ilib/librte_rcu
-I../../dpdk/lib/librte_rcu -Ilib/librte_mempool
-I../../dpdk/lib/librte_mempool -Ilib/librte_mbuf
-I../../dpdk/lib/librte_mbuf -Ilib/librte_net
-I../../dpdk/lib/librte_net -Ilib/librte_meter
-I../../dpdk/lib/librte_meter -Ilib/librte_ethdev
-I../../dpdk/lib/librte_ethdev -Ilib/librte_pci
-I../../dpdk/lib/librte_pci -Ilib/librte_cmdline
-I../../dpdk/lib/librte_cmdline -Ilib/librte_hash
-I../../dpdk/lib/librte_hash -Ilib/librte_timer
-I../../dpdk/lib/librte_timer -Ilib/librte_acl
-I../../dpdk/lib/librte_acl -Ilib/librte_bbdev
-I../../dpdk/lib/librte_bbdev -Ilib/librte_bitratestats
-I../../dpdk/lib/librte_bitratestats -Ilib/librte_cfgfile
-I../../dpdk/lib/librte_cfgfile -Ilib/librte_compressdev
-I../../dpdk/lib/librte_compressdev -Ilib/librte_cryptodev
-I../../dpdk/lib/librte_cryptodev -Ilib/librte_distributor
-I../../dpdk/lib/librte_distributor -Ilib/librte_efd
-I../../dpdk/lib/librte_efd -Ilib/librte_eventdev
-I../../dpdk/lib/librte_eventdev -Ilib/librte_gro
-I../../dpdk/lib/librte_gro -Ilib/librte_gso
-I../../dpdk/lib/librte_gso -Ilib/librte_ip_frag
-I../../dpdk/lib/librte_ip_frag -Ilib/librte_jobstats
-I../../dpdk/lib/librte_jobstats -Ilib/librte_kni
-I../../dpdk/lib/librte_kni -Ilib/librte_latencystats
-I../../dpdk/lib/librte_latencystats -Ilib/librte_lpm
-I../../dpdk/lib/librte_lpm -Ilib/librte_member
-I../../dpdk/lib/librte_member -Ilib/librte_power
-I../../dpdk/lib/librte_power -Ilib/librte_pdump
-I../../dpdk/lib/librte_pdump -Ilib/librte_rawdev
-I../../dpdk/lib/librte_rawdev -Ilib/librte_regexdev
-I../../dpdk/lib/librte_regexdev -Ilib/librte_rib
-I../../dpdk/lib/librte_rib -Ilib/librte_reorder
-I../../dpdk/lib/librte_reorder -Ilib/librte_sched
-I../../dpdk/lib/librte_sched -Ilib/librte_security
-I../../dpdk/lib/librte_security -Ilib/librte_stack
-I../../dpdk/lib/librte_stack -Ilib/librte_vhost
-I../../dpdk/lib/librte_vhost -Ilib/librte_ipsec
-I../../dpdk/lib/librte_ipsec -Ilib/librte_fib
-I../../dpdk/lib/librte_fib -Ilib/librte_port
-I../../dpdk/lib/librte_port -Ilib/librte_table
-I../../dpdk/lib/librte_table -Ilib/librte_pipeline
-I../../dpdk/lib/librte_pipeline -Ilib/librte_flow_classify
-I../../dpdk/lib/librte_flow_classify -Ilib/librte_bpf
-I../../dpdk/lib/librte_bpf -Ilib/librte_graph
-I../../dpdk/lib/librte_graph -Ilib/librte_node
-I../../dpdk/lib/librte_node
-I/home/dmarchan/intel-ipsec-mb/install/include -Xclang
-fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
-Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
-Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
-Wmissing-prototypes -Wnested-externs -Wold-style-definition
-Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
-Wwrite-strings -Wno-address-of-packed-member
-Wno-missing-field-initializers -D_GNU_SOURCE -march=native
-Wno-unused-function -DALLOW_EXPERIMENTAL_API -MD -MQ
buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o -MF
buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o.d -o
buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_pci.c.o -c
buildtools/chkincs/chkincs.p/rte_ethdev_pci.c
In file included from buildtools/chkincs/chkincs.p/rte_ethdev_pci.c:1:
/home/dmarchan/dpdk/lib/librte_ethdev/rte_ethdev_pci.h:86:13: error:
Symbol is not public ABI
                eth_dev = rte_eth_dev_allocate(name);
                          ^
../../dpdk/lib/librte_ethdev/rte_ethdev_driver.h:1003:1: note: from
'diagnose_if' attribute on 'rte_eth_dev_allocate':
__rte_internal
^~~~~~~~~~~~~~
../../dpdk/lib/librte_eal/include/rte_compat.h:30:16: note: expanded
from macro '__rte_internal'
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
               ^           ~
[...]


gcc seems more lenient about this.



> Therefore, I'd suggest we need to change compat.h to be something like:
>
>   #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
>
>   #define __rte_internal \
>   __attribute__((error("Symbol is not public ABI"), \
>   section(".text.internal")))
>
>   #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
>
>   #define __rte_internal \
>   __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
>   section(".text.internal")))
>
>   #else
>
>   #define __rte_internal \
>   __attribute__((section(".text.internal")))
>
>   #endif
>
> Any thoughts or suggestions for better alternatives here?

I'd rather leave a build error on an unknown attribute than silence
this check (which happens in your snippet, where it falls back to the
#else part).

Did you consider using deprecated(), like for the experimental tag?
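
Something along those lines could work as a rough sketch (mirroring how
the experimental tag is handled; the exact guards and message strings
below are my assumptions, not tested code):

  #ifndef ALLOW_INTERNAL_API

  /* deprecated() is understood by both GCC and clang, but only emits a
   * warning unless promoted, e.g. with -Werror=deprecated-declarations. */
  #define __rte_internal \
  __attribute__((deprecated("Symbol is not public ABI"), \
  section(".text.internal")))

  #else

  #define __rte_internal \
  __attribute__((section(".text.internal")))

  #endif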


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement
  2021-01-26 12:50  0%   ` David Marchand
@ 2021-01-26 13:23  0%     ` Kinsella, Ray
  2021-01-26 14:40  4%       ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-01-26 13:23 UTC (permalink / raw)
  To: David Marchand, Maxime Coquelin
  Cc: dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata



On 26/01/2021 12:50, David Marchand wrote:
> On Tue, Jan 26, 2021 at 11:16 AM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>>
>> This patch adds a driver flag to the vdev bus driver so
>> that vdev drivers can require VA IOVA mode to be used,
>> which is for example the case of the Virtio-user PMD.
>>
>> The patch implements the .get_iommu_class() callback, which
>> is called before device probing to determine the IOVA mode
>> to be used, and adds a check right before the device is
>> probed to ensure a compatible IOVA mode has been selected.
>>
>> It also adds an ABI exception rule to accommodate an
>> update to the driver registration API.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Signed-off-by: David Marchand <david.marchand@redhat.com>
>> ---
>>  devtools/libabigail.abignore    |  2 ++
>>  drivers/bus/vdev/rte_bus_vdev.h |  4 ++++
>>  drivers/bus/vdev/vdev.c         | 29 +++++++++++++++++++++++++++++
>>  3 files changed, 35 insertions(+)
>>
>> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
>> index 1dc84fa74b..170304c876 100644
>> --- a/devtools/libabigail.abignore
>> +++ b/devtools/libabigail.abignore
>> @@ -11,6 +11,8 @@
>>  ; Explicit ignore for driver-only ABI
>>  [suppress_type]
>>          name = eth_dev_ops
>> +[suppress_function]
>> +        name_regexp = rte_vdev_(|un)register
>>
>>  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
>>  [suppress_type]
> 
> Ray,
> Are you okay with this exception?

To ask a perhaps silly question,
shouldn't rte_vdev_register & rte_vdev_unregister have been INTERNAL in any case?
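e.g. something like (rough sketch; prototypes taken from rte_bus_vdev.h,
the tag placement is just my assumption):

  /* Sketch: tagging the vdev driver registration API as internal-only,
   * so callers outside DPDK hit the "Symbol is not public ABI" error
   * unless they build with ALLOW_INTERNAL_API. */
  __rte_internal
  void rte_vdev_register(struct rte_vdev_driver *driver);

  __rte_internal
  void rte_vdev_unregister(struct rte_vdev_driver *driver);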

> Thanks.
> 

Ray K

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement
  2021-01-26 10:15  7% ` [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement Maxime Coquelin
  2021-01-26 11:50  0%   ` Xia, Chenbo
@ 2021-01-26 12:50  0%   ` David Marchand
  2021-01-26 13:23  0%     ` Kinsella, Ray
  2021-01-27  8:23  0%   ` David Marchand
  2 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-26 12:50 UTC (permalink / raw)
  To: Maxime Coquelin, Ray Kinsella
  Cc: dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata

On Tue, Jan 26, 2021 at 11:16 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch adds a driver flag to the vdev bus driver so
> that vdev drivers can require VA IOVA mode to be used,
> which is for example the case of the Virtio-user PMD.
>
> The patch implements the .get_iommu_class() callback, which
> is called before device probing to determine the IOVA mode
> to be used, and adds a check right before the device is
> probed to ensure a compatible IOVA mode has been selected.
>
> It also adds an ABI exception rule to accommodate an
> update to the driver registration API.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
>  devtools/libabigail.abignore    |  2 ++
>  drivers/bus/vdev/rte_bus_vdev.h |  4 ++++
>  drivers/bus/vdev/vdev.c         | 29 +++++++++++++++++++++++++++++
>  3 files changed, 35 insertions(+)
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 1dc84fa74b..170304c876 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -11,6 +11,8 @@
>  ; Explicit ignore for driver-only ABI
>  [suppress_type]
>          name = eth_dev_ops
> +[suppress_function]
> +        name_regexp = rte_vdev_(|un)register
>
>  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
>  [suppress_type]

Ray,
Are you okay with this exception?

Thanks.

-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev
  2021-01-20 14:25  4% [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev Ray Kinsella
  2021-01-20 15:41  7% ` Thomas Monjalon
@ 2021-01-26 11:55  8% ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-01-26 11:55 UTC (permalink / raw)
  To: Ray Kinsella
  Cc: Neil Horman, Akhil Goyal, Konstantin Ananyev, Abhinandan Gujjar,
	dev, david.marchand

20/01/2021 15:25, Ray Kinsella:
> Update the ignore entry for cryptodev to use named fields instead of
> bit positions.
> 
> Fixes: 1c3ffb9559
> 
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -15,4 +15,4 @@
>  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
>  [suppress_type]
>          name = rte_cryptodev
> -        has_data_member_inserted_between = {0, 1023}
> +        has_data_member_inserted_between = {offset_after(attached), end}

Adding a bit more explanation in the commit message:

It allows changes between the last field (attached) in ABI 21.0,
and the end of the padded struct in ABI 21.

Fixes: 1c3ffb95595e ("cryptodev: add enqueue and dequeue callbacks")

Acked-by: Thomas Monjalon <thomas@monjalon.net>

Applied, thanks.



^ permalink raw reply	[relevance 8%]

* Re: [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement
  2021-01-26 10:15  7% ` [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement Maxime Coquelin
@ 2021-01-26 11:50  0%   ` Xia, Chenbo
  2021-01-26 12:50  0%   ` David Marchand
  2021-01-27  8:23  0%   ` David Marchand
  2 siblings, 0 replies; 200+ results
From: Xia, Chenbo @ 2021-01-26 11:50 UTC (permalink / raw)
  To: Maxime Coquelin, dev, olivier.matz, amorenoz, david.marchand

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Tuesday, January 26, 2021 6:16 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>; olivier.matz@6wind.com;
> amorenoz@redhat.com; david.marchand@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement
> 
> This patch adds a driver flag to the vdev bus driver so
> that vdev drivers can require VA IOVA mode to be used,
> which is for example the case of the Virtio-user PMD.
>
> The patch implements the .get_iommu_class() callback, which
> is called before device probing to determine the IOVA mode
> to be used, and adds a check right before the device is
> probed to ensure a compatible IOVA mode has been selected.
>
> It also adds an ABI exception rule to accommodate an
> update to the driver registration API.
> 
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
>  devtools/libabigail.abignore    |  2 ++
>  drivers/bus/vdev/rte_bus_vdev.h |  4 ++++
>  drivers/bus/vdev/vdev.c         | 29 +++++++++++++++++++++++++++++
>  3 files changed, 35 insertions(+)
> 
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 1dc84fa74b..170304c876 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -11,6 +11,8 @@
>  ; Explicit ignore for driver-only ABI
>  [suppress_type]
>          name = eth_dev_ops
> +[suppress_function]
> +        name_regexp = rte_vdev_(|un)register
> 
>  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
>  [suppress_type]
> diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
> index f99a41f825..fc315d10fa 100644
> --- a/drivers/bus/vdev/rte_bus_vdev.h
> +++ b/drivers/bus/vdev/rte_bus_vdev.h
> @@ -113,8 +113,12 @@ struct rte_vdev_driver {
>  	rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
>  	rte_vdev_dma_map_t *dma_map;     /**< Virtual device DMA map function. */
>  	rte_vdev_dma_unmap_t *dma_unmap; /**< Virtual device DMA unmap function. */
> +	uint32_t drv_flags;              /**< Flags RTE_VDEV_DRV_*. */
>  };
> 
> +/** Device driver needs IOVA as VA and cannot work with IOVA as PA */
> +#define RTE_VDEV_DRV_NEED_IOVA_AS_VA 0x0001
> +
>  /**
>   * Register a virtual device driver.
>   *
> diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
> index acfd78828f..9a673347ae 100644
> --- a/drivers/bus/vdev/vdev.c
> +++ b/drivers/bus/vdev/vdev.c
> @@ -189,6 +189,7 @@ vdev_probe_all_drivers(struct rte_vdev_device *dev)
>  {
>  	const char *name;
>  	struct rte_vdev_driver *driver;
> +	enum rte_iova_mode iova_mode;
>  	int ret;
> 
>  	if (rte_dev_is_probed(&dev->device))
> @@ -199,6 +200,14 @@ vdev_probe_all_drivers(struct rte_vdev_device *dev)
> 
>  	if (vdev_parse(name, &driver))
>  		return -1;
> +
> +	iova_mode = rte_eal_iova_mode();
> +	if ((driver->drv_flags & RTE_VDEV_DRV_NEED_IOVA_AS_VA) && (iova_mode == RTE_IOVA_PA)) {
> +		VDEV_LOG(ERR, "%s requires VA IOVA mode but current mode is PA, not initializing",
> +				name);
> +		return -1;
> +	}
> +
>  	ret = driver->probe(dev);
>  	if (ret == 0)
>  		dev->device.driver = &driver->driver;
> @@ -594,6 +603,25 @@ vdev_unplug(struct rte_device *dev)
>  	return rte_vdev_uninit(dev->name);
>  }
> 
> +static enum rte_iova_mode
> +vdev_get_iommu_class(void)
> +{
> +	const char *name;
> +	struct rte_vdev_device *dev;
> +	struct rte_vdev_driver *driver;
> +
> +	TAILQ_FOREACH(dev, &vdev_device_list, next) {
> +		name = rte_vdev_device_name(dev);
> +		if (vdev_parse(name, &driver))
> +			continue;
> +
> +		if (driver->drv_flags & RTE_VDEV_DRV_NEED_IOVA_AS_VA)
> +			return RTE_IOVA_VA;
> +	}
> +
> +	return RTE_IOVA_DC;
> +}
> +
>  static struct rte_bus rte_vdev_bus = {
>  	.scan = vdev_scan,
>  	.probe = vdev_probe,
> @@ -603,6 +631,7 @@ static struct rte_bus rte_vdev_bus = {
>  	.parse = vdev_parse,
>  	.dma_map = vdev_dma_map,
>  	.dma_unmap = vdev_dma_unmap,
> +	.get_iommu_class = vdev_get_iommu_class,
>  	.dev_iterate = rte_vdev_dev_iterate,
>  };
> 
> --
> 2.29.2

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 0/4] add checking of header includes
  2021-01-25 15:51  2%   ` David Marchand
  2021-01-25 18:17  0%     ` Bruce Richardson
@ 2021-01-26 11:15  4%     ` Bruce Richardson
  2021-01-26 14:04  3%       ` David Marchand
  1 sibling, 1 reply; 200+ results
From: Bruce Richardson @ 2021-01-26 11:15 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Thomas Monjalon

On Mon, Jan 25, 2021 at 04:51:19PM +0100, David Marchand wrote:
> On Mon, Jan 25, 2021 at 3:11 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > As a general principle, each header file should include any other
> > headers it needs to provide data type definitions or macros. For
> > example, any header using the uintX_t types in structures or function
> > prototypes should include "stdint.h" to provide those type definitions.
> >
> > In practice, while many, but not all, headers in DPDK did include all
> > necessary headers, it was never actually checked that each header could
> > be included in a C file and compiled without having any compiler errors
> > about missing definitions.  The script "check-includes.sh" could be used
> > for this job, but it was not called out in the documentation, so many
> > contributors may not have been aware of its existence. It also was
> > difficult to run from a source-code directory, as the script did not
> > automatically allow finding of headers from one DPDK library directory
> > to another [this was probably based on running it on a build created by
> > the "make" build system, where all headers were in a single directory].
> > To attempt to have a build-system integrated replacement, this patchset
> > adds a "chkincs" app in the buildtools directory to verify this on an
> > ongoing basis.
> >
> > This chkincs app does nothing when run, and is not installed as part of
> > a DPDK "ninja install", it's for build-time checking only. Its source
> > code consists of one C file per public DPDK header, where that C file
> > contains nothing except an include for that header.  Therefore, if any
> > header is added to the lib folder which fails to compile when included
> > alone, the build of chkincs will fail with a suitable error message.
> > Since this compile checking is not needed on most builds of DPDK, the
> > building of chkincs is disabled by default, but can be enabled by the
> > "test_includes" meson option. To catch errors with patch submissions,
> > the final patch of this series enables it for a single build in
> > test-meson-builds script.
> >
> > Future work could involve doing similar checks on headers for C++
> > compatibility, which was something done by the check-includes.sh script
> > but which is missing here..
> >
> > V3:
> > * Shrunk patchset as most header fixes already applied
> > * Moved chkincs from "apps" to the "buildtools" directory, which is a
> >   better location for something not for installation for end-user use.
> > * Added patch to drop check-includes script.
> >
> > V2:
> > * Add maintainers file entry for new app
> > * Drop patch for c11 ring header
> > * Use build variable "headers_no_chkincs" for tracking exceptions
> >
> > Bruce Richardson (4):
> >   eal: add missing include to mcslock
> >   build: separate out headers for include checking
> >   buildtools/chkincs: add app to verify header includes
> >   devtools: remove check-includes script
> >
> >  MAINTAINERS                                  |   5 +-
> >  buildtools/chkincs/gen_c_file_for_header.py  |  12 +
> >  buildtools/chkincs/main.c                    |   4 +
> >  buildtools/chkincs/meson.build               |  40 +++
> >  devtools/check-includes.sh                   | 259 -------------------
> >  devtools/test-meson-builds.sh                |   2 +-
> >  doc/guides/contributing/coding_style.rst     |  12 +
> >  lib/librte_eal/include/generic/rte_mcslock.h |   1 +
> >  lib/librte_eal/include/meson.build           |   2 +-
> >  lib/librte_eal/x86/include/meson.build       |  14 +-
> >  lib/librte_ethdev/meson.build                |   4 +-
> >  lib/librte_hash/meson.build                  |   4 +-
> >  lib/librte_ipsec/meson.build                 |   3 +-
> >  lib/librte_lpm/meson.build                   |   2 +-
> >  lib/librte_regexdev/meson.build              |   2 +-
> >  lib/librte_ring/meson.build                  |   4 +-
> >  lib/librte_stack/meson.build                 |   4 +-
> >  lib/librte_table/meson.build                 |   7 +-
> >  lib/meson.build                              |   3 +
> >  meson.build                                  |   6 +
> >  meson_options.txt                            |   2 +
> >  21 files changed, 112 insertions(+), 280 deletions(-)
> >  create mode 100755 buildtools/chkincs/gen_c_file_for_header.py
> >  create mode 100644 buildtools/chkincs/main.c
> >  create mode 100644 buildtools/chkincs/meson.build
> >  delete mode 100755 devtools/check-includes.sh
> 
> - clang is not happy when enabling the check:
> $ meson configure $HOME/builds/build-clang-static -Dcheck_includes=true
> $ devtools/test-meson-builds.sh
> ...
> [362/464] Compiling C object
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> FAILED: buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> clang -Ibuildtools/chkincs/chkincs.p -Ibuildtools/chkincs
> -I../../dpdk/buildtools/chkincs -Idrivers/bus/pci
> -I../../dpdk/drivers/bus/pci -Idrivers/bus/vdev
> -I../../dpdk/drivers/bus/vdev -I. -I../../dpdk -Iconfig
> -I../../dpdk/config -Ilib/librte_eal/include
> -I../../dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include
> -I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
> -I../../dpdk/lib/librte_eal/x86/include -Ilib/librte_kvargs
> -I../../dpdk/lib/librte_kvargs -Ilib/librte_metrics
> -I../../dpdk/lib/librte_metrics -Ilib/librte_telemetry
> -I../../dpdk/lib/librte_telemetry -Ilib/librte_eal/common
> -I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
> -I../../dpdk/lib/librte_eal -Ilib/librte_ring
> -I../../dpdk/lib/librte_ring -Ilib/librte_rcu
> -I../../dpdk/lib/librte_rcu -Ilib/librte_mempool
> -I../../dpdk/lib/librte_mempool -Ilib/librte_mbuf
> -I../../dpdk/lib/librte_mbuf -Ilib/librte_net
> -I../../dpdk/lib/librte_net -Ilib/librte_meter
> -I../../dpdk/lib/librte_meter -Ilib/librte_ethdev
> -I../../dpdk/lib/librte_ethdev -Ilib/librte_pci
> -I../../dpdk/lib/librte_pci -Ilib/librte_cmdline
> -I../../dpdk/lib/librte_cmdline -Ilib/librte_hash
> -I../../dpdk/lib/librte_hash -Ilib/librte_timer
> -I../../dpdk/lib/librte_timer -Ilib/librte_acl
> -I../../dpdk/lib/librte_acl -Ilib/librte_bbdev
> -I../../dpdk/lib/librte_bbdev -Ilib/librte_bitratestats
> -I../../dpdk/lib/librte_bitratestats -Ilib/librte_cfgfile
> -I../../dpdk/lib/librte_cfgfile -Ilib/librte_compressdev
> -I../../dpdk/lib/librte_compressdev -Ilib/librte_cryptodev
> -I../../dpdk/lib/librte_cryptodev -Ilib/librte_distributor
> -I../../dpdk/lib/librte_distributor -Ilib/librte_efd
> -I../../dpdk/lib/librte_efd -Ilib/librte_eventdev
> -I../../dpdk/lib/librte_eventdev -Ilib/librte_gro
> -I../../dpdk/lib/librte_gro -Ilib/librte_gso
> -I../../dpdk/lib/librte_gso -Ilib/librte_ip_frag
> -I../../dpdk/lib/librte_ip_frag -Ilib/librte_jobstats
> -I../../dpdk/lib/librte_jobstats -Ilib/librte_kni
> -I../../dpdk/lib/librte_kni -Ilib/librte_latencystats
> -I../../dpdk/lib/librte_latencystats -Ilib/librte_lpm
> -I../../dpdk/lib/librte_lpm -Ilib/librte_member
> -I../../dpdk/lib/librte_member -Ilib/librte_power
> -I../../dpdk/lib/librte_power -Ilib/librte_pdump
> -I../../dpdk/lib/librte_pdump -Ilib/librte_rawdev
> -I../../dpdk/lib/librte_rawdev -Ilib/librte_regexdev
> -I../../dpdk/lib/librte_regexdev -Ilib/librte_rib
> -I../../dpdk/lib/librte_rib -Ilib/librte_reorder
> -I../../dpdk/lib/librte_reorder -Ilib/librte_sched
> -I../../dpdk/lib/librte_sched -Ilib/librte_security
> -I../../dpdk/lib/librte_security -Ilib/librte_stack
> -I../../dpdk/lib/librte_stack -Ilib/librte_vhost
> -I../../dpdk/lib/librte_vhost -Ilib/librte_ipsec
> -I../../dpdk/lib/librte_ipsec -Ilib/librte_fib
> -I../../dpdk/lib/librte_fib -Ilib/librte_port
> -I../../dpdk/lib/librte_port -Ilib/librte_table
> -I../../dpdk/lib/librte_table -Ilib/librte_pipeline
> -I../../dpdk/lib/librte_pipeline -Ilib/librte_flow_classify
> -I../../dpdk/lib/librte_flow_classify -Ilib/librte_bpf
> -I../../dpdk/lib/librte_bpf -Ilib/librte_graph
> -I../../dpdk/lib/librte_graph -Ilib/librte_node
> -I../../dpdk/lib/librte_node
> -I/home/dmarchan/intel-ipsec-mb/install/include -Xclang
> -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
> -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
> -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
> -Wmissing-prototypes -Wnested-externs -Wold-style-definition
> -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
> -Wwrite-strings -Wno-address-of-packed-member
> -Wno-missing-field-initializers -D_GNU_SOURCE -march=native
> -Wno-unused-function -DALLOW_EXPERIMENTAL_API -MD -MQ
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -MF
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o.d -o
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -c
> buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c
> In file included from buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c:1:
> In file included from
> /home/dmarchan/dpdk/lib/librte_ethdev/rte_ethdev_vdev.h:12:
> ../../dpdk/lib/librte_ethdev/rte_ethdev_driver.h:964:1: error: unknown
> attribute 'error' ignored [-Werror,-Wunknown-attributes]
> __rte_internal
> ^
> ../../dpdk/lib/librte_eal/include/rte_compat.h:25:16: note: expanded
> from macro '__rte_internal'
> __attribute__((error("Symbol is not public ABI"), \
>                ^
> 

This looks to be a real issue with our header file - clang does not have an
"error" attribute. The closest equivalent I can see is "diagnose_if".
Therefore, I'd suggest we need to change compat.h to be something like:

  #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */

  #define __rte_internal \
  __attribute__((error("Symbol is not public ABI"), \
  section(".text.internal")))

  #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */

  #define __rte_internal \
  __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
  section(".text.internal")))

  #else

  #define __rte_internal \
  __attribute__((section(".text.internal")))

  #endif
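
One caveat worth flagging (a standard guard, not part of the snippet
above): __has_attribute itself is not defined by very old compilers, so
it probably needs the usual fallback for the #if lines to preprocess:

  /* Fallback so that "#if __has_attribute(...)" still preprocesses on
   * compilers that predate __has_attribute support. */
  #ifndef __has_attribute
  #define __has_attribute(x) 0
  #endif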

Any thoughts or suggestions for better alternatives here?

/Bruce

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement
  2021-01-26 10:15  3% [dpdk-dev] [PATCH v4 00/44] net/virtio: Virtio PMD rework Maxime Coquelin
@ 2021-01-26 10:15  7% ` Maxime Coquelin
  2021-01-26 11:50  0%   ` Xia, Chenbo
                     ` (2 more replies)
  2021-01-27 11:59  0% ` [dpdk-dev] [PATCH v4 00/44] net/virtio: Virtio PMD rework Maxime Coquelin
  1 sibling, 3 replies; 200+ results
From: Maxime Coquelin @ 2021-01-26 10:15 UTC (permalink / raw)
  To: dev, chenbo.xia, olivier.matz, amorenoz, david.marchand; +Cc: Maxime Coquelin

This patch adds a driver flag to the vdev bus driver so
that vdev drivers can require VA IOVA mode to be used,
which is for example the case of the Virtio-user PMD.

The patch implements the .get_iommu_class() callback, which
is called before device probing to determine the IOVA mode
to be used, and adds a check right before the device is
probed to ensure a compatible IOVA mode has been selected.

It also adds an ABI exception rule to accommodate an
update to the driver registration API.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 devtools/libabigail.abignore    |  2 ++
 drivers/bus/vdev/rte_bus_vdev.h |  4 ++++
 drivers/bus/vdev/vdev.c         | 29 +++++++++++++++++++++++++++++
 3 files changed, 35 insertions(+)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 1dc84fa74b..170304c876 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -11,6 +11,8 @@
 ; Explicit ignore for driver-only ABI
 [suppress_type]
         name = eth_dev_ops
+[suppress_function]
+        name_regexp = rte_vdev_(|un)register
 
 ; Ignore fields inserted in cacheline boundary of rte_cryptodev
 [suppress_type]
diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
index f99a41f825..fc315d10fa 100644
--- a/drivers/bus/vdev/rte_bus_vdev.h
+++ b/drivers/bus/vdev/rte_bus_vdev.h
@@ -113,8 +113,12 @@ struct rte_vdev_driver {
 	rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
 	rte_vdev_dma_map_t *dma_map;     /**< Virtual device DMA map function. */
 	rte_vdev_dma_unmap_t *dma_unmap; /**< Virtual device DMA unmap function. */
+	uint32_t drv_flags;              /**< Flags RTE_VDEV_DRV_*. */
 };
 
+/** Device driver needs IOVA as VA and cannot work with IOVA as PA */
+#define RTE_VDEV_DRV_NEED_IOVA_AS_VA 0x0001
+
 /**
  * Register a virtual device driver.
  *
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index acfd78828f..9a673347ae 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -189,6 +189,7 @@ vdev_probe_all_drivers(struct rte_vdev_device *dev)
 {
 	const char *name;
 	struct rte_vdev_driver *driver;
+	enum rte_iova_mode iova_mode;
 	int ret;
 
 	if (rte_dev_is_probed(&dev->device))
@@ -199,6 +200,14 @@ vdev_probe_all_drivers(struct rte_vdev_device *dev)
 
 	if (vdev_parse(name, &driver))
 		return -1;
+
+	iova_mode = rte_eal_iova_mode();
+	if ((driver->drv_flags & RTE_VDEV_DRV_NEED_IOVA_AS_VA) && (iova_mode == RTE_IOVA_PA)) {
+		VDEV_LOG(ERR, "%s requires VA IOVA mode but current mode is PA, not initializing",
+				name);
+		return -1;
+	}
+
 	ret = driver->probe(dev);
 	if (ret == 0)
 		dev->device.driver = &driver->driver;
@@ -594,6 +603,25 @@ vdev_unplug(struct rte_device *dev)
 	return rte_vdev_uninit(dev->name);
 }
 
+static enum rte_iova_mode
+vdev_get_iommu_class(void)
+{
+	const char *name;
+	struct rte_vdev_device *dev;
+	struct rte_vdev_driver *driver;
+
+	TAILQ_FOREACH(dev, &vdev_device_list, next) {
+		name = rte_vdev_device_name(dev);
+		if (vdev_parse(name, &driver))
+			continue;
+
+		if (driver->drv_flags & RTE_VDEV_DRV_NEED_IOVA_AS_VA)
+			return RTE_IOVA_VA;
+	}
+
+	return RTE_IOVA_DC;
+}
+
 static struct rte_bus rte_vdev_bus = {
 	.scan = vdev_scan,
 	.probe = vdev_probe,
@@ -603,6 +631,7 @@ static struct rte_bus rte_vdev_bus = {
 	.parse = vdev_parse,
 	.dma_map = vdev_dma_map,
 	.dma_unmap = vdev_dma_unmap,
+	.get_iommu_class = vdev_get_iommu_class,
 	.dev_iterate = rte_vdev_dev_iterate,
 };
 
-- 
2.29.2


^ permalink raw reply	[relevance 7%]

* [dpdk-dev] [PATCH v4 00/44] net/virtio: Virtio PMD rework
@ 2021-01-26 10:15  3% Maxime Coquelin
  2021-01-26 10:15  7% ` [dpdk-dev] [PATCH v4 02/44] bus/vdev: add driver IOVA VA mode requirement Maxime Coquelin
  2021-01-27 11:59  0% ` [dpdk-dev] [PATCH v4 00/44] net/virtio: Virtio PMD rework Maxime Coquelin
  0 siblings, 2 replies; 200+ results
From: Maxime Coquelin @ 2021-01-26 10:15 UTC (permalink / raw)
  To: dev, chenbo.xia, olivier.matz, amorenoz, david.marchand; +Cc: Maxime Coquelin

This V4 fixes comments from Chenbo on patch 44 and
implements the ABI exception in patch 2.

This series significantly reworks the Virtio PMD to improve
the integration of the Virtio-user PMD and its backends.

The first part of the series removes the dependency of the
Virtio-user ethdev on Virtio PCI, by creating generic
files, adding per-bus metadata, ...

The main (if not only) functional change of this first
part is to remove the hack for Virtio-user to work in
IOVA as PA mode, this hack being very fragile.

The second part of the series reworks the Virtio-user
internals, changing the request handling so that the vDPA
and Kernel backends no longer hack into being the
Vhost-user backend. This implies implementing new ops for
all the request types. Also, all the backend-specific
actions are moved from virtio_user_dev.c and
virtio_user_ethdev.c to their backend files.

The only functional change in this second part is making
the Vhost-user server mode blocking at init time, as long
as a client is not connected. The goal of this change is
to make the Vhost-user support much more robust: without
blocking, the driver has to assume which features are going
to be supported by the client, which is very fragile and
error prone. As a side-effect, it also simplifies the
logic in several places of the Virtio-user PMD.

Main changes in v4:
- Add ABI exception (David)
- Close FDs only up to max_queue_pairs
- virtio_user_dev_uninit_notify() to return void

Main changes in v3:
- Rename .intr_event to .intr_detect
- Rework last patch, properly clean allocated resources
  on failure.
- Rebase on top of latest net-next/main
- Minor typo fixes in comments and log improvements

Main changes in v2:
===================
- Introduce vdev driver flag for drivers to require IOVA VA mode
- Rebase on top of -rc1 changes
- Fix regressions introduced in V1 (vhost-kernel broken, vhost-user reconnect...)
- Various minor issues & typos fixed
- Fix status feature issue introduced in v20.11, only reproducible now that server
  mode is made blocking
- Improve failure handling in Virtio-user
- Improve logging

Testing coverage (All passed)
=============================
- Virtio-pci PMD
 * Virtio PMD in guest with Vhost-user backend in host
 * Virtio PMD in guest with Vhost-kernel backend in host
- Virtio-user PMD with Vhost-user backend
 * Vhost-user PMD server <-> Virtio-user client PMD IO loopback
 * Vhost-user PMD client <-> Virtio-user server PMD IO loopback
 * Vhost-user PMD client <-> Virtio-user server PMD reconnect
- Virtio-user PMD with Vhost-kernel backend
 * iperf test case
 * Txonly testpmd
- Virtio-user PMD with Vhost-vDPA backend
 * vdpa-sim (IO loopback)
 * CX-6 DX Kernel vDPA (Tx only)

Maxime Coquelin (44):
  bus/vdev: add helper to get vdev from ethdev
  bus/vdev: add driver IOVA VA mode requirement
  net/virtio: fix getting old status on reconnect
  net/virtio: introduce Virtio bus type
  net/virtio: refactor virtio-user device
  net/virtio: introduce PCI device metadata
  net/virtio: move PCI device init in dedicated file
  net/virtio: move PCI specific dev init to PCI ethdev init
  net/virtio: move MSIX detection to PCI ethdev
  net/virtio: force IOVA as VA mode for Virtio-user
  net/virtio: store PCI type in Virtio device metadata
  net/virtio: add callback for device closing
  net/virtio: validate features at bus level
  net/virtio: remove bus type enum
  net/virtio: move PCI-specific fields to PCI device
  net/virtio: pack virtio HW struct
  net/virtio: move legacy IO to Virtio PCI
  net/virtio: introduce generic virtio header
  net/virtio: move features definition to generic header
  net/virtio: move virtqueue defines in generic header
  net/virtio: move config definitions to generic header
  net/virtio: make interrupt handling more generic
  net/virtio: move vring alignment to generic header
  net/virtio: remove last PCI refs in non-PCI code
  net/virtio: make Vhost-user request sender consistent
  net/virtio: add Virtio-user ops to set owner
  net/virtio: add Virtio-user features ops
  net/virtio: add Virtio-user protocol features ops
  net/virtio: add Virtio-user memory tables ops
  net/virtio: add Virtio-user vring setting ops
  net/virtio: add Virtio-user vring file ops
  net/virtio: add Virtio-user vring address ops
  net/virtio: add Virtio-user status ops
  net/virtio: remove useless request ops
  net/virtio: improve Virtio-user errors handling
  net/virtio: move Vhost-user requests to Vhost-user backend
  net/virtio: make server mode blocking
  net/virtio: move protocol features to Vhost-user
  net/virtio: introduce backend data
  net/virtio: move Vhost-user specifics to its backend
  net/virtio: move Vhost-kernel data to its backend
  net/virtio: move Vhost-vDPA data to its backend
  net/virtio: improve Vhost-user error logging
  net/virtio: handle Virtio-user setup failure properly

 devtools/libabigail.abignore                  |   2 +
 drivers/bus/vdev/rte_bus_vdev.h               |   6 +
 drivers/bus/vdev/vdev.c                       |  29 +
 drivers/net/virtio/meson.build                |   6 +-
 drivers/net/virtio/virtio.c                   |  71 ++
 drivers/net/virtio/virtio.h                   | 246 +++++
 drivers/net/virtio/virtio_ethdev.c            | 457 +++------
 drivers/net/virtio/virtio_ethdev.h            |   6 +-
 drivers/net/virtio/virtio_pci.c               | 448 +++++----
 drivers/net/virtio/virtio_pci.h               | 286 +-----
 drivers/net/virtio/virtio_pci_ethdev.c        | 226 +++++
 drivers/net/virtio/virtio_ring.h              |   2 +-
 drivers/net/virtio/virtio_rxtx.c              |  90 +-
 drivers/net/virtio/virtio_rxtx_packed.h       |  10 +-
 drivers/net/virtio/virtio_rxtx_packed_avx.h   |  10 +-
 drivers/net/virtio/virtio_rxtx_packed_neon.h  |  10 +-
 drivers/net/virtio/virtio_rxtx_simple.h       |   3 +-
 drivers/net/virtio/virtio_user/vhost.h        |  79 +-
 drivers/net/virtio/virtio_user/vhost_kernel.c | 461 ++++++---
 .../net/virtio/virtio_user/vhost_kernel_tap.c |  25 +-
 .../net/virtio/virtio_user/vhost_kernel_tap.h |   1 +
 drivers/net/virtio/virtio_user/vhost_user.c   | 898 ++++++++++++++----
 drivers/net/virtio/virtio_user/vhost_vdpa.c   | 323 +++++--
 .../net/virtio/virtio_user/virtio_user_dev.c  | 573 ++++++-----
 .../net/virtio/virtio_user/virtio_user_dev.h  |  21 +-
 drivers/net/virtio/virtio_user_ethdev.c       | 301 +-----
 drivers/net/virtio/virtqueue.c                |   6 +-
 drivers/net/virtio/virtqueue.h                |  45 +-
 28 files changed, 2742 insertions(+), 1899 deletions(-)
 create mode 100644 drivers/net/virtio/virtio.c
 create mode 100644 drivers/net/virtio/virtio.h
 create mode 100644 drivers/net/virtio/virtio_pci_ethdev.c

-- 
2.29.2


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] ethdev: add IPv6 DSCP option for modify field action
  2021-01-26  5:21  3%   ` Alexander Kozyrev
  2021-01-26  5:35  0%     ` Ajit Khaparde
@ 2021-01-26  5:44  0%     ` Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-01-26  5:44 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dev, Slava Ovsiienko, Ori Kam, NBU-Contact-Thomas Monjalon,
	ferruh.yigit, andrew.rybchenko, jerinjacobk, ajit.khaparde

On Tue, 26 Jan 2021 05:21:23 +0000
Alexander Kozyrev <akozyrev@nvidia.com> wrote:

> > From: Stephen Hemminger <stephen@networkplumber.org> on Monday, January 25, 2021 22:44
> > 
> > On Tue, 26 Jan 2021 03:38:24 +0000
> > Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> >   
> > > IPv6 DSCP field ID is missing from the original list of Field IDs
> > > for MODIFY_FIELD action. Add it to support IPv6 header fully.
> > >
> > > Fixes: 73b68f4c54a ("ethdev: introduce generic modify flow action")
> > >
> > > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > > ---
> > >  lib/librte_ethdev/rte_flow.h | 1 +
> > >  1 file changed, 1 insertion(+)
> > >
> > > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > > index 46e8ee70ab..68c68cdd6c 100644
> > > --- a/lib/librte_ethdev/rte_flow.h
> > > +++ b/lib/librte_ethdev/rte_flow.h
> > > @@ -2842,6 +2842,7 @@ enum rte_flow_field_id {
> > >  	RTE_FLOW_FIELD_IPV4_TTL,
> > >  	RTE_FLOW_FIELD_IPV4_SRC,
> > >  	RTE_FLOW_FIELD_IPV4_DST,
> > > +	RTE_FLOW_FIELD_IPV6_DSCP,
> > >  	RTE_FLOW_FIELD_IPV6_HOPLIMIT,
> > >  	RTE_FLOW_FIELD_IPV6_SRC,
> > >  	RTE_FLOW_FIELD_IPV6_DST,  
> > 
> > Adding field in middle of enum will break ABI.  
> 
> I added the rte_flow_field_id enum a week ago into 20.11-rc1.
> I believe it is not too late to make it right without breaking ABI, don't you think so?

Ok if not in release yet


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: add IPv6 DSCP option for modify field action
  2021-01-26  5:21  3%   ` Alexander Kozyrev
@ 2021-01-26  5:35  0%     ` Ajit Khaparde
  2021-01-26  5:44  0%     ` Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Ajit Khaparde @ 2021-01-26  5:35 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: Stephen Hemminger, dev, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko,
	jerinjacobk

[-- Attachment #1: Type: text/plain, Size: 1382 bytes --]

On Mon, Jan 25, 2021 at 9:21 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> > From: Stephen Hemminger <stephen@networkplumber.org> on Monday, January 25, 2021 22:44
> >
> > On Tue, 26 Jan 2021 03:38:24 +0000
> > Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> >
> > > IPv6 DSCP field ID is missing from the original list of Field IDs
> > > for MODIFY_FIELD action. Add it to support IPv6 header fully.
> > >
> > > Fixes: 73b68f4c54a ("ethdev: introduce generic modify flow action")
> > >
> > > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > > ---
> > >  lib/librte_ethdev/rte_flow.h | 1 +
> > >  1 file changed, 1 insertion(+)
> > >
> > > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > > index 46e8ee70ab..68c68cdd6c 100644
> > > --- a/lib/librte_ethdev/rte_flow.h
> > > +++ b/lib/librte_ethdev/rte_flow.h
> > > @@ -2842,6 +2842,7 @@ enum rte_flow_field_id {
> > >     RTE_FLOW_FIELD_IPV4_TTL,
> > >     RTE_FLOW_FIELD_IPV4_SRC,
> > >     RTE_FLOW_FIELD_IPV4_DST,
> > > +   RTE_FLOW_FIELD_IPV6_DSCP,
> > >     RTE_FLOW_FIELD_IPV6_HOPLIMIT,
> > >     RTE_FLOW_FIELD_IPV6_SRC,
> > >     RTE_FLOW_FIELD_IPV6_DST,
> >
> > Adding field in middle of enum will break ABI.
>
> I added the rte_flow_field_id enum a week ago into 20.11-rc1.
21.02-rc1

> I believe it is not too late to make it right without breaking ABI, don't you think so?

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: add IPv6 DSCP option for modify field action
  2021-01-26  3:43  3% ` Stephen Hemminger
@ 2021-01-26  5:21  3%   ` Alexander Kozyrev
  2021-01-26  5:35  0%     ` Ajit Khaparde
  2021-01-26  5:44  0%     ` Stephen Hemminger
  0 siblings, 2 replies; 200+ results
From: Alexander Kozyrev @ 2021-01-26  5:21 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, Slava Ovsiienko, Ori Kam, NBU-Contact-Thomas Monjalon,
	ferruh.yigit, andrew.rybchenko, jerinjacobk, ajit.khaparde

> From: Stephen Hemminger <stephen@networkplumber.org> on Monday, January 25, 2021 22:44
> 
> On Tue, 26 Jan 2021 03:38:24 +0000
> Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> 
> > IPv6 DSCP field ID is missing from the original list of Field IDs
> > for MODIFY_FIELD action. Add it to support IPv6 header fully.
> >
> > Fixes: 73b68f4c54a ("ethdev: introduce generic modify flow action")
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > ---
> >  lib/librte_ethdev/rte_flow.h | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > index 46e8ee70ab..68c68cdd6c 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -2842,6 +2842,7 @@ enum rte_flow_field_id {
> >  	RTE_FLOW_FIELD_IPV4_TTL,
> >  	RTE_FLOW_FIELD_IPV4_SRC,
> >  	RTE_FLOW_FIELD_IPV4_DST,
> > +	RTE_FLOW_FIELD_IPV6_DSCP,
> >  	RTE_FLOW_FIELD_IPV6_HOPLIMIT,
> >  	RTE_FLOW_FIELD_IPV6_SRC,
> >  	RTE_FLOW_FIELD_IPV6_DST,
> 
> Adding field in middle of enum will break ABI.

I added the rte_flow_field_id enum a week ago into 20.11-rc1.
I believe it is not too late to make it right without breaking ABI, don't you think so?

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] ethdev: add IPv6 DSCP option for modify field action
  @ 2021-01-26  3:43  3% ` Stephen Hemminger
  2021-01-26  5:21  3%   ` Alexander Kozyrev
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-01-26  3:43 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dev, viacheslavo, orika, thomas, ferruh.yigit, andrew.rybchenko,
	jerinjacobk, ajit.khaparde

On Tue, 26 Jan 2021 03:38:24 +0000
Alexander Kozyrev <akozyrev@nvidia.com> wrote:

> IPv6 DSCP field ID is missing from the original list of Field IDs
> for MODIFY_FIELD action. Add it to support IPv6 header fully.
> 
> Fixes: 73b68f4c54a ("ethdev: introduce generic modify flow action")
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---
>  lib/librte_ethdev/rte_flow.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 46e8ee70ab..68c68cdd6c 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -2842,6 +2842,7 @@ enum rte_flow_field_id {
>  	RTE_FLOW_FIELD_IPV4_TTL,
>  	RTE_FLOW_FIELD_IPV4_SRC,
>  	RTE_FLOW_FIELD_IPV4_DST,
> +	RTE_FLOW_FIELD_IPV6_DSCP,
>  	RTE_FLOW_FIELD_IPV6_HOPLIMIT,
>  	RTE_FLOW_FIELD_IPV6_SRC,
>  	RTE_FLOW_FIELD_IPV6_DST,

Adding field in middle of enum will break ABI.
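
A minimal illustration of why (hypothetical names and values, not the
real rte_flow enum):

  enum field_id_v1 { V1_IPV4_DST, V1_IPV6_HOPLIMIT, V1_IPV6_SRC };
  /* V1_IPV6_HOPLIMIT == 1, V1_IPV6_SRC == 2 */

  enum field_id_v2 { V2_IPV4_DST, V2_IPV6_DSCP, V2_IPV6_HOPLIMIT, V2_IPV6_SRC };
  /* V2_IPV6_HOPLIMIT == 2, V2_IPV6_SRC == 3: an application built against
   * the v1 header still passes 1 for HOPLIMIT, which a library built
   * against the v2 header decodes as DSCP. */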

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3 0/4] add checking of header includes
  2021-01-25 15:51  2%   ` David Marchand
@ 2021-01-25 18:17  0%     ` Bruce Richardson
  2021-01-26 11:15  4%     ` Bruce Richardson
  1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2021-01-25 18:17 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Thomas Monjalon

On Mon, Jan 25, 2021 at 04:51:19PM +0100, David Marchand wrote:
> On Mon, Jan 25, 2021 at 3:11 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > As a general principle, each header file should include any other
> > headers it needs to provide data type definitions or macros. For
> > example, any header using the uintX_t types in structures or function
> > prototypes should include "stdint.h" to provide those type definitions.
> >
> > In practice, while many, but not all, headers in DPDK did include all
> > necessary headers, it was never actually checked that each header could
> > be included in a C file and compiled without having any compiler errors
> > about missing definitions.  The script "check-includes.sh" could be used
> > for this job, but it was not called out in the documentation, so many
> contributors may not have been aware of its existence. It also was
> > difficult to run from a source-code directory, as the script did not
> > automatically allow finding of headers from one DPDK library directory
> > to another [this was probably based on running it on a build created by
> > the "make" build system, where all headers were in a single directory].
> > To attempt to have a build-system integrated replacement, this patchset
> > adds a "chkincs" app in the buildtools directory to verify this on an
> > ongoing basis.
> >
> > This chkincs app does nothing when run, and is not installed as part of
> > a DPDK "ninja install", it's for build-time checking only. Its source
> > code consists of one C file per public DPDK header, where that C file
> > contains nothing except an include for that header.  Therefore, if any
> > header is added to the lib folder which fails to compile when included
> > alone, the build of chkincs will fail with a suitable error message.
> > Since this compile checking is not needed on most builds of DPDK, the
> > building of chkincs is disabled by default, but can be enabled by the
> > "test_includes" meson option. To catch errors with patch submissions,
> > the final patch of this series enables it for a single build in
> > test-meson-builds script.
> >
> > Future work could involve doing similar checks on headers for C++
> > compatibility, which was something done by the check-includes.sh script
> > but which is missing here..
> >
> > V3:
> > * Shrunk patchset as most header fixes already applied
> > * Moved chkincs from "apps" to the "buildtools" directory, which is a
> >   better location for something not for installation for end-user use.
> > * Added patch to drop check-includes script.
> >
> > V2:
> > * Add maintainers file entry for new app
> > * Drop patch for c11 ring header
> > * Use build variable "headers_no_chkincs" for tracking exceptions
> >
> > Bruce Richardson (4):
> >   eal: add missing include to mcslock
> >   build: separate out headers for include checking
> >   buildtools/chkincs: add app to verify header includes
> >   devtools: remove check-includes script
> >
> >  MAINTAINERS                                  |   5 +-
> >  buildtools/chkincs/gen_c_file_for_header.py  |  12 +
> >  buildtools/chkincs/main.c                    |   4 +
> >  buildtools/chkincs/meson.build               |  40 +++
> >  devtools/check-includes.sh                   | 259 -------------------
> >  devtools/test-meson-builds.sh                |   2 +-
> >  doc/guides/contributing/coding_style.rst     |  12 +
> >  lib/librte_eal/include/generic/rte_mcslock.h |   1 +
> >  lib/librte_eal/include/meson.build           |   2 +-
> >  lib/librte_eal/x86/include/meson.build       |  14 +-
> >  lib/librte_ethdev/meson.build                |   4 +-
> >  lib/librte_hash/meson.build                  |   4 +-
> >  lib/librte_ipsec/meson.build                 |   3 +-
> >  lib/librte_lpm/meson.build                   |   2 +-
> >  lib/librte_regexdev/meson.build              |   2 +-
> >  lib/librte_ring/meson.build                  |   4 +-
> >  lib/librte_stack/meson.build                 |   4 +-
> >  lib/librte_table/meson.build                 |   7 +-
> >  lib/meson.build                              |   3 +
> >  meson.build                                  |   6 +
> >  meson_options.txt                            |   2 +
> >  21 files changed, 112 insertions(+), 280 deletions(-)
> >  create mode 100755 buildtools/chkincs/gen_c_file_for_header.py
> >  create mode 100644 buildtools/chkincs/main.c
> >  create mode 100644 buildtools/chkincs/meson.build
> >  delete mode 100755 devtools/check-includes.sh
> 
> - clang is not happy when enabling the check:
> $ meson configure $HOME/builds/build-clang-static -Dcheck_includes=true
> $ devtools/test-meson-builds.sh
> ...
> [362/464] Compiling C object
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> FAILED: buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
> clang -Ibuildtools/chkincs/chkincs.p -Ibuildtools/chkincs
> -I../../dpdk/buildtools/chkincs -Idrivers/bus/pci
> -I../../dpdk/drivers/bus/pci -Idrivers/bus/vdev
> -I../../dpdk/drivers/bus/vdev -I. -I../../dpdk -Iconfig
> -I../../dpdk/config -Ilib/librte_eal/include
> -I../../dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include
> -I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
> -I../../dpdk/lib/librte_eal/x86/include -Ilib/librte_kvargs
> -I../../dpdk/lib/librte_kvargs -Ilib/librte_metrics
> -I../../dpdk/lib/librte_metrics -Ilib/librte_telemetry
> -I../../dpdk/lib/librte_telemetry -Ilib/librte_eal/common
> -I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
> -I../../dpdk/lib/librte_eal -Ilib/librte_ring
> -I../../dpdk/lib/librte_ring -Ilib/librte_rcu
> -I../../dpdk/lib/librte_rcu -Ilib/librte_mempool
> -I../../dpdk/lib/librte_mempool -Ilib/librte_mbuf
> -I../../dpdk/lib/librte_mbuf -Ilib/librte_net
> -I../../dpdk/lib/librte_net -Ilib/librte_meter
> -I../../dpdk/lib/librte_meter -Ilib/librte_ethdev
> -I../../dpdk/lib/librte_ethdev -Ilib/librte_pci
> -I../../dpdk/lib/librte_pci -Ilib/librte_cmdline
> -I../../dpdk/lib/librte_cmdline -Ilib/librte_hash
> -I../../dpdk/lib/librte_hash -Ilib/librte_timer
> -I../../dpdk/lib/librte_timer -Ilib/librte_acl
> -I../../dpdk/lib/librte_acl -Ilib/librte_bbdev
> -I../../dpdk/lib/librte_bbdev -Ilib/librte_bitratestats
> -I../../dpdk/lib/librte_bitratestats -Ilib/librte_cfgfile
> -I../../dpdk/lib/librte_cfgfile -Ilib/librte_compressdev
> -I../../dpdk/lib/librte_compressdev -Ilib/librte_cryptodev
> -I../../dpdk/lib/librte_cryptodev -Ilib/librte_distributor
> -I../../dpdk/lib/librte_distributor -Ilib/librte_efd
> -I../../dpdk/lib/librte_efd -Ilib/librte_eventdev
> -I../../dpdk/lib/librte_eventdev -Ilib/librte_gro
> -I../../dpdk/lib/librte_gro -Ilib/librte_gso
> -I../../dpdk/lib/librte_gso -Ilib/librte_ip_frag
> -I../../dpdk/lib/librte_ip_frag -Ilib/librte_jobstats
> -I../../dpdk/lib/librte_jobstats -Ilib/librte_kni
> -I../../dpdk/lib/librte_kni -Ilib/librte_latencystats
> -I../../dpdk/lib/librte_latencystats -Ilib/librte_lpm
> -I../../dpdk/lib/librte_lpm -Ilib/librte_member
> -I../../dpdk/lib/librte_member -Ilib/librte_power
> -I../../dpdk/lib/librte_power -Ilib/librte_pdump
> -I../../dpdk/lib/librte_pdump -Ilib/librte_rawdev
> -I../../dpdk/lib/librte_rawdev -Ilib/librte_regexdev
> -I../../dpdk/lib/librte_regexdev -Ilib/librte_rib
> -I../../dpdk/lib/librte_rib -Ilib/librte_reorder
> -I../../dpdk/lib/librte_reorder -Ilib/librte_sched
> -I../../dpdk/lib/librte_sched -Ilib/librte_security
> -I../../dpdk/lib/librte_security -Ilib/librte_stack
> -I../../dpdk/lib/librte_stack -Ilib/librte_vhost
> -I../../dpdk/lib/librte_vhost -Ilib/librte_ipsec
> -I../../dpdk/lib/librte_ipsec -Ilib/librte_fib
> -I../../dpdk/lib/librte_fib -Ilib/librte_port
> -I../../dpdk/lib/librte_port -Ilib/librte_table
> -I../../dpdk/lib/librte_table -Ilib/librte_pipeline
> -I../../dpdk/lib/librte_pipeline -Ilib/librte_flow_classify
> -I../../dpdk/lib/librte_flow_classify -Ilib/librte_bpf
> -I../../dpdk/lib/librte_bpf -Ilib/librte_graph
> -I../../dpdk/lib/librte_graph -Ilib/librte_node
> -I../../dpdk/lib/librte_node
> -I/home/dmarchan/intel-ipsec-mb/install/include -Xclang
> -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
> -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
> -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
> -Wmissing-prototypes -Wnested-externs -Wold-style-definition
> -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
> -Wwrite-strings -Wno-address-of-packed-member
> -Wno-missing-field-initializers -D_GNU_SOURCE -march=native
> -Wno-unused-function -DALLOW_EXPERIMENTAL_API -MD -MQ
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -MF
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o.d -o
> buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -c
> buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c
> In file included from buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c:1:
> In file included from
> /home/dmarchan/dpdk/lib/librte_ethdev/rte_ethdev_vdev.h:12:
> ../../dpdk/lib/librte_ethdev/rte_ethdev_driver.h:964:1: error: unknown
> attribute 'error' ignored [-Werror,-Wunknown-attributes]
> __rte_internal
> ^
> ../../dpdk/lib/librte_eal/include/rte_compat.h:25:16: note: expanded
> from macro '__rte_internal'
> __attribute__((error("Symbol is not public ABI"), \
>                ^
> 
> 
> - Other issues with ARM builds (arch-specific headers probably the reason):
> $ meson configure $HOME/builds/build-arm64-bluefield -Dcheck_includes=true
> $ devtools/test-meson-builds.sh
> ...
> In file included from buildtools/chkincs/chkincs.p/rte_rib6.c:1:
> /home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h: In function ‘get_msk_part’:
> /home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h:112:10: error: implicit
> declaration of function ‘RTE_MIN’; did you mean ‘INT8_MIN’?
> [-Werror=implicit-function-declaration]
>   depth = RTE_MIN(depth, 128);
>           ^~~~~~~
>           INT8_MIN
> /home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h:112:10: error: nested
> extern declaration of ‘RTE_MIN’ [-Werror=nested-externs]
> /home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h:113:9: error: implicit
> declaration of function ‘RTE_MAX’; did you mean ‘INT8_MAX’?
> [-Werror=implicit-function-declaration]
>   part = RTE_MAX((int16_t)depth - (byte * 8), 0);
>          ^~~~~~~
>          INT8_MAX
> /home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h:113:9: error: nested
> extern declaration of ‘RTE_MAX’ [-Werror=nested-externs]
> cc1: all warnings being treated as errors
> 
> 
> - This check should be enabled for x86 and aarch cross build in GHA.
> 
Sure, will look into all of these.
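
For the rte_rib6.h one, my guess from the error above is that the header
just needs to pull in what provides RTE_MIN/RTE_MAX itself, e.g.:

  /* In lib/librte_rib/rte_rib6.h: make the header self-contained rather
   * than relying on RTE_MIN/RTE_MAX arriving via another include. */
  #include <rte_common.h>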

/Bruce

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 0/4] add checking of header includes
  @ 2021-01-25 15:51  2%   ` David Marchand
  2021-01-25 18:17  0%     ` Bruce Richardson
  2021-01-26 11:15  4%     ` Bruce Richardson
  0 siblings, 2 replies; 200+ results
From: David Marchand @ 2021-01-25 15:51 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, Thomas Monjalon

On Mon, Jan 25, 2021 at 3:11 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> As a general principle, each header file should include any other
> headers it needs to provide data type definitions or macros. For
> example, any header using the uintX_t types in structures or function
> prototypes should include "stdint.h" to provide those type definitions.
>
> In practice, while many, but not all, headers in DPDK did include all
> necessary headers, it was never actually checked that each header could
> be included in a C file and compiled without having any compiler errors
> about missing definitions.  The script "check-includes.sh" could be used
> for this job, but it was not called out in the documentation, so many
> contributors may not have been aware of its existence. It also was
> difficult to run from a source-code directory, as the script did not
> automatically allow finding of headers from one DPDK library directory
> to another [this was probably based on running it on a build created by
> the "make" build system, where all headers were in a single directory].
> To attempt to have a build-system integrated replacement, this patchset
> adds a "chkincs" app in the buildtools directory to verify this on an
> ongoing basis.
>
> This chkincs app does nothing when run, and is not installed as part of
> a DPDK "ninja install", it's for build-time checking only. Its source
> code consists of one C file per public DPDK header, where that C file
> contains nothing except an include for that header.  Therefore, if any
> header is added to the lib folder which fails to compile when included
> alone, the build of chkincs will fail with a suitable error message.
> Since this compile checking is not needed on most builds of DPDK, the
> building of chkincs is disabled by default, but can be enabled by the
> "test_includes" meson option. To catch errors with patch submissions,
> the final patch of this series enables it for a single build in
> test-meson-builds script.
>
> Future work could involve doing similar checks on headers for C++
> compatibility, which was something done by the check-includes.sh script
> but which is missing here..
>
> V3:
> * Shrunk patchset as most header fixes already applied
> * Moved chkincs from "apps" to the "buildtools" directory, which is a
>   better location for something not for installation for end-user use.
> * Added patch to drop check-includes script.
>
> V2:
> * Add maintainers file entry for new app
> * Drop patch for c11 ring header
> * Use build variable "headers_no_chkincs" for tracking exceptions
>
> Bruce Richardson (4):
>   eal: add missing include to mcslock
>   build: separate out headers for include checking
>   buildtools/chkincs: add app to verify header includes
>   devtools: remove check-includes script
>
>  MAINTAINERS                                  |   5 +-
>  buildtools/chkincs/gen_c_file_for_header.py  |  12 +
>  buildtools/chkincs/main.c                    |   4 +
>  buildtools/chkincs/meson.build               |  40 +++
>  devtools/check-includes.sh                   | 259 -------------------
>  devtools/test-meson-builds.sh                |   2 +-
>  doc/guides/contributing/coding_style.rst     |  12 +
>  lib/librte_eal/include/generic/rte_mcslock.h |   1 +
>  lib/librte_eal/include/meson.build           |   2 +-
>  lib/librte_eal/x86/include/meson.build       |  14 +-
>  lib/librte_ethdev/meson.build                |   4 +-
>  lib/librte_hash/meson.build                  |   4 +-
>  lib/librte_ipsec/meson.build                 |   3 +-
>  lib/librte_lpm/meson.build                   |   2 +-
>  lib/librte_regexdev/meson.build              |   2 +-
>  lib/librte_ring/meson.build                  |   4 +-
>  lib/librte_stack/meson.build                 |   4 +-
>  lib/librte_table/meson.build                 |   7 +-
>  lib/meson.build                              |   3 +
>  meson.build                                  |   6 +
>  meson_options.txt                            |   2 +
>  21 files changed, 112 insertions(+), 280 deletions(-)
>  create mode 100755 buildtools/chkincs/gen_c_file_for_header.py
>  create mode 100644 buildtools/chkincs/main.c
>  create mode 100644 buildtools/chkincs/meson.build
>  delete mode 100755 devtools/check-includes.sh

- clang is not happy when enabling the check:
$ meson configure $HOME/builds/build-clang-static -Dcheck_includes=true
$ devtools/test-meson-builds.sh
...
[362/464] Compiling C object
buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
FAILED: buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o
clang -Ibuildtools/chkincs/chkincs.p -Ibuildtools/chkincs
-I../../dpdk/buildtools/chkincs -Idrivers/bus/pci
-I../../dpdk/drivers/bus/pci -Idrivers/bus/vdev
-I../../dpdk/drivers/bus/vdev -I. -I../../dpdk -Iconfig
-I../../dpdk/config -Ilib/librte_eal/include
-I../../dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include
-I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
-I../../dpdk/lib/librte_eal/x86/include -Ilib/librte_kvargs
-I../../dpdk/lib/librte_kvargs -Ilib/librte_metrics
-I../../dpdk/lib/librte_metrics -Ilib/librte_telemetry
-I../../dpdk/lib/librte_telemetry -Ilib/librte_eal/common
-I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
-I../../dpdk/lib/librte_eal -Ilib/librte_ring
-I../../dpdk/lib/librte_ring -Ilib/librte_rcu
-I../../dpdk/lib/librte_rcu -Ilib/librte_mempool
-I../../dpdk/lib/librte_mempool -Ilib/librte_mbuf
-I../../dpdk/lib/librte_mbuf -Ilib/librte_net
-I../../dpdk/lib/librte_net -Ilib/librte_meter
-I../../dpdk/lib/librte_meter -Ilib/librte_ethdev
-I../../dpdk/lib/librte_ethdev -Ilib/librte_pci
-I../../dpdk/lib/librte_pci -Ilib/librte_cmdline
-I../../dpdk/lib/librte_cmdline -Ilib/librte_hash
-I../../dpdk/lib/librte_hash -Ilib/librte_timer
-I../../dpdk/lib/librte_timer -Ilib/librte_acl
-I../../dpdk/lib/librte_acl -Ilib/librte_bbdev
-I../../dpdk/lib/librte_bbdev -Ilib/librte_bitratestats
-I../../dpdk/lib/librte_bitratestats -Ilib/librte_cfgfile
-I../../dpdk/lib/librte_cfgfile -Ilib/librte_compressdev
-I../../dpdk/lib/librte_compressdev -Ilib/librte_cryptodev
-I../../dpdk/lib/librte_cryptodev -Ilib/librte_distributor
-I../../dpdk/lib/librte_distributor -Ilib/librte_efd
-I../../dpdk/lib/librte_efd -Ilib/librte_eventdev
-I../../dpdk/lib/librte_eventdev -Ilib/librte_gro
-I../../dpdk/lib/librte_gro -Ilib/librte_gso
-I../../dpdk/lib/librte_gso -Ilib/librte_ip_frag
-I../../dpdk/lib/librte_ip_frag -Ilib/librte_jobstats
-I../../dpdk/lib/librte_jobstats -Ilib/librte_kni
-I../../dpdk/lib/librte_kni -Ilib/librte_latencystats
-I../../dpdk/lib/librte_latencystats -Ilib/librte_lpm
-I../../dpdk/lib/librte_lpm -Ilib/librte_member
-I../../dpdk/lib/librte_member -Ilib/librte_power
-I../../dpdk/lib/librte_power -Ilib/librte_pdump
-I../../dpdk/lib/librte_pdump -Ilib/librte_rawdev
-I../../dpdk/lib/librte_rawdev -Ilib/librte_regexdev
-I../../dpdk/lib/librte_regexdev -Ilib/librte_rib
-I../../dpdk/lib/librte_rib -Ilib/librte_reorder
-I../../dpdk/lib/librte_reorder -Ilib/librte_sched
-I../../dpdk/lib/librte_sched -Ilib/librte_security
-I../../dpdk/lib/librte_security -Ilib/librte_stack
-I../../dpdk/lib/librte_stack -Ilib/librte_vhost
-I../../dpdk/lib/librte_vhost -Ilib/librte_ipsec
-I../../dpdk/lib/librte_ipsec -Ilib/librte_fib
-I../../dpdk/lib/librte_fib -Ilib/librte_port
-I../../dpdk/lib/librte_port -Ilib/librte_table
-I../../dpdk/lib/librte_table -Ilib/librte_pipeline
-I../../dpdk/lib/librte_pipeline -Ilib/librte_flow_classify
-I../../dpdk/lib/librte_flow_classify -Ilib/librte_bpf
-I../../dpdk/lib/librte_bpf -Ilib/librte_graph
-I../../dpdk/lib/librte_graph -Ilib/librte_node
-I../../dpdk/lib/librte_node
-I/home/dmarchan/intel-ipsec-mb/install/include -Xclang
-fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
-Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated
-Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations
-Wmissing-prototypes -Wnested-externs -Wold-style-definition
-Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
-Wwrite-strings -Wno-address-of-packed-member
-Wno-missing-field-initializers -D_GNU_SOURCE -march=native
-Wno-unused-function -DALLOW_EXPERIMENTAL_API -MD -MQ
buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -MF
buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o.d -o
buildtools/chkincs/chkincs.p/meson-generated_rte_ethdev_vdev.c.o -c
buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c
In file included from buildtools/chkincs/chkincs.p/rte_ethdev_vdev.c:1:
In file included from
/home/dmarchan/dpdk/lib/librte_ethdev/rte_ethdev_vdev.h:12:
../../dpdk/lib/librte_ethdev/rte_ethdev_driver.h:964:1: error: unknown
attribute 'error' ignored [-Werror,-Wunknown-attributes]
__rte_internal
^
../../dpdk/lib/librte_eal/include/rte_compat.h:25:16: note: expanded
from macro '__rte_internal'
__attribute__((error("Symbol is not public ABI"), \
               ^


- Other issues with ARM builds (arch-specific headers probably the reason):
$ meson configure $HOME/builds/build-arm64-bluefield -Dcheck_includes=true
$ devtools/test-meson-builds.sh
...
In file included from buildtools/chkincs/chkincs.p/rte_rib6.c:1:
/home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h: In function ‘get_msk_part’:
/home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h:112:10: error: implicit
declaration of function ‘RTE_MIN’; did you mean ‘INT8_MIN’?
[-Werror=implicit-function-declaration]
  depth = RTE_MIN(depth, 128);
          ^~~~~~~
          INT8_MIN
/home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h:112:10: error: nested
extern declaration of ‘RTE_MIN’ [-Werror=nested-externs]
/home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h:113:9: error: implicit
declaration of function ‘RTE_MAX’; did you mean ‘INT8_MAX’?
[-Werror=implicit-function-declaration]
  part = RTE_MAX((int16_t)depth - (byte * 8), 0);
         ^~~~~~~
         INT8_MAX
/home/dmarchan/dpdk/lib/librte_rib/rte_rib6.h:113:9: error: nested
extern declaration of ‘RTE_MAX’ [-Werror=nested-externs]
cc1: all warnings being treated as errors
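
The fix this points at is presumably just making the header self-contained.
As a rough sketch (the actual patch may differ), rte_rib6.h would include the
header that defines RTE_MIN/RTE_MAX itself:

    /* sketch: RTE_MIN and RTE_MAX come from rte_common.h, so a header
     * using them must pull it in rather than rely on the includer. */
    #include <rte_common.h>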


- This check should be enabled for x86 and aarch64 cross builds in GHA.


-- 
David Marchand


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-25 10:46  0%                         ` Kinsella, Ray
@ 2021-01-25 11:03  0%                           ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-01-25 11:03 UTC (permalink / raw)
  To: David Marchand; +Cc: Kinsella, Ray, dev, Dmitry Kozlyuk

25/01/2021 11:46, Kinsella, Ray:
> On 25/01/2021 10:29, David Marchand wrote:
> > The symbol itself can be hidden from the ABeyes.
> > It is only a placeholder for the PMD_INFO_STRING= string used by
> > usertools/dpdk-pmdinfo.py and maybe some other parsing tool.
> > 
> > I guess a static symbol would be enough:
> > 
> > diff --git a/buildtools/pmdinfogen/pmdinfogen.c
> > b/buildtools/pmdinfogen/pmdinfogen.c
> > index a68d1ea999..14bf7d9f42 100644
> > --- a/buildtools/pmdinfogen/pmdinfogen.c
> > +++ b/buildtools/pmdinfogen/pmdinfogen.c
> > @@ -393,7 +393,7 @@ static void output_pmd_info_string(struct elf_info
> > *info, char *outfile)
> >         drv = info->drivers;
> > 
> >         while (drv) {
> > -               fprintf(ofd, "const char %s_pmd_info[] __attribute__((used)) = "
> > +               fprintf(ofd, "static const char %s_pmd_info[]
> > __attribute__((used)) = "
> >                         "\"PMD_INFO_STRING= {",
> >                         drv->name);
> >                 fprintf(ofd, "\\\"name\\\" : \\\"%s\\\", ", drv->name);
> > 
> > 
> > We will need an exception for the v21 ABI though.
> > 
> 
> Good suggestion +1

Yes +1 for adding static on *_pmd_info




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-25 10:29  3%                       ` David Marchand
@ 2021-01-25 10:46  0%                         ` Kinsella, Ray
  2021-01-25 11:03  0%                           ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-01-25 10:46 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Thomas Monjalon, Dmitry Kozlyuk



On 25/01/2021 10:29, David Marchand wrote:
> On Mon, Jan 25, 2021 at 11:01 AM Kinsella, Ray <mdr@ashroe.eu> wrote:
>>
>>
>>
>> On 25/01/2021 09:25, Kinsella, Ray wrote:
>>>
>>>
>>> On 23/01/2021 11:38, Thomas Monjalon wrote:
>>>> 22/01/2021 23:24, Dmitry Kozlyuk:
>>>>> On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:
>>>>>> 22/01/2021 21:31, Dmitry Kozlyuk:
>>>>>>> On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:
>>>>>>>> 20/01/2021 08:23, Dmitry Kozlyuk:
>>>>>>>>> On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:
>>>>>>>>>> This is now the right timeframe to introduce this change
>>>>>>>>>> with the new Python module dependency.
>>>>>>>>>> Unfortunately, the ABI check is returning an issue:
>>>>>>>>>>
>>>>>>>>>> 'const char mlx5_common_pci_pmd_info[62]' was changed
>>>>>>>>>> to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c
>>>>>>>>>
>>>>>>>>> Will investigate and fix ASAP.
>>>>>>>
>>>>>>> Now that I think of it: strings like this change every time new PCI IDs are
>>>>>>> added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
>>>>>>> is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
>>>>>>> added 2020-07-08, i.e. clearly outside of ABI change window.
>>>>>>
>>>>>> You're right.
>>>>>>
>>>>>>> "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
>>>>>>> which can be worked around easily, if the above is wrong.
>>>>>>
>>>>>> If the new format is better, please keep it.
>>>>>> What we need is an exception for the pmdinfo symbols
>>>>>> in the file devtools/libabigail.abignore.
>>>>>> You can probably use a regex for these symbols.
>>>>>
>>>>> This would allow real breakages to pass ABI check, abidiff doesn't analyze
>>>>> variable content and it's not easy to compare. Maybe later a script can be
>>>>> added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
>>>>> 5494 relevant commits between 19.11 and 20.11, though.
>>>>>
>>>>> To verify there are no meaningful changes I ensured empty diff between
>>>>> results of the following command for "main" and the branch:
>>>>>
>>>>>     find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py
>>>>
>>>> For now we cannot do such check as part of the ABI checker.
>>>> And we cannot merge this patch if the ABI check fails.
>>>> I think the only solution is to allow any change in the pmdinfo variables.
>>>>
>>>
>>> So my 2c on this is that this is an acceptable work-around for the v21 (DPDK v20.11) ABI.
>>> However we are going to end up carrying this rule in libabigail.ignore indefinitely.
>>>
>>> Would it make sense to just fix the size of _pmd_info to some reasonably large value -
>>> say 128 bytes, to allow us to drop the rule in the DPDK 21.11 v22 release?
>>>
>>> Ray K
>>
>>
>> Another point is - shouldn't _pmd_info probably live in "INTERNAL" in any case?
> 
> The symbol itself can be hidden from the ABeyes.
> It is only a placeholder for the PMD_INFO_STRING= string used by
> usertools/dpdk-pmdinfo.py and maybe some other parsing tool.
> 
> I guess a static symbol would be enough:
> 
> diff --git a/buildtools/pmdinfogen/pmdinfogen.c
> b/buildtools/pmdinfogen/pmdinfogen.c
> index a68d1ea999..14bf7d9f42 100644
> --- a/buildtools/pmdinfogen/pmdinfogen.c
> +++ b/buildtools/pmdinfogen/pmdinfogen.c
> @@ -393,7 +393,7 @@ static void output_pmd_info_string(struct elf_info
> *info, char *outfile)
>         drv = info->drivers;
> 
>         while (drv) {
> -               fprintf(ofd, "const char %s_pmd_info[] __attribute__((used)) = "
> +               fprintf(ofd, "static const char %s_pmd_info[]
> __attribute__((used)) = "
>                         "\"PMD_INFO_STRING= {",
>                         drv->name);
>                 fprintf(ofd, "\\\"name\\\" : \\\"%s\\\", ", drv->name);
> 
> 
> We will need an exception for the v21 ABI though.
> 

Good suggestion +1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-25 10:11  4%                       ` Kinsella, Ray
@ 2021-01-25 10:31  0%                         ` Dmitry Kozlyuk
  0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-01-25 10:31 UTC (permalink / raw)
  To: Kinsella, Ray
  Cc: Thomas Monjalon, dev, Stephen Hemminger, David Marchand,
	Maxime Coquelin, Aaron Conole, Bruce Richardson, ferruh.yigit,
	ray.kinsella

On Mon, 25 Jan 2021 10:11:07 +0000, Kinsella, Ray wrote:
> On 25/01/2021 10:05, Dmitry Kozlyuk wrote:
> > On Mon, 25 Jan 2021 09:25:51 +0000, Kinsella, Ray wrote:  
> >> On 23/01/2021 11:38, Thomas Monjalon wrote:  
> >>> 22/01/2021 23:24, Dmitry Kozlyuk:    
> >>>> On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:    
> >>>>> 22/01/2021 21:31, Dmitry Kozlyuk:    
> >>>>>> On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:      
> >>>>>>> 20/01/2021 08:23, Dmitry Kozlyuk:      
> >>>>>>>> On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:        
> >>>>>>>>> This is now the right timeframe to introduce this change
> >>>>>>>>> with the new Python module dependency.
> >>>>>>>>> Unfortunately, the ABI check is returning an issue:
> >>>>>>>>>
> >>>>>>>>> 'const char mlx5_common_pci_pmd_info[62]' was changed
> >>>>>>>>> to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c        
> >>>>>>>>
> >>>>>>>> Will investigate and fix ASAP.      
> >>>>>>
> >>>>>> Now that I think of it: strings like this change every time new PCI IDs are
> >>>>>> added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
> >>>>>> is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
> >>>>>> added 2020-07-08, i.e. clearly outside of ABI change window.      
> >>>>>
> >>>>> You're right.
> >>>>>    
> >>>>>> "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
> >>>>>> which can be worked around easily, if the above is wrong.      
> >>>>>
> >>>>> If the new format is better, please keep it.
> >>>>> What we need is an exception for the pmdinfo symbols
> >>>>> in the file devtools/libabigail.abignore.
> >>>>> You can probably use a regex for these symbols.    
> >>>>
> >>>> This would allow real breakages to pass ABI check, abidiff doesn't analyze
> >>>> variable content and it's not easy to compare. Maybe later a script can be
> >>>> added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
> >>>> 5494 relevant commits between 19.11 and 20.11, though.
> >>>>
> >>>> To verify there are no meaningful changes I ensured empty diff between
> >>>> results of the following command for "main" and the branch:
> >>>>
> >>>> 	find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py    
> >>>
> >>> For now we cannot do such check as part of the ABI checker.
> >>> And we cannot merge this patch if the ABI check fails.
> >>> I think the only solution is to allow any change in the pmdinfo variables.
> >>>     
> >>
> >> So my 2c on this is that this is an acceptable work-around for the v21 (DPDK v20.11) ABI.
> >> However we are going to end up carrying this rule in libabigail.ignore indefinitely.
> >>
> >> Would it make sense to just fix the size of _pmd_info to some reasonably large value - 
> >> say 128 bytes, to allow us to drop the rule in the DPDK 21.11 v22 release?  
> > 
> > I don't think so. This is a JSON *string to be parsed;* considering its size
> > as part of application *binary* interface is wrong in the first place.  
> 
> Right - then it belongs in INTERNAL, I would say.
>
> > As for
> > content, checking that no PCI IDs are removed is out of scope for libabigail
> > anyway.   
> 
> Lets be clear PCI IDs - are _nothing_ to do with ABI.

Technically, yes, but they're referred to in abi_policy.rst, because DPDK
behavior depends on them. Same issue as with as return values: no formats
change, yet compatibility is broken.

> > Technically we could fix _pmd_info size, but this still allows
> > breaking changes to pass the check with no benefit.  
> 
> ABI changes or other, please explain?

Behavioral changes via PCI ID removal, see above.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-25 10:01  0%                     ` Kinsella, Ray
@ 2021-01-25 10:29  3%                       ` David Marchand
  2021-01-25 10:46  0%                         ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-25 10:29 UTC (permalink / raw)
  To: Kinsella, Ray; +Cc: dev, Thomas Monjalon, Dmitry Kozlyuk

On Mon, Jan 25, 2021 at 11:01 AM Kinsella, Ray <mdr@ashroe.eu> wrote:
>
>
>
> On 25/01/2021 09:25, Kinsella, Ray wrote:
> >
> >
> > On 23/01/2021 11:38, Thomas Monjalon wrote:
> >> 22/01/2021 23:24, Dmitry Kozlyuk:
> >>> On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:
> >>>> 22/01/2021 21:31, Dmitry Kozlyuk:
> >>>>> On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:
> >>>>>> 20/01/2021 08:23, Dmitry Kozlyuk:
> >>>>>>> On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:
> >>>>>>>> This is now the right timeframe to introduce this change
> >>>>>>>> with the new Python module dependency.
> >>>>>>>> Unfortunately, the ABI check is returning an issue:
> >>>>>>>>
> >>>>>>>> 'const char mlx5_common_pci_pmd_info[62]' was changed
> >>>>>>>> to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c
> >>>>>>>
> >>>>>>> Will investigate and fix ASAP.
> >>>>>
> >>>>> Now that I think of it: strings like this change every time new PCI IDs are
> >>>>> added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
> >>>>> is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
> >>>>> added 2020-07-08, i.e. clearly outside of ABI change window.
> >>>>
> >>>> You're right.
> >>>>
> >>>>> "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
> >>>>> which can be worked around easily, if the above is wrong.
> >>>>
> >>>> If the new format is better, please keep it.
> >>>> What we need is an exception for the pmdinfo symbols
> >>>> in the file devtools/libabigail.abignore.
> >>>> You can probably use a regex for these symbols.
> >>>
> >>> This would allow real breakages to pass ABI check, abidiff doesn't analyze
> >>> variable content and it's not easy to compare. Maybe later a script can be
> >>> added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
> >>> 5494 relevant commits between 19.11 and 20.11, though.
> >>>
> >>> To verify there are no meaningful changes I ensured empty diff between
> >>> results of the following command for "main" and the branch:
> >>>
> >>>     find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py
> >>
> >> For now we cannot do such check as part of the ABI checker.
> >> And we cannot merge this patch if the ABI check fails.
> >> I think the only solution is to allow any change in the pmdinfo variables.
> >>
> >
> > So my 2c on this is that this is an acceptable work-around for the v21 (DPDK v20.11) ABI.
> > However we are going to end up carrying this rule in libabigail.ignore indefinitely.
> >
> > Would it make sense to just fix the size of _pmd_info to some reasonably large value -
> > say 128 bytes, to allow us to drop the rule in the DPDK 21.11 v22 release?
> >
> > Ray K
>
>
> Another point is - shouldn't _pmd_info probably live in "INTERNAL" in any case?

The symbol itself can be hidden from the ABeyes.
It is only a placeholder for the PMD_INFO_STRING= string used by
usertools/dpdk-pmdinfo.py and maybe some other parsing tool.

I guess a static symbol would be enough:

diff --git a/buildtools/pmdinfogen/pmdinfogen.c
b/buildtools/pmdinfogen/pmdinfogen.c
index a68d1ea999..14bf7d9f42 100644
--- a/buildtools/pmdinfogen/pmdinfogen.c
+++ b/buildtools/pmdinfogen/pmdinfogen.c
@@ -393,7 +393,7 @@ static void output_pmd_info_string(struct elf_info
*info, char *outfile)
        drv = info->drivers;

        while (drv) {
-               fprintf(ofd, "const char %s_pmd_info[] __attribute__((used)) = "
+               fprintf(ofd, "static const char %s_pmd_info[]
__attribute__((used)) = "
                        "\"PMD_INFO_STRING= {",
                        drv->name);
                fprintf(ofd, "\\\"name\\\" : \\\"%s\\\", ", drv->name);


We will need an exception for the v21 ABI though.


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-25 10:05  0%                     ` Dmitry Kozlyuk
@ 2021-01-25 10:11  4%                       ` Kinsella, Ray
  2021-01-25 10:31  0%                         ` Dmitry Kozlyuk
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-01-25 10:11 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: Thomas Monjalon, dev, Stephen Hemminger, David Marchand,
	Maxime Coquelin, Aaron Conole, Bruce Richardson, ferruh.yigit,
	ray.kinsella



On 25/01/2021 10:05, Dmitry Kozlyuk wrote:
> On Mon, 25 Jan 2021 09:25:51 +0000, Kinsella, Ray wrote:
>> On 23/01/2021 11:38, Thomas Monjalon wrote:
>>> 22/01/2021 23:24, Dmitry Kozlyuk:  
>>>> On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:  
>>>>> 22/01/2021 21:31, Dmitry Kozlyuk:  
>>>>>> On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:    
>>>>>>> 20/01/2021 08:23, Dmitry Kozlyuk:    
>>>>>>>> On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:      
>>>>>>>>> This is now the right timeframe to introduce this change
>>>>>>>>> with the new Python module dependency.
>>>>>>>>> Unfortunately, the ABI check is returning an issue:
>>>>>>>>>
>>>>>>>>> 'const char mlx5_common_pci_pmd_info[62]' was changed
>>>>>>>>> to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c      
>>>>>>>>
>>>>>>>> Will investigate and fix ASAP.    
>>>>>>
>>>>>> Now that I think of it: strings like this change every time new PCI IDs are
>>>>>> added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
>>>>>> is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
>>>>>> added 2020-07-08, i.e. clearly outside of ABI change window.    
>>>>>
>>>>> You're right.
>>>>>  
>>>>>> "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
>>>>>> which can be worked around easily, if the above is wrong.    
>>>>>
>>>>> If the new format is better, please keep it.
>>>>> What we need is an exception for the pmdinfo symbols
>>>>> in the file devtools/libabigail.abignore.
>>>>> You can probably use a regex for these symbols.  
>>>>
>>>> This would allow real breakages to pass ABI check, abidiff doesn't analyze
>>>> variable content and it's not easy to compare. Maybe later a script can be
>>>> added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
>>>> 5494 relevant commits between 19.11 and 20.11, though.
>>>>
>>>> To verify there are no meaningful changes I ensured empty diff between
>>>> results of the following command for "main" and the branch:
>>>>
>>>> 	find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py  
>>>
>>> For now we cannot do such check as part of the ABI checker.
>>> And we cannot merge this patch if the ABI check fails.
>>> I think the only solution is to allow any change in the pmdinfo variables.
>>>   
>>
>> So my 2c on this is that this is an acceptable work-around for the v21 (DPDK v20.11) ABI.
>> However we are going to end up carrying this rule in libabigail.ignore indefinitely.
>>
>> Would it make sense to just fix the size of _pmd_info to some reasonably large value - 
>> say 128 bytes, to allow us to drop the rule in the DPDK 21.11 v22 release?
> 
> I don't think so. This is a JSON *string to be parsed;* considering its size
> as part of application *binary* interface is wrong in the first place.

Right - then it belongs in INTERNAL, I would say.

> As for
> content, checking that no PCI IDs are removed is out of scope for libabigail
> anyway. 

Lets be clear PCI IDs - are _nothing_ to do with ABI.

> Technically we could fix _pmd_info size, but this still allows
> breaking changes to pass the check with no benefit.

ABI changes or other, please explain?


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-25  9:25  3%                   ` Kinsella, Ray
  2021-01-25 10:01  0%                     ` Kinsella, Ray
@ 2021-01-25 10:05  0%                     ` Dmitry Kozlyuk
  2021-01-25 10:11  4%                       ` Kinsella, Ray
  1 sibling, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-01-25 10:05 UTC (permalink / raw)
  To: Kinsella, Ray
  Cc: Thomas Monjalon, dev, Stephen Hemminger, David Marchand,
	Maxime Coquelin, Aaron Conole, Bruce Richardson, ferruh.yigit,
	ray.kinsella

On Mon, 25 Jan 2021 09:25:51 +0000, Kinsella, Ray wrote:
> On 23/01/2021 11:38, Thomas Monjalon wrote:
> > 22/01/2021 23:24, Dmitry Kozlyuk:  
> >> On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:  
> >>> 22/01/2021 21:31, Dmitry Kozlyuk:  
> >>>> On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:    
> >>>>> 20/01/2021 08:23, Dmitry Kozlyuk:    
> >>>>>> On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:      
> >>>>>>> This is now the right timeframe to introduce this change
> >>>>>>> with the new Python module dependency.
> >>>>>>> Unfortunately, the ABI check is returning an issue:
> >>>>>>>
> >>>>>>> 'const char mlx5_common_pci_pmd_info[62]' was changed
> >>>>>>> to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c      
> >>>>>>
> >>>>>> Will investigate and fix ASAP.    
> >>>>
> >>>> Now that I think of it: strings like this change every time new PCI IDs are
> >>>> added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
> >>>> is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
> >>>> added 2020-07-08, i.e. clearly outside of ABI change window.    
> >>>
> >>> You're right.
> >>>  
> >>>> "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
> >>>> which can be worked around easily, if the above is wrong.    
> >>>
> >>> If the new format is better, please keep it.
> >>> What we need is an exception for the pmdinfo symbols
> >>> in the file devtools/libabigail.abignore.
> >>> You can probably use a regex for these symbols.  
> >>
> >> This would allow real breakages to pass ABI check, abidiff doesn't analyze
> >> variable content and it's not easy to compare. Maybe later a script can be
> >> added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
> >> 5494 relevant commits between 19.11 and 20.11, though.
> >>
> >> To verify there are no meaningful changes I ensured empty diff between
> >> results of the following command for "main" and the branch:
> >>
> >> 	find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py  
> > 
> > For now we cannot do such check as part of the ABI checker.
> > And we cannot merge this patch if the ABI check fails.
> > I think the only solution is to allow any change in the pmdinfo variables.
> >   
> 
> So my 2c on this is that this is an acceptable work-around for the v21 (DPDK v20.11) ABI.
> However we are going to end up carrying this rule in libabigail.ignore indefinitely.
> 
> Would it make sense to just fix the size of _pmd_info to some reasonably large value - 
> say 128 bytes, to allow us to drop the rule in the DPDK 21.11 v22 release?

I don't think so. This is a JSON *string to be parsed;* considering its size
as part of application *binary* interface is wrong in the first place. As for
content, checking that no PCI IDs are removed is out of scope for libabigail
anyway. Technically we could fix _pmd_info size, but this still allows
breaking changes to pass the check with no benefit.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-25  9:25  3%                   ` Kinsella, Ray
@ 2021-01-25 10:01  0%                     ` Kinsella, Ray
  2021-01-25 10:29  3%                       ` David Marchand
  2021-01-25 10:05  0%                     ` Dmitry Kozlyuk
  1 sibling, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-01-25 10:01 UTC (permalink / raw)
  To: dev



On 25/01/2021 09:25, Kinsella, Ray wrote:
> 
> 
> On 23/01/2021 11:38, Thomas Monjalon wrote:
>> 22/01/2021 23:24, Dmitry Kozlyuk:
>>> On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:
>>>> 22/01/2021 21:31, Dmitry Kozlyuk:
>>>>> On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:  
>>>>>> 20/01/2021 08:23, Dmitry Kozlyuk:  
>>>>>>> On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:    
>>>>>>>> This is now the right timeframe to introduce this change
>>>>>>>> with the new Python module dependency.
>>>>>>>> Unfortunately, the ABI check is returning an issue:
>>>>>>>>
>>>>>>>> 'const char mlx5_common_pci_pmd_info[62]' was changed
>>>>>>>> to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c    
>>>>>>>
>>>>>>> Will investigate and fix ASAP.  
>>>>>
>>>>> Now that I think of it: strings like this change every time new PCI IDs are
>>>>> added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
>>>>> is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
>>>>> added 2020-07-08, i.e. clearly outside of ABI change window.  
>>>>
>>>> You're right.
>>>>
>>>>> "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
>>>>> which can be worked around easily, if the above is wrong.  
>>>>
>>>> If the new format is better, please keep it.
>>>> What we need is an exception for the pmdinfo symbols
>>>> in the file devtools/libabigail.abignore.
>>>> You can probably use a regex for these symbols.
>>>
>>> This would allow real breakages to pass ABI check, abidiff doesn't analyze
>>> variable content and it's not easy to compare. Maybe later a script can be
>>> added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
>>> 5494 relevant commits between 19.11 and 20.11, though.
>>>
>>> To verify there are no meaningful changes I ensured empty diff between
>>> results of the following command for "main" and the branch:
>>>
>>> 	find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py
>>
>> For now we cannot do such check as part of the ABI checker.
>> And we cannot merge this patch if the ABI check fails.
>> I think the only solution is to allow any change in the pmdinfo variables.
>>
> 
> So my 2c on this is that this is an acceptable work-around for the v21 (DPDK v20.11) ABI.
> However we are going to end up carrying this rule in libabigail.ignore indefinitely.
> 
> Would it make sense to just fix the size of _pmd_info to some reasonably large value - 
> say 128 bytes, to allow us to drop the rule in the DPDK 21.11 v22 release?
> 
> Ray K


Another point is - shouldn't _pmd_info probably live in "INTERNAL" in any case?

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-23 11:38  4%                 ` Thomas Monjalon
  2021-01-24 20:52  3%                   ` Dmitry Kozlyuk
@ 2021-01-25  9:25  3%                   ` Kinsella, Ray
  2021-01-25 10:01  0%                     ` Kinsella, Ray
  2021-01-25 10:05  0%                     ` Dmitry Kozlyuk
  1 sibling, 2 replies; 200+ results
From: Kinsella, Ray @ 2021-01-25  9:25 UTC (permalink / raw)
  To: Thomas Monjalon, Dmitry Kozlyuk
  Cc: dev, Stephen Hemminger, David Marchand, Maxime Coquelin,
	Aaron Conole, Bruce Richardson, ferruh.yigit, ray.kinsella



On 23/01/2021 11:38, Thomas Monjalon wrote:
> 22/01/2021 23:24, Dmitry Kozlyuk:
>> On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:
>>> 22/01/2021 21:31, Dmitry Kozlyuk:
>>>> On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:  
>>>>> 20/01/2021 08:23, Dmitry Kozlyuk:  
>>>>>> On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:    
>>>>>>> This is now the right timeframe to introduce this change
>>>>>>> with the new Python module dependency.
>>>>>>> Unfortunately, the ABI check is returning an issue:
>>>>>>>
>>>>>>> 'const char mlx5_common_pci_pmd_info[62]' was changed
>>>>>>> to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c    
>>>>>>
>>>>>> Will investigate and fix ASAP.  
>>>>
>>>> Now that I think of it: strings like this change every time new PCI IDs are
>>>> added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
>>>> is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
>>>> added 2020-07-08, i.e. clearly outside of ABI change window.  
>>>
>>> You're right.
>>>
>>>> "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
>>>> which can be worked around easily, if the above is wrong.  
>>>
>>> If the new format is better, please keep it.
>>> What we need is an exception for the pmdinfo symbols
>>> in the file devtools/libabigail.abignore.
>>> You can probably use a regex for these symbols.
>>
>> This would allow real breakages to pass ABI check, abidiff doesn't analyze
>> variable content and it's not easy to compare. Maybe later a script can be
>> added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
>> 5494 relevant commits between 19.11 and 20.11, though.
>>
>> To verify there are no meaningful changes I ensured empty diff between
>> results of the following command for "main" and the branch:
>>
>> 	find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py
> 
> For now we cannot do such check as part of the ABI checker.
> And we cannot merge this patch if the ABI check fails.
> I think the only solution is to allow any change in the pmdinfo variables.
> 

So my 2c on this is that this is an acceptable work-around for the v21 (DPDK v20.11) ABI.
However we are going to end up carrying this rule in libabigail.ignore indefinitely.

Would it make sense to just fix the size of _pmd_info to some reasonably large value - 
say 128 bytes, to allow us to drop the rule in the DPDK 21.11 v22 release?
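
For illustration, a rough sketch of what that fixed-size variant could look
like in the generated code (the 128-byte size and the truncated JSON are only
assumptions from this discussion, not an existing definition):

    /* sketch: pad the generated info string to a fixed-size array so its
     * length no longer changes when PCI IDs are added */
    const char mlx5_common_pci_pmd_info[128] __attribute__((used)) =
            "PMD_INFO_STRING= { ... }";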

Ray K

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-23 11:38  4%                 ` Thomas Monjalon
@ 2021-01-24 20:52  3%                   ` Dmitry Kozlyuk
  2021-01-25  9:25  3%                   ` Kinsella, Ray
  1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-01-24 20:52 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Stephen Hemminger, David Marchand, Maxime Coquelin,
	Aaron Conole, Bruce Richardson, ferruh.yigit, ray.kinsella, mdr

On Sat, 23 Jan 2021 12:38:45 +0100, Thomas Monjalon wrote:
> 22/01/2021 23:24, Dmitry Kozlyuk:
> > On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:  
> > > 22/01/2021 21:31, Dmitry Kozlyuk:  
> > > > On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:    
> > > > > 20/01/2021 08:23, Dmitry Kozlyuk:    
> > > > > > On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:      
> > > > > > > This is now the right timeframe to introduce this change
> > > > > > > with the new Python module dependency.
> > > > > > > Unfortunately, the ABI check is returning an issue:
> > > > > > > 
> > > > > > > 'const char mlx5_common_pci_pmd_info[62]' was changed
> > > > > > > to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c      
> > > > > > 
> > > > > > Will investigate and fix ASAP.    
> > > > 
> > > > Now that I think of it: strings like this change every time new PCI IDs are
> > > > added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
> > > > is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
> > > > added 2020-07-08, i.e. clearly outside of ABI change window.    
> > > 
> > > You're right.
> > >   
> > > > "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
> > > > which can be worked around easily, if the above is wrong.    
> > > 
> > > If the new format is better, please keep it.
> > > What we need is an exception for the pmdinfo symbols
> > > in the file devtools/libabigail.abignore.
> > > You can probably use a regex for these symbols.  
> > 
> > This would allow real breakages to pass ABI check, abidiff doesn't analyze
> > variable content and it's not easy to compare. Maybe later a script can be
> > added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
> > 5494 relevant commits between 19.11 and 20.11, though.
> > 
> > To verify there are no meaningful changes I ensured empty diff between
> > results of the following command for "main" and the branch:
> > 
> > 	find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py  
> 
> For now we cannot do such check as part of the ABI checker.
> And we cannot merge this patch if the ABI check fails.
> I think the only solution is to allow any change in the pmdinfo variables.

Sent v10 with the suppression.

Such a check, however, *can* be implemented: at the ABI check stage we have two
install directories that dpdk-pmdinfo.py can inspect. A script could then check
that the diff contains only additions, i.e. no device support being removed.
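
A rough sketch of such a post-check (the directory variables and the exact
dpdk-pmdinfo.py invocation are assumptions, not an existing script):

    #!/bin/sh
    # Dump PMD info from every driver .so in the reference and new install
    # trees, then fail only if an entry present in the reference disappears.
    dump() {
            find "$1" -name '*.so' \
                    -exec usertools/dpdk-pmdinfo.py {} \; | sort
    }
    dump "$abi_ref_dir" > ref.txt
    dump "$abi_new_dir" > new.txt
    # lines only in ref.txt mean device support was removed
    if comm -23 ref.txt new.txt | grep -q .; then
            exit 1
    fi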

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v10 2/3] build: use Python pmdinfogen
  2021-01-24 20:51  3%     ` [dpdk-dev] [PATCH v10 " Dmitry Kozlyuk
@ 2021-01-24 20:51  2%       ` Dmitry Kozlyuk
  0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-01-24 20:51 UTC (permalink / raw)
  To: dev
  Cc: Maxime Coquelin, Bruce Richardson, Thomas Monjalon,
	Dmitry Kozlyuk, Aaron Conole, Michael Santana, Ray Kinsella,
	Neil Horman

Use the same interpreter to run pmdinfogen as for other build scripts.
Adjust wrapper script accordingly and also don't suppress stderr from ar
and pmdinfogen. Add configure-time check for elftools Python module for
Unix hosts.

Add pyelftools to CI configuration and build requirements for Linux and
FreeBSD. Windows targets are not currently using pmdinfogen.

Suppress ABI warnings about generated PMD information strings.

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 .github/workflows/build.yml           |  4 ++--
 .travis.yml                           |  2 +-
 buildtools/gen-pmdinfo-cfile.sh       |  6 +++---
 buildtools/meson.build                | 15 +++++++++++++++
 devtools/libabigail.abignore          |  4 ++++
 doc/guides/freebsd_gsg/build_dpdk.rst |  3 ++-
 doc/guides/linux_gsg/sys_reqs.rst     |  6 ++++++
 drivers/meson.build                   |  2 +-
 meson.build                           |  1 -
 9 files changed, 34 insertions(+), 9 deletions(-)

diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 0b72df0eb..a5b579add 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -91,8 +91,8 @@ jobs:
       run: sudo apt update
     - name: Install packages
       run: sudo apt install -y ccache libnuma-dev python3-setuptools
-        python3-wheel python3-pip ninja-build libbsd-dev libpcap-dev
-        libibverbs-dev libcrypto++-dev libfdt-dev libjansson-dev
+        python3-wheel python3-pip python3-pyelftools ninja-build libbsd-dev
+        libpcap-dev libibverbs-dev libcrypto++-dev libfdt-dev libjansson-dev
     - name: Install libabigail build dependencies if no cache is available
       if: env.ABI_CHECKS == 'true' && steps.libabigail-cache.outputs.cache-hit != 'true'
       run: sudo apt install -y autoconf automake libtool pkg-config libxml2-dev
diff --git a/.travis.yml b/.travis.yml
index 5aa7ad49f..4391af1d5 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -14,7 +14,7 @@ addons:
   apt:
     update: true
     packages: &required_packages
-      - [libnuma-dev, python3-setuptools, python3-wheel, python3-pip, ninja-build]
+      - [libnuma-dev, python3-setuptools, python3-wheel, python3-pip, python3-pyelftools, ninja-build]
       - [libbsd-dev, libpcap-dev, libibverbs-dev, libcrypto++-dev, libfdt-dev, libjansson-dev]
 
 _aarch64_packages: &aarch64_packages
diff --git a/buildtools/gen-pmdinfo-cfile.sh b/buildtools/gen-pmdinfo-cfile.sh
index 43059cf36..109ee461e 100755
--- a/buildtools/gen-pmdinfo-cfile.sh
+++ b/buildtools/gen-pmdinfo-cfile.sh
@@ -4,11 +4,11 @@
 
 arfile=$1
 output=$2
-pmdinfogen=$3
+shift 2
+pmdinfogen=$*
 
 # The generated file must not be empty if compiled in pedantic mode
 echo 'static __attribute__((unused)) const char *generator = "'$0'";' > $output
 for ofile in `ar t $arfile` ; do
-	ar p $arfile $ofile | $pmdinfogen - - >> $output 2> /dev/null
+	ar p $arfile $ofile | $pmdinfogen - - >> $output
 done
-exit 0
diff --git a/buildtools/meson.build b/buildtools/meson.build
index 04808dabc..dd4c0f640 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -17,3 +17,18 @@ else
 endif
 map_to_win_cmd = py3 + files('map_to_win.py')
 sphinx_wrapper = py3 + files('call-sphinx-build.py')
+pmdinfogen = py3 + files('pmdinfogen.py')
+
+# TODO: starting from Meson 0.51.0 use
+# 	python3 = import('python').find_installation('python',
+#		modules : python3_required_modules)
+python3_required_modules = []
+if host_machine.system() != 'windows'
+	python3_required_modules = ['elftools']
+endif
+foreach module : python3_required_modules
+	script = 'import importlib.util; import sys; exit(importlib.util.find_spec("@0@") is None)'
+	if run_command(py3, '-c', script.format(module)).returncode() != 0
+		error('missing python module: @0@'.format(module))
+	endif
+endforeach
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 1dc84fa74..05afccc1a 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -16,3 +16,7 @@
 [suppress_type]
         name = rte_cryptodev
         has_data_member_inserted_between = {0, 1023}
+
+; Ignore all changes in generated PMD information strings.
+[suppress_variable]
+        name_regex = _pmd_info$
diff --git a/doc/guides/freebsd_gsg/build_dpdk.rst b/doc/guides/freebsd_gsg/build_dpdk.rst
index e3005a7f3..bed353473 100644
--- a/doc/guides/freebsd_gsg/build_dpdk.rst
+++ b/doc/guides/freebsd_gsg/build_dpdk.rst
@@ -14,10 +14,11 @@ The following FreeBSD packages are required to build DPDK:
 * meson
 * ninja
 * pkgconf
+* py37-pyelftools
 
 These can be installed using (as root)::
 
-  pkg install meson pkgconf
+  pkg install meson pkgconf py37-pyelftools
 
 To compile the required kernel modules for memory management and working
 with physical NIC devices, the kernel sources for FreeBSD also
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index be714adf2..a05b5bd81 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -52,6 +52,12 @@ Compilation of the DPDK
     * If the packaged version is below the minimum version, the latest versions
       can be installed from Python's "pip" repository: ``pip3 install meson ninja``
 
+*   ``pyelftools`` (version 0.22+)
+
+    * For RHEL/Fedora systems it can be installed using ``dnf install python-pyelftools``
+
+    * For Ubuntu/Debian it can be installed using ``apt install python3-pyelftools``
+
 *   Library for handling NUMA (Non Uniform Memory Access).
 
     * ``numactl-devel`` in RHEL/Fedora;
diff --git a/drivers/meson.build b/drivers/meson.build
index 77f65fa90..ff5cdb952 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -132,7 +132,7 @@ foreach subpath:subdirs
 						command: [pmdinfo, tmp_lib.full_path(),
 							'@OUTPUT@', pmdinfogen],
 						output: out_filename,
-						depends: [pmdinfogen, tmp_lib])
+						depends: [tmp_lib])
 			endif
 
 			# now build the static driver
diff --git a/meson.build b/meson.build
index 45d974cd2..2b9c37eb4 100644
--- a/meson.build
+++ b/meson.build
@@ -45,7 +45,6 @@ subdir('buildtools')
 subdir('config')
 
 # build libs and drivers
-subdir('buildtools/pmdinfogen')
 subdir('lib')
 subdir('drivers')
 
-- 
2.29.2


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v10 0/3] pmdinfogen: rewrite in Python
  2021-01-22 22:43  3%   ` [dpdk-dev] [PATCH v9 0/3] pmdinfogen: rewrite in Python Dmitry Kozlyuk
@ 2021-01-24 20:51  3%     ` Dmitry Kozlyuk
  2021-01-24 20:51  2%       ` [dpdk-dev] [PATCH v10 2/3] build: use Python pmdinfogen Dmitry Kozlyuk
  0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-01-24 20:51 UTC (permalink / raw)
  To: dev
  Cc: Maxime Coquelin, Bruce Richardson, Thomas Monjalon,
	Dmitry Kozlyuk, Neil Horman, Jie Zhou

This patchset implements existing pmdinfogen logic in Python, replaces
and removes the old code. The goals of rewriting are:

* easier maintenance by using a more high-level language,
* simpler build process without host application and libelf,
* foundation for adding Windows support.

Identity of generated PMD information is checked by comparing
output of pmdinfo before and after the patch:

    find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py

Acked-by: Neil Horman <nhorman@tuxdriver.com>
Tested-by: Jie Zhou <jizh@linux.microsoft.com>

---
Changes in v10:

    * Suppress ABI warnings for generated strings (Thomas).

Dmitry Kozlyuk (3):
  pmdinfogen: add Python implementation
  build: use Python pmdinfogen
  pmdinfogen: remove C implementation

 .github/workflows/build.yml           |   4 +-
 .travis.yml                           |   2 +-
 MAINTAINERS                           |   3 +-
 buildtools/gen-pmdinfo-cfile.sh       |   6 +-
 buildtools/meson.build                |  15 +
 buildtools/pmdinfogen.py              | 189 +++++++++++
 buildtools/pmdinfogen/meson.build     |  14 -
 buildtools/pmdinfogen/pmdinfogen.c    | 456 --------------------------
 buildtools/pmdinfogen/pmdinfogen.h    | 119 -------
 devtools/libabigail.abignore          |   4 +
 doc/guides/freebsd_gsg/build_dpdk.rst |   3 +-
 doc/guides/linux_gsg/sys_reqs.rst     |   6 +
 drivers/meson.build                   |   2 +-
 meson.build                           |   1 -
 14 files changed, 225 insertions(+), 599 deletions(-)
 create mode 100755 buildtools/pmdinfogen.py
 delete mode 100644 buildtools/pmdinfogen/meson.build
 delete mode 100644 buildtools/pmdinfogen/pmdinfogen.c
 delete mode 100644 buildtools/pmdinfogen/pmdinfogen.h

-- 
2.29.2


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev
  2021-01-22 13:12  4%         ` Kinsella, Ray
@ 2021-01-24 11:58  4%           ` Dodji Seketeli
  0 siblings, 0 replies; 200+ results
From: Dodji Seketeli @ 2021-01-24 11:58 UTC (permalink / raw)
  To: Kinsella, Ray
  Cc: Dodji Seketeli, Thomas Monjalon, Neil Horman, Akhil Goyal,
	Konstantin Ananyev, Abhinandan Gujjar, dev, david.marchand

"Kinsella, Ray" <mdr@ashroe.eu> writes:

> On 22/01/2021 13:09, Dodji Seketeli wrote:
>> Thomas Monjalon <thomas@monjalon.net> writes:
>> 
>> [...]
>> 
>>>>> Then I've added (quickly) a libabigail exception rule:
>>>>>
>>>>> [suppress_type]
>>>>> 	name = rte_cryptodev
>>>>> 	has_data_member_inserted_between = {0, 1023}
>>>>>
>>>>> Now we want to improve this rule to restrict the offsets
>>>>> to the padding at the end of the struct only,
>>>>> so we keep forbidding changes in existing fields,
>>>>> and forbidding additions further the current struct size.
>>>>> Is this new rule good?
>>>>>
>>>>> 	has_data_member_inserted_between = {offset_after(attached), end}
>>>>
>>>>
>>>> Yes, this rule should do what you think it says.
>>>>
>>>>> Do you confirm that the keyword "end" means the old reference size?
>>>>
>>>> Yes I do.
>>>>
>>>>
>>>>> What else do we need to check for adding a new field in a padding?
>>>>
>>>> Actually, that rule will work independantly of it there is enough
>>>> padding or not.  It'll shut down the change report, even if the added
>>>> data exceeds the padding.
>>>
>>> I don't understand why.
>>> If "end" means the old reference size, then addition after the old size
>>> should be reported, isn't it?
>> 
>> Yes, you are right.
>> 
>> What I meant is that even if (in an hypothetical case, not yours) the
>> padding was so "small" that it wasn't going up to the 'end' of the
>> struct, that rule would have still shut down the change report.
>
> Understood - you are talking about padding between members.

Exactly.
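
For reference, the tightened rule discussed above would then read (a sketch
assembled from the quoted discussion, assuming 'attached' is the last member
before the padding):

    [suppress_type]
            name = rte_cryptodev
            has_data_member_inserted_between = {offset_after(attached), end}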

Cheers,

-- 
		Dodji


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-22 22:24  3%               ` Dmitry Kozlyuk
@ 2021-01-23 11:38  4%                 ` Thomas Monjalon
  2021-01-24 20:52  3%                   ` Dmitry Kozlyuk
  2021-01-25  9:25  3%                   ` Kinsella, Ray
  0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2021-01-23 11:38 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: dev, Stephen Hemminger, David Marchand, Maxime Coquelin,
	Aaron Conole, Bruce Richardson, ferruh.yigit, ray.kinsella, mdr

22/01/2021 23:24, Dmitry Kozlyuk:
> On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:
> > 22/01/2021 21:31, Dmitry Kozlyuk:
> > > On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:  
> > > > 20/01/2021 08:23, Dmitry Kozlyuk:  
> > > > > On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:    
> > > > > > This is now the right timeframe to introduce this change
> > > > > > with the new Python module dependency.
> > > > > > Unfortunately, the ABI check is returning an issue:
> > > > > > 
> > > > > > 'const char mlx5_common_pci_pmd_info[62]' was changed
> > > > > > to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c    
> > > > > 
> > > > > Will investigate and fix ASAP.  
> > > 
> > > Now that I think of it: strings like this change every time new PCI IDs are
> > > added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
> > > is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
> > > added 2020-07-08, i.e. clearly outside of ABI change window.  
> > 
> > You're right.
> > 
> > > "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
> > > which can be worked around easily, if the above is wrong.  
> > 
> > If the new format is better, please keep it.
> > What we need is an exception for the pmdinfo symbols
> > in the file devtools/libabigail.abignore.
> > You can probably use a regex for these symbols.
> 
> This would allow real breakages to pass ABI check, abidiff doesn't analyze
> variable content and it's not easy to compare. Maybe later a script can be
> added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
> 5494 relevant commits between 19.11 and 20.11, though.
> 
> To verify there are no meaningful changes I ensured empty diff between
> results of the following command for "main" and the branch:
> 
> 	find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py

For now we cannot do such check as part of the ABI checker.
And we cannot merge this patch if the ABI check fails.
I think the only solution is to allow any change in the pmdinfo variables.




^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v9 0/3] pmdinfogen: rewrite in Python
    @ 2021-01-22 22:43  3%   ` Dmitry Kozlyuk
  2021-01-24 20:51  3%     ` [dpdk-dev] [PATCH v10 " Dmitry Kozlyuk
  1 sibling, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-01-22 22:43 UTC (permalink / raw)
  To: dev
  Cc: Maxime Coquelin, Bruce Richardson, Thomas Monjalon,
	Dmitry Kozlyuk, Neil Horman, Jie Zhou

This patchset implements existing pmdinfogen logic in Python, replaces
and removes the old code. The goals of rewriting are:

* easier maintenance by using a more high-level language,
* simpler build process without host application and libelf,
* foundation for adding Windows support.

Canonical JSON formatting of generated strings raises ABI warnings.
There are no meaningful changes, which can be checked by comparing
output of pmdinfo before and after the patch:

    find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py

Acked-by: Neil Horman <nhorman@tuxdriver.com>
Tested-by: Jie Zhou <jizh@linux.microsoft.com>

---
Changes in v9:

    * Document pyelftools requirement for FreeBSD (Thomas).
    * Add pyelftools to GitHub workflow.

Dmitry Kozlyuk (3):
  pmdinfogen: add Python implementation
  build: use Python pmdinfogen
  pmdinfogen: remove C implementation

 .github/workflows/build.yml           |   4 +-
 .travis.yml                           |   2 +-
 MAINTAINERS                           |   3 +-
 buildtools/gen-pmdinfo-cfile.sh       |   6 +-
 buildtools/meson.build                |  15 +
 buildtools/pmdinfogen.py              | 189 +++++++++++
 buildtools/pmdinfogen/meson.build     |  14 -
 buildtools/pmdinfogen/pmdinfogen.c    | 456 --------------------------
 buildtools/pmdinfogen/pmdinfogen.h    | 119 -------
 doc/guides/freebsd_gsg/build_dpdk.rst |   3 +-
 doc/guides/linux_gsg/sys_reqs.rst     |   6 +
 drivers/meson.build                   |   2 +-
 meson.build                           |   1 -
 13 files changed, 221 insertions(+), 599 deletions(-)
 create mode 100755 buildtools/pmdinfogen.py
 delete mode 100644 buildtools/pmdinfogen/meson.build
 delete mode 100644 buildtools/pmdinfogen/pmdinfogen.c
 delete mode 100644 buildtools/pmdinfogen/pmdinfogen.h

-- 
2.29.2


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-22 20:57  0%             ` Thomas Monjalon
@ 2021-01-22 22:24  3%               ` Dmitry Kozlyuk
  2021-01-23 11:38  4%                 ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-01-22 22:24 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Stephen Hemminger, David Marchand, Maxime Coquelin,
	Aaron Conole, Bruce Richardson, ferruh.yigit, ray.kinsella

On Fri, 22 Jan 2021 21:57:15 +0100, Thomas Monjalon wrote:
> 22/01/2021 21:31, Dmitry Kozlyuk:
> > On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:  
> > > 20/01/2021 08:23, Dmitry Kozlyuk:  
> > > > On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:    
> > > > > This is now the right timeframe to introduce this change
> > > > > with the new Python module dependency.
> > > > > Unfortunately, the ABI check is returning an issue:
> > > > > 
> > > > > 'const char mlx5_common_pci_pmd_info[62]' was changed
> > > > > to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c    
> > > > 
> > > > Will investigate and fix ASAP.  
> > 
> > Now that I think of it: strings like this change every time new PCI IDs are
> > added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
> > is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
> > added 2020-07-08, i.e. clearly outside of ABI change window.  
> 
> You're right.
> 
> > "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
> > which can be worked around easily, if the above is wrong.  
> 
> If the new format is better, please keep it.
> What we need is an exception for the pmdinfo symbols
> in the file devtools/libabigail.abignore.
> You can probably use a regex for these symbols.

This would allow real breakages to pass ABI check, abidiff doesn't analyze
variable content and it's not easy to compare. Maybe later a script can be
added that checks lines with RTE_DEVICE_IN in patches. There are at most 32 of
5494 relevant commits between 19.11 and 20.11, though.

To verify there are no meaningful changes I ensured empty diff between
results of the following command for "main" and the branch:

	find build/drivers -name '*.so' -exec usertools/dpdk-pmdinfo.py

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-22 20:31  4%           ` Dmitry Kozlyuk
@ 2021-01-22 20:57  0%             ` Thomas Monjalon
  2021-01-22 22:24  3%               ` Dmitry Kozlyuk
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-22 20:57 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: dev, Stephen Hemminger, David Marchand, Maxime Coquelin,
	Aaron Conole, Bruce Richardson, ferruh.yigit, ray.kinsella

22/01/2021 21:31, Dmitry Kozlyuk:
> On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:
> > 20/01/2021 08:23, Dmitry Kozlyuk:
> > > On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:  
> > > > This is now the right timeframe to introduce this change
> > > > with the new Python module dependency.
> > > > Unfortunately, the ABI check is returning an issue:
> > > > 
> > > > 'const char mlx5_common_pci_pmd_info[62]' was changed
> > > > to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c  
> > > 
> > > Will investigate and fix ASAP.
> 
> Now that I think of it: strings like this change every time new PCI IDs are
> added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
> is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
> added 2020-07-08, i.e. clearly outside of ABI change window.

You're right.

> "xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
> which can be worked around easily, if the above is wrong.

If the new format is better, please keep it.
What we need is an exception for the pmdinfo symbols
in the file devtools/libabigail.abignore.
You can probably use a regex for these symbols.


> > > > > --- a/meson.build
> > > > > +++ b/meson.build
> > > > > -subdir('buildtools/pmdinfogen')    
> > > > 
> > > > This could be in patch 3 (removing the code).  
> > > 
> > > It would redefine "pmdinfogen" variable to old pmdinfogen.
> > > Besides, why build what's not used at this patch already?  
> > 
> > Just trying to find the best patch split.
> > If needed, OK to keep as is.
> 
> I even don't mind squashing all three commits into one.
> The split is done to ease the review.

I think the split is good as is.





^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-20 10:24  0%         ` Thomas Monjalon
@ 2021-01-22 20:31  4%           ` Dmitry Kozlyuk
  2021-01-22 20:57  0%             ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-01-22 20:31 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Stephen Hemminger, David Marchand, Maxime Coquelin,
	Aaron Conole, Bruce Richardson, ferruh.yigit

On Wed, 20 Jan 2021 11:24:21 +0100, Thomas Monjalon wrote:
> 20/01/2021 08:23, Dmitry Kozlyuk:
> > On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:  
> > > This is now the right timeframe to introduce this change
> > > with the new Python module dependency.
> > > Unfortunately, the ABI check is returning an issue:
> > > 
> > > 'const char mlx5_common_pci_pmd_info[62]' was changed
> > > to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c  
> > 
> > Will investigate and fix ASAP.

Now that I think of it: strings like this change every time new PCI IDs are
added to a PMD, but AFAIK adding PCI IDs is not considered an ABI breakage,
is it? One example is 28c9a7d7b48e ("net/mlx5: add ConnectX-6 Lx device ID")
added 2020-07-08, i.e. clearly outside of ABI change window.

"xxx_pmd_info" changes are due to JSON formatting (new is more canonical),
which can be worked around easily, if the above is wrong.

> > > > --- a/meson.build
> > > > +++ b/meson.build
> > > > -subdir('buildtools/pmdinfogen')    
> > > 
> > > This could be in patch 3 (removing the code).  
> > 
> > It would redefine "pmdinfogen" variable to old pmdinfogen.
> > Besides, why build what's not used at this patch already?  
> 
> Just trying to find the best patch split.
> If needed, OK to keep as is.

I even don't mind squashing all three commits into one.
The split is done to ease the review.


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v20 0/4] Add PMD power management
  2021-01-20 11:50  3%   ` [dpdk-dev] [PATCH v19 0/4] " Anatoly Burakov
@ 2021-01-22 17:12  3%     ` Anatoly Burakov
  0 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2021-01-22 17:12 UTC (permalink / raw)
  To: dev; +Cc: thomas

This patchset proposes a simple API for Ethernet drivers to cause the
CPU to enter a power-optimized state while waiting for packets to
arrive. There are multiple proposed mechanisms to achieve said power
savings: simple frequency scaling, idle loop, and monitoring the Rx
queue for incoming packets. The latter is achieved through cooperation
with the NIC driver, which lets us know the address of the wake-up event
and wait for writes on that address.

To achieve power savings, a very simple mechanism is used: we count empty
polls, and once a certain threshold is reached, we employ one of the
suggested power management schemes automatically, from within an Rx
callback inside the PMD. Once there's traffic again, the empty poll
counter is reset.

Why are we putting it into ethdev as opposed to leaving this up to the
application? Our customers specifically requested a way to do it with
minimal changes to the application code. The current approach allows the
application to just flip a switch and automatically get power savings.
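
To make this concrete, below is a minimal usage sketch (not part of the
patches themselves) of how an application would flip that switch for one Rx
queue. It assumes the function and enum names introduced by this series
(rte_power_ethdev_pmgmt_queue_enable() and RTE_POWER_MGMT_TYPE_MONITOR),
which may still change during review:

	/* Minimal sketch: enable PMD power management on one Rx queue.
	 * Assumes the API from this series; called after the port and queue
	 * have been set up, on the lcore that will poll this queue. */
	#include <rte_lcore.h>
	#include <rte_power_pmd_mgmt.h>

	static int
	enable_rx_queue_power_mgmt(uint16_t port_id, uint16_t queue_id)
	{
		unsigned int lcore_id = rte_lcore_id();

		return rte_power_ethdev_pmgmt_queue_enable(lcore_id, port_id,
				queue_id, RTE_POWER_MGMT_TYPE_MONITOR);
	}

On teardown, the matching rte_power_ethdev_pmgmt_queue_disable() call is made
before the port is closed (see the v20 note below about moving the callback
removal before port close).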

Things of note:

- Only 1:1 core to queue mapping is supported, meaning that each lcore 
  must at most handle RX on a single queue
- Three policy types are supported: Monitor, Pause and Frequency Scaling
- Power management is enabled per-queue
- The API doesn't extend to other device types

v20:
- Moved callback removal before port close

v19:
- Renamed "data_sz" to "size" and clarified struct comments
- Clarified documentation around rte_power_monitor/pause API

v18:
- Rebase on top of latest main
- Address review comments by Thomas

v17:
- Added exception for ethdev driver-only ABI
- Added memory barriers for monitor/wakeup (Konstantin)
- Fixed compile issues on non-x86 platforms (hopefully!)

v16:
- Implemented Konstantin's suggestions and comments
- Added return values to the API

v15:
- Fixed incorrect check in UMWAIT callback
- Fixed accidental whitespace changes

v14:
- Fixed ARM/PPC builds
- Addressed various review comments

v13:
- Reworked the librte_power code to require less locking and handle invalid
  parameters better
- Fix numerous rebase errors present in v12

v12:
- Rebase on top of 21.02
- Rework of power intrinsics code

Anatoly Burakov (2):
  eal: rename power monitor condition member
  eal: improve comments around power monitoring API

Liang Ma (2):
  power: add PMD power management API and callback
  examples/l3fwd-power: enable PMD power mgmt

 doc/guides/prog_guide/power_man.rst           |  41 ++
 doc/guides/rel_notes/release_21_02.rst        |  10 +
 .../sample_app_ug/l3_forward_power_man.rst    |  35 ++
 drivers/event/dlb/dlb.c                       |   2 +-
 drivers/event/dlb2/dlb2.c                     |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   2 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |   2 +-
 examples/l3fwd-power/main.c                   |  90 ++++-
 .../include/generic/rte_power_intrinsics.h    |  39 +-
 lib/librte_eal/x86/rte_power_intrinsics.c     |   4 +-
 lib/librte_power/meson.build                  |   5 +-
 lib/librte_power/rte_power_pmd_mgmt.c         | 365 ++++++++++++++++++
 lib/librte_power/rte_power_pmd_mgmt.h         |  91 +++++
 lib/librte_power/version.map                  |   5 +
 15 files changed, 669 insertions(+), 26 deletions(-)
 create mode 100644 lib/librte_power/rte_power_pmd_mgmt.c
 create mode 100644 lib/librte_power/rte_power_pmd_mgmt.h

-- 
2.25.1

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev
  2021-01-22 13:09  4%       ` Dodji Seketeli
@ 2021-01-22 13:12  4%         ` Kinsella, Ray
  2021-01-24 11:58  4%           ` Dodji Seketeli
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-01-22 13:12 UTC (permalink / raw)
  To: Dodji Seketeli, Thomas Monjalon
  Cc: Neil Horman, Akhil Goyal, Konstantin Ananyev, Abhinandan Gujjar,
	dev, david.marchand



On 22/01/2021 13:09, Dodji Seketeli wrote:
> Thomas Monjalon <thomas@monjalon.net> writes:
> 
> [...]
> 
>>>> Then I've added (quickly) a libabigail exception rule:
>>>>
>>>> [suppress_type]
>>>> 	name = rte_cryptodev
>>>> 	has_data_member_inserted_between = {0, 1023}
>>>>
>>>> Now we want to improve this rule to restrict the offsets
>>>> to the padding at the end of the struct only,
>>>> so we keep forbidding changes in existing fields,
>>>> and forbidding additions further the current struct size.
>>>> Is this new rule good?
>>>>
>>>> 	has_data_member_inserted_between = {offset_after(attached), end}
>>>
>>>
>>> Yes, this rule should do what you think it says.
>>>
>>>> Do you confirm that the keyword "end" means the old reference size?
>>>
>>> Yes I do.
>>>
>>>
>>>> What else do we need to check for adding a new field in a padding?
>>>
>>> Actually, that rule will work independantly of it there is enough
>>> padding or not.  It'll shut down the change report, even if the added
>>> data exceeds the padding.
>>
>> I don't understand why.
>> If "end" means the old reference size, then addition after the old size
>> should be reported, isn't it?
> 
> Yes, you are right.
> 
> What I meant is that even if (in an hypothetical case, not yours) the
> padding was so "small" that it wasn't going up to the 'end' of the
> struct, that rule would have still shut down the change report.

Understood - you are talking about padding between members. 

> 
> [...]
> 
> Cheers,
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev
  2021-01-21 15:58  4%     ` Thomas Monjalon
  2021-01-22 12:11  4%       ` Kinsella, Ray
@ 2021-01-22 13:09  4%       ` Dodji Seketeli
  2021-01-22 13:12  4%         ` Kinsella, Ray
  1 sibling, 1 reply; 200+ results
From: Dodji Seketeli @ 2021-01-22 13:09 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Ray Kinsella, Neil Horman, Akhil Goyal, Konstantin Ananyev,
	Abhinandan Gujjar, dev, david.marchand

Thomas Monjalon <thomas@monjalon.net> writes:

[...]

>> > Then I've added (quickly) a libabigail exception rule:
>> >
>> > [suppress_type]
>> > 	name = rte_cryptodev
>> > 	has_data_member_inserted_between = {0, 1023}
>> >
>> > Now we want to improve this rule to restrict the offsets
>> > to the padding at the end of the struct only,
>> > so we keep forbidding changes in existing fields,
>> > and forbidding additions further the current struct size.
>> > Is this new rule good?
>> >
>> > 	has_data_member_inserted_between = {offset_after(attached), end}
>> 
>> 
>> Yes, this rule should do what you think it says.
>> 
>> > Do you confirm that the keyword "end" means the old reference size?
>> 
>> Yes I do.
>> 
>> 
>> > What else do we need to check for adding a new field in a padding?
>> 
>> Actually, that rule will work independently of whether there is enough
>> padding or not.  It'll shut down the change report, even if the added
>> data exceeds the padding.
>
> I don't understand why.
> If "end" means the old reference size, then addition after the old size
> should be reported, isn't it?

Yes, you are right.

What I meant is that even if (in an hypothetical case, not yours) the
padding was so "small" that it wasn't going up to the 'end' of the
struct, that rule would have still shut down the change report.

[...]

Cheers,

-- 
		Dodji


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev
  2021-01-21 15:58  4%     ` Thomas Monjalon
@ 2021-01-22 12:11  4%       ` Kinsella, Ray
  2021-01-22 13:09  4%       ` Dodji Seketeli
  1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-01-22 12:11 UTC (permalink / raw)
  To: Thomas Monjalon, Dodji Seketeli
  Cc: Neil Horman, Akhil Goyal, Konstantin Ananyev, Abhinandan Gujjar,
	dev, david.marchand



On 21/01/2021 15:58, Thomas Monjalon wrote:
> 21/01/2021 16:15, Dodji Seketeli:
>> Hello Thomas and others,
>>
>> Thomas Monjalon <thomas@monjalon.net> writes:
>>
>>> Question to an expert, Dodji,
>>
>> Thanks for the kind words, but I am not an expert in anything, sadly.  I
>> am just trying to keep learning about these things ;-)
>>
>>> We have this structure:
>>>
>>> struct rte_cryptodev {
>>> 	lot of fields...
>>> 	uint8_t attached : 1;
>>> } __rte_cache_aligned;
>>>
>>> Because of the cache alignment, there is enough padding in the struct
>>> (no matter the size of the cache line) for adding two more pointers:
>>>
>>> struct rte_cryptodev {
>>> 	lot of fields...
>>> 	uint8_t attached : 1;
>>> 	struct rte_cryptodev_cb_rcu *enq_cbs;
>>> 	struct rte_cryptodev_cb_rcu *deq_cbs;
>>> } __rte_cache_aligned;
>>>
>>> We checked manually that the ABI is still compatible.
>>
>> Right.
>>
>> I am curious, but normally, libabigail should raise the addition of
>> structures, but then it'll tell you that there was no size or offset
>> change between the two structures.  If it doesn't, then that's a bug.  I
>> hope it does :-)
> 
> Yes it was raising a problem, that's why we are adding a rule.
> 
> 
>>> Then I've added (quickly) a libabigail exception rule:
>>>
>>> [suppress_type]
>>> 	name = rte_cryptodev
>>> 	has_data_member_inserted_between = {0, 1023}
>>>
>>> Now we want to improve this rule to restrict the offsets
>>> to the padding at the end of the struct only,
>>> so we keep forbidding changes in existing fields,
>>> and forbidding additions further the current struct size.
>>> Is this new rule good?
>>>
>>> 	has_data_member_inserted_between = {offset_after(attached), end}
>>
>>
>> Yes, this rule should do what you think it says.
>>
>>> Do you confirm that the keyword "end" means the old reference size?
>>
>> Yes I do.
>>
>>
>>> What else do we need to check for adding a new field in a padding?
>>
>> Actually, that rule will work independently of whether there is enough
>> padding or not.  It'll shut down the change report, even if the added
>> data exceeds the padding.
> 
> I don't understand why.
> If "end" means the old reference size, then addition after the old size
> should be reported, isn't it?

yes - this comment confuses me also.

If "end" refers to the size original data-structure (position of the end), 
which in this case had some padding. If the additions fall fully within the 
padding I would expect this rule to work - as long as the data-structure size
is still the same. 

However if the additions fall beyond the size of the original data-structure,
the data-structure's size will have changed, I would not expect this rule to 
condone a change in the size of the data-structure.  

> 
> 
>> You just made me think of an idea of a new feature there.
>>
>> Maybe we'd need a new property for the [suppress_type] directive that
>> would suppress changes only if said changes don't modify the size of the
>> type or any offset of any member of the type?
>>
>> Maybe something like:
>>
>>     [suppress_type]
>>        ; lots of properties can go here.
>>
>>        ; ...
>>
>>        ; If the type has any size or offset change
>>        ; then this suppression directive will fail
>>        ; and the change report will be emitted
>>        has_no_size_or_offset_change
>>
>> Would that be useful to you in this case,
>>
>> Cheers,
> 
> 
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] DPDK Release Status Meeting 21/01/2021
  2021-01-21 12:04  4% [dpdk-dev] DPDK Release Status Meeting 21/01/2021 Ferruh Yigit
@ 2021-01-22  6:38  0% ` Ruifeng Wang
  0 siblings, 0 replies; 200+ results
From: Ruifeng Wang @ 2021-01-22  6:38 UTC (permalink / raw)
  To: dev, David Marchand
  Cc: thomas, Ferruh Yigit, Honnappa Nagarahalli, Konstantin Ananyev, nd, nd


> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Ferruh Yigit
> Sent: Thursday, January 21, 2021 8:04 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net
> Subject: [dpdk-dev] DPDK Release Status Meeting 21/01/2021
> 
> Meeting minutes of 21 January 2021
> ----------------------------------
> 
> Agenda:
> * Release Dates
> * Highlights
> * -rc1 status
> * Subtrees
> * LTS
> * Opens
> 
> Participants:
> * Arm
> * Broadcom
> * Debian/Microsoft
> * Intel
> * Nvidia
> * NXP
> * Red Hat
> 
> 
> Release Dates
> -------------
> 
> * v21.02 dates
>    * -rc1 is released on Tuesday, 19 January 2021
>      * http://inbox.dpdk.org/dev/4846307.Pfn0FrbqUJ@thomas/
>    * -rc2                Friday, 29 January 2021
>    * -rc3                Friday, 5 February 2021
>    * Release pushed to   *Wednesday, 17 February 2021*
> 
>    * Release date may conflict with Chinese New Year, we need to discuss and
>      define the release date offline in the mail list, please comment.
> 
> 
> Highlights
> ----------
> 
> * Need to finalize the 21.02 release date on the mail list.
> 
> * pmdinfogen will be switched to python implementation, CI / testing
>    infrastructures should prepare themselves for the 'pyelftools' dependency.
>    The patchset to verify the infrastructure in advance:
>    * https://patches.dpdk.org/project/dpdk/list/?series=13153
> 
> 
> -rc1 status
> -----------
> 
> * No testing result received yet.
> 
> * Two build errors detected, virtio for Arm and mingw cross build.
> 
> 
> Subtrees
> --------
> 
> * main
>    * There are build errors with -rc1
>      * Arm virtio build error, asked for help
>      * Mingw cross builds, with older versions of compiler
>    * Build related updates can continue for -rc2
>      * Applied changes were mostly for Arm
>      * New build options can be added
>    * pmdinfogen python rewrite not merged for -rc1, but planned for -rc2
>      * This may break the CI / test infrastructures because of 'pyelftools'
>        dependency
>        * This has been called out many times, will merge at this point
>    * Intel power management series
>      * Partially merged, ethdev & eal part merged, power library part is
>        remaining
>        * power library get a new version
>          * Thomas has concerns about the power library design, it looks like
>            it is designed for a specific case and not generic
>            * Currently there is no better suggestion, will proceed if there
>              is no objection
>    * Header check patchset merged partially
>    * ABI checks, some exceptions added
>      * Exceptions should be reviewed carefully
>      * We lost Travis automated ABI checks
>        * There are github actions checks but they are not sending reports back to
>          patchwork
>          * There is a work going on for reporting
>        * Authors either check ABI themselves or explicitly check the github
>          actions test results for it
>          * Can check automated test from:
>            https://github.com/ovsrobot/dpdk/actions
>    * Is ring library refactoring work stalled? Arm will check.
>      * https://patches.dpdk.org/project/dpdk/list/?series=14405

IMO, this series is in good shape.
There are no comments from the community on this series.
But this work is based on discussion between ring library maintainers (Honnappa, Konstantin):
https://mails.dpdk.org/archives/dev/2020-May/166803.html
Honnappa has already reviewed this series. It will be good if Konstantin can also review and add a tag.
So I think this series can be queued for merge.

PS: The ring test comment mentioned in the meeting is about another patch series:
http://patches.dpdk.org/patch/85641/

> 
> * next-net
>    * Following ethdev patches not able to make the -rc1 and postponed to next
>      release:
>      * ethdev: introduce representor type
>        * last version sent late for -rc1
>      * add apistats function
>        * Not clear if this is right approach, more comments required
>      * Also there are some ethdev patches from previous releases, they need
>        to be cleaned up, most probably will be done in next release.
>    * For -rc2, there are
>      * octeon_tx endpoint driver
>      * ionic set
>      * various driver and testpmd fixes
>      * patchsets that first version sent after -rc1 will get less priority
> 
> * next-crypto
>    * There is new compressdev PMD for the -rc2
>    * Also an ABI break discussion is going on
> 
> * next-eventdev
>    * no update
> 
> * next-virtio
>    * The big refactor set work is going on
>      * Plan is to merge it for -rc2 if it is ready
>    * Intel vhost example review is going on, planned for -rc2
>    * There are some concerns on Alibaba's PIO mapping patch
>      * Not able to test but there is potential issues
>    * Struct packing series has less priority against the refactoring sets,
>      and can wait the refactoring sets to be merged first.
> 
> * next-net-mlx
>    * -rc1 looks OK
>    * A couple of patches already merged for the -rc2
>    * A few more is expected
> 
> * next-net-brcm
>    * A few fixes in the backlog
> 
> * next-net-intel
>    * Progressing
> 
> * next-net-mrvl
>    * mvpp2 is expected for the -rc2
> 
> 
> LTS
> ---
> 
> * v18.11.11 is released
>    * http://inbox.dpdk.org/dev/20210120155818.388598-1-ktraynor@redhat.com
>    * This is the last release of the 18.11 LTS, thanks to all contributors
> 
> * v19.11.7
>    * Luca will start working on patches
> 
> * v20.11.x
>    * Kevin will step down from the 20.11 LTS maintainership, volunteers are
>      welcome.
> 
> 
> Opens
> -----
> 
> * Coverity scans are automated but not able to assign defects
> 
> * Milestone doc is still pending
>    * https://patches.dpdk.org/patch/86455/
> 
> 
> 
> DPDK Release Status Meetings
> ============================
> 
> The DPDK Release Status Meeting is intended for DPDK Committers to
> discuss the status of the master tree and sub-trees, and for project
> managers to track progress or milestone dates.
> 
> The meeting occurs on every Thursdays at 8:30 UTC. on
> https://meet.jit.si/DPDK
> 
> If you wish to attend just send an email to "John McNamara
> <john.mcnamara@intel.com>" for the invite.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v11 3/4] raw/ifpga: add OPAE API for OpenStack Cyborg
  2021-01-21 16:30  3%   ` Ferruh Yigit
@ 2021-01-22  3:16  3%     ` Huang, Wei
  0 siblings, 0 replies; 200+ results
From: Huang, Wei @ 2021-01-22  3:16 UTC (permalink / raw)
  To: Yigit, Ferruh, dev, Xu, Rosen, Zhang, Qi Z
  Cc: stable, Zhang, Tianfei, Ray Kinsella

Hi Ferruh,

That's a good question.
The answer is YES, we do need all these functions. We have completed the integration test with Cyborg, and there is no redundant function. Let me show you a use case in Cyborg.
Cyborg will update the FPGA flash and reboot the device to make the update effective, so it will call functions as below.
1. opae_enumerate() to find the target FPGA
2. opae_get_property() to check FPGA version
3. opae_get_image_info() to check the update image
4. opae_update_flash() to update FPGA flash
5. opae_reboot_device() to reboot FPGA
6. After reboot, the FPGA's kernel driver will not be vfio-pci by default, so opae_bind_driver() is used to bind the vfio-pci kernel driver.
7. opae_probe_device() to attach ifpga PMD to FPGA, then this FPGA can be managed by Cyborg again.

These functions will be wrapped in a Python package. Cyborg requires that this Python package can be downloaded from PyPI and compiled without DPDK installed.
So there is an independent project which creates a static library file from the target DPDK. This library is part of the Python package and is used when compiling the Python module. That's why I didn't export these functions in the map file.
BTW, the header file ifpga_opae_api.h is also integrated into the Python package from the target DPDK.

Thanks,
Wei

-----Original Message-----
From: Ferruh Yigit <ferruh.yigit@intel.com> 
Sent: Friday, January 22, 2021 00:30
To: Huang, Wei <wei.huang@intel.com>; dev@dpdk.org; Xu, Rosen <rosen.xu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
Cc: stable@dpdk.org; Zhang, Tianfei <tianfei.zhang@intel.com>; Ray Kinsella <mdr@ashroe.eu>
Subject: Re: [dpdk-dev] [PATCH v11 3/4] raw/ifpga: add OPAE API for OpenStack Cyborg

On 1/21/2021 6:03 AM, Wei Huang wrote:
> Cyborg is an OpenStack project that aims to provide a general purpose 
> management framework for acceleration resources (i.e. various types of 
> accelerators such as GPU, FPGA, NP, ODP, DPDK/SPDK and so on).
> It needs some OPAE type APIs to manage PACs (Programmable Acceleration
> Card) with Intel FPGA. Below major functions are added to meet Cyborg
> requirements.
> 1. opae_init() set up OPAE environment.
> 2. opae_cleanup() clean up OPAE environment.
> 3. opae_enumerate() searches PAC with specific FPGA.
> 4. opae_get_property() gets properties of FPGA.
> 5. opae_partial_reconfigure() perform partial configuration on FPGA.
> 6. opae_get_image_info() gets information of image file.
> 7. opae_update_flash() updates FPGA flash with specific image file.
> 8. opae_cancel_flash_update() cancel process of FPGA flash update.
> 9. opae_probe_device() manually probe specific FPGA with ifpga driver.
> 10. opae_remove_device() manually remove specific FPGA from ifpga driver.
> 11. opae_bind_driver() binds specific FPGA with specified kernel driver.
> 12. opae_unbind_driver() unbinds specific FPGA from kernel driver.
> 13. opae_reboot_device() reboots specific FPGA (do reconfiguration).
> 

Hi Wei,

As far as I understand, you are adding the above public functions on top of the raw/ifpga driver functions, so they are like PMD-specific APIs. I think there are a few problems with that:

1) Do we really need/want this much PMD specific API? Can't we have them through the rawdev abstraction layer?

2) DPDK public APIs are part of API/ABI policy, so there are a few rules they have to follow, like:
- They should start with 'rte_' prefix, and the PMD specific APIs should start with 'rte_pmd_' prefix
- They should be in the .map file
- They should be experimental at least one release
- They should be fully documented in a doxygen format
   - Header file should be added to index file for API documentation

Please don't update above before 1) is clearified and we are sure new APIs are required.

<...>

> @@ -13,8 +13,10 @@ objs = [base_objs]
>   deps += ['ethdev', 'rawdev', 'pci', 'bus_pci', 'kvargs',
>   	'bus_vdev', 'bus_ifpga', 'net', 'net_i40e', 'net_ipn3ke']
>   
> -sources = files('ifpga_rawdev.c')
> +sources = files('ifpga_rawdev.c', 'ifpga_opae_api.c')
>   
>   includes += include_directories('base')
>   includes += include_directories('../../net/ipn3ke')
>   includes += include_directories('../../net/i40e')
> +
> +install_headers('ifpga_opae_api.h')
> 

There is a 'headers' helper that you can use for meson. Also the header file name should start with 'rte_pmd_'.

Even before this patch, doesn't the application have to include the rawdev PMD header?
Why was that header not installed?

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v11 3/4] raw/ifpga: add OPAE API for OpenStack Cyborg
  @ 2021-01-21 16:30  3%   ` Ferruh Yigit
  2021-01-22  3:16  3%     ` Huang, Wei
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-01-21 16:30 UTC (permalink / raw)
  To: Wei Huang, dev, rosen.xu, qi.z.zhang; +Cc: stable, tianfei.zhang, Ray Kinsella

On 1/21/2021 6:03 AM, Wei Huang wrote:
> Cyborg is an OpenStack project that aims to provide a general purpose
> management framework for acceleration resources (i.e. various types
> of accelerators such as GPU, FPGA, NP, ODP, DPDK/SPDK and so on).
> It needs some OPAE type APIs to manage PACs (Programmable Acceleration
> Card) with Intel FPGA. Below major functions are added to meet
> Cyborg requirements.
> 1. opae_init() set up OPAE environment.
> 2. opae_cleanup() clean up OPAE environment.
> 3. opae_enumerate() searches PAC with specific FPGA.
> 4. opae_get_property() gets properties of FPGA.
> 5. opae_partial_reconfigure() perform partial configuration on FPGA.
> 6. opae_get_image_info() gets information of image file.
> 7. opae_update_flash() updates FPGA flash with specific image file.
> 8. opae_cancel_flash_update() cancel process of FPGA flash update.
> 9. opae_probe_device() manually probe specific FPGA with ifpga driver.
> 10. opae_remove_device() manually remove specific FPGA from ifpga driver.
> 11. opae_bind_driver() binds specific FPGA with specified kernel driver.
> 12. opae_unbind_driver() unbinds specific FPGA from kernel driver.
> 13. opae_reboot_device() reboots specific FPGA (do reconfiguration).
> 

Hi Wei,

As far as I understand, you are adding the above public functions on top of
the raw/ifpga driver functions, so they are like PMD-specific APIs. I think
there are a few problems with that:

1) Do we really need/want this much PMD specific API? Can't we have them through 
the rawdev abstraction layer?

2) DPDK public APIs are part of API/ABI policy, so there are a few rules they 
have to follow, like:
- They should start with 'rte_' prefix, and the PMD specific APIs should start 
with 'rte_pmd_' prefix
- They should be in the .map file
- They should be experimental at least one release
- They should be fully documented in a doxygen format
   - Header file should be added to index file for API documentation

Please don't update above before 1) is clearified and we are sure new APIs are 
required.

<...>

> @@ -13,8 +13,10 @@ objs = [base_objs]
>   deps += ['ethdev', 'rawdev', 'pci', 'bus_pci', 'kvargs',
>   	'bus_vdev', 'bus_ifpga', 'net', 'net_i40e', 'net_ipn3ke']
>   
> -sources = files('ifpga_rawdev.c')
> +sources = files('ifpga_rawdev.c', 'ifpga_opae_api.c')
>   
>   includes += include_directories('base')
>   includes += include_directories('../../net/ipn3ke')
>   includes += include_directories('../../net/i40e')
> +
> +install_headers('ifpga_opae_api.h')
> 

There is a 'headers' helper that you can use for meson. Also the header file 
name should start with 'rte_pmd_'.

Even before this patch, doesn't the application have to include the rawdev PMD header?
Why was that header not installed?

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev
  2021-01-21 15:15  4%   ` Dodji Seketeli
@ 2021-01-21 15:58  4%     ` Thomas Monjalon
  2021-01-22 12:11  4%       ` Kinsella, Ray
  2021-01-22 13:09  4%       ` Dodji Seketeli
  0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2021-01-21 15:58 UTC (permalink / raw)
  To: Dodji Seketeli
  Cc: Ray Kinsella, Neil Horman, Akhil Goyal, Konstantin Ananyev,
	Abhinandan Gujjar, dev, david.marchand

21/01/2021 16:15, Dodji Seketeli:
> Hello Thomas and others,
> 
> Thomas Monjalon <thomas@monjalon.net> writes:
> 
> > Question to an expert, Dodji,
> 
> Thanks for the kind words, but I am not an expert in anything, sadly.  I
> am just trying to keep learning about these things ;-)
> 
> > We have this structure:
> >
> > struct rte_cryptodev {
> > 	lot of fields...
> > 	uint8_t attached : 1;
> > } __rte_cache_aligned;
> >
> > Because of the cache alignment, there is enough padding in the struct
> > (no matter the size of the cache line) for adding two more pointers:
> >
> > struct rte_cryptodev {
> > 	lot of fields...
> > 	uint8_t attached : 1;
> > 	struct rte_cryptodev_cb_rcu *enq_cbs;
> > 	struct rte_cryptodev_cb_rcu *deq_cbs;
> > } __rte_cache_aligned;
> >
> > We checked manually that the ABI is still compatible.
> 
> Right.
> 
> I am curious, but normally, libabigail should raise the addition of
> structures, but then it'll tell you that there was no size or offset
> change between the two structures.  If it doesn't, then that's a bug.  I
> hope it does :-)

Yes it was raising a problem, that's why we are adding a rule.


> > Then I've added (quickly) a libabigail exception rule:
> >
> > [suppress_type]
> > 	name = rte_cryptodev
> > 	has_data_member_inserted_between = {0, 1023}
> >
> > Now we want to improve this rule to restrict the offsets
> > to the padding at the end of the struct only,
> > so we keep forbidding changes in existing fields,
> > and forbidding additions further the current struct size.
> > Is this new rule good?
> >
> > 	has_data_member_inserted_between = {offset_after(attached), end}
> 
> 
> Yes, this rule should do what you think it says.
> 
> > Do you confirm that the keyword "end" means the old reference size?
> 
> Yes I do.
> 
> 
> > What else do we need to check for adding a new field in a padding?
> 
> Actually, that rule will work independently of whether there is enough
> padding or not.  It'll shut down the change report, even if the added
> data exceeds the padding.

I don't understand why.
If "end" means the old reference size, then addition after the old size
should be reported, isn't it?


> You just made me think of an idea of a new feature there.
> 
> Maybe we'd need a new property for the [suppress_type] directive that
> would suppress changes only if said changes don't modify the size of the
> type or any offset of any member of the type?
> 
> Maybe something like:
> 
>     [suppress_type]
>        ; lots of properties can go here.
> 
>        ; ...
> 
>        ; If the type has any size or offset change
>        ; then this suppression directive will fail
>        ; and the change report will be emitted
>        has_no_size_or_offset_change
> 
> Would that be useful to you in this case,
> 
> Cheers,




^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev
  2021-01-20 15:41  7% ` Thomas Monjalon
@ 2021-01-21 15:15  4%   ` Dodji Seketeli
  2021-01-21 15:58  4%     ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Dodji Seketeli @ 2021-01-21 15:15 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Ray Kinsella, Neil Horman, Akhil Goyal, Konstantin Ananyev,
	Abhinandan Gujjar, dev, david.marchand

Hello Thomas and others,

Thomas Monjalon <thomas@monjalon.net> writes:

> Question to an expert, Dodji,

Thanks for the kind words, but I am not an expert in anything, sadly.  I
am just trying to keep learning about these things ;-)

> We have this structure:
>
> struct rte_cryptodev {
> 	lot of fields...
> 	uint8_t attached : 1;
> } __rte_cache_aligned;
>
> Because of the cache alignment, there is enough padding in the struct
> (no matter the size of the cache line) for adding two more pointers:
>
> struct rte_cryptodev {
> 	lot of fields...
> 	uint8_t attached : 1;
> 	struct rte_cryptodev_cb_rcu *enq_cbs;
> 	struct rte_cryptodev_cb_rcu *deq_cbs;
> } __rte_cache_aligned;
>
> We checked manually that the ABI is still compatible.

Right.

I am curious, but normally, libabigail should raise the addition of
structures, but then it'll tell you that there was no size or offset
change between the two structures.  If it doesn't, then that's a bug.  I
hope it does :-)


> Then I've added (quickly) a libabigail exception rule:
>
> [suppress_type]
> 	name = rte_cryptodev
> 	has_data_member_inserted_between = {0, 1023}
>
> Now we want to improve this rule to restrict the offsets
> to the padding at the end of the struct only,
> so we keep forbidding changes in existing fields,
> and forbidding additions further the current struct size.
> Is this new rule good?
>
> 	has_data_member_inserted_between = {offset_after(attached), end}


Yes, this rule should do what you think it says.

> Do you confirm that the keyword "end" means the old reference size?

Yes I do.


> What else do we need to check for adding a new field in a padding?

Actually, that rule will work independently of whether there is enough
padding or not.  It'll shut down the change report, even if the added
data exceeds the padding.

You just made me think of an idea of a new feature there.

Maybe we'd need a new property for the [suppress_type] directive that
would suppress changes only if said changes don't modify the size of the
type or any offset of any member of the type?

Maybe something like:

    [suppress_type]
       ; lots of properties can go here.

       ; ...

       ; If the type has any size or offset change
       ; then this suppression directive will fail
       ; and the change report will be emitted
       has_no_size_or_offset_change

Would that be useful to you in this case,

Cheers,

-- 
		Dodji


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] DPDK Release Status Meeting 21/01/2021
@ 2021-01-21 12:04  4% Ferruh Yigit
  2021-01-22  6:38  0% ` Ruifeng Wang
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-01-21 12:04 UTC (permalink / raw)
  To: dev; +Cc: Thomas Monjalon

Meeting minutes of 21 January 2021
----------------------------------

Agenda:
* Release Dates
* Highlights
* -rc1 status
* Subtrees
* LTS
* Opens

Participants:
* Arm
* Broadcom
* Debian/Microsoft
* Intel
* Nvidia
* NXP
* Red Hat


Release Dates
-------------

* v21.02 dates
   * -rc1 is released on Tuesday, 19 January 2021
     * http://inbox.dpdk.org/dev/4846307.Pfn0FrbqUJ@thomas/
   * -rc2                Friday, 29 January 2021
   * -rc3                Friday, 5 February 2021
   * Release pushed to   *Wednesday, 17 February 2021*

   * Release date may conflict with Chinese New Year, we need to discuss and
     define the release date offline in the mail list, please comment.


Highlights
----------

* Need to finalize the 21.02 release date on the mail list.

* pmdinfogen will be switched to python implementation, CI / testing
   infrastructures should prepare themselves for the 'pyelftools' dependency.
   The patchset to verify the infrastructure in advance:
   * https://patches.dpdk.org/project/dpdk/list/?series=13153


-rc1 status
-----------

* No testing result received yet.

* Two build errors detected, virtio for Arm and mingw cross build.


Subtrees
--------

* main
   * There are build errors with -rc1
     * Arm virtio build error, asked for help
     * Mingw cross builds, with older versions of compiler
   * Build related updates can continue for -rc2
     * Applied changes were mostly for Arm
     * New build options can be added
   * pmdinfogen python rewrite not merged for -rc1, but planned for -rc2
     * This may break the CI / test infrastructures because of 'pyelftools'
       dependency
       * This has been called out many times, will merge at this point
   * Intel power management series
     * Partially merged, ethdev & eal part merged, power library part is
       remaining
       * power library get a new version
         * Thomas has concerns about the power library design, it looks like
           it is designed for a specific case and not generic
           * Currently there is no better suggestion, will proceed if there
             is no objection
   * Header check patchset merged partially
   * ABI checks, some exceptions added
     * Exceptions should be reviewed carefully
     * We lost Travis automated ABI checks
       * There are github actions checks but they are not sending reports back to
         patchwork
         * There is a work going on for reporting
       * Authors either check ABI themselves or explicitly check the github
         actions test results for it
         * Can check automated test from:
           https://github.com/ovsrobot/dpdk/actions
   * Is ring library refactoring work stalled? Arm will check.
     * https://patches.dpdk.org/project/dpdk/list/?series=14405

* next-net
   * Following ethdev patches not able to make the -rc1 and postponed to next
     release:
     * ethdev: introduce representor type
       * last version sent late for -rc1
     * add apistats function
       * Not clear if this is right approach, more comments required
     * Also there are some ethdev patches from previous releases, they need to be
       cleaned up, most probably will be done in next release.
   * For -rc2, there are
     * octeon_tx endpoint driver
     * ionic set
     * various driver and testpmd fixes
     * patchsets that first version sent after -rc1 will get less priority

* next-crypto
   * There is new compressdev PMD for the -rc2
   * Also an ABI break discussion is going on

* next-eventdev
   * no update

* next-virtio
   * The big refactor set work is going on
     * Plan is to merge it for -rc2 if it is ready
   * Intel vhost example review is going on, planned for -rc2
   * There are some concerns on Alibaba's PIO mapping patch
     * Not able to test but there is potential issues
   * Struct packing series has less priority against the refactoring sets,
     and can wait the refactoring sets to be merged first.

* next-net-mlx
   * -rc1 looks OK
   * A couple of patches already merged for the -rc2
   * A few more is expected

* next-net-brcm
   * A few fixes in the backlog

* next-net-intel
   * Progressing

* next-net-mrvl
   * mvpp2 is expected for the -rc2


LTS
---

* v18.11.11 is released
   * http://inbox.dpdk.org/dev/20210120155818.388598-1-ktraynor@redhat.com
   * This is the last release of the 18.11 LTS, thanks to all contributors

* v19.11.7
   * Luca will start working on patches

* v20.11.x
   * Kevin will step down from the 20.11 LTS maintainership, volunteers are
     welcome.


Opens
-----

* Coverity scans are automated but not able to assign defects

* Milestone doc is still pending
   * https://patches.dpdk.org/patch/86455/



DPDK Release Status Meetings
============================

The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
status of the master tree and sub-trees, and for project managers to track
progress or milestone dates.

The meeting occurs on every Thursdays at 8:30 UTC. on https://meet.jit.si/DPDK

If you wish to attend just send an email to
"John McNamara <john.mcnamara@intel.com>" for the invite.

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and dequeue callback functions
  2021-01-20 13:15  0%         ` Thomas Monjalon
@ 2021-01-20 14:09  0%           ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-01-20 14:09 UTC (permalink / raw)
  To: Thomas Monjalon, Gujjar, Abhinandan S, mdr
  Cc: dev, Ananyev, Konstantin, Akhil Goyal, aconole, david.marchand



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday 20 January 2021 13:16
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; Kinsella, Ray
> <ray.kinsella@intel.com>; mdr@ashroe.eu
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Akhil Goyal <akhil.goyal@nxp.com>; aconole@redhat.com;
> david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and
> dequeue callback functions
> 
> 20/01/2021 14:01, Kinsella, Ray:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 15/01/2021 17:01, Akhil Goyal:
> > > > > This patch adds APIs to add/remove callback functions on crypto
> > > > > enqueue/dequeue burst. The callback function will be called for
> > > each
> > > > > burst of crypto ops received/sent on a given crypto device
> queue
> > > > > pair.
> > > > >
> > > > > Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> > > > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > > > ---
> > > > Series applied to dpdk-next-crypto
> > >
> > >
> > > It is missing a rule to ignore the false positive ABI break:
> > >
> > > --- a/devtools/libabigail.abignore
> > > +++ b/devtools/libabigail.abignore
> > > @@ -11,3 +11,8 @@
> > >  ; Explicit ignore for driver-only ABI  [suppress_type]
> > >          name = eth_dev_ops
> > > +
> > > +; Ignore fields inserted in cacheline boundary of rte_cryptodev
> > > +[suppress_type]
> > > +        name = rte_cryptodev
> > > +        has_data_member_inserted_between = {0, 1023}
> > >
> >
> > This is a bit of a blunt instrument as the range is quite large?
> 
> The range is in bits. It matches the actual size of the struct for 64B
> cacheline.

Ok

> 
> > {offset_after(attached), end} instead works better - I will send a
> patch.
> 
> Yes that's exactly what David told me earlier today.

Makes sense, I think.

> 
> > > I'll add this change while pulling in the main tree.
> 
> Yes please.
> Note: we missed requiring this exception rule in the original patch.

Ok, in the next 20 minutes or so.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and dequeue callback functions
  2021-01-19 18:31  8%     ` Thomas Monjalon
@ 2021-01-20 13:01  3%       ` Kinsella, Ray
  2021-01-20 13:12  0%         ` David Marchand
  2021-01-20 13:15  0%         ` Thomas Monjalon
  0 siblings, 2 replies; 200+ results
From: Kinsella, Ray @ 2021-01-20 13:01 UTC (permalink / raw)
  To: Thomas Monjalon, Gujjar, Abhinandan S
  Cc: dev, Ananyev, Konstantin, Akhil Goyal, aconole, david.marchand

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday 19 January 2021 18:32
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Akhil Goyal <akhil.goyal@nxp.com>; Kinsella, Ray
> <ray.kinsella@intel.com>; aconole@redhat.com; david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and
> dequeue callback functions
> 
> 15/01/2021 17:01, Akhil Goyal:
> > > This patch adds APIs to add/remove callback functions on crypto
> > > enqueue/dequeue burst. The callback function will be called for
> each
> > > burst of crypto ops received/sent on a given crypto device queue
> > > pair.
> > >
> > > Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > ---
> > Series applied to dpdk-next-crypto
> 
> 
> It is missing a rule to ignore the false positive ABI break:
> 
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -11,3 +11,8 @@
>  ; Explicit ignore for driver-only ABI
>  [suppress_type]
>          name = eth_dev_ops
> +
> +; Ignore fields inserted in cacheline boundary of rte_cryptodev
> +[suppress_type]
> +        name = rte_cryptodev
> +        has_data_member_inserted_between = {0, 1023}
> 

This is a bit of a blunt instrument as the range is quite large?
{offset_after(attached), end} instead works better - I will send a patch.

> I'll add this change while pulling in the main tree.
> 

BTW - can people use ashroe.eu, not intel.com for ABI stuff. 

Ray K

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 02/44] bus/vdev: add driver IOVA VA mode requirement
  2021-01-20 15:32  8%   ` David Marchand
@ 2021-01-20 17:47  0%     ` Maxime Coquelin
  0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-01-20 17:47 UTC (permalink / raw)
  To: David Marchand, Ray Kinsella
  Cc: dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata, Thomas Monjalon



On 1/20/21 4:32 PM, David Marchand wrote:
> On Tue, Jan 19, 2021 at 10:25 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>>
>> This patch adds driver flag in vdev bus driver so that
>> vdev drivers can require VA IOVA mode to be used, which
>> for example the case of Virtio-user PMD.
>>
>> The patch implements the .get_iommu_class() callback, that
>> is called before devices probing to determine the IOVA mode
>> to be used.
>>
>> It also adds a check right before the device is probed to
>> ensure compatible IOVa mode has been selected.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>  drivers/bus/vdev/rte_bus_vdev.h |  4 ++++
>>  drivers/bus/vdev/vdev.c         | 31 +++++++++++++++++++++++++++++++
>>  2 files changed, 35 insertions(+)
>>
>> diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
>> index f99a41f825..c8b41e649c 100644
>> --- a/drivers/bus/vdev/rte_bus_vdev.h
>> +++ b/drivers/bus/vdev/rte_bus_vdev.h
>> @@ -113,8 +113,12 @@ struct rte_vdev_driver {
>>         rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
>>         rte_vdev_dma_map_t *dma_map;     /**< Virtual device DMA map function. */
>>         rte_vdev_dma_unmap_t *dma_unmap; /**< Virtual device DMA unmap function. */
>> +       uint32_t drv_flags;                /**< Flags RTE_VDEV_DRV_*. */
> 
> This will probably get broken in the future, but for now, can you
> indent the comment in the same way as earlier lines?
> 
> 
> The ABI check will complain about this change so we need an exception.
> 
> rte_vdev_driver is exposed only through driver API.
> We could flag the whole structure like we did for ethdev.
> But there is also the alternative of just flagging the required
> symbols so that we won't miss later the inclusion of this structure in
> an API used by final users.
> How about:
> 
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 1dc84fa74b..435913d908 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -11,6 +11,8 @@
>  ; Explicit ignore for driver-only ABI
>  [suppress_type]
>          name = eth_dev_ops
> +[suppress_function]
> +        name_regexp = rte_vdev_(|un)register
> 
>  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
>  [suppress_type]
> 
> 

This is fine by me.

>>  };
>>
>> +/** Device driver needs IOVA as VA and cannot work with IOVA as PA */
>> +#define RTE_VDEV_DRV_NEED_IOVA_AS_VA 0x0001
>> +
>>  /**
>>   * Register a virtual device driver.
>>   *
>> diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
>> index acfd78828f..56f15e8201 100644
>> --- a/drivers/bus/vdev/vdev.c
>> +++ b/drivers/bus/vdev/vdev.c
>> @@ -189,6 +189,7 @@ vdev_probe_all_drivers(struct rte_vdev_device *dev)
>>  {
>>         const char *name;
>>         struct rte_vdev_driver *driver;
>> +       enum rte_iova_mode iova_mode;
>>         int ret;
>>
>>         if (rte_dev_is_probed(&dev->device))
>> @@ -199,6 +200,14 @@ vdev_probe_all_drivers(struct rte_vdev_device *dev)
>>
>>         if (vdev_parse(name, &driver))
>>                 return -1;
>> +
>> +       iova_mode = rte_eal_iova_mode();
>> +       if ((driver->drv_flags & RTE_VDEV_DRV_NEED_IOVA_AS_VA) && (iova_mode == RTE_IOVA_PA)) {
>> +               VDEV_LOG(ERR, "%s requires VA IOVA mode but current mode is PA, not initializing",
>> +                               name);
>> +               return -1;
>> +       }
>> +
>>         ret = driver->probe(dev);
>>         if (ret == 0)
>>                 dev->device.driver = &driver->driver;
>> @@ -594,6 +603,27 @@ vdev_unplug(struct rte_device *dev)
>>         return rte_vdev_uninit(dev->name);
>>  }
>>
>> +static enum rte_iova_mode
>> +vdev_get_iommu_class(void)
>> +{
>> +       const char *name;
>> +       struct rte_vdev_device *dev;
>> +       struct rte_vdev_driver *driver;
>> +
>> +       TAILQ_FOREACH(dev, &vdev_device_list, next) {
>> +               name = rte_vdev_device_name(dev);
>> +               if (!name)
>> +                       continue;
> 
> Afaics, a device in vdev_device_list always has a name.

Indeed, I will remove the check in next revision.

Thanks,
Maxime

> 
>> +               if (vdev_parse(name, &driver))
>> +                       continue;
>> +
>> +               if (driver->drv_flags & RTE_VDEV_DRV_NEED_IOVA_AS_VA)
>> +                       return RTE_IOVA_VA;
>> +       }
>> +
>> +       return RTE_IOVA_DC;
>> +}
>> +
>>  static struct rte_bus rte_vdev_bus = {
>>         .scan = vdev_scan,
>>         .probe = vdev_probe,
>> @@ -603,6 +633,7 @@ static struct rte_bus rte_vdev_bus = {
>>         .parse = vdev_parse,
>>         .dma_map = vdev_dma_map,
>>         .dma_unmap = vdev_dma_unmap,
>> +       .get_iommu_class = vdev_get_iommu_class,
>>         .dev_iterate = rte_vdev_dev_iterate,
>>  };
>>
>> --
>> 2.29.2
>>
> 
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev
  2021-01-20 14:25  4% [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev Ray Kinsella
@ 2021-01-20 15:41  7% ` Thomas Monjalon
  2021-01-21 15:15  4%   ` Dodji Seketeli
  2021-01-26 11:55  8% ` Thomas Monjalon
  1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-20 15:41 UTC (permalink / raw)
  To: dodji
  Cc: Ray Kinsella, Neil Horman, Akhil Goyal, Konstantin Ananyev,
	Abhinandan Gujjar, dev, david.marchand

Question to an expert, Dodji,

We have this structure:

struct rte_cryptodev {
	lot of fields...
	uint8_t attached : 1;
} __rte_cache_aligned;

Because of the cache alignment, there is enough padding in the struct
(no matter the size of the cache line) for adding two more pointers:

struct rte_cryptodev {
	lot of fields...
	uint8_t attached : 1;
	struct rte_cryptodev_cb_rcu *enq_cbs;
	struct rte_cryptodev_cb_rcu *deq_cbs;
} __rte_cache_aligned;

We checked manually that the ABI is still compatible.
Then I've added (quickly) a libabigail exception rule:

[suppress_type]
	name = rte_cryptodev
	has_data_member_inserted_between = {0, 1023}

Now we want to improve this rule to restrict the offsets
to the padding at the end of the struct only,
so we keep forbidding changes in existing fields,
and forbidding additions further the current struct size.
Is this new rule good?

	has_data_member_inserted_between = {offset_after(attached), end}

Do you confirm that the keyword "end" means the old reference size?

What else do we need to check for adding a new field in a padding?

Thank you


20/01/2021 15:25, Ray Kinsella:
> Update the ignore entry for crytodev to use named fields instead of
> bit positions.
> 
> Fixes: 1c3ffb9559
> 
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> ---
>  devtools/libabigail.abignore | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 1dc84fa74b..1f17fbed58 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -15,4 +15,4 @@
>  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
>  [suppress_type]
>          name = rte_cryptodev
> -        has_data_member_inserted_between = {0, 1023}
> +        has_data_member_inserted_between = {offset_after(attached), end}





^ permalink raw reply	[relevance 7%]

* Re: [dpdk-dev] [PATCH v2 02/44] bus/vdev: add driver IOVA VA mode requirement
  @ 2021-01-20 15:32  8%   ` David Marchand
  2021-01-20 17:47  0%     ` Maxime Coquelin
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-20 15:32 UTC (permalink / raw)
  To: Maxime Coquelin, Ray Kinsella
  Cc: dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata, Thomas Monjalon

On Tue, Jan 19, 2021 at 10:25 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch adds driver flag in vdev bus driver so that
> vdev drivers can require VA IOVA mode to be used, which
> for example the case of Virtio-user PMD.
>
> The patch implements the .get_iommu_class() callback, that
> is called before devices probing to determine the IOVA mode
> to be used.
>
> It also adds a check right before the device is probed to
> ensure compatible IOVa mode has been selected.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  drivers/bus/vdev/rte_bus_vdev.h |  4 ++++
>  drivers/bus/vdev/vdev.c         | 31 +++++++++++++++++++++++++++++++
>  2 files changed, 35 insertions(+)
>
> diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
> index f99a41f825..c8b41e649c 100644
> --- a/drivers/bus/vdev/rte_bus_vdev.h
> +++ b/drivers/bus/vdev/rte_bus_vdev.h
> @@ -113,8 +113,12 @@ struct rte_vdev_driver {
>         rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
>         rte_vdev_dma_map_t *dma_map;     /**< Virtual device DMA map function. */
>         rte_vdev_dma_unmap_t *dma_unmap; /**< Virtual device DMA unmap function. */
> +       uint32_t drv_flags;                /**< Flags RTE_VDEV_DRV_*. */

This will probably get broken in the future, but for now, can you
indent the comment in the same way as earlier lines?


The ABI check will complain about this change so we need an exception.

rte_vdev_driver is exposed only through driver API.
We could flag the whole structure like we did for ethdev.
But there is also the alternative of just flagging the required
symbols so that we won't miss later the inclusion of this structure in
an API used by final users.
How about:

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 1dc84fa74b..435913d908 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -11,6 +11,8 @@
 ; Explicit ignore for driver-only ABI
 [suppress_type]
         name = eth_dev_ops
+[suppress_function]
+        name_regexp = rte_vdev_(|un)register

 ; Ignore fields inserted in cacheline boundary of rte_cryptodev
 [suppress_type]


>  };
>
> +/** Device driver needs IOVA as VA and cannot work with IOVA as PA */
> +#define RTE_VDEV_DRV_NEED_IOVA_AS_VA 0x0001
> +
>  /**
>   * Register a virtual device driver.
>   *
> diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
> index acfd78828f..56f15e8201 100644
> --- a/drivers/bus/vdev/vdev.c
> +++ b/drivers/bus/vdev/vdev.c
> @@ -189,6 +189,7 @@ vdev_probe_all_drivers(struct rte_vdev_device *dev)
>  {
>         const char *name;
>         struct rte_vdev_driver *driver;
> +       enum rte_iova_mode iova_mode;
>         int ret;
>
>         if (rte_dev_is_probed(&dev->device))
> @@ -199,6 +200,14 @@ vdev_probe_all_drivers(struct rte_vdev_device *dev)
>
>         if (vdev_parse(name, &driver))
>                 return -1;
> +
> +       iova_mode = rte_eal_iova_mode();
> +       if ((driver->drv_flags & RTE_VDEV_DRV_NEED_IOVA_AS_VA) && (iova_mode == RTE_IOVA_PA)) {
> +               VDEV_LOG(ERR, "%s requires VA IOVA mode but current mode is PA, not initializing",
> +                               name);
> +               return -1;
> +       }
> +
>         ret = driver->probe(dev);
>         if (ret == 0)
>                 dev->device.driver = &driver->driver;
> @@ -594,6 +603,27 @@ vdev_unplug(struct rte_device *dev)
>         return rte_vdev_uninit(dev->name);
>  }
>
> +static enum rte_iova_mode
> +vdev_get_iommu_class(void)
> +{
> +       const char *name;
> +       struct rte_vdev_device *dev;
> +       struct rte_vdev_driver *driver;
> +
> +       TAILQ_FOREACH(dev, &vdev_device_list, next) {
> +               name = rte_vdev_device_name(dev);
> +               if (!name)
> +                       continue;

Afaics, a device in vdev_device_list always has a name.


> +               if (vdev_parse(name, &driver))
> +                       continue;
> +
> +               if (driver->drv_flags & RTE_VDEV_DRV_NEED_IOVA_AS_VA)
> +                       return RTE_IOVA_VA;
> +       }
> +
> +       return RTE_IOVA_DC;
> +}
> +
>  static struct rte_bus rte_vdev_bus = {
>         .scan = vdev_scan,
>         .probe = vdev_probe,
> @@ -603,6 +633,7 @@ static struct rte_bus rte_vdev_bus = {
>         .parse = vdev_parse,
>         .dma_map = vdev_dma_map,
>         .dma_unmap = vdev_dma_unmap,
> +       .get_iommu_class = vdev_get_iommu_class,
>         .dev_iterate = rte_vdev_dev_iterate,
>  };
>
> --
> 2.29.2
>


-- 
David Marchand


^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH v1] devtools: update abi ignore for cryptodev
@ 2021-01-20 14:25  4% Ray Kinsella
  2021-01-20 15:41  7% ` Thomas Monjalon
  2021-01-26 11:55  8% ` Thomas Monjalon
  0 siblings, 2 replies; 200+ results
From: Ray Kinsella @ 2021-01-20 14:25 UTC (permalink / raw)
  To: Ray Kinsella, Neil Horman, Akhil Goyal, Konstantin Ananyev,
	Abhinandan Gujjar
  Cc: thomas, david.marchand, dev

Update the ignore entry for crytodev to use named fields instead of
bit positions.

Fixes: 1c3ffb9559

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
 devtools/libabigail.abignore | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 1dc84fa74b..1f17fbed58 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -15,4 +15,4 @@
 ; Ignore fields inserted in cacheline boundary of rte_cryptodev
 [suppress_type]
         name = rte_cryptodev
-        has_data_member_inserted_between = {0, 1023}
+        has_data_member_inserted_between = {offset_after(attached), end}
\ No newline at end of file
-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and dequeue callback functions
  2021-01-20 13:01  3%       ` Kinsella, Ray
  2021-01-20 13:12  0%         ` David Marchand
@ 2021-01-20 13:15  0%         ` Thomas Monjalon
  2021-01-20 14:09  0%           ` Kinsella, Ray
  1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-20 13:15 UTC (permalink / raw)
  To: Gujjar, Abhinandan S, Kinsella, Ray, mdr
  Cc: dev, Ananyev, Konstantin, Akhil Goyal, aconole, david.marchand

20/01/2021 14:01, Kinsella, Ray:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 15/01/2021 17:01, Akhil Goyal:
> > > > This patch adds APIs to add/remove callback functions on crypto
> > > > enqueue/dequeue burst. The callback function will be called for
> > each
> > > > burst of crypto ops received/sent on a given crypto device queue
> > > > pair.
> > > >
> > > > Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> > > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > > ---
> > > Series applied to dpdk-next-crypto
> > 
> > 
> > It is missing a rule to ignore the false positive ABI break:
> > 
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -11,3 +11,8 @@
> >  ; Explicit ignore for driver-only ABI
> >  [suppress_type]
> >          name = eth_dev_ops
> > +
> > +; Ignore fields inserted in cacheline boundary of rte_cryptodev
> > +[suppress_type]
> > +        name = rte_cryptodev
> > +        has_data_member_inserted_between = {0, 1023}
> > 
> 
> This is a bit of a blunt instrument as the range is quite large?

The range is in bits. It matches the actual size of the struct
for a 64B cacheline.

> {offset_after(attached), end} instead works better - I will send a patch.

Yes that's exactly what David told me earlier today.

> > I'll add this change while pulling in the main tree.

Yes please.
Note: we missed requiring this exception rule in the original patch.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and dequeue callback functions
  2021-01-20 13:01  3%       ` Kinsella, Ray
@ 2021-01-20 13:12  0%         ` David Marchand
  2021-01-20 13:15  0%         ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: David Marchand @ 2021-01-20 13:12 UTC (permalink / raw)
  To: Kinsella, Ray
  Cc: Thomas Monjalon, Gujjar, Abhinandan S, dev, Ananyev, Konstantin,
	Akhil Goyal, aconole

On Wed, Jan 20, 2021 at 2:01 PM Kinsella, Ray <ray.kinsella@intel.com> wrote:
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -11,3 +11,8 @@
> >  ; Explicit ignore for driver-only ABI
> >  [suppress_type]
> >          name = eth_dev_ops
> > +
> > +; Ignore fields inserted in cacheline boundary of rte_cryptodev
> > +[suppress_type]
> > +        name = rte_cryptodev
> > +        has_data_member_inserted_between = {0, 1023}
> >
>
> This is a bit of a blunt instrument as the range is quite large?
> {offset_after(attached), end} instead works better - I will send a patch.

This is what I suggested to Thomas off-list.
A drawback I see is that we are now blind to any later changes
occurring in this range.


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v19 0/4] Add PMD power management
  2021-01-19 16:45  3% ` [dpdk-dev] [PATCH v18 0/2] Add PMD power management Anatoly Burakov
@ 2021-01-20 11:50  3%   ` Anatoly Burakov
  2021-01-22 17:12  3%     ` [dpdk-dev] [PATCH v20 " Anatoly Burakov
  0 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2021-01-20 11:50 UTC (permalink / raw)
  To: dev; +Cc: thomas

This patchset proposes a simple API for Ethernet drivers to cause the
CPU to enter a power-optimized state while waiting for packets to
arrive. There are multiple proposed mechanisms to achieve said power
savings: simple frequency scaling, idle loop, and monitoring the Rx
queue for incoming packets. The latter is achieved through cooperation
with the NIC driver, which allows us to know the address of the wake-up
event and wait for writes on that address.

To achieve power savings, a very simple mechanism is used: we count
empty polls, and if a certain threshold is reached, we employ one of
the suggested power management schemes automatically, from within an
Rx callback inside the PMD. Once there's traffic again, the empty poll
counter is reset.
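
In short, the callback logic is roughly the following (an illustrative
sketch only; the real implementation lives in rte_power_pmd_mgmt.c and
keeps its own per-lcore state, threshold and scheme selection; struct
pmgmt_state and its fields here are hypothetical):

#include <rte_ethdev.h>

struct pmgmt_state {
	uint64_t empty_polls;
	uint64_t threshold;
};

static uint16_t
empty_poll_cb(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
	      struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
	      uint16_t max_pkts __rte_unused, void *arg)
{
	struct pmgmt_state *st = arg;

	if (nb_rx == 0) {
		/* empty poll: count it, and past the threshold apply the
		 * selected scheme (monitor the queue, pause, or scale down
		 * the frequency) */
		if (++st->empty_polls > st->threshold) {
			/* power-saving action goes here */
		}
	} else {
		/* traffic is back: reset the counter */
		st->empty_polls = 0;
	}
	return nb_rx;
}

The callback is installed on the managed queue with
rte_eth_add_rx_callback() and removed again when power management is
disabled for that queue.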

Why are we putting it into ethdev as opposed to leaving this up to the
application? Our customers specifically requested a way to do it with
minimal changes to the application code. The current approach allows
them to just flip a switch and automatically get power savings.

Things of note:

- Only 1:1 core to queue mapping is supported, meaning that each lcore 
  must at most handle RX on a single queue
- Three policy types are supported: Monitor/Pause/Frequency Scaling
- Power management is enabled per-queue
- The API doesn't extend to other device types

v19:
- Renamed "data_sz" to "size" and clarified struct comments
- Clarified documentation around rte_power_monitor/pause API

v18:
- Rebase on top of latest main
- Address review comments by Thomas

v17:
- Added exception for ethdev driver-only ABI
- Added memory barriers for monitor/wakeup (Konstantin)
- Fixed compiled issues on non-x86 platforms (hopefully!)

v16:
- Implemented Konstantin's suggestions and comments
- Added return values to the API

v15:
- Fixed incorrect check in UMWAIT callback
- Fixed accidental whitespace changes

v14:
- Fixed ARM/PPC builds
- Addressed various review comments

v13:
- Reworked the librte_power code to require less locking and handle invalid
  parameters better
- Fix numerous rebase errors present in v12

v12:
- Rebase on top of 21.02
- Rework of power intrinsics code

Anatoly Burakov (2):
  eal: rename power monitor condition member
  eal: improve comments around power monitoring API

Liang Ma (2):
  power: add PMD power management API and callback
  examples/l3fwd-power: enable PMD power mgmt

 doc/guides/prog_guide/power_man.rst           |  41 ++
 doc/guides/rel_notes/release_21_02.rst        |  10 +
 .../sample_app_ug/l3_forward_power_man.rst    |  35 ++
 drivers/event/dlb/dlb.c                       |   2 +-
 drivers/event/dlb2/dlb2.c                     |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   2 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |   2 +-
 examples/l3fwd-power/main.c                   |  90 ++++-
 .../include/generic/rte_power_intrinsics.h    |  39 +-
 lib/librte_eal/x86/rte_power_intrinsics.c     |   4 +-
 lib/librte_power/meson.build                  |   5 +-
 lib/librte_power/rte_power_pmd_mgmt.c         | 365 ++++++++++++++++++
 lib/librte_power/rte_power_pmd_mgmt.h         |  91 +++++
 lib/librte_power/version.map                  |   5 +
 15 files changed, 669 insertions(+), 26 deletions(-)
 create mode 100644 lib/librte_power/rte_power_pmd_mgmt.c
 create mode 100644 lib/librte_power/rte_power_pmd_mgmt.h

-- 
2.25.1

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-20  7:23  0%       ` Dmitry Kozlyuk
@ 2021-01-20 10:24  0%         ` Thomas Monjalon
  2021-01-22 20:31  4%           ` Dmitry Kozlyuk
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-20 10:24 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: dev, Stephen Hemminger, David Marchand, Maxime Coquelin,
	Aaron Conole, Bruce Richardson, ferruh.yigit

20/01/2021 08:23, Dmitry Kozlyuk:
> On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:
> > This is now the right timeframe to introduce this change
> > with the new Python module dependency.
> > Unfortunately, the ABI check is returning an issue:
> > 
> > 'const char mlx5_common_pci_pmd_info[62]' was changed
> > to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c
> 
> Will investigate and fix ASAP.
>  
> > Few more comments below:
> > 
> > 20/10/2020 19:44, Dmitry Kozlyuk:
> > > --- a/buildtools/meson.build
> > > +++ b/buildtools/meson.build
> > > +if host_machine.system() != 'windows'  
> > 
> > You can use "is_windows".
> 
> It's defined by config/meson.build, which is processed after
> buidtools/meson.build, because of the dependency, if swapped:
> 
> 	config/x86/meson.build:6:1: ERROR: Unknown variable
> 	"binutils_avx512_check".

OK

> > > --- a/doc/guides/linux_gsg/sys_reqs.rst
> > > +++ b/doc/guides/linux_gsg/sys_reqs.rst
> > > +*   ``pyelftools`` (version 0.22+)  
> > 
> > This requirement is missing in doc/guides/freebsd_gsg/build_dpdk.rst
> 
> OK.
> 
> > > --- a/meson.build
> > > +++ b/meson.build
> > > -subdir('buildtools/pmdinfogen')  
> > 
> > This could be in patch 3 (removing the code).
> 
> It would redefine "pmdinfogen" variable to old pmdinfogen.
> Besides, why build what's not used at this patch already?

Just trying to find the best patch split.
If needed, OK to keep as is.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  2021-01-20  0:05  3%     ` Thomas Monjalon
@ 2021-01-20  7:23  0%       ` Dmitry Kozlyuk
  2021-01-20 10:24  0%         ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-01-20  7:23 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, ci, Stephen Hemminger, David Marchand, Maxime Coquelin,
	Aaron Conole, Bruce Richardson, ferruh.yigit

On Wed, 20 Jan 2021 01:05:59 +0100, Thomas Monjalon wrote:
> This is now the right timeframe to introduce this change
> with the new Python module dependency.
> Unfortunately, the ABI check is returning an issue:
> 
> 'const char mlx5_common_pci_pmd_info[62]' was changed
> to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c

Will investigate and fix ASAP.
 
> Few more comments below:
> 
> 20/10/2020 19:44, Dmitry Kozlyuk:
> > --- a/buildtools/meson.build
> > +++ b/buildtools/meson.build
> > +if host_machine.system() != 'windows'  
> 
> You can use "is_windows".

It's defined by config/meson.build, which is processed after
buildtools/meson.build because of the dependency; if they are swapped:

	config/x86/meson.build:6:1: ERROR: Unknown variable
	"binutils_avx512_check".

> > --- a/doc/guides/linux_gsg/sys_reqs.rst
> > +++ b/doc/guides/linux_gsg/sys_reqs.rst
> > +*   ``pyelftools`` (version 0.22+)  
> 
> This requirement is missing in doc/guides/freebsd_gsg/build_dpdk.rst

OK.

> > --- a/meson.build
> > +++ b/meson.build
> > -subdir('buildtools/pmdinfogen')  
> 
> This could be in patch 3 (removing the code).

It would redefine the "pmdinfogen" variable to the old pmdinfogen.
Besides, why build what's no longer used as of this patch?


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 2/3] build: use Python pmdinfogen
  @ 2021-01-20  0:05  3%     ` Thomas Monjalon
  2021-01-20  7:23  0%       ` Dmitry Kozlyuk
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-20  0:05 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: dev, ci, Stephen Hemminger, David Marchand, Maxime Coquelin,
	Aaron Conole, Bruce Richardson, ferruh.yigit

This is now the right timeframe to introduce this change
with the new Python module dependency.
Unfortunately, the ABI check is returning an issue:

'const char mlx5_common_pci_pmd_info[62]' was changed
to 'const char mlx5_common_pci_pmd_info[60]' at rte_common_mlx5.pmd.c


Few more comments below:

20/10/2020 19:44, Dmitry Kozlyuk:
> --- a/buildtools/meson.build
> +++ b/buildtools/meson.build
> +if host_machine.system() != 'windows'

You can use "is_windows".


> --- a/doc/guides/linux_gsg/sys_reqs.rst
> +++ b/doc/guides/linux_gsg/sys_reqs.rst
> +*   ``pyelftools`` (version 0.22+)

This requirement is missing in doc/guides/freebsd_gsg/build_dpdk.rst


> --- a/meson.build
> +++ b/meson.build
> -subdir('buildtools/pmdinfogen')

This could be in patch 3 (removing the code).





^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and dequeue callback functions
  @ 2021-01-19 18:31  8%     ` Thomas Monjalon
  2021-01-20 13:01  3%       ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-19 18:31 UTC (permalink / raw)
  To: Abhinandan Gujjar
  Cc: dev, konstantin.ananyev, Akhil Goyal, ray.kinsella, aconole,
	david.marchand

15/01/2021 17:01, Akhil Goyal:
> > This patch adds APIs to add/remove callback functions on crypto
> > enqueue/dequeue burst. The callback function will be called for
> > each burst of crypto ops received/sent on a given crypto device
> > queue pair.
> > 
> > Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> Series applied to dpdk-next-crypto


It is missing a rule to ignore the false positive ABI break:

--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -11,3 +11,8 @@
 ; Explicit ignore for driver-only ABI
 [suppress_type]
         name = eth_dev_ops
+
+; Ignore fields inserted in cacheline boundary of rte_cryptodev
+[suppress_type]
+        name = rte_cryptodev
+        has_data_member_inserted_between = {0, 1023}


I'll add this change while pulling in the main tree.



^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH v18 0/2] Add PMD power management
  @ 2021-01-19 16:45  3% ` Anatoly Burakov
  2021-01-20 11:50  3%   ` [dpdk-dev] [PATCH v19 0/4] " Anatoly Burakov
  0 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2021-01-19 16:45 UTC (permalink / raw)
  To: dev; +Cc: thomas

This patchset proposes a simple API for Ethernet drivers to cause the
CPU to enter a power-optimized state while waiting for packets to
arrive. There are multiple proposed mechanisms to achieve said power
savings: simple frequency scaling, idle loop, and monitoring the Rx
queue for incoming packets. The latter is achieved through cooperation
with the NIC driver, which allows us to know the address of the wake-up
event and wait for writes on that address.

To achieve power savings, a very simple mechanism is used: we count
empty polls, and if a certain threshold is reached, we employ one of
the suggested power management schemes automatically, from within an
Rx callback inside the PMD. Once there's traffic again, the empty poll
counter is reset.

Why are we putting it into ethdev as opposed to leaving this up to the
application? Our customers specifically requested a way to do it with
minimal changes to the application code. The current approach allows
them to just flip a switch and automatically get power savings.

Things of note:

- Only 1:1 core to queue mapping is supported, meaning that each lcore 
  must at most handle RX on a single queue
- Three policy types are supported: Monitor/Pause/Frequency Scaling
- Power management is enabled per-queue
- The API doesn't extend to other device types

v18:
- Rebase on top of latest main
- Address review comments by Thomas

v17:
- Added exception for ethdev driver-only ABI
- Added memory barriers for monitor/wakeup (Konstantin)
- Fixed compiled issues on non-x86 platforms (hopefully!)

v16:
- Implemented Konstantin's suggestions and comments
- Added return values to the API

v15:
- Fixed incorrect check in UMWAIT callback
- Fixed accidental whitespace changes

v14:
- Fixed ARM/PPC builds
- Addressed various review comments

v13:
- Reworked the librte_power code to require less locking and handle invalid
  parameters better
- Fix numerous rebase errors present in v12

v12:
- Rebase on top of 21.02
- Rework of power intrinsics code

Liang Ma (2):
  power: add PMD power management API and callback
  examples/l3fwd-power: enable PMD power mgmt

 doc/guides/prog_guide/power_man.rst           |  41 ++
 doc/guides/rel_notes/release_21_02.rst        |  10 +
 .../sample_app_ug/l3_forward_power_man.rst    |  35 ++
 examples/l3fwd-power/main.c                   |  90 ++++-
 lib/librte_power/meson.build                  |   5 +-
 lib/librte_power/rte_power_pmd_mgmt.c         | 365 ++++++++++++++++++
 lib/librte_power/rte_power_pmd_mgmt.h         |  91 +++++
 lib/librte_power/version.map                  |   5 +
 8 files changed, 638 insertions(+), 4 deletions(-)
 create mode 100644 lib/librte_power/rte_power_pmd_mgmt.c
 create mode 100644 lib/librte_power/rte_power_pmd_mgmt.h

-- 
2.25.1

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v17 00/11] Add PMD power management
  2021-01-18 17:02  3%                 ` Burakov, Anatoly
@ 2021-01-18 17:54  3%                   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-01-18 17:54 UTC (permalink / raw)
  To: Burakov, Anatoly
  Cc: Thomas Monjalon, David Hunt, chris.macnamara, dev, Ananyev,
	Konstantin, Timothy McDaniel, Bruce Richardson, Andrew Rybchenko,
	Yigit, Ferruh, Ajit Khaparde, Jerin Jacob Kollanukkaran

On Mon, Jan 18, 2021 at 6:02 PM Burakov, Anatoly
<anatoly.burakov@intel.com> wrote:
> >>> SPDK build is still broken.
> >>> http://mails.dpdk.org/archives/test-report/2021-January/174840.html
> > [...]
> >>> I guess this is because of the added dependency of rte_ethdev to rte_power.
> >>> Afaics, SPDK does not use pkg-config:
> >>> https://github.com/spdk/spdk/blob/master/lib/env_dpdk/env.mk#L53
> >>
> >> Sooo... this is an SPDK issue then? Because i can't see any way of
> >> fixing the issue on DPDK side.
> >
> > Yes SPDK should not skip pkg-config.
> > But it raises 2 question:
> >       - are we breaking ABI compatibility?
>
> Good question. Does including an extra intra-DPDK dependency count as
> ABI break? I was under impression that we didn't want DPDK to be
> distributed as individual libraries but rather would like it to be used
> as a whole, so if internal dependencies between components change, it's
> not a big deal (unless a third-party build system is used that
> explicitly specifies dependencies rather than using pkg-config).

I don't get where an ABI breakage would be.

What I reported is an issue with static link.

For shared link, I would expect librte_power would expose its
dependency on rte_ethdev via a DT_NEEDED entry.
The final binary does not have to be aware of it.
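
(For completeness: a consumer linking DPDK statically is expected to get
the full dependency closure from pkg-config rather than from a
hand-maintained library list, e.g. something along these lines in its
make rules. This is an illustrative fragment only, not SPDK's actual
build code:

	DPDK_CFLAGS  := $(shell pkg-config --cflags libdpdk)
	DPDK_LDFLAGS := $(shell pkg-config --static --libs libdpdk)

With that, the new librte_power -> librte_ethdev dependency gets pulled
in automatically at link time.)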


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v17 00/11] Add PMD power management
  2021-01-18 16:06  3%               ` Thomas Monjalon
@ 2021-01-18 17:02  3%                 ` Burakov, Anatoly
  2021-01-18 17:54  3%                   ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-01-18 17:02 UTC (permalink / raw)
  To: Thomas Monjalon, David Marchand, David Hunt, chris.macnamara
  Cc: dev, Ananyev, Konstantin, Timothy McDaniel, Bruce Richardson,
	andrew.rybchenko, ferruh.yigit, ajit.khaparde, jerinj

On 18-Jan-21 4:06 PM, Thomas Monjalon wrote:
> 18/01/2021 16:45, Burakov, Anatoly:
>> On 18-Jan-21 3:24 PM, David Marchand wrote:
>>> On Thu, Jan 14, 2021 at 3:46 PM Anatoly Burakov
>>> <anatoly.burakov@intel.com> wrote:
>>>>
>>>> This patchset proposes a simple API for Ethernet drivers to cause the
>>>> CPU to enter a power-optimized state while waiting for packets to
>>>> arrive. There are multiple proposed mechanisms to achieve said power
>>>> savings: simple frequency scaling, idle loop, and monitoring the Rx
>>>> queue for incoming packages. The latter is achieved through cooperation
>>>> with the NIC driver that will allow us to know address of wake up event,
>>>> and wait for writes on that address.
> [...]
>>>> Why are we putting it into ethdev as opposed to leaving this up to the
>>>> application? Our customers specifically requested a way to do it with
>>>> minimal changes to the application code. The current approach allows to
>>>> just flip a switch and automatically have power savings.
> 
> The customer laziness is usually a bad justification :)
> I think we could achieve the same with not too much code
> on application side.

Yes, we could. Customers could basically take this patch and reimplement
it inside their application, and get the same benefits (with the added
benefit of having knowledge about their queue/core mapping, and so being
able to use the PAUSE or SCALE schemes for more than one queue).

However, I still think it's a valid use case - if we can do it that way
and have a ready-made power management story, why not?

> And I'm not sure hiding queue management is sane.
> Remember this rule: application must remain in control.
>

The application can still be in control by just not using the API and
implementing things manually instead. Nothing is being taken away from
the application's ability to stay in control.

> [...]
>>> SPDK build is still broken.
>>> http://mails.dpdk.org/archives/test-report/2021-January/174840.html
> [...]
>>> I guess this is because of the added dependency of rte_ethdev to rte_power.
>>> Afaics, SPDK does not use pkg-config:
>>> https://github.com/spdk/spdk/blob/master/lib/env_dpdk/env.mk#L53
>>
>> Sooo... this is an SPDK issue then? Because i can't see any way of
>> fixing the issue on DPDK side.
> 
> Yes SPDK should not skip pkg-config.
> But it raises 2 question:
> 	- are we breaking ABI compatibility?

Good question. Does including an extra intra-DPDK dependency count as
an ABI break? I was under the impression that we didn't want DPDK to be
distributed as individual libraries, but rather would like it to be used
as a whole, so if internal dependencies between components change, it's
not a big deal (unless a third-party build system is used that
explicitly specifies dependencies rather than using pkg-config).

> 	- is ethdev management expected for librte_power?
> 
> It makes me wonder whether we should host the few functions mixing
> librte_ethdev and librte_power somewhere else.
> The question is where?
> 

That could be another possibility. We could put this into a separate
library, but IMO it would serve no purpose other than avoiding adding a
dependency on an *internal* component to librte_power. I'm not sure
it's a worthwhile trade-off.

-- 
Thanks,
Anatoly

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v17 00/11] Add PMD power management
  2021-01-18 15:45  0%             ` Burakov, Anatoly
@ 2021-01-18 16:06  3%               ` Thomas Monjalon
  2021-01-18 17:02  3%                 ` Burakov, Anatoly
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-18 16:06 UTC (permalink / raw)
  To: David Marchand, Burakov, Anatoly, David Hunt, chris.macnamara
  Cc: dev, Ananyev, Konstantin, Timothy McDaniel, Bruce Richardson,
	andrew.rybchenko, ferruh.yigit, ajit.khaparde, jerinj

18/01/2021 16:45, Burakov, Anatoly:
> On 18-Jan-21 3:24 PM, David Marchand wrote:
> > On Thu, Jan 14, 2021 at 3:46 PM Anatoly Burakov
> > <anatoly.burakov@intel.com> wrote:
> >>
> >> This patchset proposes a simple API for Ethernet drivers to cause the
> >> CPU to enter a power-optimized state while waiting for packets to
> >> arrive. There are multiple proposed mechanisms to achieve said power
> >> savings: simple frequency scaling, idle loop, and monitoring the Rx
> >> queue for incoming packages. The latter is achieved through cooperation
> >> with the NIC driver that will allow us to know address of wake up event,
> >> and wait for writes on that address.
[...]
> >> Why are we putting it into ethdev as opposed to leaving this up to the
> >> application? Our customers specifically requested a way to do it with
> >> minimal changes to the application code. The current approach allows to
> >> just flip a switch and automatically have power savings.

Customer laziness is usually a bad justification :)
I think we could achieve the same with not too much code
on the application side.
And I'm not sure hiding queue management is sane.
Remember this rule: the application must remain in control.

[...]
> > SPDK build is still broken.
> > http://mails.dpdk.org/archives/test-report/2021-January/174840.html
[...]
> > I guess this is because of the added dependency of rte_ethdev to rte_power.
> > Afaics, SPDK does not use pkg-config:
> > https://github.com/spdk/spdk/blob/master/lib/env_dpdk/env.mk#L53
> 
> Sooo... this is an SPDK issue then? Because i can't see any way of 
> fixing the issue on DPDK side.

Yes, SPDK should not skip pkg-config.
But it raises two questions:
	- are we breaking ABI compatibility?
	- is ethdev management expected for librte_power?

It makes me wonder whether we should host the few functions mixing
librte_ethdev and librte_power somewhere else.
The question is where?



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v17 00/11] Add PMD power management
  2021-01-18 15:24  0%           ` [dpdk-dev] [PATCH v17 00/11] Add PMD power management David Marchand
@ 2021-01-18 15:45  0%             ` Burakov, Anatoly
  2021-01-18 16:06  3%               ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-01-18 15:45 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Thomas Monjalon, Ananyev, Konstantin, Timothy McDaniel,
	David Hunt, Bruce Richardson, chris.macnamara

On 18-Jan-21 3:24 PM, David Marchand wrote:
> On Thu, Jan 14, 2021 at 3:46 PM Anatoly Burakov
> <anatoly.burakov@intel.com> wrote:
>>
>> This patchset proposes a simple API for Ethernet drivers to cause the
>> CPU to enter a power-optimized state while waiting for packets to
>> arrive. There are multiple proposed mechanisms to achieve said power
>> savings: simple frequency scaling, idle loop, and monitoring the Rx
>> queue for incoming packages. The latter is achieved through cooperation
>> with the NIC driver that will allow us to know address of wake up event,
>> and wait for writes on that address.
>>
>> On IA, this is achieved through using UMONITOR/UMWAIT instructions. They
>> are used in their raw opcode form because there is no widespread
>> compiler support for them yet. Still, the API is made generic enough to
>> hopefully support other architectures, if they happen to implement
>> similar instructions.
>>
>> To achieve power savings, there is a very simple mechanism used: we're
>> counting empty polls, and if a certain threshold is reached, we employ
>> one of the suggested power management schemes automatically, from within
>> a Rx callback inside the PMD. Once there's traffic again, the empty poll
>> counter is reset.
>>
>> This patchset also introduces a few changes into existing power
>> management-related intrinsics, namely to provide a native way of waking
>> up a sleeping core without application being responsible for it, as well
>> as general robustness improvements. There's quite a bit of locking going
>> on, but these locks are per-thread and very little (if any) contention
>> is expected, so the performance impact shouldn't be that bad (and in any
>> case the locking happens when we're about to sleep anyway).
>>
>> Why are we putting it into ethdev as opposed to leaving this up to the
>> application? Our customers specifically requested a way to do it with
>> minimal changes to the application code. The current approach allows to
>> just flip a switch and automatically have power savings.
>>
>> Things of note:
>>
>> - Only 1:1 core to queue mapping is supported, meaning that each lcore
>>    must at most handle RX on a single queue
>> - Support 3 type policies. Monitor/Pause/Frequency Scaling
>> - Power management is enabled per-queue
>> - The API doesn't extend to other device types
>>
>> v17:
>> - Added exception for ethdev driver-only ABI
>> - Added memory barriers for monitor/wakeup (Konstantin)
>> - Fixed compiled issues on non-x86 platforms (hopefully!)
> 
> SPDK build is still broken.
> http://mails.dpdk.org/archives/test-report/2021-January/174840.html
> 
> ==== 20 line log output for Ubuntu 18.04 (dpdk_compile_spdk): ====
> rte_power_pmd_mgmt.c:(.text.experimental+0x1cc): undefined reference
> to `rte_eth_add_rx_callback'
> rte_power_pmd_mgmt.c:(.text.experimental+0x1f8): undefined reference
> to `rte_eth_get_monitor_addr'
> rte_power_pmd_mgmt.c:(.text.experimental+0x37f): undefined reference
> to `rte_eth_dev_logtype'
> /dpdk/build/lib/librte_power.a(librte_power_rte_power_pmd_mgmt.c.o):
> In function `rte_power_pmd_mgmt_queue_disable':
> rte_power_pmd_mgmt.c:(.text.experimental+0x42a): undefined reference
> to `rte_eth_dev_is_valid_port'
> rte_power_pmd_mgmt.c:(.text.experimental+0x4e7): undefined reference
> to `rte_eth_remove_rx_callback'
> rte_power_pmd_mgmt.c:(.text.experimental+0x536): undefined reference
> to `rte_eth_remove_rx_callback'
> rte_power_pmd_mgmt.c:(.text.experimental+0x54d): undefined reference
> to `rte_eth_dev_logtype'
> collect2: error: ld returned 1 exit status
> /spdk/mk/spdk.app.mk:65: recipe for target 'iscsi_fuzz' failed
> /spdk/mk/spdk.subdirs.mk:44: recipe for target 'iscsi_fuzz' failed
> /spdk/mk/spdk.subdirs.mk:44: recipe for target 'fuzz' failed
> make[4]: *** [iscsi_fuzz] Error 1
> make[3]: *** [iscsi_fuzz] Error 2
> make[2]: *** [fuzz] Error 2
> /spdk/mk/spdk.subdirs.mk:44: recipe for target 'app' failed
> make[1]: *** [app] Error 2
> /spdk/mk/spdk.subdirs.mk:44: recipe for target 'test' failed
> make: *** [test] Error 2
> [2] Error running command.
> 
> 
> I guess this is because of the added dependency of rte_ethdev to rte_power.
> Afaics, SPDK does not use pkg-config:
> https://github.com/spdk/spdk/blob/master/lib/env_dpdk/env.mk#L53
> 
> 

Sooo... this is an SPDK issue then? Because i can't see any way of 
fixing the issue on DPDK side.

-- 
Thanks,
Anatoly

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v17 00/11] Add PMD power management
  2021-01-14 14:46  2%         ` [dpdk-dev] [PATCH v17 " Anatoly Burakov
  2021-01-14 14:46  2%           ` [dpdk-dev] [PATCH v17 01/11] eal: uninline power intrinsics Anatoly Burakov
  2021-01-14 14:46  7%           ` [dpdk-dev] [PATCH v17 06/11] ethdev: add simple power management API Anatoly Burakov
@ 2021-01-18 15:24  0%           ` David Marchand
  2021-01-18 15:45  0%             ` Burakov, Anatoly
  2 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-18 15:24 UTC (permalink / raw)
  To: Anatoly Burakov
  Cc: dev, Thomas Monjalon, Ananyev, Konstantin, Timothy McDaniel,
	David Hunt, Bruce Richardson, chris.macnamara

On Thu, Jan 14, 2021 at 3:46 PM Anatoly Burakov
<anatoly.burakov@intel.com> wrote:
>
> This patchset proposes a simple API for Ethernet drivers to cause the
> CPU to enter a power-optimized state while waiting for packets to
> arrive. There are multiple proposed mechanisms to achieve said power
> savings: simple frequency scaling, idle loop, and monitoring the Rx
> queue for incoming packages. The latter is achieved through cooperation
> with the NIC driver that will allow us to know address of wake up event,
> and wait for writes on that address.
>
> On IA, this is achieved through using UMONITOR/UMWAIT instructions. They
> are used in their raw opcode form because there is no widespread
> compiler support for them yet. Still, the API is made generic enough to
> hopefully support other architectures, if they happen to implement
> similar instructions.
>
> To achieve power savings, there is a very simple mechanism used: we're
> counting empty polls, and if a certain threshold is reached, we employ
> one of the suggested power management schemes automatically, from within
> a Rx callback inside the PMD. Once there's traffic again, the empty poll
> counter is reset.
>
> This patchset also introduces a few changes into existing power
> management-related intrinsics, namely to provide a native way of waking
> up a sleeping core without application being responsible for it, as well
> as general robustness improvements. There's quite a bit of locking going
> on, but these locks are per-thread and very little (if any) contention
> is expected, so the performance impact shouldn't be that bad (and in any
> case the locking happens when we're about to sleep anyway).
>
> Why are we putting it into ethdev as opposed to leaving this up to the
> application? Our customers specifically requested a way to do it with
> minimal changes to the application code. The current approach allows to
> just flip a switch and automatically have power savings.
>
> Things of note:
>
> - Only 1:1 core to queue mapping is supported, meaning that each lcore
>   must at most handle RX on a single queue
> - Support 3 type policies. Monitor/Pause/Frequency Scaling
> - Power management is enabled per-queue
> - The API doesn't extend to other device types
>
> v17:
> - Added exception for ethdev driver-only ABI
> - Added memory barriers for monitor/wakeup (Konstantin)
> - Fixed compiled issues on non-x86 platforms (hopefully!)

SPDK build is still broken.
http://mails.dpdk.org/archives/test-report/2021-January/174840.html

==== 20 line log output for Ubuntu 18.04 (dpdk_compile_spdk): ====
rte_power_pmd_mgmt.c:(.text.experimental+0x1cc): undefined reference
to `rte_eth_add_rx_callback'
rte_power_pmd_mgmt.c:(.text.experimental+0x1f8): undefined reference
to `rte_eth_get_monitor_addr'
rte_power_pmd_mgmt.c:(.text.experimental+0x37f): undefined reference
to `rte_eth_dev_logtype'
/dpdk/build/lib/librte_power.a(librte_power_rte_power_pmd_mgmt.c.o):
In function `rte_power_pmd_mgmt_queue_disable':
rte_power_pmd_mgmt.c:(.text.experimental+0x42a): undefined reference
to `rte_eth_dev_is_valid_port'
rte_power_pmd_mgmt.c:(.text.experimental+0x4e7): undefined reference
to `rte_eth_remove_rx_callback'
rte_power_pmd_mgmt.c:(.text.experimental+0x536): undefined reference
to `rte_eth_remove_rx_callback'
rte_power_pmd_mgmt.c:(.text.experimental+0x54d): undefined reference
to `rte_eth_dev_logtype'
collect2: error: ld returned 1 exit status
/spdk/mk/spdk.app.mk:65: recipe for target 'iscsi_fuzz' failed
/spdk/mk/spdk.subdirs.mk:44: recipe for target 'iscsi_fuzz' failed
/spdk/mk/spdk.subdirs.mk:44: recipe for target 'fuzz' failed
make[4]: *** [iscsi_fuzz] Error 1
make[3]: *** [iscsi_fuzz] Error 2
make[2]: *** [fuzz] Error 2
/spdk/mk/spdk.subdirs.mk:44: recipe for target 'app' failed
make[1]: *** [app] Error 2
/spdk/mk/spdk.subdirs.mk:44: recipe for target 'test' failed
make: *** [test] Error 2
[2] Error running command.


I guess this is because of the added dependency of rte_ethdev to rte_power.
Afaics, SPDK does not use pkg-config:
https://github.com/spdk/spdk/blob/master/lib/env_dpdk/env.mk#L53


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v5 1/2] ethdev: add new tunnel type for eCPRI
  2021-01-15 12:23  3%     ` Ferruh Yigit
@ 2021-01-18  2:40  0%       ` Guo, Jia
  0 siblings, 0 replies; 200+ results
From: Guo, Jia @ 2021-01-18  2:40 UTC (permalink / raw)
  To: Yigit, Ferruh, Zhang, Qi Z, thomas, andrew.rybchenko, Iremonger,
	Bernard, Lu, Wenzhuo, Xing, Beilei
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, Su, Simei, orika,
	getelson, maxime.coquelin, jerinj, ajit.khaparde, bingz,
	Kinsella, Ray, dodji, david.marchand


> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, January 15, 2021 8:23 PM
> To: Guo, Jia <jia.guo@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Iremonger,
> Bernard <bernard.iremonger@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> dev@dpdk.org; Su, Simei <simei.su@intel.com>; orika@nvidia.com;
> getelson@nvidia.com; maxime.coquelin@redhat.com; jerinj@marvell.com;
> ajit.khaparde@broadcom.com; bingz@nvidia.com; Kinsella, Ray
> <ray.kinsella@intel.com>; dodji@redhat.com; david.marchand@redhat.com
> Subject: Re: [dpdk-dev v5 1/2] ethdev: add new tunnel type for eCPRI
> 
> On 1/15/2021 5:15 AM, Jeff Guo wrote:
> > Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel
> type.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
> > Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > ---
> >   doc/guides/rel_notes/release_21_02.rst | 15 ++++++++++++++-
> >   lib/librte_ethdev/rte_ethdev.h         |  1 +
> >   2 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/doc/guides/rel_notes/release_21_02.rst
> > b/doc/guides/rel_notes/release_21_02.rst
> > index b1bb2d8679..80f71be8e6 100644
> > --- a/doc/guides/rel_notes/release_21_02.rst
> > +++ b/doc/guides/rel_notes/release_21_02.rst
> > @@ -61,6 +61,18 @@ New Features
> >
> >     * Added support for Stingray2 device.
> >
> > +* **Updated the Intel ice driver.**
> > +
> > +  Updated the Intel ice driver with new features and improvements,
> including:
> > +
> > +  * Added support for UDP dynamic port assignment for eCPRI protocol
> configure feature.
> > +
> > +* **Updated Intel iavf driver.**
> > +
> > +  Updated iavf PMD with new features and improvements, including:
> > +
> > +  * Added support for FDIR/RSS packet steering for flow type eCPRI
> protocol features.
> > +
> 
> These are not related to the patch, so dropping from the patch.
> 

Ok.

> >
> >   Removed Items
> >   -------------
> > @@ -110,7 +122,8 @@ ABI Changes
> >      Also, make sure to start the actual text at the margin.
> >      =======================================================
> >
> > -* No ABI change that would break compatibility with 20.11.
> > +* ethdev: the structure ``rte_eth_tunnel_type`` has added one
> > +parameter
> > +  ``RTE_TUNNEL_TYPE_ECPRI`` for eCPRI UDP port configuration.
> >
> 
> This is not an ABI break, so should not be in this section, also not an API change,
> we can't put there too.
> And this change is not big enough to add the new features, perhaps better to
> remove this and add the PMD feature updates as you did above for the
> relevant sets, so I am dropping this as well.
> 

Ok, I will update the doc in another upcoming patch. Thanks.

> >
> >   Known Issues
> > diff --git a/lib/librte_ethdev/rte_ethdev.h
> > b/lib/librte_ethdev/rte_ethdev.h index f5f8919186..2cbce958cf 100644
> > --- a/lib/librte_ethdev/rte_ethdev.h
> > +++ b/lib/librte_ethdev/rte_ethdev.h
> > @@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
> >   	RTE_TUNNEL_TYPE_IP_IN_GRE,
> >   	RTE_L2_TUNNEL_TYPE_E_TAG,
> >   	RTE_TUNNEL_TYPE_VXLAN_GPE,
> > +	RTE_TUNNEL_TYPE_ECPRI,
> >   	RTE_TUNNEL_TYPE_MAX,
> >   };
> >
> >


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] eal/headers: explicitly cast void * to type *
  @ 2021-01-17 17:13  3%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-01-17 17:13 UTC (permalink / raw)
  To: Tyler Retzlaff; +Cc: Bruce Richardson, Dmitry Kozlyuk, dev, navasile, stable

15/01/2021 20:21, Tyler Retzlaff:
> would you also like a patch submitted that stops installing the header. the
> change will be breaking if any other consumers have made the same mistake as
> we did. i'm not sure what dpdk's stance is on pulling headers back out of
> public space.

That's a good question.
If it is documented clearly enough that it is not part of the API,
I think we can stop installing the header.
Anyway, our only commitment is on ABI compatibility, so it should be OK.



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 1/1] devtools: avoid installing static binaries
  2021-01-15 15:24  3%     ` David Marchand
@ 2021-01-15 16:02  4%       ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-01-15 16:02 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Bruce Richardson

15/01/2021 16:24, David Marchand:
> On Wed, Jan 13, 2021 at 11:01 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > 13/01/2021 20:05, Thomas Monjalon:
> > > When testing compilation and checking ABI compatibility,
> > > there is no real need of static binaries eating disks.
> > >
> > > The static linkage of applications was already well tested,
> > > though the static examples tested with meson were limited to "l3fwd" only.
> > > The static build test with make is limited to "helloworld" example.
> > >
> > > The ABI compatibility is checked on shared libraries,
> > > and there is no need to test again on similar builds.
> > > A new parameter is added to the function "build",
> > > so the ABI check is enabled only for native gcc and clang shared builds,
> > > 32-bit, generic armv8 and ppc cross compilations.
> > > In other words, it is disabled for some static builds and some Arm ones.
> > >
> > > Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> > > ---
> > > v2:
> > > - separate ABI check enablement from default library
> > > - disable ABI check in specific Arm builds
> > > ---
> > [...]
> > > -build build-x86-default cc -Dlibdir=lib -Dmachine=$default_machine $use_shared
> > > +build build-x86-default cc ABI \
> > > +     -Dlibdir=lib -Dmachine=$default_machine $use_shared
> >
> > After a second thought, I think this one should be "skipABI".
> 
> No opinion on this one.
> 
> The title might need some tweak, since you also disabled the ABI check
> on some ARM targets.

Yes, you're right.
Disabling some ABI checks is a way to reduce the number
of static binaries, but it should be visible in the title.

> The rest lgtm.
> 
> Acked-by: David Marchand <david.marchand@redhat.com>

Applied with title "devtools: reduce ABI checks and static binaries"



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2 1/1] devtools: avoid installing static binaries
  2021-01-13 22:01  0%   ` Thomas Monjalon
@ 2021-01-15 15:24  3%     ` David Marchand
  2021-01-15 16:02  4%       ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-15 15:24 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Bruce Richardson

On Wed, Jan 13, 2021 at 11:01 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 13/01/2021 20:05, Thomas Monjalon:
> > When testing compilation and checking ABI compatibility,
> > there is no real need of static binaries eating disks.
> >
> > The static linkage of applications was already well tested,
> > though the static examples tested with meson were limited to "l3fwd" only.
> > The static build test with make is limited to "helloworld" example.
> >
> > The ABI compatibility is checked on shared libraries,
> > and there is no need to test again on similar builds.
> > A new parameter is added to the function "build",
> > so the ABI check is enabled only for native gcc and clang shared builds,
> > 32-bit, generic armv8 and ppc cross compilations.
> > In other words, it is disabled for some static builds and some Arm ones.
> >
> > Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> > ---
> > v2:
> > - separate ABI check enablement from default library
> > - disable ABI check in specific Arm builds
> > ---
> [...]
> > -build build-x86-default cc -Dlibdir=lib -Dmachine=$default_machine $use_shared
> > +build build-x86-default cc ABI \
> > +     -Dlibdir=lib -Dmachine=$default_machine $use_shared
>
> After a second thought, I think this one should be "skipABI".

No opinion on this one.

The title might need some tweak, since you also disabled the ABI check
on some ARM targets.
The rest lgtm.

Acked-by: David Marchand <david.marchand@redhat.com>

-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-dev v5 1/2] ethdev: add new tunnel type for eCPRI
  2021-01-15  5:15 11%   ` [dpdk-dev] [dpdk-dev v5 1/2] ethdev: add new tunnel type " Jeff Guo
@ 2021-01-15 12:23  3%     ` Ferruh Yigit
  2021-01-18  2:40  0%       ` Guo, Jia
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-01-15 12:23 UTC (permalink / raw)
  To: Jeff Guo, qi.z.zhang, thomas, andrew.rybchenko,
	bernard.iremonger, wenzhuo.lu, beilei.xing
  Cc: jingjing.wu, qiming.yang, haiyue.wang, dev, simei.su, orika,
	getelson, maxime.coquelin, jerinj, ajit.khaparde, bingz,
	ray.kinsella, dodji, david.marchand

On 1/15/2021 5:15 AM, Jeff Guo wrote:
> Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel type.
> 
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
>   doc/guides/rel_notes/release_21_02.rst | 15 ++++++++++++++-
>   lib/librte_ethdev/rte_ethdev.h         |  1 +
>   2 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
> index b1bb2d8679..80f71be8e6 100644
> --- a/doc/guides/rel_notes/release_21_02.rst
> +++ b/doc/guides/rel_notes/release_21_02.rst
> @@ -61,6 +61,18 @@ New Features
>   
>     * Added support for Stingray2 device.
>   
> +* **Updated the Intel ice driver.**
> +
> +  Updated the Intel ice driver with new features and improvements, including:
> +
> +  * Added support for UDP dynamic port assignment for eCPRI protocol configure feature.
> +
> +* **Updated Intel iavf driver.**
> +
> +  Updated iavf PMD with new features and improvements, including:
> +
> +  * Added support for FDIR/RSS packet steering for flow type eCPRI protocol features.
> +

These are not related to the patch, so I am dropping them from it.

>   
>   Removed Items
>   -------------
> @@ -110,7 +122,8 @@ ABI Changes
>      Also, make sure to start the actual text at the margin.
>      =======================================================
>   
> -* No ABI change that would break compatibility with 20.11.
> +* ethdev: the structure ``rte_eth_tunnel_type`` has added one parameter
> +  ``RTE_TUNNEL_TYPE_ECPRI`` for eCPRI UDP port configuration.
>   

This is not an ABI break, so it should not be in this section; it is also not an
API change, so we can't put it there either.
And this change is not big enough to warrant a new-features entry; perhaps better
to remove this and add the PMD feature updates, as you did above, for the relevant
sets, so I am dropping this as well.

>   
>   Known Issues
> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
> index f5f8919186..2cbce958cf 100644
> --- a/lib/librte_ethdev/rte_ethdev.h
> +++ b/lib/librte_ethdev/rte_ethdev.h
> @@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
>   	RTE_TUNNEL_TYPE_IP_IN_GRE,
>   	RTE_L2_TUNNEL_TYPE_E_TAG,
>   	RTE_TUNNEL_TYPE_VXLAN_GPE,
> +	RTE_TUNNEL_TYPE_ECPRI,
>   	RTE_TUNNEL_TYPE_MAX,
>   };
>   
> 


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3] pci/windows: fix build with SDK >= 10.0.20253
  @ 2021-01-15  5:34  3%     ` Tyler Retzlaff
  0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2021-01-15  5:34 UTC (permalink / raw)
  To: Ranjit Menon; +Cc: dev

On Thu, Jan 14, 2021 at 02:59:44PM -0800, Ranjit Menon wrote:

> Quick q: Do you know when this new SDK will be available publicly?

there are periodic releases of the sdk [2] that match the versions of windows
available through the windows insider program [1].

i can see the latest available appears to be 20279 (so later than the
kit i referenced in the change). so to answer your question, a newer kit
is available now. just remember preview kits do not provide a compatibility
guarantee, i.e. api and abi may change.

[1] https://insider.windows.com/en-us/
[2] https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewSDK


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [dpdk-dev v5 1/2] ethdev: add new tunnel type for eCPRI
  @ 2021-01-15  5:15 11%   ` Jeff Guo
  2021-01-15 12:23  3%     ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Jeff Guo @ 2021-01-15  5:15 UTC (permalink / raw)
  To: qi.z.zhang, thomas, ferruh.yigit, andrew.rybchenko,
	bernard.iremonger, wenzhuo.lu, beilei.xing
  Cc: jingjing.wu, qiming.yang, haiyue.wang, dev, jia.guo, simei.su,
	orika, getelson, maxime.coquelin, jerinj, ajit.khaparde, bingz,
	ray.kinsella, dodji, david.marchand

Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel type.

Signed-off-by: Jeff Guo <jia.guo@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 doc/guides/rel_notes/release_21_02.rst | 15 ++++++++++++++-
 lib/librte_ethdev/rte_ethdev.h         |  1 +
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index b1bb2d8679..80f71be8e6 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -61,6 +61,18 @@ New Features
 
   * Added support for Stingray2 device.
 
+* **Updated the Intel ice driver.**
+
+  Updated the Intel ice driver with new features and improvements, including:
+
+  * Added support for UDP dynamic port assignment for eCPRI protocol configure feature.
+
+* **Updated Intel iavf driver.**
+
+  Updated iavf PMD with new features and improvements, including:
+
+  * Added support for FDIR/RSS packet steering for flow type eCPRI protocol features.
+
 
 Removed Items
 -------------
@@ -110,7 +122,8 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* No ABI change that would break compatibility with 20.11.
+* ethdev: the structure ``rte_eth_tunnel_type`` has added one parameter
+  ``RTE_TUNNEL_TYPE_ECPRI`` for eCPRI UDP port configuration.
 
 
 Known Issues
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index f5f8919186..2cbce958cf 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
 	RTE_TUNNEL_TYPE_IP_IN_GRE,
 	RTE_L2_TUNNEL_TYPE_E_TAG,
 	RTE_TUNNEL_TYPE_VXLAN_GPE,
+	RTE_TUNNEL_TYPE_ECPRI,
 	RTE_TUNNEL_TYPE_MAX,
 };
 
-- 
2.20.1


^ permalink raw reply	[relevance 11%]

* [dpdk-dev] [dpdk-dev v4 1/2] ethdev: add new tunnel type for eCPRI
  @ 2021-01-15  4:35 11%   ` Jeff Guo
  0 siblings, 0 replies; 200+ results
From: Jeff Guo @ 2021-01-15  4:35 UTC (permalink / raw)
  To: qi.z.zhang, thomas, ferruh.yigit, andrew.rybchenko,
	bernard.iremonger, wenzhuo.lu, beilei.xing
  Cc: jingjing.wu, qiming.yang, haiyue.wang, dev, jia.guo, simei.su,
	orika, getelson, maxime.coquelin, jerinj, ajit.khaparde, bingz,
	ray.kinsella, dodji, david.marchand

Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel type.

Signed-off-by: Jeff Guo <jia.guo@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 doc/guides/rel_notes/release_21_02.rst | 15 ++++++++++++++-
 lib/librte_ethdev/rte_ethdev.h         |  1 +
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index b1bb2d8679..2de6afdb85 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -61,6 +61,18 @@ New Features
 
   * Added support for Stingray2 device.
 
+* **Updated the Intel ice driver.**
+
+  Updated the Intel ice driver with new features and improvements, including:
+
+  * Added support for UDP dynamic port assignment for eCPRI protocol configure feature. 
+
+* **Updated Intel iavf driver.**
+
+  Updated iavf PMD with new features and improvements, including:
+
+  * Added support for FDIR/RSS packet steering for flow type eCPRI protocol features.
+
 
 Removed Items
 -------------
@@ -110,7 +122,8 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* No ABI change that would break compatibility with 20.11.
+* ethdev: the structure ``rte_eth_tunnel_type`` has added one parameter
+  ``RTE_TUNNEL_TYPE_ECPRI`` for eCPRI UDP port configuration.
 
 
 Known Issues
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index f5f8919186..2cbce958cf 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
 	RTE_TUNNEL_TYPE_IP_IN_GRE,
 	RTE_L2_TUNNEL_TYPE_E_TAG,
 	RTE_TUNNEL_TYPE_VXLAN_GPE,
+	RTE_TUNNEL_TYPE_ECPRI,
 	RTE_TUNNEL_TYPE_MAX,
 };
 
-- 
2.20.1


^ permalink raw reply	[relevance 11%]

* [dpdk-dev] [dpdk-dev v3 1/2] ethdev: add new tunnel type for ecpri
  @ 2021-01-15  2:42 11%   ` Jeff Guo
  0 siblings, 0 replies; 200+ results
From: Jeff Guo @ 2021-01-15  2:42 UTC (permalink / raw)
  To: qi.z.zhang, thomas, ferruh.yigit, andrew.rybchenko,
	bernard.iremonger, wenzhuo.lu, beilei.xing
  Cc: jingjing.wu, qiming.yang, haiyue.wang, dev, jia.guo, simei.su,
	orika, getelson, maxime.coquelin, jerinj, ajit.khaparde, bingz,
	ray.kinsella, dodji, david.marchand

Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel type.

Signed-off-by: Jeff Guo <jia.guo@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 doc/guides/rel_notes/release_21_02.rst | 3 ++-
 lib/librte_ethdev/rte_ethdev.h         | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index b1bb2d8679..e5168d1312 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -110,7 +110,8 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* No ABI change that would break compatibility with 20.11.
+* ethdev: the structure ``rte_eth_tunnel_type`` has added one parameter
+  ``RTE_TUNNEL_TYPE_ECPRI`` for ecpri UDP port configuration.
 
 
 Known Issues
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index f5f8919186..2cbce958cf 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
 	RTE_TUNNEL_TYPE_IP_IN_GRE,
 	RTE_L2_TUNNEL_TYPE_E_TAG,
 	RTE_TUNNEL_TYPE_VXLAN_GPE,
+	RTE_TUNNEL_TYPE_ECPRI,
 	RTE_TUNNEL_TYPE_MAX,
 };
 
-- 
2.20.1


^ permalink raw reply	[relevance 11%]

* Re: [dpdk-dev] [PATCH v3 01/22] ethdev: fix MTU size exceeds max rx packet length
  @ 2021-01-14 20:44  3%         ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-01-14 20:44 UTC (permalink / raw)
  To: Andrew Boyer
  Cc: Steve Yang, dev, thomas, andrew.rybchenko, oulijun, Konstantin Ananyev

On 1/14/2021 5:29 PM, Andrew Boyer wrote:
> 
> 
>> On Jan 14, 2021, at 12:13 PM, Ferruh Yigit <ferruh.yigit@intel.com 
>> <mailto:ferruh.yigit@intel.com>> wrote:
>>
>> On 1/14/2021 4:36 PM, Ferruh Yigit wrote:
>>> On 1/14/2021 9:45 AM, Steve Yang wrote:
>>>> Ethdev is using default Ethernet overhead to decide if provided
>>>> 'max_rx_pkt_len' value is bigger than max (non jumbo) MTU value,
>>>> and limits it to MAX if it is.
>>>>
>>>> Since the application/driver used Ethernet overhead is different than
>>>> the ethdev one, check result is wrong.
>>>>
>>>> If the driver is using Ethernet overhead bigger than the default one,
>>>> the provided 'max_rx_pkt_len' is trimmed down, and in the driver when
>>>> correct Ethernet overhead is used to convert back, the resulting MTU is
>>>> less than the intended one, causing some packets to be dropped.
>>>>
>>>> Like,
>>>> app     -> max_rx_pkt_len = 1500/*mtu*/ + 22/*overhead*/ = 1522
>>>> ethdev  -> 1522 > 1518/*MAX*/; max_rx_pkt_len = 1518
>>>> driver  -> MTU = 1518 - 22 = 1496
>>>> Packets with size 1497-1500 are dropped although intention is to be able
>>>> to send/receive them.
>>>>
>>>> The fix is to make ethdev use the correct Ethernet overhead for port,
>>>> instead of default one.
>>>>
>>>> Fixes: 59d0ecdbf0e1 ("ethdev: MTU accessors")
>>>>
>>>> Signed-off-by: Steve Yang <stevex.yang@intel.com <mailto:stevex.yang@intel.com>>
>>> <...>
>>>> @@ -1410,11 +1422,18 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t 
>>>> nb_rx_q, uint16_t nb_tx_q,
>>>> goto rollback;
>>>> }
>>>> } else {
>>>> -        if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN ||
>>>> -            dev_conf->rxmode.max_rx_pkt_len > RTE_ETHER_MAX_LEN)
>>>> +        uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
>>>> +        if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
>>>> +            pktlen > RTE_ETHER_MTU + overhead_len)
>>>> /* Use default value */
>>>> dev->data->dev_conf.rxmode.max_rx_pkt_len =
>>>> -                            RTE_ETHER_MAX_LEN;
>>>> +                        RTE_ETHER_MTU + overhead_len;
>>> What do you think removing the above check, the else block, completely?
>>> Since the 'max_rx_pkt_len' should not be used when jumbo frame is not set.
>>
>> As I tested removing this check is causing problem because some PMDs are using 
>> the 'max_rx_pkt_len' even jumbo frame is not set.
>>
>> Perhaps better to keep it, and make a separate patch later to remove this 
>> check, after PMDs fixed.
> 
> Hello Ferruh -
> Working on fixing our PMD here. Do you want PMDs to update the JUMBO_FRAME flag 
> based on the mtu value in dev_set_mtu(), or do you want the application to be 
> solely responsible for it?
> 

Hi Andrew,

Technically the JUMBO_FRAME flag is a user config and the application should set
it. It is the application's responsibility to check the capability and set the
flag when necessary.

That said, many PMDs set it based on the provided MTU value: if the explicitly
requested MTU is bigger than RTE_ETHER_MTU, the user has implied JUMBO_FRAME
support, so in that case PMDs set the flag implicitly instead of failing.

In another thread Andrew R. & Konstantin suggested removing the JUMBO_FRAME
flag, since it is redundant and causes this kind of confusion; instead the driver
can decide based on the requested MTU value, and the driver-reported 'max_mtu'
value can be used by the application to detect the capability. We will probably
make this change, but it can be done only in the ABI-breaking release, v21.11.

For now, PMD can set the flag itself if requested MTU > RTE_ETHER_MTU and driver 
supports jumbo frames.
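
For illustration, a dev_set_mtu callback in a hypothetical PMD could do something
along these lines (sketch only; the "mydrv"/MYDRV_* names are made up, and the
frame size arithmetic depends on the driver's own Ethernet overhead):

/*
 * Hypothetical sketch of a PMD set_mtu callback; MYDRV_* and mydrv_*
 * are invented names, not a real driver.
 */
static int
mydrv_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
	/* driver-specific L2 overhead (Ethernet header + CRC + VLAN tags) */
	uint32_t frame_size = mtu + MYDRV_ETH_OVERHEAD;

	if (mtu < RTE_ETHER_MIN_MTU || frame_size > MYDRV_MAX_FRAME_SIZE)
		return -EINVAL;

	/* requested MTU implies jumbo frames: set the flag implicitly */
	if (mtu > RTE_ETHER_MTU)
		dev->data->dev_conf.rxmode.offloads |=
				DEV_RX_OFFLOAD_JUMBO_FRAME;
	else
		dev->data->dev_conf.rxmode.offloads &=
				~DEV_RX_OFFLOAD_JUMBO_FRAME;

	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;

	return 0;
}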

> Thanks,
> Andrew
> 
>>>> +    }
>>>> +
>>>> +    /* Scale the MTU size to adapt max_rx_pkt_len */
>>>> +    if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>>>> +        dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
>>>> +                overhead_len;
>>>> }
>>> Above if block has exact same check, why not move it above block?
>>
>> Can you still send a new version for above change please?
> 


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v2] eal/rwlock: add note about writer starvation
  2021-01-12  1:04  3% ` [dpdk-dev] [PATCH] eal/rwlock: add note about writer starvation Stephen Hemminger
@ 2021-01-14 16:55  3%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-01-14 16:55 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

The implementation of reader/writer locks in DPDK (from first release)
is simple and fast. But it can lead to writer starvation issues.

It is not easy to fix this without changing the ABI and potentially
breaking customer applications that expect the unfair behavior.

The Wikipedia page on the readers-writers problem has a similar example
which summarizes the problem pretty well.
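
A minimal sketch of the starvation scenario (illustration only, not part of the
patch): with overlapping readers launched via rte_eal_remote_launch(), the read
count never drops to zero, so the writer below may spin forever.

#include <rte_common.h>
#include <rte_rwlock.h>

static rte_rwlock_t lock = RTE_RWLOCK_INITIALIZER;

/* Several reader lcores doing this keep the read count above zero... */
static int
reader_loop(__rte_unused void *arg)
{
	for (;;) {
		rte_rwlock_read_lock(&lock);
		/* ... read shared state ... */
		rte_rwlock_read_unlock(&lock);
	}
	return 0;
}

/* ...so a writer lcore doing this can starve: the write lock is only
 * granted once no reader holds the lock.
 */
static int
writer_loop(__rte_unused void *arg)
{
	rte_rwlock_write_lock(&lock);
	/* ... update shared state ... */
	rte_rwlock_write_unlock(&lock);
	return 0;
}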

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
v2 - fix wording and spelling

 lib/librte_eal/include/generic/rte_rwlock.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/librte_eal/include/generic/rte_rwlock.h b/lib/librte_eal/include/generic/rte_rwlock.h
index da9bc3e9c0e2..15980e2d93e5 100644
--- a/lib/librte_eal/include/generic/rte_rwlock.h
+++ b/lib/librte_eal/include/generic/rte_rwlock.h
@@ -15,6 +15,12 @@
  * one writer. All readers are blocked until the writer is finished
  * writing.
  *
+ * Note: This version of reader/writer locks is not fair because
+ * readers do not block for pending writers. A stream of readers can
+ * therefore lock out all potential writers and starve them.
+ * This is because after the first reader locks the resource,
+ * no writer can lock it. The writer will only be able to get the
+ * lock once it has been released by the last reader.
  */
 
 #ifdef __cplusplus
-- 
2.29.2


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v17 06/11] ethdev: add simple power management API
  2021-01-14 14:46  2%         ` [dpdk-dev] [PATCH v17 " Anatoly Burakov
  2021-01-14 14:46  2%           ` [dpdk-dev] [PATCH v17 01/11] eal: uninline power intrinsics Anatoly Burakov
@ 2021-01-14 14:46  7%           ` Anatoly Burakov
  2021-01-18 15:24  0%           ` [dpdk-dev] [PATCH v17 00/11] Add PMD power management David Marchand
  2 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2021-01-14 14:46 UTC (permalink / raw)
  To: dev
  Cc: Liang Ma, Ray Kinsella, Neil Horman, Thomas Monjalon,
	Ferruh Yigit, Andrew Rybchenko, konstantin.ananyev,
	timothy.mcdaniel, david.hunt, bruce.richardson, chris.macnamara

From: Liang Ma <liang.j.ma@intel.com>

Add a simple API to allow getting the monitor conditions for
power-optimized monitoring of the Rx queues from the PMD, and add the
corresponding release notes entry.
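
For reference, a minimal usage sketch (assuming only the API added in this
patch; error handling trimmed):

#include <rte_ethdev.h>
#include <rte_power_intrinsics.h>

/* Sketch: fetch the monitor condition for one Rx queue. */
static int
get_queue_monitor(uint16_t port_id, uint16_t queue_id,
		struct rte_power_monitor_cond *pmc)
{
	int ret = rte_eth_get_monitor_addr(port_id, queue_id, pmc);

	/*
	 * -ENOTSUP means the PMD does not implement get_monitor_addr;
	 * on success, pmc describes the address that rte_power_monitor()
	 * can wait on until the NIC writes a new Rx descriptor.
	 */
	return ret;
}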

Signed-off-by: Liang Ma <liang.j.ma@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---

Notes:
    v17:
    - Added libabigail ignore for driver-only ABI in ethdev as suggested by David
    
    v13:
    - Fix typos and issues raised by Andrew

 devtools/libabigail.abignore           |  3 +++
 doc/guides/rel_notes/release_21_02.rst |  5 +++++
 lib/librte_ethdev/rte_ethdev.c         | 28 ++++++++++++++++++++++++++
 lib/librte_ethdev/rte_ethdev.h         | 25 +++++++++++++++++++++++
 lib/librte_ethdev/rte_ethdev_driver.h  | 22 ++++++++++++++++++++
 lib/librte_ethdev/version.map          |  3 +++
 6 files changed, 86 insertions(+)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 025f2c01bc..1c16114dce 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -7,3 +7,6 @@
         symbol_version = INTERNAL
 [suppress_variable]
         symbol_version = INTERNAL
+; Explicit ignore for driver-only ABI
+[suppress_type]
+        name = eth_dev_ops
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 706cbf8f0c..ec9958a141 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -55,6 +55,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **ethdev: added new API for PMD power management**
+
+  * ``rte_eth_get_monitor_addr()``, to be used in conjunction with
+    ``rte_power_monitor()`` to enable automatic power management for PMDs.
+
 
 Removed Items
 -------------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 17ddacc78d..e19dbd838b 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -5115,6 +5115,34 @@ rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 		       dev->dev_ops->tx_burst_mode_get(dev, queue_id, mode));
 }
 
+int
+rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id,
+		struct rte_power_monitor_cond *pmc)
+{
+	struct rte_eth_dev *dev;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+	dev = &rte_eth_devices[port_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_monitor_addr, -ENOTSUP);
+
+	if (queue_id >= dev->data->nb_rx_queues) {
+		RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (pmc == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid power monitor condition=%p\n",
+				pmc);
+		return -EINVAL;
+	}
+
+	return eth_err(port_id,
+		dev->dev_ops->get_monitor_addr(dev->data->rx_queues[queue_id],
+			pmc));
+}
+
 int
 rte_eth_dev_set_mc_addr_list(uint16_t port_id,
 			     struct rte_ether_addr *mc_addr_set,
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index f5f8919186..ca0f91312e 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -157,6 +157,7 @@ extern "C" {
 #include <rte_common.h>
 #include <rte_config.h>
 #include <rte_ether.h>
+#include <rte_power_intrinsics.h>
 
 #include "rte_ethdev_trace_fp.h"
 #include "rte_dev_info.h"
@@ -4334,6 +4335,30 @@ __rte_experimental
 int rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 	struct rte_eth_burst_mode *mode);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the monitor condition for a given receive queue.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The Rx queue on the Ethernet device for which information
+ *   will be retrieved.
+ * @param pmc
+ *   The pointer to the power-optimized monitoring condition structure.
+ *
+ * @return
+ *   - 0: Success.
+ *   -ENOTSUP: Operation not supported.
+ *   -EINVAL: Invalid parameters.
+ *   -ENODEV: Invalid port ID.
+ */
+__rte_experimental
+int rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id,
+		struct rte_power_monitor_cond *pmc);
+
 /**
  * Retrieve device registers and register attributes (number of registers and
  * register size)
diff --git a/lib/librte_ethdev/rte_ethdev_driver.h b/lib/librte_ethdev/rte_ethdev_driver.h
index 0eacfd8425..3b3b0ec1a0 100644
--- a/lib/librte_ethdev/rte_ethdev_driver.h
+++ b/lib/librte_ethdev/rte_ethdev_driver.h
@@ -763,6 +763,26 @@ typedef int (*eth_hairpin_queue_peer_unbind_t)
 	(struct rte_eth_dev *dev, uint16_t cur_queue, uint32_t direction);
 /**< @internal Unbind peer queue from the current queue. */
 
+/**
+ * @internal
+ * Get address of memory location whose contents will change whenever there is
+ * new data to be received on an Rx queue.
+ *
+ * @param rxq
+ *   Ethdev queue pointer.
+ * @param pmc
+ *   The pointer to the power-optimized monitoring condition structure.
+ * @return
+ *   Negative errno value on error, 0 on success.
+ *
+ * @retval 0
+ *   Success
+ * @retval -EINVAL
+ *   Invalid parameters
+ */
+typedef int (*eth_get_monitor_addr_t)(void *rxq,
+		struct rte_power_monitor_cond *pmc);
+
 /**
  * @internal A structure containing the functions exported by an Ethernet driver.
  */
@@ -917,6 +937,8 @@ struct eth_dev_ops {
 	/**< Set up the connection between the pair of hairpin queues. */
 	eth_hairpin_queue_peer_unbind_t hairpin_queue_peer_unbind;
 	/**< Disconnect the hairpin queues of a pair from each other. */
+	eth_get_monitor_addr_t get_monitor_addr;
+	/**< Get power monitoring condition for Rx queue. */
 };
 
 /**
diff --git a/lib/librte_ethdev/version.map b/lib/librte_ethdev/version.map
index d3f5410806..a124e1e370 100644
--- a/lib/librte_ethdev/version.map
+++ b/lib/librte_ethdev/version.map
@@ -240,6 +240,9 @@ EXPERIMENTAL {
 	rte_flow_get_restore_info;
 	rte_flow_tunnel_action_decap_release;
 	rte_flow_tunnel_item_release;
+
+	# added in 21.02
+	rte_eth_get_monitor_addr;
 };
 
 INTERNAL {
-- 
2.25.1

^ permalink raw reply	[relevance 7%]

* [dpdk-dev] [PATCH v17 01/11] eal: uninline power intrinsics
  2021-01-14 14:46  2%         ` [dpdk-dev] [PATCH v17 " Anatoly Burakov
@ 2021-01-14 14:46  2%           ` Anatoly Burakov
  2021-01-14 14:46  7%           ` [dpdk-dev] [PATCH v17 06/11] ethdev: add simple power management API Anatoly Burakov
  2021-01-18 15:24  0%           ` [dpdk-dev] [PATCH v17 00/11] Add PMD power management David Marchand
  2 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2021-01-14 14:46 UTC (permalink / raw)
  To: dev
  Cc: Jerin Jacob, Ruifeng Wang, Jan Viktorin, David Christensen,
	Ray Kinsella, Neil Horman, Bruce Richardson, Konstantin Ananyev,
	thomas, timothy.mcdaniel, david.hunt, chris.macnamara

Currently, power intrinsics are inline functions. Make them part of the
ABI so that we can have various internal data associated with them
without exposing said data to the outside world.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---

Notes:
    v14:
    - Fix compile issues on ARM and PPC64 by moving implementations to .c files

 .../arm/include/rte_power_intrinsics.h        |  40 ------
 lib/librte_eal/arm/meson.build                |   1 +
 lib/librte_eal/arm/rte_power_intrinsics.c     |  45 +++++++
 .../include/generic/rte_power_intrinsics.h    |   6 +-
 .../ppc/include/rte_power_intrinsics.h        |  40 ------
 lib/librte_eal/ppc/meson.build                |   1 +
 lib/librte_eal/ppc/rte_power_intrinsics.c     |  45 +++++++
 lib/librte_eal/version.map                    |   3 +
 .../x86/include/rte_power_intrinsics.h        | 115 -----------------
 lib/librte_eal/x86/meson.build                |   1 +
 lib/librte_eal/x86/rte_power_intrinsics.c     | 120 ++++++++++++++++++
 11 files changed, 219 insertions(+), 198 deletions(-)
 create mode 100644 lib/librte_eal/arm/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/ppc/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/x86/rte_power_intrinsics.c

diff --git a/lib/librte_eal/arm/include/rte_power_intrinsics.h b/lib/librte_eal/arm/include/rte_power_intrinsics.h
index a4a1bc1159..9e498e9ebf 100644
--- a/lib/librte_eal/arm/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/arm/include/rte_power_intrinsics.h
@@ -13,46 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	RTE_SET_USED(tsc_timestamp);
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/arm/meson.build b/lib/librte_eal/arm/meson.build
index d62875ebae..6ec53ea03a 100644
--- a/lib/librte_eal/arm/meson.build
+++ b/lib/librte_eal/arm/meson.build
@@ -7,4 +7,5 @@ sources += files(
 	'rte_cpuflags.c',
 	'rte_cycles.c',
 	'rte_hypervisor.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/arm/rte_power_intrinsics.c b/lib/librte_eal/arm/rte_power_intrinsics.c
new file mode 100644
index 0000000000..ab1f44f611
--- /dev/null
+++ b/lib/librte_eal/arm/rte_power_intrinsics.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+/**
+ * This function is not supported on ARM.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on ARM.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(lck);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on ARM.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	RTE_SET_USED(tsc_timestamp);
+}
diff --git a/lib/librte_eal/include/generic/rte_power_intrinsics.h b/lib/librte_eal/include/generic/rte_power_intrinsics.h
index dd520d90fa..67977bd511 100644
--- a/lib/librte_eal/include/generic/rte_power_intrinsics.h
+++ b/lib/librte_eal/include/generic/rte_power_intrinsics.h
@@ -52,7 +52,7 @@
  *   to undefined result.
  */
 __rte_experimental
-static inline void rte_power_monitor(const volatile void *p,
+void rte_power_monitor(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz);
 
@@ -97,7 +97,7 @@ static inline void rte_power_monitor(const volatile void *p,
  *   wakes up.
  */
 __rte_experimental
-static inline void rte_power_monitor_sync(const volatile void *p,
+void rte_power_monitor_sync(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz,
 		rte_spinlock_t *lck);
@@ -118,6 +118,6 @@ static inline void rte_power_monitor_sync(const volatile void *p,
  *   architecture-dependent.
  */
 __rte_experimental
-static inline void rte_power_pause(const uint64_t tsc_timestamp);
+void rte_power_pause(const uint64_t tsc_timestamp);
 
 #endif /* _RTE_POWER_INTRINSIC_H_ */
diff --git a/lib/librte_eal/ppc/include/rte_power_intrinsics.h b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
index 4ed03d521f..c0e9ac279f 100644
--- a/lib/librte_eal/ppc/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
@@ -13,46 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	RTE_SET_USED(tsc_timestamp);
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/ppc/meson.build b/lib/librte_eal/ppc/meson.build
index f4b6d95c42..43c46542fb 100644
--- a/lib/librte_eal/ppc/meson.build
+++ b/lib/librte_eal/ppc/meson.build
@@ -7,4 +7,5 @@ sources += files(
 	'rte_cpuflags.c',
 	'rte_cycles.c',
 	'rte_hypervisor.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/ppc/rte_power_intrinsics.c b/lib/librte_eal/ppc/rte_power_intrinsics.c
new file mode 100644
index 0000000000..84340ca2a4
--- /dev/null
+++ b/lib/librte_eal/ppc/rte_power_intrinsics.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+/**
+ * This function is not supported on PPC64.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on PPC64.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(lck);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on PPC64.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	RTE_SET_USED(tsc_timestamp);
+}
diff --git a/lib/librte_eal/version.map b/lib/librte_eal/version.map
index b1db7ec795..32eceb8869 100644
--- a/lib/librte_eal/version.map
+++ b/lib/librte_eal/version.map
@@ -405,6 +405,9 @@ EXPERIMENTAL {
 	rte_vect_set_max_simd_bitwidth;
 
 	# added in 21.02
+	rte_power_monitor;
+	rte_power_monitor_sync;
+	rte_power_pause;
 	rte_thread_tls_key_create;
 	rte_thread_tls_key_delete;
 	rte_thread_tls_value_get;
diff --git a/lib/librte_eal/x86/include/rte_power_intrinsics.h b/lib/librte_eal/x86/include/rte_power_intrinsics.h
index c7d790c854..e4c2b87f73 100644
--- a/lib/librte_eal/x86/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/x86/include/rte_power_intrinsics.h
@@ -13,121 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-static inline uint64_t
-__rte_power_get_umwait_val(const volatile void *p, const uint8_t sz)
-{
-	switch (sz) {
-	case sizeof(uint8_t):
-		return *(const volatile uint8_t *)p;
-	case sizeof(uint16_t):
-		return *(const volatile uint16_t *)p;
-	case sizeof(uint32_t):
-		return *(const volatile uint32_t *)p;
-	case sizeof(uint64_t):
-		return *(const volatile uint64_t *)p;
-	default:
-		/* this is an intrinsic, so we can't have any error handling */
-		RTE_ASSERT(0);
-		return 0;
-	}
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	rte_spinlock_unlock(lck);
-
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-
-	rte_spinlock_lock(lck);
-}
-
-/**
- * This function uses TPAUSE instruction  and will enter C0.2 state. For more
- * information about usage of this instruction, please refer to Intel(R) 64 and
- * IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-
-	/* execute TPAUSE */
-	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
-		: /* ignore rflags */
-		: "D"(0), /* enter C0.2 */
-		  "a"(tsc_l), "d"(tsc_h));
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/x86/meson.build b/lib/librte_eal/x86/meson.build
index e78f29002e..dfd42dee0c 100644
--- a/lib/librte_eal/x86/meson.build
+++ b/lib/librte_eal/x86/meson.build
@@ -8,4 +8,5 @@ sources += files(
 	'rte_cycles.c',
 	'rte_hypervisor.c',
 	'rte_spinlock.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/x86/rte_power_intrinsics.c b/lib/librte_eal/x86/rte_power_intrinsics.c
new file mode 100644
index 0000000000..34c5fd9c3e
--- /dev/null
+++ b/lib/librte_eal/x86/rte_power_intrinsics.c
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+static inline uint64_t
+__get_umwait_val(const volatile void *p, const uint8_t sz)
+{
+	switch (sz) {
+	case sizeof(uint8_t):
+		return *(const volatile uint8_t *)p;
+	case sizeof(uint16_t):
+		return *(const volatile uint16_t *)p;
+	case sizeof(uint32_t):
+		return *(const volatile uint32_t *)p;
+	case sizeof(uint64_t):
+		return *(const volatile uint64_t *)p;
+	default:
+		/* this is an intrinsic, so we can't have any error handling */
+		RTE_ASSERT(0);
+		return 0;
+	}
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	rte_spinlock_unlock(lck);
+
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+
+	rte_spinlock_lock(lck);
+}
+
+/**
+ * This function uses TPAUSE instruction  and will enter C0.2 state. For more
+ * information about usage of this instruction, please refer to Intel(R) 64 and
+ * IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+
+	/* execute TPAUSE */
+	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			"a"(tsc_l), "d"(tsc_h));
+}
-- 
2.25.1

^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v17 00/11] Add PMD power management
    2021-01-12 17:37  2%         ` [dpdk-dev] [PATCH v16 01/11] eal: uninline power intrinsics Anatoly Burakov
  2021-01-14  9:36  9%         ` [dpdk-dev] [PATCH v16 00/11] Add PMD power management David Marchand
@ 2021-01-14 14:46  2%         ` Anatoly Burakov
  2021-01-14 14:46  2%           ` [dpdk-dev] [PATCH v17 01/11] eal: uninline power intrinsics Anatoly Burakov
                             ` (2 more replies)
  2 siblings, 3 replies; 200+ results
From: Anatoly Burakov @ 2021-01-14 14:46 UTC (permalink / raw)
  To: dev
  Cc: thomas, konstantin.ananyev, timothy.mcdaniel, david.hunt,
	bruce.richardson, chris.macnamara

This patchset proposes a simple API for Ethernet drivers to cause the
CPU to enter a power-optimized state while waiting for packets to
arrive. There are multiple proposed mechanisms to achieve said power
savings: simple frequency scaling, idle loop, and monitoring the Rx
queue for incoming packets. The latter is achieved through cooperation
with the NIC driver, which allows us to know the address of the wake-up
event and wait for writes on that address.

On IA, this is achieved through using UMONITOR/UMWAIT instructions. They 
are used in their raw opcode form because there is no widespread 
compiler support for them yet. Still, the API is made generic enough to
hopefully support other architectures, if they happen to implement 
similar instructions.

To achieve power savings, a very simple mechanism is used: we count
empty polls, and if a certain threshold is reached, we employ one of the
suggested power management schemes automatically, from within an Rx
callback inside the PMD. Once there's traffic again, the empty poll
counter is reset.
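
Conceptually, the per-queue callback behaves like the sketch below. This is a
simplified illustration only, not the actual rte_power_pmd_mgmt.c code;
EMPTYPOLL_MAX, enter_power_save() and the state struct are placeholders. The
real callback also handles locking, selects between the three schemes, and is
installed with rte_eth_add_rx_callback().

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

#define EMPTYPOLL_MAX 512 /* arbitrary threshold for the sketch */

struct pmgmt_queue_state {       /* hypothetical per-queue state */
	unsigned int empty_polls;
};

static void
enter_power_save(struct pmgmt_queue_state *state)
{
	/* placeholder: invoke monitor/pause/frequency scaling here */
	RTE_SET_USED(state);
}

static uint16_t
empty_poll_cb(uint16_t port_id, uint16_t qidx, struct rte_mbuf **pkts,
		uint16_t nb_rx, uint16_t max_pkts, void *arg)
{
	struct pmgmt_queue_state *state = arg;

	RTE_SET_USED(port_id);
	RTE_SET_USED(qidx);
	RTE_SET_USED(pkts);
	RTE_SET_USED(max_pkts);

	if (nb_rx == 0) {
		/* empty poll: count it, sleep/scale once threshold reached */
		if (++state->empty_polls > EMPTYPOLL_MAX)
			enter_power_save(state);
	} else {
		/* traffic again: reset the counter */
		state->empty_polls = 0;
	}
	return nb_rx;
}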

This patchset also introduces a few changes into existing power 
management-related intrinsics, namely to provide a native way of waking 
up a sleeping core without application being responsible for it, as well 
as general robustness improvements. There's quite a bit of locking going 
on, but these locks are per-thread and very little (if any) contention 
is expected, so the performance impact shouldn't be that bad (and in any 
case the locking happens when we're about to sleep anyway).

Why are we putting it into ethdev as opposed to leaving this up to the 
application? Our customers specifically requested a way to do it with
minimal changes to the application code. The current approach allows us to
just flip a switch and automatically have power savings.

Things of note:

- Only 1:1 core to queue mapping is supported, meaning that each lcore 
  must at most handle RX on a single queue
- Three policy types are supported: Monitor/Pause/Frequency Scaling
- Power management is enabled per-queue
- The API doesn't extend to other device types

v17:
- Added exception for ethdev driver-only ABI
- Added memory barriers for monitor/wakeup (Konstantin)
- Fixed compile issues on non-x86 platforms (hopefully!)

v16:
- Implemented Konstantin's suggestions and comments
- Added return values to the API

v15:
- Fixed incorrect check in UMWAIT callback
- Fixed accidental whitespace changes

v14:
- Fixed ARM/PPC builds
- Addressed various review comments

v13:
- Reworked the librte_power code to require less locking and handle invalid
  parameters better
- Fix numerous rebase errors present in v12

v12:
- Rebase on top of 21.02
- Rework of power intrinsics code

Anatoly Burakov (5):
  eal: uninline power intrinsics
  eal: avoid invalid API usage in power intrinsics
  eal: change API of power intrinsics
  eal: remove sync version of power monitor
  eal: add monitor wakeup function

Liang Ma (6):
  ethdev: add simple power management API
  power: add PMD power management API and callback
  net/ixgbe: implement power management API
  net/i40e: implement power management API
  net/ice: implement power management API
  examples/l3fwd-power: enable PMD power mgmt

 devtools/libabigail.abignore                  |   3 +
 doc/guides/prog_guide/power_man.rst           |  44 +++
 doc/guides/rel_notes/release_21_02.rst        |  15 +
 .../sample_app_ug/l3_forward_power_man.rst    |  35 ++
 drivers/event/dlb/dlb.c                       |  10 +-
 drivers/event/dlb2/dlb2.c                     |  10 +-
 drivers/net/i40e/i40e_ethdev.c                |   1 +
 drivers/net/i40e/i40e_rxtx.c                  |  25 ++
 drivers/net/i40e/i40e_rxtx.h                  |   1 +
 drivers/net/ice/ice_ethdev.c                  |   1 +
 drivers/net/ice/ice_rxtx.c                    |  26 ++
 drivers/net/ice/ice_rxtx.h                    |   1 +
 drivers/net/ixgbe/ixgbe_ethdev.c              |   1 +
 drivers/net/ixgbe/ixgbe_rxtx.c                |  25 ++
 drivers/net/ixgbe/ixgbe_rxtx.h                |   1 +
 examples/l3fwd-power/main.c                   |  89 ++++-
 .../arm/include/rte_power_intrinsics.h        |  40 --
 lib/librte_eal/arm/meson.build                |   1 +
 lib/librte_eal/arm/rte_power_intrinsics.c     |  40 ++
 .../include/generic/rte_power_intrinsics.h    |  88 ++---
 .../ppc/include/rte_power_intrinsics.h        |  40 --
 lib/librte_eal/ppc/meson.build                |   1 +
 lib/librte_eal/ppc/rte_power_intrinsics.c     |  40 ++
 lib/librte_eal/version.map                    |   3 +
 .../x86/include/rte_power_intrinsics.h        | 115 ------
 lib/librte_eal/x86/meson.build                |   1 +
 lib/librte_eal/x86/rte_power_intrinsics.c     | 215 +++++++++++
 lib/librte_ethdev/rte_ethdev.c                |  28 ++
 lib/librte_ethdev/rte_ethdev.h                |  25 ++
 lib/librte_ethdev/rte_ethdev_driver.h         |  22 ++
 lib/librte_ethdev/version.map                 |   3 +
 lib/librte_power/meson.build                  |   5 +-
 lib/librte_power/rte_power_pmd_mgmt.c         | 364 ++++++++++++++++++
 lib/librte_power/rte_power_pmd_mgmt.h         |  90 +++++
 lib/librte_power/version.map                  |   5 +
 35 files changed, 1155 insertions(+), 259 deletions(-)
 create mode 100644 lib/librte_eal/arm/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/ppc/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/x86/rte_power_intrinsics.c
 create mode 100644 lib/librte_power/rte_power_pmd_mgmt.c
 create mode 100644 lib/librte_power/rte_power_pmd_mgmt.h

-- 
2.25.1

^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v16 00/11] Add PMD power management
  2021-01-14  9:36  9%         ` [dpdk-dev] [PATCH v16 00/11] Add PMD power management David Marchand
@ 2021-01-14 10:25  0%           ` Burakov, Anatoly
  0 siblings, 0 replies; 200+ results
From: Burakov, Anatoly @ 2021-01-14 10:25 UTC (permalink / raw)
  To: David Marchand, Ray Kinsella
  Cc: dev, Thomas Monjalon, Ananyev, Konstantin, Timothy McDaniel,
	David Hunt, Bruce Richardson, chris.macnamara, Kevin Traynor

On 14-Jan-21 9:36 AM, David Marchand wrote:
> On Tue, Jan 12, 2021 at 6:37 PM Anatoly Burakov
> <anatoly.burakov@intel.com> wrote:
>>
>> This patchset proposes a simple API for Ethernet drivers to cause the
>> CPU to enter a power-optimized state while waiting for packets to
>> arrive. There are multiple proposed mechanisms to achieve said power
>> savings: simple frequency scaling, idle loop, and monitoring the Rx
>> queue for incoming packages. The latter is achieved through cooperation
>> with the NIC driver that will allow us to know address of wake up event,
>> and wait for writes on that address.
>>
>> On IA, this is achieved through using UMONITOR/UMWAIT instructions. They
>> are used in their raw opcode form because there is no widespread
>> compiler support for them yet. Still, the API is made generic enough to
>> hopefully support other architectures, if they happen to implement
>> similar instructions.
>>
>> To achieve power savings, there is a very simple mechanism used: we're
>> counting empty polls, and if a certain threshold is reached, we employ
>> one of the suggested power management schemes automatically, from within
>> a Rx callback inside the PMD. Once there's traffic again, the empty poll
>> counter is reset.
>>
>> This patchset also introduces a few changes into existing power
>> management-related intrinsics, namely to provide a native way of waking
>> up a sleeping core without application being responsible for it, as well
>> as general robustness improvements. There's quite a bit of locking going
>> on, but these locks are per-thread and very little (if any) contention
>> is expected, so the performance impact shouldn't be that bad (and in any
>> case the locking happens when we're about to sleep anyway).
>>
>> Why are we putting it into ethdev as opposed to leaving this up to the
>> application? Our customers specifically requested a way to do it with
>> minimal changes to the application code. The current approach allows to
>> just flip a switch and automatically have power savings.
>>
>> Things of note:
>>
>> - Only 1:1 core to queue mapping is supported, meaning that each lcore
>>    must at most handle RX on a single queue
> 
> If we want to save power, it is likely we would poll more rxqs on a thread.

We are investigating possibilities to make that happen, but for this 
patchset, this is the limitation.

> 
> 
>> - Support 3 type policies. Monitor/Pause/Frequency Scaling
>> - Power management is enabled per-queue
>> - The API doesn't extend to other device types
>>
>> v16:
>> - Implemented Konstantin's suggestions and comments
>> - Added return values to the API
> 
> - This revision breaks SPDK build (reported by UNH):
> http://mails.dpdk.org/archives/test-report/2021-January/174069.html
> 
> 
> - Build is broken for ARM and PPC at patch:
> 86491d5bd4 - (HEAD) eal: add monitor wakeup function (25 minutes ago)
> <Anatoly Burakov>
> 
> Only pasting the ARM failure:
> ninja: Entering directory `/home/dmarchan/builds/build-arm64-host-clang'
> [1/297] Compiling C object
> 'lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o'.
> FAILED: lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o
> aarch64-linux-gnu-gcc -Ilib/76b5a35@@rte_eal@sta -Ilib
> -I../../dpdk/lib -I. -I../../dpdk/ -Iconfig -I../../dpdk/config
> -Ilib/librte_eal/include -I../../dpdk/lib/librte_eal/include
> -Ilib/librte_eal/linux/include
> -I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/arm/include
> -I../../dpdk/lib/librte_eal/arm/include -Ilib/librte_eal/common
> -I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
> -I../../dpdk/lib/librte_eal -Ilib/librte_kvargs
> -I../../dpdk/lib/librte_kvargs
> -Ilib/librte_telemetry/../librte_metrics
> -I../../dpdk/lib/librte_telemetry/../librte_metrics
> -Ilib/librte_telemetry -I../../dpdk/lib/librte_telemetry
> -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall
> -Winvalid-pch -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual
> -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security
> -Wmissing-declarations -Wmissing-prototypes -Wnested-externs
> -Wold-style-definition -Wpointer-arith -Wsign-compare
> -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-packed-not-aligned
> -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=armv8-a+crc
> -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-format-truncation
> '-DABI_VERSION="21.1"' -DRTE_LIBEAL_USE_GETENTROPY -MD -MQ
> 'lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o' -MF
> 'lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o.d'
> -o 'lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o'
> -c ../../dpdk/lib/librte_eal/arm/rte_power_intrinsics.c
> ../../dpdk/lib/librte_eal/arm/rte_power_intrinsics.c:35:1: error:
> conflicting types for ‘rte_power_monitor_wakeup’
>   rte_power_monitor_wakeup(const unsigned int lcore_id)
>   ^~~~~~~~~~~~~~~~~~~~~~~~
> In file included from
> ../../dpdk/lib/librte_eal/arm/include/rte_power_intrinsics.h:14,
>                   from ../../dpdk/lib/librte_eal/arm/rte_power_intrinsics.c:5:
> ../../dpdk/lib/librte_eal/include/generic/rte_power_intrinsics.h:79:5:
> note: previous declaration of ‘rte_power_monitor_wakeup’ was here
>   int rte_power_monitor_wakeup(const unsigned int lcore_id);
>       ^~~~~~~~~~~~~~~~~~~~~~~~
> ninja: build stopped: subcommand failed.

Woops, wrong return value in the .c files. Will fix!

> 
> 
> 
> - The ABI check is still not happy as I reported earlier.
> Reproduced on v16 (GHA had a hiccup on this revision, but previous
> ones had the failure too):
> 
> 1 Changed variable:
> 
>    [C] 'rte_eth_dev rte_eth_devices[]' was changed at rte_ethdev_core.h:196:1:
>      type of variable changed:
>        array element type 'struct rte_eth_dev' changed:
>          type size hasn't changed
>          1 data member change:
>            type of 'const eth_dev_ops* rte_eth_dev::dev_ops' changed:
>              in pointed to type 'const eth_dev_ops':
>                in unqualified underlying type 'struct eth_dev_ops' at
> rte_ethdev_driver.h:789:1:
>                  type size changed from 6208 to 6272 (in bits)
>                  1 data member insertion:
>                    'eth_get_monitor_addr_t
> eth_dev_ops::get_monitor_addr', at offset 6208 (in bits) at
> rte_ethdev_driver.h:940:1
>                  no data member changes (94 filtered);
>        type size hasn't changed
> 
> Error: ABI issue reported for 'abidiff --suppr
> /home/dmarchan/dpdk/devtools/../devtools/libabigail.abignore
> --no-added-syms --headers-dir1
> /home/dmarchan/abi/v20.11/build-gcc-static/usr/local/include
> --headers-dir2 /home/dmarchan/builds/build-gcc-static/install/usr/local/include
> /home/dmarchan/abi/v20.11/build-gcc-static/dump/librte_ethdev.dump
> /home/dmarchan/builds/build-gcc-static/install/dump/librte_ethdev.dump'
> 
> ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged
> this as a potential issue).
> 
> One solution is to add an exception on the eth_dev_ops structure.
> 
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -7,3 +7,7 @@
>           symbol_version = INTERNAL
>   [suppress_variable]
>           symbol_version = INTERNAL
> +
> +; Explicit ignore for driver-only ABI
> +[suppress_type]
> +        name = eth_dev_ops
> 
> 

Right, OK. I didn't realize an "exception" is something you actually do 
in code, not an ad-hoc community process type thing :) I'll add this in 
the next revision.

-- 
Thanks,
Anatoly

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v16 00/11] Add PMD power management
    2021-01-12 17:37  2%         ` [dpdk-dev] [PATCH v16 01/11] eal: uninline power intrinsics Anatoly Burakov
@ 2021-01-14  9:36  9%         ` David Marchand
  2021-01-14 10:25  0%           ` Burakov, Anatoly
  2021-01-14 14:46  2%         ` [dpdk-dev] [PATCH v17 " Anatoly Burakov
  2 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-14  9:36 UTC (permalink / raw)
  To: Anatoly Burakov, Ray Kinsella
  Cc: dev, Thomas Monjalon, Ananyev, Konstantin, Timothy McDaniel,
	David Hunt, Bruce Richardson, chris.macnamara, Kevin Traynor

On Tue, Jan 12, 2021 at 6:37 PM Anatoly Burakov
<anatoly.burakov@intel.com> wrote:
>
> This patchset proposes a simple API for Ethernet drivers to cause the
> CPU to enter a power-optimized state while waiting for packets to
> arrive. There are multiple proposed mechanisms to achieve said power
> savings: simple frequency scaling, idle loop, and monitoring the Rx
> queue for incoming packages. The latter is achieved through cooperation
> with the NIC driver that will allow us to know address of wake up event,
> and wait for writes on that address.
>
> On IA, this is achieved through using UMONITOR/UMWAIT instructions. They
> are used in their raw opcode form because there is no widespread
> compiler support for them yet. Still, the API is made generic enough to
> hopefully support other architectures, if they happen to implement
> similar instructions.
>
> To achieve power savings, there is a very simple mechanism used: we're
> counting empty polls, and if a certain threshold is reached, we employ
> one of the suggested power management schemes automatically, from within
> a Rx callback inside the PMD. Once there's traffic again, the empty poll
> counter is reset.
>
> This patchset also introduces a few changes into existing power
> management-related intrinsics, namely to provide a native way of waking
> up a sleeping core without application being responsible for it, as well
> as general robustness improvements. There's quite a bit of locking going
> on, but these locks are per-thread and very little (if any) contention
> is expected, so the performance impact shouldn't be that bad (and in any
> case the locking happens when we're about to sleep anyway).
>
> Why are we putting it into ethdev as opposed to leaving this up to the
> application? Our customers specifically requested a way to do it with
> minimal changes to the application code. The current approach allows to
> just flip a switch and automatically have power savings.
>
> Things of note:
>
> - Only 1:1 core to queue mapping is supported, meaning that each lcore
>   must at most handle RX on a single queue

If we want to save power, it is likely we would poll more rxqs on a thread.


> - Support 3 type policies. Monitor/Pause/Frequency Scaling
> - Power management is enabled per-queue
> - The API doesn't extend to other device types
>
> v16:
> - Implemented Konstantin's suggestions and comments
> - Added return values to the API

- This revision breaks SPDK build (reported by UNH):
http://mails.dpdk.org/archives/test-report/2021-January/174069.html


- Build is broken for ARM and PPC at patch:
86491d5bd4 - (HEAD) eal: add monitor wakeup function (25 minutes ago)
<Anatoly Burakov>

Only pasting the ARM failure:
ninja: Entering directory `/home/dmarchan/builds/build-arm64-host-clang'
[1/297] Compiling C object
'lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o'.
FAILED: lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o
aarch64-linux-gnu-gcc -Ilib/76b5a35@@rte_eal@sta -Ilib
-I../../dpdk/lib -I. -I../../dpdk/ -Iconfig -I../../dpdk/config
-Ilib/librte_eal/include -I../../dpdk/lib/librte_eal/include
-Ilib/librte_eal/linux/include
-I../../dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/arm/include
-I../../dpdk/lib/librte_eal/arm/include -Ilib/librte_eal/common
-I../../dpdk/lib/librte_eal/common -Ilib/librte_eal
-I../../dpdk/lib/librte_eal -Ilib/librte_kvargs
-I../../dpdk/lib/librte_kvargs
-Ilib/librte_telemetry/../librte_metrics
-I../../dpdk/lib/librte_telemetry/../librte_metrics
-Ilib/librte_telemetry -I../../dpdk/lib/librte_telemetry
-fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall
-Winvalid-pch -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual
-Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security
-Wmissing-declarations -Wmissing-prototypes -Wnested-externs
-Wold-style-definition -Wpointer-arith -Wsign-compare
-Wstrict-prototypes -Wundef -Wwrite-strings -Wno-packed-not-aligned
-Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=armv8-a+crc
-DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-format-truncation
'-DABI_VERSION="21.1"' -DRTE_LIBEAL_USE_GETENTROPY -MD -MQ
'lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o' -MF
'lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o.d'
-o 'lib/76b5a35@@rte_eal@sta/librte_eal_arm_rte_power_intrinsics.c.o'
-c ../../dpdk/lib/librte_eal/arm/rte_power_intrinsics.c
../../dpdk/lib/librte_eal/arm/rte_power_intrinsics.c:35:1: error:
conflicting types for ‘rte_power_monitor_wakeup’
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 ^~~~~~~~~~~~~~~~~~~~~~~~
In file included from
../../dpdk/lib/librte_eal/arm/include/rte_power_intrinsics.h:14,
                 from ../../dpdk/lib/librte_eal/arm/rte_power_intrinsics.c:5:
../../dpdk/lib/librte_eal/include/generic/rte_power_intrinsics.h:79:5:
note: previous declaration of ‘rte_power_monitor_wakeup’ was here
 int rte_power_monitor_wakeup(const unsigned int lcore_id);
     ^~~~~~~~~~~~~~~~~~~~~~~~
ninja: build stopped: subcommand failed.



- The ABI check is still not happy as I reported earlier.
Reproduced on v16 (GHA had a hiccup on this revision, but previous
ones had the failure too):

1 Changed variable:

  [C] 'rte_eth_dev rte_eth_devices[]' was changed at rte_ethdev_core.h:196:1:
    type of variable changed:
      array element type 'struct rte_eth_dev' changed:
        type size hasn't changed
        1 data member change:
          type of 'const eth_dev_ops* rte_eth_dev::dev_ops' changed:
            in pointed to type 'const eth_dev_ops':
              in unqualified underlying type 'struct eth_dev_ops' at
rte_ethdev_driver.h:789:1:
                type size changed from 6208 to 6272 (in bits)
                1 data member insertion:
                  'eth_get_monitor_addr_t
eth_dev_ops::get_monitor_addr', at offset 6208 (in bits) at
rte_ethdev_driver.h:940:1
                no data member changes (94 filtered);
      type size hasn't changed

Error: ABI issue reported for 'abidiff --suppr
/home/dmarchan/dpdk/devtools/../devtools/libabigail.abignore
--no-added-syms --headers-dir1
/home/dmarchan/abi/v20.11/build-gcc-static/usr/local/include
--headers-dir2 /home/dmarchan/builds/build-gcc-static/install/usr/local/include
/home/dmarchan/abi/v20.11/build-gcc-static/dump/librte_ethdev.dump
/home/dmarchan/builds/build-gcc-static/install/dump/librte_ethdev.dump'

ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged
this as a potential issue).

One solution is to add an exception on the eth_dev_ops structure.

--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -7,3 +7,7 @@
         symbol_version = INTERNAL
 [suppress_variable]
         symbol_version = INTERNAL
+
+; Explicit ignore for driver-only ABI
+[suppress_type]
+        name = eth_dev_ops


-- 
David marchand


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH v2 1/1] devtools: avoid installing static binaries
  2021-01-13 19:05 13% ` [dpdk-dev] [PATCH v2 " Thomas Monjalon
@ 2021-01-13 22:01  0%   ` Thomas Monjalon
  2021-01-15 15:24  3%     ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-13 22:01 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, bruce.richardson

13/01/2021 20:05, Thomas Monjalon:
> When testing compilation and checking ABI compatibility,
> there is no real need of static binaries eating disks.
> 
> The static linkage of applications was already well tested,
> though the static examples tested with meson were limited to "l3fwd" only.
> The static build test with make is limited to "helloworld" example.
> 
> The ABI compatibility is checked on shared libraries,
> and there is no need to test again on similar builds.
> A new parameter is added to the function "build",
> so the ABI check is enabled only for native gcc and clang shared builds,
> 32-bit, generic armv8 and ppc cross compilations.
> In other words, it is disabled for some static builds and some Arm ones.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> v2:
> - separate ABI check enablement from default library
> - disable ABI check in specific Arm builds
> ---
[...]
> -build build-x86-default cc -Dlibdir=lib -Dmachine=$default_machine $use_shared
> +build build-x86-default cc ABI \
> +	-Dlibdir=lib -Dmachine=$default_machine $use_shared

On second thought, I think this one should be "skipABI".




^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 1/1] devtools: avoid installing static binaries
  2020-12-07 17:33 10% [dpdk-dev] [PATCH 1/1] devtools: avoid installing static binaries Thomas Monjalon
  2020-12-07 17:47  3% ` Bruce Richardson
  2020-12-08 15:37  4% ` David Marchand
@ 2021-01-13 19:05 13% ` Thomas Monjalon
  2021-01-13 22:01  0%   ` Thomas Monjalon
  2 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-13 19:05 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, bruce.richardson

When testing compilation and checking ABI compatibility,
there is no real need of static binaries eating disks.

The static linkage of applications was already well tested,
though the static examples tested with meson were limited to "l3fwd" only.
The static build test with make is limited to "helloworld" example.

The ABI compatibility is checked on shared libraries,
and there is no need to test again on similar builds.
A new parameter is added to the function "build",
so the ABI check is enabled only for native gcc and clang shared builds,
32-bit, generic armv8 and ppc cross compilations.
In other words, it is disabled for some static builds and some Arm ones.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
v2:
- separate ABI check enablement from default library
- disable ABI check in specific Arm builds
---
 devtools/test-meson-builds.sh | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 00e3d0b443..0e79e1b2bd 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -146,13 +146,15 @@ install_target () # <builddir> <installdir>
 	DESTDIR=$2 $ninja_cmd -C $1 install >&$veryverbose
 }
 
-build () # <directory> <target compiler | cross file> <meson options>
+build () # <directory> <target cc | cross file> <ABI check> [meson options]
 {
 	targetdir=$1
 	shift
 	crossfile=
 	[ -r $1 ] && crossfile=$1 || targetcc=$1
 	shift
+	abicheck=$1
+	shift
 	# skip build if compiler not available
 	command -v ${CC##* } >/dev/null 2>&1 || return 0
 	if [ -n "$crossfile" ] ; then
@@ -165,7 +167,7 @@ build () # <directory> <target compiler | cross file> <meson options>
 	load_env $targetcc || return 0
 	config $srcdir $builds_dir/$targetdir $cross --werror $*
 	compile $builds_dir/$targetdir
-	if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+	if [ -n "$DPDK_ABI_REF_VERSION" -a "$abicheck" = ABI ] ; then
 		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
 		if [ ! -d $abirefdir/$targetdir ]; then
 			# clone current sources
@@ -207,8 +209,13 @@ build () # <directory> <target compiler | cross file> <meson options>
 for c in gcc clang ; do
 	command -v $c >/dev/null 2>&1 || continue
 	for s in static shared ; do
+		if [ $s = shared ] ; then
+			abicheck=ABI
+		else
+			abicheck=skipABI # save time and disk space
+		fi
 		export CC="$CCACHE $c"
-		build build-$c-$s $c --default-library=$s
+		build build-$c-$s $c $abicheck --default-library=$s
 		unset CC
 	done
 done
@@ -220,7 +227,8 @@ default_machine='nehalem'
 if ! check_cc_flags "-march=$default_machine" ; then
 	default_machine='corei7'
 fi
-build build-x86-default cc -Dlibdir=lib -Dmachine=$default_machine $use_shared
+build build-x86-default cc ABI \
+	-Dlibdir=lib -Dmachine=$default_machine $use_shared
 
 # 32-bit with default compiler
 if check_cc_flags '-m32' ; then
@@ -235,29 +243,32 @@ if check_cc_flags '-m32' ; then
 		export PKG_CONFIG_LIBDIR='/usr/lib/pkgconfig'
 	fi
 	target_override='i386-pc-linux-gnu'
-	build build-32b cc -Dc_args='-m32' -Dc_link_args='-m32'
+	build build-32b cc ABI -Dc_args='-m32' -Dc_link_args='-m32'
 	target_override=
 	unset PKG_CONFIG_LIBDIR
 fi
 
 # x86 MinGW
-build build-x86-mingw $srcdir/config/x86/cross-mingw -Dexamples=helloworld
+build build-x86-mingw $srcdir/config/x86/cross-mingw skipABI \
+	-Dexamples=helloworld
 
 # generic armv8a with clang as host compiler
 f=$srcdir/config/arm/arm64_armv8_linux_gcc
 export CC="clang"
-build build-arm64-host-clang $f $use_shared
+build build-arm64-host-clang $f ABI $use_shared
 unset CC
 # some gcc/arm configurations
 for f in $srcdir/config/arm/arm64_[bdo]*gcc ; do
 	export CC="$CCACHE gcc"
-	build build-$(basename $f | tr '_' '-' | cut -d'-' -f-2) $f $use_shared
+	targetdir=build-$(basename $f | tr '_' '-' | cut -d'-' -f-2)
+	build $targetdir $f skipABI $use_shared
 	unset CC
 done
 
 # ppc configurations
 for f in $srcdir/config/ppc/ppc* ; do
-	build build-$(basename $f | cut -d'-' -f-2) $f $use_shared
+	targetdir=build-$(basename $f | cut -d'-' -f-2)
+	build $targetdir $f ABI $use_shared
 done
 
 # Test installation of the x86-default target, to be used for checking
@@ -279,7 +290,8 @@ if pkg-config --define-prefix libdpdk >/dev/null 2>&1; then
 	export PKGCONF="pkg-config --define-prefix"
 	for example in $examples; do
 		echo "## Building $example"
+		[ $example = helloworld ] && static=static || static= # save disk space
 		$MAKE -C $DESTDIR/usr/local/share/dpdk/examples/$example \
-			clean shared static >&$veryverbose
+			clean shared $static >&$veryverbose
 	done
 fi
-- 
2.29.2


^ permalink raw reply	[relevance 13%]

* Re: [dpdk-dev] [PATCH 0/6] power: fix make build for power apps
  @ 2021-01-13 17:30  3%           ` Burakov, Anatoly
  0 siblings, 0 replies; 200+ results
From: Burakov, Anatoly @ 2021-01-13 17:30 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: stable

On 13-Jan-21 1:25 PM, David Hunt wrote:
> 
> On 13/1/2021 11:18 AM, Burakov, Anatoly wrote:
>> On 13-Jan-21 11:14 AM, David Hunt wrote:
>>> Hi Anatoly,
>>>
>>> On 13/1/2021 11:08 AM, Burakov, Anatoly wrote:
>>>> On 08-Jan-21 2:30 PM, David Hunt wrote:
>>>>> The power example applications that uses the virtio-serial method of
>>>>> communication cannot currently be built with make, and can only be 
>>>>> built
>>>>> using meson/ninja.
>>>>>
>>>>> The guest channel message definitions and functions in guest_channel.h
>>>>> are needed by applications and need to be made public.
>>>>>
>>>>> This worked pre-20.11, but now with all the meson/ninja changes, 
>>>>> making
>>>>> these apps externally no longer works. To fix, we need to move the 
>>>>> header
>>>>> file with the API definitions for the channel commands public, and 
>>>>> rename
>>>>> the functions accordingly.
>>>>>
>>>>> The main change is to rename channel_commands.h to
>>>>> rte_power_guest_channel.h so that it gets picked up by the 
>>>>> installer and
>>>>> copied to /usr/local/include. Other changes include renaming 
>>>>> #defines to
>>>>> have RTE_ at the beginning instead of CPU_. Finally we refactor the 
>>>>> code
>>>>> to work with those changes.
>>>>>
>>>>> ---
>>>>> v2 changes
>>>>>    - re-worked from monolithic patch to a 6 patch patchset for 
>>>>> easier review
>>>>>
>>>>> [PATCH v2 1/6] power: create guest channel public header file
>>>>> [PATCH v2 2/6] power: make channel msg functions public
>>>>> [PATCH v2 3/6] power: rename public structs
>>>>> [PATCH v2 4/6] power: rename defines
>>>>> [PATCH v2 5/6] power: add new header file to export list
>>>>> [PATCH v2 6/6] power: clean up includes
>>>>>
>>>>
>>>> Just a general question: wouldn't it be better to move this stuff 
>>>> entirely into sample app and not bother with keeping it in the 
>>>> library? There is precedent - ethtool app has a "library" and an 
>>>> "application" part, i think you should be able to move it out of the 
>>>> library and into the sample app entirely without too much trouble, 
>>>> as code looks to be fairly self-contained.
>>>>
>>>
>>> Agreed, that's a great idea. I could have a common lib under 
>>> examples/vm_power_manager, then two apps, vm_power_manager and 
>>> guest_cli. That would keep everything nicely local, and not expose 
>>> the channel API publicly. The only reason we were making it public 
>>> was to allow "make" to work, so that's not a good enough reason,
>>> tbh. I'll throw a prototype together today.
>>
>> Yep, IIRC Make works perfectly fine with ethtool, so i don't see why 
>> it wouldn't work for your sample app as well. Thanks!
> 
> 
> Hi Anatoly,
> 
> OK, so I was investigating this, and have come across a blocker on this 
> method.
> 
> There are three methods for managing frequency: acpi, pstate and vm.
> It's the third method that's causing the problem with moving the channel
> functionality out into a sample library alongside vm_power_manager. VMs
> can call channel functions in the power library to affect frequency for
> their cores, and these functions use API function calls and several
> structures and #defines in their code, which is currently part of the
> power management library. These function declarations, structs and
> #defines are needed in both the examples lib/apps and the power library
> itself, and the prototype changes I made ended up looking very much like
> the patch set that's already on the mailing list.
> 
> So, while I would have liked to have a solution along the lines of what 
> you've proposed, it's not possible based on the dependencies on common 
> structures and #defines.
> 
> Thanks for the suggestion, though.
> 
> Rgds,
> Dave.
> 

OK, I guess we can live with that. I wonder if there's another way to do 
this, but since I can't think of anything that doesn't involve serious 
API/ABI breakages, this is OK IMO.

-- 
Thanks,
Anatoly

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v12 06/11] ethdev: add simple power management API
  @ 2021-01-13 13:25  3%       ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-01-13 13:25 UTC (permalink / raw)
  To: Burakov, Anatoly, Lance Richardson
  Cc: dev, Ma, Liang J, Thomas Monjalon, Yigit, Ferruh,
	Andrew Rybchenko, Ray Kinsella, Neil Horman, gage.eads, McDaniel,
	Timothy, Hunt, David, Richardson, Bruce, Macnamara, Chris


> 
> On 12-Jan-21 8:32 PM, Lance Richardson wrote:
> > On Thu, Dec 17, 2020 at 9:08 AM Anatoly Burakov
> > <anatoly.burakov@intel.com> wrote:
> >>
> >> From: Liang Ma <liang.j.ma@intel.com>
> >>
> >> Add a simple API to allow getting the monitor conditions for
> >> power-optimized monitoring of the RX queues from the PMD, as well as
> >> release notes information.
> >>
> >> Signed-off-by: Liang Ma <liang.j.ma@intel.com>
> >> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> >> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >> ---
> > <snip>
> >>   /**
> >>    * @internal A structure containing the functions exported by an Ethernet driver.
> >>    */
> >> @@ -917,6 +937,8 @@ struct eth_dev_ops {
> >>          /**< Set up the connection between the pair of hairpin queues. */
> >>          eth_hairpin_queue_peer_unbind_t hairpin_queue_peer_unbind;
> >>          /**< Disconnect the hairpin queues of a pair from each other. */
> >> +       eth_get_monitor_addr_t get_monitor_addr;
> >> +       /**< Get next RX queue ring entry address. */
> >>   };
> >>
> >
> > The implementation of get_monitor_addr will have much in common with
> > the rx_descriptor_status API in struct rte_eth_dev, including the property
> > that it will likely not make sense for it to be called concurrently with
> > rx_pkt_burst on a given queue. Might it make more sense to have this
> > API in struct rte_eth_dev instead of struct eth_dev_ops?
> >
> 
> I don't have an opinion on this as this code isn't really my area of
> expertise. I'm fine with wherever the community thinks this code should
> be. Any other opinions?
> 

I don't think it is a good idea to push new members into rte_eth_dev.
It either means an ABI breakage or wasting one of our reserved fields.
IMO this function is not performance-critical enough to justify such an insertion.
In fact, I think we should look in a different direction -
remove the rx/tx_descriptor_status() functions from rte_eth_dev,
or, even better, make rte_eth_dev an opaque pointer. 


^ permalink raw reply	[relevance 3%]
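
To make the ABI concern above concrete, here is a minimal generic C sketch; the struct and member names are invented for illustration and are not the real rte_eth_dev layout; the point is only how struct size and member offsets interact with the ABI.

struct dev_v1 {			/* hypothetical public struct in release N */
	void *rx_burst;
	void *tx_burst;
	void *reserved_ptrs[4];	/* spare slots kept for future additions */
};

/*
 * Option A - append a brand-new member in release N+1. The struct grows,
 * so applications built against release N, which baked the old sizeof()
 * and member offsets into their binaries, are no longer ABI compatible.
 */
struct dev_v2_append {
	void *rx_burst;
	void *tx_burst;
	void *reserved_ptrs[4];
	void *get_monitor_addr;	/* new member grows the struct */
};

/*
 * Option B - consume one reserved slot instead. The size and the offsets
 * of the existing members stay the same, at the cost of "wasting" a spare
 * field, which is the trade-off referred to above.
 */
struct dev_v2_reserved {
	void *rx_burst;
	void *tx_burst;
	void *get_monitor_addr;	/* takes over reserved_ptrs[0] */
	void *reserved_ptrs[3];
};

Keeping the callback in eth_dev_ops, as the patch under discussion does, sidesteps both costs, because applications only ever see a pointer to that structure, not its layout.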

* Re: [dpdk-dev] [PATCH v2 1/1] devtools: adjust verbosity of ABI check
  2020-12-17  9:05 36% ` [dpdk-dev] [PATCH v2 " Thomas Monjalon
@ 2021-01-13  9:21  4%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-01-13  9:21 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, bruce.richardson, Ray Kinsella, Neil Horman

17/12/2020 10:05, Thomas Monjalon:
> The scripts gen-abi.sh and check-abi.sh are updated
> to print error messages to stderr so they are likely never ignored.
> 
> When called from test-meson-builds.sh, the standard messages on stdout
> can be more quiet depending on the verbosity settings.
> The beginning of the ABI check is announced in verbose mode.
> The commands are printed in very verbose mode.
> The check result details are available in verbose mode.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> v2: remove abidiff command from stdout (already printed on error)

Applied





^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH] doc: recommend GitHub Actions for CI
@ 2021-01-13  9:03  5% David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-01-13  9:03 UTC (permalink / raw)
  To: dev; +Cc: thomas, Aaron Conole

Update the contributing guidelines to describe GitHub Actions first and
add a warning about Travis usage.

Fixes: 87009585e293 ("ci: hook to GitHub Actions")

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 doc/guides/contributing/patches.rst | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 4e9140bca4..6dbbd5f8d1 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -32,9 +32,12 @@ The mailing list for DPDK development is `dev@dpdk.org <https://mails.dpdk.org/a
 Contributors will need to `register for the mailing list <https://mails.dpdk.org/listinfo/dev>`_ in order to submit patches.
 It is also worth registering for the DPDK `Patchwork <https://patches.dpdk.org/project/dpdk/list/>`_
 
-If you are using the GitHub service, you can link your repository to
-the ``travis-ci.org`` build service.  When you push patches to your GitHub
-repository, the travis service will automatically build your changes.
+If you are using the GitHub service, pushing to a branch will trigger GitHub
+Actions to automatically build your changes and run unit tests and ABI checks.
+
+Additionally, a Travis configuration is available in DPDK but Travis free usage
+is limited to a few builds.
+You can link your repository to the ``travis-ci.com`` build service.
 
 The development process requires some familiarity with the ``git`` version control system.
 Refer to the `Pro Git Book <http://www.git-scm.com/book/>`_ for further information.
-- 
2.23.0


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v16 01/11] eal: uninline power intrinsics
  @ 2021-01-12 17:37  2%         ` Anatoly Burakov
  2021-01-14  9:36  9%         ` [dpdk-dev] [PATCH v16 00/11] Add PMD power management David Marchand
  2021-01-14 14:46  2%         ` [dpdk-dev] [PATCH v17 " Anatoly Burakov
  2 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2021-01-12 17:37 UTC (permalink / raw)
  To: dev
  Cc: Jan Viktorin, Ruifeng Wang, Jerin Jacob, David Christensen,
	Ray Kinsella, Neil Horman, Bruce Richardson, Konstantin Ananyev,
	thomas, timothy.mcdaniel, david.hunt, chris.macnamara

Currently, power intrinsics are inline functions. Make them part of the
ABI so that we can have various internal data associated with them
without exposing said data to the outside world.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---

Notes:
    v14:
    - Fix compile issues on ARM and PPC64 by moving implementations to .c files

 .../arm/include/rte_power_intrinsics.h        |  40 ------
 lib/librte_eal/arm/meson.build                |   1 +
 lib/librte_eal/arm/rte_power_intrinsics.c     |  45 +++++++
 .../include/generic/rte_power_intrinsics.h    |   6 +-
 .../ppc/include/rte_power_intrinsics.h        |  40 ------
 lib/librte_eal/ppc/meson.build                |   1 +
 lib/librte_eal/ppc/rte_power_intrinsics.c     |  45 +++++++
 lib/librte_eal/version.map                    |   3 +
 .../x86/include/rte_power_intrinsics.h        | 115 -----------------
 lib/librte_eal/x86/meson.build                |   1 +
 lib/librte_eal/x86/rte_power_intrinsics.c     | 120 ++++++++++++++++++
 11 files changed, 219 insertions(+), 198 deletions(-)
 create mode 100644 lib/librte_eal/arm/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/ppc/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/x86/rte_power_intrinsics.c

diff --git a/lib/librte_eal/arm/include/rte_power_intrinsics.h b/lib/librte_eal/arm/include/rte_power_intrinsics.h
index a4a1bc1159..9e498e9ebf 100644
--- a/lib/librte_eal/arm/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/arm/include/rte_power_intrinsics.h
@@ -13,46 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	RTE_SET_USED(tsc_timestamp);
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/arm/meson.build b/lib/librte_eal/arm/meson.build
index d62875ebae..6ec53ea03a 100644
--- a/lib/librte_eal/arm/meson.build
+++ b/lib/librte_eal/arm/meson.build
@@ -7,4 +7,5 @@ sources += files(
 	'rte_cpuflags.c',
 	'rte_cycles.c',
 	'rte_hypervisor.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/arm/rte_power_intrinsics.c b/lib/librte_eal/arm/rte_power_intrinsics.c
new file mode 100644
index 0000000000..ab1f44f611
--- /dev/null
+++ b/lib/librte_eal/arm/rte_power_intrinsics.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+/**
+ * This function is not supported on ARM.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on ARM.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(lck);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on ARM.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	RTE_SET_USED(tsc_timestamp);
+}
diff --git a/lib/librte_eal/include/generic/rte_power_intrinsics.h b/lib/librte_eal/include/generic/rte_power_intrinsics.h
index dd520d90fa..67977bd511 100644
--- a/lib/librte_eal/include/generic/rte_power_intrinsics.h
+++ b/lib/librte_eal/include/generic/rte_power_intrinsics.h
@@ -52,7 +52,7 @@
  *   to undefined result.
  */
 __rte_experimental
-static inline void rte_power_monitor(const volatile void *p,
+void rte_power_monitor(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz);
 
@@ -97,7 +97,7 @@ static inline void rte_power_monitor(const volatile void *p,
  *   wakes up.
  */
 __rte_experimental
-static inline void rte_power_monitor_sync(const volatile void *p,
+void rte_power_monitor_sync(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz,
 		rte_spinlock_t *lck);
@@ -118,6 +118,6 @@ static inline void rte_power_monitor_sync(const volatile void *p,
  *   architecture-dependent.
  */
 __rte_experimental
-static inline void rte_power_pause(const uint64_t tsc_timestamp);
+void rte_power_pause(const uint64_t tsc_timestamp);
 
 #endif /* _RTE_POWER_INTRINSIC_H_ */
diff --git a/lib/librte_eal/ppc/include/rte_power_intrinsics.h b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
index 4ed03d521f..c0e9ac279f 100644
--- a/lib/librte_eal/ppc/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
@@ -13,46 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	RTE_SET_USED(tsc_timestamp);
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/ppc/meson.build b/lib/librte_eal/ppc/meson.build
index f4b6d95c42..43c46542fb 100644
--- a/lib/librte_eal/ppc/meson.build
+++ b/lib/librte_eal/ppc/meson.build
@@ -7,4 +7,5 @@ sources += files(
 	'rte_cpuflags.c',
 	'rte_cycles.c',
 	'rte_hypervisor.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/ppc/rte_power_intrinsics.c b/lib/librte_eal/ppc/rte_power_intrinsics.c
new file mode 100644
index 0000000000..84340ca2a4
--- /dev/null
+++ b/lib/librte_eal/ppc/rte_power_intrinsics.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+/**
+ * This function is not supported on PPC64.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on PPC64.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(lck);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on PPC64.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	RTE_SET_USED(tsc_timestamp);
+}
diff --git a/lib/librte_eal/version.map b/lib/librte_eal/version.map
index b1db7ec795..32eceb8869 100644
--- a/lib/librte_eal/version.map
+++ b/lib/librte_eal/version.map
@@ -405,6 +405,9 @@ EXPERIMENTAL {
 	rte_vect_set_max_simd_bitwidth;
 
 	# added in 21.02
+	rte_power_monitor;
+	rte_power_monitor_sync;
+	rte_power_pause;
 	rte_thread_tls_key_create;
 	rte_thread_tls_key_delete;
 	rte_thread_tls_value_get;
diff --git a/lib/librte_eal/x86/include/rte_power_intrinsics.h b/lib/librte_eal/x86/include/rte_power_intrinsics.h
index c7d790c854..e4c2b87f73 100644
--- a/lib/librte_eal/x86/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/x86/include/rte_power_intrinsics.h
@@ -13,121 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-static inline uint64_t
-__rte_power_get_umwait_val(const volatile void *p, const uint8_t sz)
-{
-	switch (sz) {
-	case sizeof(uint8_t):
-		return *(const volatile uint8_t *)p;
-	case sizeof(uint16_t):
-		return *(const volatile uint16_t *)p;
-	case sizeof(uint32_t):
-		return *(const volatile uint32_t *)p;
-	case sizeof(uint64_t):
-		return *(const volatile uint64_t *)p;
-	default:
-		/* this is an intrinsic, so we can't have any error handling */
-		RTE_ASSERT(0);
-		return 0;
-	}
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	rte_spinlock_unlock(lck);
-
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-
-	rte_spinlock_lock(lck);
-}
-
-/**
- * This function uses TPAUSE instruction  and will enter C0.2 state. For more
- * information about usage of this instruction, please refer to Intel(R) 64 and
- * IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-
-	/* execute TPAUSE */
-	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
-		: /* ignore rflags */
-		: "D"(0), /* enter C0.2 */
-		  "a"(tsc_l), "d"(tsc_h));
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/x86/meson.build b/lib/librte_eal/x86/meson.build
index e78f29002e..dfd42dee0c 100644
--- a/lib/librte_eal/x86/meson.build
+++ b/lib/librte_eal/x86/meson.build
@@ -8,4 +8,5 @@ sources += files(
 	'rte_cycles.c',
 	'rte_hypervisor.c',
 	'rte_spinlock.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/x86/rte_power_intrinsics.c b/lib/librte_eal/x86/rte_power_intrinsics.c
new file mode 100644
index 0000000000..34c5fd9c3e
--- /dev/null
+++ b/lib/librte_eal/x86/rte_power_intrinsics.c
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+static inline uint64_t
+__get_umwait_val(const volatile void *p, const uint8_t sz)
+{
+	switch (sz) {
+	case sizeof(uint8_t):
+		return *(const volatile uint8_t *)p;
+	case sizeof(uint16_t):
+		return *(const volatile uint16_t *)p;
+	case sizeof(uint32_t):
+		return *(const volatile uint32_t *)p;
+	case sizeof(uint64_t):
+		return *(const volatile uint64_t *)p;
+	default:
+		/* this is an intrinsic, so we can't have any error handling */
+		RTE_ASSERT(0);
+		return 0;
+	}
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	rte_spinlock_unlock(lck);
+
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+
+	rte_spinlock_lock(lck);
+}
+
+/**
+ * This function uses TPAUSE instruction  and will enter C0.2 state. For more
+ * information about usage of this instruction, please refer to Intel(R) 64 and
+ * IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+
+	/* execute TPAUSE */
+	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			"a"(tsc_l), "d"(tsc_h));
+}
-- 
2.25.1

^ permalink raw reply	[relevance 2%]
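
As a usage illustration of the rte_power_monitor() prototype exported by the patch above, here is a minimal sketch; the function name, the monitored status word and the 1 ms budget are all made up for the example, and since the symbol is experimental the caller must be built with ALLOW_EXPERIMENTAL_API.

#include <stdint.h>

#include <rte_cycles.h>
#include <rte_power_intrinsics.h>

/* Wait until bit 0 of *status is set, the cache line holding it is
 * written, or roughly 1 ms worth of TSC cycles has elapsed. */
static void
wait_for_status_bit(volatile uint32_t *status)
{
	const uint64_t deadline = rte_rdtsc() + rte_get_tsc_hz() / 1000;

	/*
	 * If (*status & 0x1) already equals the expected value, the call
	 * returns immediately; otherwise the core enters the optimized
	 * power state until *status is written or the TSC deadline passes.
	 * The caller should re-check *status afterwards, since the wakeup
	 * may have been caused by the timeout or by an unrelated write to
	 * the monitored cache line.
	 */
	rte_power_monitor(status, 0x1 /* expected_value */,
			0x1 /* value_mask */, deadline, sizeof(*status));
}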

* Re: [dpdk-dev] [PATCH v13 01/11] eal: uninline power intrinsics
  2021-01-08 17:42  2%   ` [dpdk-dev] [PATCH v13 01/11] eal: uninline power intrinsics Anatoly Burakov
@ 2021-01-12 15:54  0%     ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-01-12 15:54 UTC (permalink / raw)
  To: Burakov, Anatoly, dev
  Cc: Jan Viktorin, Ruifeng Wang, Jerin Jacob, David Christensen,
	Ray Kinsella, Neil Horman, Richardson, Bruce, thomas, McDaniel,
	Timothy, Hunt, David, Macnamara, Chris



> -----Original Message-----
> From: Burakov, Anatoly <anatoly.burakov@intel.com>
> Sent: Friday, January 8, 2021 5:42 PM
> To: dev@dpdk.org
> Cc: Jan Viktorin <viktorin@rehivetech.com>; Ruifeng Wang <ruifeng.wang@arm.com>; Jerin Jacob <jerinj@marvell.com>; David
> Christensen <drc@linux.vnet.ibm.com>; Ray Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net; gage.eads@intel.com;
> McDaniel, Timothy <timothy.mcdaniel@intel.com>; Hunt, David <david.hunt@intel.com>; Macnamara, Chris
> <chris.macnamara@intel.com>
> Subject: [PATCH v13 01/11] eal: uninline power intrinsics
> 
> Currently, power intrinsics are inline functions. Make them part of the
> ABI so that we can have various internal data associated with them
> without exposing said data to the outside world.
> 
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> --
> 2.25.1

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] eal/rwlock: add note about writer starvation
  2021-01-08 19:13  4% [dpdk-dev] Reader-Writer lock starvation issues Stephen Hemminger
  2021-01-08 21:27  0% ` Honnappa Nagarahalli
@ 2021-01-12  1:04  3% ` Stephen Hemminger
  2021-01-14 16:55  3%   ` [dpdk-dev] [PATCH v2] " Stephen Hemminger
  1 sibling, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-01-12  1:04 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, stable

The implementation of reader/writer locks in DPDK (from the first release)
is simple and fast, but it can lead to writer starvation issues.

It is not easy to fix this without changing the ABI and potentially
breaking customer applications that expect the unfair behavior.
Therefore this patch just changes the documentation.

The Wikipedia page on the readers-writers problem has a similar example
which summarizes the problem pretty well.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Cc: stable@dpdk.org
---
 lib/librte_eal/include/generic/rte_rwlock.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/lib/librte_eal/include/generic/rte_rwlock.h b/lib/librte_eal/include/generic/rte_rwlock.h
index da9bc3e9c0e2..0b30c780fc34 100644
--- a/lib/librte_eal/include/generic/rte_rwlock.h
+++ b/lib/librte_eal/include/generic/rte_rwlock.h
@@ -15,6 +15,14 @@
  * one writer. All readers are blocked until the writer is finished
  * writing.
  *
+ * Note: This version of reader/writer locks is not fair because
+ * readers do not block for pending writers. A stream of readers can
+ * subsequently lock all potential writers out and starve them.
+ * This is because after the first reader locks the resource,
+ * no writer can lock it before it gets released.
+ * And it will only be released by the last reader.
+ *
+ * See also: https://en.wikipedia.org/wiki/Readers%E2%80%93writers_problem
  */
 
 #ifdef __cplusplus
-- 
2.29.2


^ permalink raw reply	[relevance 3%]
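
A minimal sketch of the starvation scenario that the note documents, using the existing rte_rwlock API; the function names and the endless reader loop are invented for the example.

#include <rte_rwlock.h>

static rte_rwlock_t lock = RTE_RWLOCK_INITIALIZER;

/*
 * Several lcores run this loop. Each read section is short, but as long
 * as the sections of different readers keep overlapping, the reader
 * count never drops to zero and the lock is never free for a writer.
 */
static void
reader_loop(void)
{
	for (;;) {
		rte_rwlock_read_lock(&lock);
		/* ... read shared state ... */
		rte_rwlock_read_unlock(&lock);
	}
}

/*
 * With the unfair behavior described above, this call can spin
 * indefinitely while overlapping readers keep the lock held.
 */
static void
writer_update(void)
{
	rte_rwlock_write_lock(&lock);
	/* ... modify shared state ... */
	rte_rwlock_write_unlock(&lock);
}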

* [dpdk-dev] [PATCH v15 01/11] eal: uninline power intrinsics
  @ 2021-01-11 14:58  2%       ` Anatoly Burakov
    1 sibling, 0 replies; 200+ results
From: Anatoly Burakov @ 2021-01-11 14:58 UTC (permalink / raw)
  To: dev
  Cc: Jan Viktorin, Ruifeng Wang, Jerin Jacob, David Christensen,
	Ray Kinsella, Neil Horman, Bruce Richardson, Konstantin Ananyev,
	thomas, timothy.mcdaniel, david.hunt, chris.macnamara

Currently, power intrinsics are inline functions. Make them part of the
ABI so that we can have various internal data associated with them
without exposing said data to the outside world.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---

Notes:
    v14:
    - Fix compile issues on ARM and PPC64 by moving implementations to .c files

 .../arm/include/rte_power_intrinsics.h        |  40 ------
 lib/librte_eal/arm/meson.build                |   1 +
 lib/librte_eal/arm/rte_power_intrinsics.c     |  42 ++++++
 .../include/generic/rte_power_intrinsics.h    |   6 +-
 .../ppc/include/rte_power_intrinsics.h        |  40 ------
 lib/librte_eal/ppc/meson.build                |   1 +
 lib/librte_eal/ppc/rte_power_intrinsics.c     |  42 ++++++
 lib/librte_eal/version.map                    |   5 +
 .../x86/include/rte_power_intrinsics.h        | 115 -----------------
 lib/librte_eal/x86/meson.build                |   1 +
 lib/librte_eal/x86/rte_power_intrinsics.c     | 120 ++++++++++++++++++
 11 files changed, 215 insertions(+), 198 deletions(-)
 create mode 100644 lib/librte_eal/arm/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/ppc/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/x86/rte_power_intrinsics.c

diff --git a/lib/librte_eal/arm/include/rte_power_intrinsics.h b/lib/librte_eal/arm/include/rte_power_intrinsics.h
index a4a1bc1159..9e498e9ebf 100644
--- a/lib/librte_eal/arm/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/arm/include/rte_power_intrinsics.h
@@ -13,46 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	RTE_SET_USED(tsc_timestamp);
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/arm/meson.build b/lib/librte_eal/arm/meson.build
index d62875ebae..6ec53ea03a 100644
--- a/lib/librte_eal/arm/meson.build
+++ b/lib/librte_eal/arm/meson.build
@@ -7,4 +7,5 @@ sources += files(
 	'rte_cpuflags.c',
 	'rte_cycles.c',
 	'rte_hypervisor.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/arm/rte_power_intrinsics.c b/lib/librte_eal/arm/rte_power_intrinsics.c
new file mode 100644
index 0000000000..e5a49facb4
--- /dev/null
+++ b/lib/librte_eal/arm/rte_power_intrinsics.c
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+/**
+ * This function is not supported on ARM.
+ */
+void rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on ARM.
+ */
+void rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(lck);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on ARM.
+ */
+void rte_power_pause(const uint64_t tsc_timestamp)
+{
+	RTE_SET_USED(tsc_timestamp);
+}
diff --git a/lib/librte_eal/include/generic/rte_power_intrinsics.h b/lib/librte_eal/include/generic/rte_power_intrinsics.h
index dd520d90fa..67977bd511 100644
--- a/lib/librte_eal/include/generic/rte_power_intrinsics.h
+++ b/lib/librte_eal/include/generic/rte_power_intrinsics.h
@@ -52,7 +52,7 @@
  *   to undefined result.
  */
 __rte_experimental
-static inline void rte_power_monitor(const volatile void *p,
+void rte_power_monitor(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz);
 
@@ -97,7 +97,7 @@ static inline void rte_power_monitor(const volatile void *p,
  *   wakes up.
  */
 __rte_experimental
-static inline void rte_power_monitor_sync(const volatile void *p,
+void rte_power_monitor_sync(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz,
 		rte_spinlock_t *lck);
@@ -118,6 +118,6 @@ static inline void rte_power_monitor_sync(const volatile void *p,
  *   architecture-dependent.
  */
 __rte_experimental
-static inline void rte_power_pause(const uint64_t tsc_timestamp);
+void rte_power_pause(const uint64_t tsc_timestamp);
 
 #endif /* _RTE_POWER_INTRINSIC_H_ */
diff --git a/lib/librte_eal/ppc/include/rte_power_intrinsics.h b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
index 4ed03d521f..c0e9ac279f 100644
--- a/lib/librte_eal/ppc/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
@@ -13,46 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	RTE_SET_USED(tsc_timestamp);
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/ppc/meson.build b/lib/librte_eal/ppc/meson.build
index f4b6d95c42..43c46542fb 100644
--- a/lib/librte_eal/ppc/meson.build
+++ b/lib/librte_eal/ppc/meson.build
@@ -7,4 +7,5 @@ sources += files(
 	'rte_cpuflags.c',
 	'rte_cycles.c',
 	'rte_hypervisor.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/ppc/rte_power_intrinsics.c b/lib/librte_eal/ppc/rte_power_intrinsics.c
new file mode 100644
index 0000000000..785effabe6
--- /dev/null
+++ b/lib/librte_eal/ppc/rte_power_intrinsics.c
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+/**
+ * This function is not supported on PPC64.
+ */
+void rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		       const uint64_t value_mask, const uint64_t tsc_timestamp,
+		       const uint8_t data_sz)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on PPC64.
+ */
+void rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+			    const uint64_t value_mask, const uint64_t tsc_timestamp,
+			    const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(lck);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on PPC64.
+ */
+void rte_power_pause(const uint64_t tsc_timestamp)
+{
+	RTE_SET_USED(tsc_timestamp);
+}
diff --git a/lib/librte_eal/version.map b/lib/librte_eal/version.map
index 354c068f31..31bf76ae81 100644
--- a/lib/librte_eal/version.map
+++ b/lib/librte_eal/version.map
@@ -403,6 +403,11 @@ EXPERIMENTAL {
 	rte_service_lcore_may_be_active;
 	rte_vect_get_max_simd_bitwidth;
 	rte_vect_set_max_simd_bitwidth;
+
+	# added in 21.02
+	rte_power_monitor;
+	rte_power_monitor_sync;
+	rte_power_pause;
 };
 
 INTERNAL {
diff --git a/lib/librte_eal/x86/include/rte_power_intrinsics.h b/lib/librte_eal/x86/include/rte_power_intrinsics.h
index c7d790c854..e4c2b87f73 100644
--- a/lib/librte_eal/x86/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/x86/include/rte_power_intrinsics.h
@@ -13,121 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-static inline uint64_t
-__rte_power_get_umwait_val(const volatile void *p, const uint8_t sz)
-{
-	switch (sz) {
-	case sizeof(uint8_t):
-		return *(const volatile uint8_t *)p;
-	case sizeof(uint16_t):
-		return *(const volatile uint16_t *)p;
-	case sizeof(uint32_t):
-		return *(const volatile uint32_t *)p;
-	case sizeof(uint64_t):
-		return *(const volatile uint64_t *)p;
-	default:
-		/* this is an intrinsic, so we can't have any error handling */
-		RTE_ASSERT(0);
-		return 0;
-	}
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	rte_spinlock_unlock(lck);
-
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-
-	rte_spinlock_lock(lck);
-}
-
-/**
- * This function uses TPAUSE instruction  and will enter C0.2 state. For more
- * information about usage of this instruction, please refer to Intel(R) 64 and
- * IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-
-	/* execute TPAUSE */
-	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
-		: /* ignore rflags */
-		: "D"(0), /* enter C0.2 */
-		  "a"(tsc_l), "d"(tsc_h));
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/x86/meson.build b/lib/librte_eal/x86/meson.build
index e78f29002e..dfd42dee0c 100644
--- a/lib/librte_eal/x86/meson.build
+++ b/lib/librte_eal/x86/meson.build
@@ -8,4 +8,5 @@ sources += files(
 	'rte_cycles.c',
 	'rte_hypervisor.c',
 	'rte_spinlock.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/x86/rte_power_intrinsics.c b/lib/librte_eal/x86/rte_power_intrinsics.c
new file mode 100644
index 0000000000..34c5fd9c3e
--- /dev/null
+++ b/lib/librte_eal/x86/rte_power_intrinsics.c
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+static inline uint64_t
+__get_umwait_val(const volatile void *p, const uint8_t sz)
+{
+	switch (sz) {
+	case sizeof(uint8_t):
+		return *(const volatile uint8_t *)p;
+	case sizeof(uint16_t):
+		return *(const volatile uint16_t *)p;
+	case sizeof(uint32_t):
+		return *(const volatile uint32_t *)p;
+	case sizeof(uint64_t):
+		return *(const volatile uint64_t *)p;
+	default:
+		/* this is an intrinsic, so we can't have any error handling */
+		RTE_ASSERT(0);
+		return 0;
+	}
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	rte_spinlock_unlock(lck);
+
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+
+	rte_spinlock_lock(lck);
+}
+
+/**
+ * This function uses TPAUSE instruction  and will enter C0.2 state. For more
+ * information about usage of this instruction, please refer to Intel(R) 64 and
+ * IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+
+	/* execute TPAUSE */
+	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			"a"(tsc_l), "d"(tsc_h));
+}
-- 
2.25.1

^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v14 01/11] eal: uninline power intrinsics
  @ 2021-01-11 14:35  2%     ` Anatoly Burakov
    1 sibling, 0 replies; 200+ results
From: Anatoly Burakov @ 2021-01-11 14:35 UTC (permalink / raw)
  To: dev
  Cc: Jerin Jacob, Ruifeng Wang, Jan Viktorin, David Christensen,
	Ray Kinsella, Neil Horman, Bruce Richardson, Konstantin Ananyev,
	thomas, timothy.mcdaniel, david.hunt, chris.macnamara

Currently, power intrinsics are inline functions. Make them part of the
ABI so that we can have various internal data associated with them
without exposing said data to the outside world.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---

Notes:
    v14:
    - Fix compile issues on ARM and PPC64 by moving implementations to .c files

 .../arm/include/rte_power_intrinsics.h        |  40 ------
 lib/librte_eal/arm/meson.build                |   1 +
 lib/librte_eal/arm/rte_power_intrinsics.c     |  42 ++++++
 .../include/generic/rte_power_intrinsics.h    |   6 +-
 .../ppc/include/rte_power_intrinsics.h        |  40 ------
 lib/librte_eal/ppc/meson.build                |   1 +
 lib/librte_eal/ppc/rte_power_intrinsics.c     |  42 ++++++
 lib/librte_eal/version.map                    |   5 +
 .../x86/include/rte_power_intrinsics.h        | 115 -----------------
 lib/librte_eal/x86/meson.build                |   1 +
 lib/librte_eal/x86/rte_power_intrinsics.c     | 120 ++++++++++++++++++
 11 files changed, 215 insertions(+), 198 deletions(-)
 create mode 100644 lib/librte_eal/arm/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/ppc/rte_power_intrinsics.c
 create mode 100644 lib/librte_eal/x86/rte_power_intrinsics.c

diff --git a/lib/librte_eal/arm/include/rte_power_intrinsics.h b/lib/librte_eal/arm/include/rte_power_intrinsics.h
index a4a1bc1159..9e498e9ebf 100644
--- a/lib/librte_eal/arm/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/arm/include/rte_power_intrinsics.h
@@ -13,46 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on ARM.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	RTE_SET_USED(tsc_timestamp);
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/arm/meson.build b/lib/librte_eal/arm/meson.build
index d62875ebae..6ec53ea03a 100644
--- a/lib/librte_eal/arm/meson.build
+++ b/lib/librte_eal/arm/meson.build
@@ -7,4 +7,5 @@ sources += files(
 	'rte_cpuflags.c',
 	'rte_cycles.c',
 	'rte_hypervisor.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/arm/rte_power_intrinsics.c b/lib/librte_eal/arm/rte_power_intrinsics.c
new file mode 100644
index 0000000000..e5a49facb4
--- /dev/null
+++ b/lib/librte_eal/arm/rte_power_intrinsics.c
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+/**
+ * This function is not supported on ARM.
+ */
+void rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on ARM.
+ */
+void rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(lck);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on ARM.
+ */
+void rte_power_pause(const uint64_t tsc_timestamp)
+{
+	RTE_SET_USED(tsc_timestamp);
+}
diff --git a/lib/librte_eal/include/generic/rte_power_intrinsics.h b/lib/librte_eal/include/generic/rte_power_intrinsics.h
index dd520d90fa..67977bd511 100644
--- a/lib/librte_eal/include/generic/rte_power_intrinsics.h
+++ b/lib/librte_eal/include/generic/rte_power_intrinsics.h
@@ -52,7 +52,7 @@
  *   to undefined result.
  */
 __rte_experimental
-static inline void rte_power_monitor(const volatile void *p,
+void rte_power_monitor(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz);
 
@@ -97,7 +97,7 @@ static inline void rte_power_monitor(const volatile void *p,
  *   wakes up.
  */
 __rte_experimental
-static inline void rte_power_monitor_sync(const volatile void *p,
+void rte_power_monitor_sync(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz,
 		rte_spinlock_t *lck);
@@ -118,6 +118,6 @@ static inline void rte_power_monitor_sync(const volatile void *p,
  *   architecture-dependent.
  */
 __rte_experimental
-static inline void rte_power_pause(const uint64_t tsc_timestamp);
+void rte_power_pause(const uint64_t tsc_timestamp);
 
 #endif /* _RTE_POWER_INTRINSIC_H_ */
diff --git a/lib/librte_eal/ppc/include/rte_power_intrinsics.h b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
index 4ed03d521f..c0e9ac279f 100644
--- a/lib/librte_eal/ppc/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
@@ -13,46 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	RTE_SET_USED(p);
-	RTE_SET_USED(expected_value);
-	RTE_SET_USED(value_mask);
-	RTE_SET_USED(tsc_timestamp);
-	RTE_SET_USED(lck);
-	RTE_SET_USED(data_sz);
-}
-
-/**
- * This function is not supported on PPC64.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	RTE_SET_USED(tsc_timestamp);
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/ppc/meson.build b/lib/librte_eal/ppc/meson.build
index f4b6d95c42..43c46542fb 100644
--- a/lib/librte_eal/ppc/meson.build
+++ b/lib/librte_eal/ppc/meson.build
@@ -7,4 +7,5 @@ sources += files(
 	'rte_cpuflags.c',
 	'rte_cycles.c',
 	'rte_hypervisor.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/ppc/rte_power_intrinsics.c b/lib/librte_eal/ppc/rte_power_intrinsics.c
new file mode 100644
index 0000000000..785effabe6
--- /dev/null
+++ b/lib/librte_eal/ppc/rte_power_intrinsics.c
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+/**
+ * This function is not supported on PPC64.
+ */
+void rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		       const uint64_t value_mask, const uint64_t tsc_timestamp,
+		       const uint8_t data_sz)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on PPC64.
+ */
+void rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+			    const uint64_t value_mask, const uint64_t tsc_timestamp,
+			    const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	RTE_SET_USED(p);
+	RTE_SET_USED(expected_value);
+	RTE_SET_USED(value_mask);
+	RTE_SET_USED(tsc_timestamp);
+	RTE_SET_USED(lck);
+	RTE_SET_USED(data_sz);
+}
+
+/**
+ * This function is not supported on PPC64.
+ */
+void rte_power_pause(const uint64_t tsc_timestamp)
+{
+	RTE_SET_USED(tsc_timestamp);
+}
diff --git a/lib/librte_eal/version.map b/lib/librte_eal/version.map
index 354c068f31..31bf76ae81 100644
--- a/lib/librte_eal/version.map
+++ b/lib/librte_eal/version.map
@@ -403,6 +403,11 @@ EXPERIMENTAL {
 	rte_service_lcore_may_be_active;
 	rte_vect_get_max_simd_bitwidth;
 	rte_vect_set_max_simd_bitwidth;
+
+	# added in 21.02
+	rte_power_monitor;
+	rte_power_monitor_sync;
+	rte_power_pause;
 };
 
 INTERNAL {
diff --git a/lib/librte_eal/x86/include/rte_power_intrinsics.h b/lib/librte_eal/x86/include/rte_power_intrinsics.h
index c7d790c854..e4c2b87f73 100644
--- a/lib/librte_eal/x86/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/x86/include/rte_power_intrinsics.h
@@ -13,121 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-static inline uint64_t
-__rte_power_get_umwait_val(const volatile void *p, const uint8_t sz)
-{
-	switch (sz) {
-	case sizeof(uint8_t):
-		return *(const volatile uint8_t *)p;
-	case sizeof(uint16_t):
-		return *(const volatile uint16_t *)p;
-	case sizeof(uint32_t):
-		return *(const volatile uint32_t *)p;
-	case sizeof(uint64_t):
-		return *(const volatile uint64_t *)p;
-	default:
-		/* this is an intrinsic, so we can't have any error handling */
-		RTE_ASSERT(0);
-		return 0;
-	}
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	rte_spinlock_unlock(lck);
-
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-
-	rte_spinlock_lock(lck);
-}
-
-/**
- * This function uses TPAUSE instruction  and will enter C0.2 state. For more
- * information about usage of this instruction, please refer to Intel(R) 64 and
- * IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-
-	/* execute TPAUSE */
-	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
-		: /* ignore rflags */
-		: "D"(0), /* enter C0.2 */
-		  "a"(tsc_l), "d"(tsc_h));
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/x86/meson.build b/lib/librte_eal/x86/meson.build
index e78f29002e..dfd42dee0c 100644
--- a/lib/librte_eal/x86/meson.build
+++ b/lib/librte_eal/x86/meson.build
@@ -8,4 +8,5 @@ sources += files(
 	'rte_cycles.c',
 	'rte_hypervisor.c',
 	'rte_spinlock.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/x86/rte_power_intrinsics.c b/lib/librte_eal/x86/rte_power_intrinsics.c
new file mode 100644
index 0000000000..34c5fd9c3e
--- /dev/null
+++ b/lib/librte_eal/x86/rte_power_intrinsics.c
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+static inline uint64_t
+__get_umwait_val(const volatile void *p, const uint8_t sz)
+{
+	switch (sz) {
+	case sizeof(uint8_t):
+		return *(const volatile uint8_t *)p;
+	case sizeof(uint16_t):
+		return *(const volatile uint16_t *)p;
+	case sizeof(uint32_t):
+		return *(const volatile uint32_t *)p;
+	case sizeof(uint64_t):
+		return *(const volatile uint64_t *)p;
+	default:
+		/* this is an intrinsic, so we can't have any error handling */
+		RTE_ASSERT(0);
+		return 0;
+	}
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	rte_spinlock_unlock(lck);
+
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+
+	rte_spinlock_lock(lck);
+}
+
+/**
+ * This function uses TPAUSE instruction  and will enter C0.2 state. For more
+ * information about usage of this instruction, please refer to Intel(R) 64 and
+ * IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+
+	/* execute TPAUSE */
+	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			"a"(tsc_l), "d"(tsc_h));
+}
-- 
2.25.1

^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] Reader-Writer lock starvation issues
  2021-01-11 11:52  0%   ` Ferruh Yigit
@ 2021-01-11 13:05  0%     ` Honnappa Nagarahalli
  0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2021-01-11 13:05 UTC (permalink / raw)
  To: Ferruh Yigit, Stephen Hemminger, dev; +Cc: nd, Honnappa Nagarahalli, nd

<snip>

> >
> >>
> >> The current version of rte_rwlock doesn't do what it says in the
> >> documentation.
> >> " The lock is used to protect data that allows multiple readers in
> >> parallel,  but only one writer. All readers are blocked until the writer is
> finished  writing."
> >>
> >> The problem is that the current implementation does not stop a a new
> >> reader from acquiring the lock while a writer is waiting.
> > Agree, essentially the arbitration is left to the hardware.
> >
> >>
> >> Writer:
> >>        repeat until x = __atomic_load(&counter) == 0;
> >>        __atomic_compare_exchange(&counter, &x, -1);
> >>
> >> Reader:
> >>        x = __atomic_load(&counter);
> >>        __atomic_compare_exchange(&counter, &x, x + 1);
> >>
> >>
> >> Fixing it likely would require an ABI change to add additional state.
> >>
> >> For more background on reader-writer locks see:
> >>
> >> https://www.cs.rochester.edu/research/synchronization/pseudocode/rw.h
> >> tm
> >> l
> >>
> >> The code in DPDK is actually effectively the same as the first
> >> example "Simple, non-scalable reader-preference lock"
> > I do not think the DPDK implementation has reader-preference. There is no
> code to control the arbitration between writers and readers. It is possible that if
> there are multiple writers the readers might be starved depending on how the
> hardware does the arbitration.
> >
> 
> As far as I can see, in current implementation:
> 
> When writer has the lock, both writers and readers needs to wait, and when
> writer releases reader or writer has chance to acquire the lock.
Yes, since either a reader or a writer can acquire the lock (when the writer releases it), I do not think we can call the current implementation 'reader-preference'.

> 
> When reader has the lock, other readers can acquire the lock and writers has to
> wait, and if readers keep coming it can cause writer starvation. Overall this
> doesn't look fair reader-writer lock ...
Agree

> 
> >>
> >> It looks like doing the right thing will require increasing the size
> >> of the rte_rwlock structure and cause an ABI breakage.
> >>
> >> I am running with an alternative which uses ticket locks to do:
> >>    "Simple, non-scalable writer-preference lock"
> > Does it provide good scalability?
> >
> >>
> >> My recommendation would be:
> >>
> >>   1. Fix documentation in rte_rwlock.h (and add release note) and put
> >> this in
> >> 20.02 and LTS.
> > Agree, the document is not clear on the arbitration.
> >
> >>   2. Add new rte_ticket_rwlock.h which provides the correct semantics.
> > Agree.
> >
> >>
> >> Comments?


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-08 14:27  0%                     ` Ferruh Yigit
  2021-01-08 14:31  0%                       ` Kinsella, Ray
@ 2021-01-08 17:34  0%                       ` Kinsella, Ray
  1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-01-08 17:34 UTC (permalink / raw)
  To: Yigit, Ferruh, Thomas Monjalon, Guo, Jia, Zhang, Qi Z
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday 8 January 2021 14:27
> To: Kinsella, Ray <ray.kinsella@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>; Guo, Jia <jia.guo@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
> getelson@nvidia.com; Dodji Seketeli <dodji@redhat.com>
> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type
> for ecpri
> 
> On 1/8/2021 12:38 PM, Kinsella, Ray wrote:
> >
> >
> >> -----Original Message-----
> >> From: Thomas Monjalon <thomas@monjalon.net>
> >> Sent: Friday 8 January 2021 10:24
> >> To: Guo, Jia <jia.guo@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>;
> >> Yigit, Ferruh <ferruh.yigit@intel.com>
> >> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming
> >> <qiming.yang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> >> dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
> >> getelson@nvidia.com; Dodji Seketeli <dodji@redhat.com>; Kinsella,
> Ray
> >> <ray.kinsella@intel.com>
> >> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel
> type
> >> for ecpri
> >>
> >> 08/01/2021 10:22, Ferruh Yigit:
> >>> On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
> >>>> 07/01/2021 13:47, Zhang, Qi Z:
> >>>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>>> 07/01/2021 10:32, Guo, Jia:
> >>>>>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>>>>> Sorry, it is a nack.
> >>>>>>>> BTW, it is probably breaking the ABI because of
> >> RTE_TUNNEL_TYPE_MAX.
> >>>>>
> >>>>> Yes that may break the ABI but fortunately the checking-abi-
> >> compatibility tool shows negative :) , thanks Ferruh' s guide.
> >>>>> https://github.com/ferruhy/dpdk/actions/runs/468859673
> >>>>
> >>>> That's very strange. An enum value is changed.
> >>>> Why it is not flagged by libabigail?
> >>>
> >>> As long as the enum values not sent to the application and kept
> >> within
> >>> the library, changing their values shouldn't be problem.
> >>
> >> But RTE_TUNNEL_TYPE_MAX is part of lib/librte_ethdev/rte_ethdev.h so
> >> it is exposed to the application.
> >> I think it is a case of ABI breakage.
> >>
> >
> > Really a lot depends on context, Thomas is right it is hard to
> predict how these _MAX values are used.
> >
> > We have seen cases in the past where _MAX enumeration values have
> been used to size arrays the like - I don't immediately see that issue
> here. My understanding is that the only consumer of this enumeration is
> rte_eth_dev_udp_tunnel_port_add and rte_eth_dev_udp_tunnel_port_delete,
> right? On face value, impact looks negligible.
> >
> > I will take a look at why libabigail doesn't complain.

So I spent some time looking a bit closer at why libabigail didn't complain.
In summary, it is because there is no symbol that obviously uses enum rte_eth_tunnel_type:
the prot_type field in rte_eth_udp_tunnel is declared as a uint8_t, not as enum rte_eth_tunnel_type.

Is there a particular reason an enumerated field would be declared as a plain unsigned integer instead?

/**
 * UDP tunneling configuration.
 * Used to config the UDP port for a type of tunnel.
 * NICs need the UDP port to identify the tunnel type.
 * Normally a type of tunnel has a default UDP port, this structure can be used
 * in case if the users want to change or support more UDP port.
 */
struct rte_eth_udp_tunnel {
        uint16_t udp_port; /**< UDP port used for the tunnel. */
        uint8_t prot_type; /**< Tunnel type. Defined in rte_eth_tunnel_type. */
};
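
For comparison, the declaration that would have put the enum on libabigail's
radar would look something like this (purely illustrative, not a proposal to
change the struct):

struct rte_eth_udp_tunnel {
        uint16_t udp_port; /**< UDP port used for the tunnel. */
        /* the enum type is now reachable from the
         * rte_eth_dev_udp_tunnel_port_* symbols, so changes to its
         * values would be reported */
        enum rte_eth_tunnel_type prot_type;
};

Of course that would also grow the field from one byte to sizeof(enum) and
change the struct layout, which is presumably part of why a fixed-width
uint8_t was used.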

> 
> Application can use the enum, including MAX as they desire, we can't
> really assume anything there.
> 
> In previous case, library was providing an enum value back to
> application. And the problem was application can use those values
> blindly and new unexpected values may cause trouble.
> 
> For this case, even the application create a table with
> RTE_TUNNEL_TYPE_MAX size, library is not sending any type of this enum
> to application to cause any problem, at least abigail seems not able to
> finding any instance of it.

I agree - I think this has marginal risk. 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-08 14:27  0%                     ` Ferruh Yigit
@ 2021-01-08 14:31  0%                       ` Kinsella, Ray
  2021-01-08 17:34  0%                       ` Kinsella, Ray
  1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-01-08 14:31 UTC (permalink / raw)
  To: Yigit, Ferruh, Thomas Monjalon, Guo, Jia, Zhang, Qi Z
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday 8 January 2021 14:27
> To: Kinsella, Ray <ray.kinsella@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>; Guo, Jia <jia.guo@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
> getelson@nvidia.com; Dodji Seketeli <dodji@redhat.com>
> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type
> for ecpri
> 
> On 1/8/2021 12:38 PM, Kinsella, Ray wrote:
> >
> >
> >> -----Original Message-----
> >> From: Thomas Monjalon <thomas@monjalon.net>
> >> Sent: Friday 8 January 2021 10:24
> >> To: Guo, Jia <jia.guo@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>;
> >> Yigit, Ferruh <ferruh.yigit@intel.com>
> >> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming
> >> <qiming.yang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> >> dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
> >> getelson@nvidia.com; Dodji Seketeli <dodji@redhat.com>; Kinsella,
> Ray
> >> <ray.kinsella@intel.com>
> >> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel
> type
> >> for ecpri
> >>
> >> 08/01/2021 10:22, Ferruh Yigit:
> >>> On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
> >>>> 07/01/2021 13:47, Zhang, Qi Z:
> >>>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>>> 07/01/2021 10:32, Guo, Jia:
> >>>>>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>>>>> Sorry, it is a nack.
> >>>>>>>> BTW, it is probably breaking the ABI because of
> >> RTE_TUNNEL_TYPE_MAX.
> >>>>>
> >>>>> Yes that may break the ABI but fortunately the checking-abi-
> >> compatibility tool shows negative :) , thanks Ferruh' s guide.
> >>>>> https://github.com/ferruhy/dpdk/actions/runs/468859673
> >>>>
> >>>> That's very strange. An enum value is changed.
> >>>> Why it is not flagged by libabigail?
> >>>
> >>> As long as the enum values not sent to the application and kept
> >> within
> >>> the library, changing their values shouldn't be problem.
> >>
> >> But RTE_TUNNEL_TYPE_MAX is part of lib/librte_ethdev/rte_ethdev.h so
> >> it is exposed to the application.
> >> I think it is a case of ABI breakage.
> >>
> >
> > Really a lot depends on context, Thomas is right it is hard to
> predict how these _MAX values are used.
> >
> > We have seen cases in the past where _MAX enumeration values have
> been used to size arrays the like - I don't immediately see that issue
> here. My understanding is that the only consumer of this enumeration is
> rte_eth_dev_udp_tunnel_port_add and rte_eth_dev_udp_tunnel_port_delete,
> right? On face value, impact looks negligible.
> >
> > I will take a look at why libabigail doesn't complain.
> >
> 
> Application can use the enum, including MAX as they desire, we can't
> really assume anything there.
> 
> In previous case, library was providing an enum value back to
> application. And the problem was application can use those values
> blindly and new unexpected values may cause trouble.
> 
> For this case, even the application create a table with
> RTE_TUNNEL_TYPE_MAX size, library is not sending any type of this enum
> to application to cause any problem, at least abigail seems not able to
> finding any instance of it.

Yes - it makes any problem associated with it unlikely then.

Ray K

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-08 14:06  3%                     ` Thomas Monjalon
@ 2021-01-08 14:07  0%                       ` Kinsella, Ray
  2021-01-08 14:10  0%                         ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-01-08 14:07 UTC (permalink / raw)
  To: Thomas Monjalon, Guo, Jia, Zhang, Qi Z, Yigit, Ferruh
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Friday 8 January 2021 14:06
> To: Guo, Jia <jia.guo@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
> Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
> getelson@nvidia.com; Dodji Seketeli <dodji@redhat.com>; Kinsella, Ray
> <ray.kinsella@intel.com>
> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type
> for ecpri
> 
> 08/01/2021 11:43, Ferruh Yigit:
> > On 1/8/2021 10:23 AM, Thomas Monjalon wrote:
> > > 08/01/2021 10:22, Ferruh Yigit:
> > >> On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
> > >>> 07/01/2021 13:47, Zhang, Qi Z:
> > >>>> From: Thomas Monjalon <thomas@monjalon.net>
> > >>>>> 07/01/2021 10:32, Guo, Jia:
> > >>>>>> From: Thomas Monjalon <thomas@monjalon.net>
> > >>>>>>> Sorry, it is a nack.
> > >>>>>>> BTW, it is probably breaking the ABI because of
> RTE_TUNNEL_TYPE_MAX.
> > >>>>
> > >>>> Yes that may break the ABI but fortunately the checking-abi-
> compatibility tool shows negative :) , thanks Ferruh' s guide.
> > >>>> https://github.com/ferruhy/dpdk/actions/runs/468859673
> > >>>
> > >>> That's very strange. An enum value is changed.
> > >>> Why it is not flagged by libabigail?
> > >>
> > >> As long as the enum values not sent to the application and kept
> > >> within the library, changing their values shouldn't be problem.
> > >
> > > But RTE_TUNNEL_TYPE_MAX is part of lib/librte_ethdev/rte_ethdev.h
> so
> > > it is exposed to the application.
> > > I think it is a case of ABI breakage.
> >
> > Yes it is exposed to the application. But in runtime does it
> exchanged
> > between library and application is the issue I think.
> > For this case it seems it is not, so not an ABI break.
> 
> If I create a table of size RTE_TUNNEL_TYPE_MAX with DPDK 20.11, I will
> get an overflow when writing to the new ECPRI index.

I guess the question is - are you likely to do this?

> The question is: can I receive the ECPRI value dynamically from ethdev?
> If yes, it is an ABI breakage. But I cannot think of such case now.

Ray K


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-08 10:23  3%                 ` Thomas Monjalon
  2021-01-08 10:43  3%                   ` Ferruh Yigit
@ 2021-01-08 12:38  0%                   ` Kinsella, Ray
  2021-01-08 14:27  0%                     ` Ferruh Yigit
  1 sibling, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-01-08 12:38 UTC (permalink / raw)
  To: Thomas Monjalon, Guo, Jia, Zhang, Qi Z, Yigit, Ferruh
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Friday 8 January 2021 10:24
> To: Guo, Jia <jia.guo@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
> Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
> getelson@nvidia.com; Dodji Seketeli <dodji@redhat.com>; Kinsella, Ray
> <ray.kinsella@intel.com>
> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type
> for ecpri
> 
> 08/01/2021 10:22, Ferruh Yigit:
> > On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
> > > 07/01/2021 13:47, Zhang, Qi Z:
> > >> From: Thomas Monjalon <thomas@monjalon.net>
> > >>> 07/01/2021 10:32, Guo, Jia:
> > >>>> From: Thomas Monjalon <thomas@monjalon.net>
> > >>>>> Sorry, it is a nack.
> > >>>>> BTW, it is probably breaking the ABI because of
> RTE_TUNNEL_TYPE_MAX.
> > >>
> > >> Yes that may break the ABI but fortunately the checking-abi-
> compatibility tool shows negative :) , thanks Ferruh' s guide.
> > >> https://github.com/ferruhy/dpdk/actions/runs/468859673
> > >
> > > That's very strange. An enum value is changed.
> > > Why it is not flagged by libabigail?
> >
> > As long as the enum values not sent to the application and kept
> within
> > the library, changing their values shouldn't be problem.
> 
> But RTE_TUNNEL_TYPE_MAX is part of lib/librte_ethdev/rte_ethdev.h so it
> is exposed to the application.
> I think it is a case of ABI breakage.
> 

Really, a lot depends on context; Thomas is right that it is hard to predict how these _MAX values are used.

We have seen cases in the past where _MAX enumeration values have been used to size arrays and the like - I don't immediately see that issue here. My understanding is that the only consumer of this enumeration is rte_eth_dev_udp_tunnel_port_add and rte_eth_dev_udp_tunnel_port_delete, right? On the face of it, the impact looks negligible.

I will take a look at why libabigail doesn't complain. 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] Reader-Writer lock starvation issues
  2021-01-08 21:27  0% ` Honnappa Nagarahalli
@ 2021-01-11 11:52  0%   ` Ferruh Yigit
  2021-01-11 13:05  0%     ` Honnappa Nagarahalli
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-01-11 11:52 UTC (permalink / raw)
  To: Honnappa Nagarahalli, Stephen Hemminger, dev; +Cc: nd

On 1/8/2021 9:27 PM, Honnappa Nagarahalli wrote:
> <snip>
> 
>>
>> The current version of rte_rwlock doesn't do what it says in the
>> documentation.
>> " The lock is used to protect data that allows multiple readers in parallel,  but
>> only one writer. All readers are blocked until the writer is finished  writing."
>>
>> The problem is that the current implementation does not stop a a new reader
>> from acquiring the lock while a writer is waiting.
> Agree, essentially the arbitration is left to the hardware.
> 
>>
>> Writer:
>>        repeat until x = __atomic_load(&counter) == 0;
>>        __atomic_compare_exchange(&counter, &x, -1);
>>
>> Reader:
>>        x = __atomic_load(&counter);
>>        __atomic_compare_exchange(&counter, &x, x + 1);
>>
>>
>> Fixing it likely would require an ABI change to add additional state.
>>
>> For more background on reader-writer locks see:
>>
>> https://www.cs.rochester.edu/research/synchronization/pseudocode/rw.htm
>> l
>>
>> The code in DPDK is actually effectively the same as the first example
>> "Simple, non-scalable reader-preference lock"
> I do not think the DPDK implementation has reader-preference. There is no code to control the arbitration between writers and readers. It is possible that if there are multiple writers the readers might be starved depending on how the hardware does the arbitration.
> 

As far as I can see, in the current implementation:

When a writer has the lock, both writers and readers need to wait, and when the
writer releases it, either a reader or a writer has a chance to acquire the lock.

When a reader has the lock, other readers can acquire the lock and writers have to
wait, and if readers keep coming it can cause writer starvation. Overall this
doesn't look like a fair reader-writer lock ...

>>
>> It looks like doing the right thing will require increasing the size of the
>> rte_rwlock structure and cause an ABI breakage.
>>
>> I am running with an alternative which uses ticket locks to do:
>>    "Simple, non-scalable writer-preference lock"
> Does it provide good scalability?
> 
>>
>> My recommendation would be:
>>
>>   1. Fix documentation in rte_rwlock.h (and add release note) and put this in
>> 20.02 and LTS.
> Agree, the document is not clear on the arbitration.
> 
>>   2. Add new rte_ticket_rwlock.h which provides the correct semantics.
> Agree.
> 
>>
>> Comments?


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v12 00/11] Add PMD power management
  2021-01-08 16:42  0%   ` Burakov, Anatoly
@ 2021-01-11  8:44  0%     ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-01-11  8:44 UTC (permalink / raw)
  To: Burakov, Anatoly
  Cc: dev, Thomas Monjalon, Ananyev, Konstantin, Gage Eads,
	Timothy McDaniel, David Hunt, Bruce Richardson, chris.macnamara,
	Ray Kinsella, Yigit, Ferruh

On Fri, Jan 8, 2021 at 5:42 PM Burakov, Anatoly
<anatoly.burakov@intel.com> wrote:
>
> On 17-Dec-20 4:12 PM, David Marchand wrote:
> > On Thu, Dec 17, 2020 at 3:06 PM Anatoly Burakov
> > <anatoly.burakov@intel.com> wrote:
> >>
> >> This patchset proposes a simple API for Ethernet drivers to cause the
> >> CPU to enter a power-optimized state while waiting for packets to
> >> arrive. This is achieved through cooperation with the NIC driver that
> >> will allow us to know address of wake up event, and wait for writes on
> >> it.
> >>
> >> On IA, this is achieved through using UMONITOR/UMWAIT instructions. They
> >> are used in their raw opcode form because there is no widespread
> >> compiler support for them yet. Still, the API is made generic enough to
> >> hopefully support other architectures, if they happen to implement
> >> similar instructions.
> >>
> >> To achieve power savings, there is a very simple mechanism used: we're
> >> counting empty polls, and if a certain threshold is reached, we get the
> >> address of next RX ring descriptor from the NIC driver, arm the
> >> monitoring hardware, and enter a power-optimized state. We will then
> >> wake up when either a timeout happens, or a write happens (or generally
> >> whenever CPU feels like waking up - this is platform-specific), and
> >> proceed as normal. The empty poll counter is reset whenever we actually
> >> get packets, so we only go to sleep when we know nothing is going on.
> >> The mechanism is generic which can be used for any write back
> >> descriptor.
> >>
> >> This patchset also introduces a few changes into existing power
> >> management-related intrinsics, namely to provide a native way of waking
> >> up a sleeping core without application being responsible for it, as well
> >> as general robustness improvements. There's quite a bit of locking going
> >> on, but these locks are per-thread and very little (if any) contention
> >> is expected, so the performance impact shouldn't be that bad (and in any
> >> case the locking happens when we're about to sleep anyway, not on a
> >> hotpath).
> >>
> >> Why are we putting it into ethdev as opposed to leaving this up to the
> >> application? Our customers specifically requested a way to do it wit
> >> minimal changes to the application code. The current approach allows to
> >> just flip a switch and automatically have power savings.
> >>
> >> - Only 1:1 core to queue mapping is supported, meaning that each lcore
> >>    must at most handle RX on a single queue
> >> - Support 3 type policies. Monitor/Pause/Frequency Scaling
> >> - Power management is enabled per-queue
> >> - The API doesn't extend to other device types
> >
> > Fyi, ovsrobot Travis being KO, you probably missed that GHA CI caught this:
> > https://github.com/ovsrobot/dpdk/runs/1571056574?check_suite_focus=true#step:13:16082
> >
> > We will have to put an exception on driver only ABI.
> >
> >
>
> Why does aarch64 build fail there? The functions in question are in the
> version map file, but the build complains that they aren't.

From what I can see, this series puts rte_power_* symbols in a .h.
So they will be seen as symbols exported by any library including such a header.

The check then complains about this, as it sees exported symbols that are
unknown to the library's version.map.
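
For illustration, the kind of situation I mean (a sketch, not the exact
contents of the tree):

/* rte_power_intrinsics.h with 'static inline' dropped but the body kept:
 * every library that includes this header emits its own global definition
 * of the symbol */
void
rte_power_pause(const uint64_t tsc_timestamp)
{
	RTE_SET_USED(tsc_timestamp);
}

So a library that merely includes the header ends up exporting
rte_power_pause, and the check flags it because that symbol is not listed in
that library's version.map.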


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] Reader-Writer lock starvation issues
  2021-01-08 19:13  4% [dpdk-dev] Reader-Writer lock starvation issues Stephen Hemminger
@ 2021-01-08 21:27  0% ` Honnappa Nagarahalli
  2021-01-11 11:52  0%   ` Ferruh Yigit
  2021-01-12  1:04  3% ` [dpdk-dev] [PATCH] eal/rwlock: add note about writer starvation Stephen Hemminger
  1 sibling, 1 reply; 200+ results
From: Honnappa Nagarahalli @ 2021-01-08 21:27 UTC (permalink / raw)
  To: Stephen Hemminger, dev; +Cc: Honnappa Nagarahalli, nd, nd

<snip>

> 
> The current version of rte_rwlock doesn't do what it says in the
> documentation.
> " The lock is used to protect data that allows multiple readers in parallel,  but
> only one writer. All readers are blocked until the writer is finished  writing."
> 
> The problem is that the current implementation does not stop a a new reader
> from acquiring the lock while a writer is waiting.
Agree, essentially the arbitration is left to the hardware.

> 
> Writer:
>       repeat until x = __atomic_load(&counter) == 0;
>       __atomic_compare_exchange(&counter, &x, -1);
> 
> Reader:
>       x = __atomic_load(&counter);
>       __atomic_compare_exchange(&counter, &x, x + 1);
> 
> 
> Fixing it likely would require an ABI change to add additional state.
> 
> For more background on reader-writer locks see:
> 
> https://www.cs.rochester.edu/research/synchronization/pseudocode/rw.htm
> l
> 
> The code in DPDK is actually effectively the same as the first example
> "Simple, non-scalable reader-preference lock"
I do not think the DPDK implementation has reader-preference. There is no code to control the arbitration between writers and readers. It is possible that, if there are multiple writers, the readers might be starved, depending on how the hardware does the arbitration.

> 
> It looks like doing the right thing will require increasing the size of the
> rte_rwlock structure and cause an ABI breakage.
> 
> I am running with an alternative which uses ticket locks to do:
>   "Simple, non-scalable writer-preference lock"
Does it provide good scalability?

> 
> My recommendation would be:
> 
>  1. Fix documentation in rte_rwlock.h (and add release note) and put this in
> 20.02 and LTS.
Agree, the document is not clear on the arbitration.

>  2. Add new rte_ticket_rwlock.h which provides the correct semantics.
Agree.

> 
> Comments?

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] Reader-Writer lock starvation issues
@ 2021-01-08 19:13  4% Stephen Hemminger
  2021-01-08 21:27  0% ` Honnappa Nagarahalli
  2021-01-12  1:04  3% ` [dpdk-dev] [PATCH] eal/rwlock: add note about writer starvation Stephen Hemminger
  0 siblings, 2 replies; 200+ results
From: Stephen Hemminger @ 2021-01-08 19:13 UTC (permalink / raw)
  To: dev

The current version of rte_rwlock doesn't do what it says in the documentation.
" The lock is used to protect data that allows multiple readers in parallel,
 but only one writer. All readers are blocked until the writer is finished
 writing."

The problem is that the current implementation does not stop a new reader
from acquiring the lock while a writer is waiting.

Writer:
      repeat until x = __atomic_load(&counter) == 0;
      __atomic_compare_exchange(&counter, &x, -1);
                                         
Reader:
      x = __atomic_load(&counter);
      __atomic_compare_exchange(&counter, &x, x + 1);


Fixing it likely would require an ABI change to add additional state.

For more background on reader-writer locks see:
  https://www.cs.rochester.edu/research/synchronization/pseudocode/rw.html

The code in DPDK is actually effectively the same as the first example
 "Simple, non-scalable reader-preference lock"

It looks like doing the right thing will require increasing the size of
the rte_rwlock structure and cause an ABI breakage.

I am running with an alternative which uses ticket locks to do:
  "Simple, non-scalable writer-preference lock"
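Roughly, the shape of it is below (a minimal sketch only: the type name, field
names and memory orderings are illustrative, not a final API; it assumes
<rte_pause.h> for rte_pause() and the GCC __atomic builtins):

#include <stdint.h>
#include <rte_pause.h>

typedef struct {
	uint16_t write;	/* ticket currently allowed to write */
	uint16_t read;	/* ticket currently allowed to read */
	uint16_t next;	/* next ticket to hand out */
} rte_ticket_rwlock_t;	/* zero-initialized */

static inline void
rte_ticket_write_lock(rte_ticket_rwlock_t *l)
{
	/* take a ticket, then wait for our turn; readers arriving later
	 * cannot jump ahead because they wait on their own later tickets */
	uint16_t t = __atomic_fetch_add(&l->next, 1, __ATOMIC_RELAXED);

	while (__atomic_load_n(&l->write, __ATOMIC_ACQUIRE) != t)
		rte_pause();
}

static inline void
rte_ticket_write_unlock(rte_ticket_rwlock_t *l)
{
	/* advance both serving counters so the next ticket holder
	 * (reader or writer) can proceed */
	__atomic_fetch_add(&l->read, 1, __ATOMIC_RELEASE);
	__atomic_fetch_add(&l->write, 1, __ATOMIC_RELEASE);
}

static inline void
rte_ticket_read_lock(rte_ticket_rwlock_t *l)
{
	uint16_t t = __atomic_fetch_add(&l->next, 1, __ATOMIC_RELAXED);

	while (__atomic_load_n(&l->read, __ATOMIC_ACQUIRE) != t)
		rte_pause();
	/* admit the next reader right away; a waiting writer's ticket still
	 * blocks every reader that arrived after it */
	__atomic_fetch_add(&l->read, 1, __ATOMIC_RELAXED);
}

static inline void
rte_ticket_read_unlock(rte_ticket_rwlock_t *l)
{
	__atomic_fetch_add(&l->write, 1, __ATOMIC_RELEASE);
}

Strictly speaking that is FIFO/fair rather than writer-preference, but it is
enough to remove the starvation above, since a waiting writer's ticket blocks
all later readers. The 16-bit tickets wrap, which is fine as long as fewer
than 64K lockers are queued at once.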

My recommendation would be:

 1. Fix documentation in rte_rwlock.h (and add release note) and put this in 20.02 and LTS.
 2. Add new rte_ticket_rwlock.h which provides the correct semantics.

Comments?

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v13 01/11] eal: uninline power intrinsics
  @ 2021-01-08 17:42  2%   ` Anatoly Burakov
  2021-01-12 15:54  0%     ` Ananyev, Konstantin
    1 sibling, 1 reply; 200+ results
From: Anatoly Burakov @ 2021-01-08 17:42 UTC (permalink / raw)
  To: dev
  Cc: Jan Viktorin, Ruifeng Wang, Jerin Jacob, David Christensen,
	Ray Kinsella, Neil Horman, Bruce Richardson, Konstantin Ananyev,
	thomas, gage.eads, timothy.mcdaniel, david.hunt, chris.macnamara

Currently, power intrinsics are inline functions. Make them part of the
ABI so that we can have various internal data associated with them
without exposing said data to the outside world.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 .../arm/include/rte_power_intrinsics.h        |   6 +-
 .../include/generic/rte_power_intrinsics.h    |   6 +-
 .../ppc/include/rte_power_intrinsics.h        |   6 +-
 lib/librte_eal/version.map                    |   5 +
 .../x86/include/rte_power_intrinsics.h        | 115 -----------------
 lib/librte_eal/x86/meson.build                |   1 +
 lib/librte_eal/x86/rte_power_intrinsics.c     | 120 ++++++++++++++++++
 7 files changed, 135 insertions(+), 124 deletions(-)
 create mode 100644 lib/librte_eal/x86/rte_power_intrinsics.c

diff --git a/lib/librte_eal/arm/include/rte_power_intrinsics.h b/lib/librte_eal/arm/include/rte_power_intrinsics.h
index a4a1bc1159..5e384d380e 100644
--- a/lib/librte_eal/arm/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/arm/include/rte_power_intrinsics.h
@@ -16,7 +16,7 @@ extern "C" {
 /**
  * This function is not supported on ARM.
  */
-static inline void
+void
 rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 		const uint64_t value_mask, const uint64_t tsc_timestamp,
 		const uint8_t data_sz)
@@ -31,7 +31,7 @@ rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 /**
  * This function is not supported on ARM.
  */
-static inline void
+void
 rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 		const uint64_t value_mask, const uint64_t tsc_timestamp,
 		const uint8_t data_sz, rte_spinlock_t *lck)
@@ -47,7 +47,7 @@ rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 /**
  * This function is not supported on ARM.
  */
-static inline void
+void
 rte_power_pause(const uint64_t tsc_timestamp)
 {
 	RTE_SET_USED(tsc_timestamp);
diff --git a/lib/librte_eal/include/generic/rte_power_intrinsics.h b/lib/librte_eal/include/generic/rte_power_intrinsics.h
index dd520d90fa..67977bd511 100644
--- a/lib/librte_eal/include/generic/rte_power_intrinsics.h
+++ b/lib/librte_eal/include/generic/rte_power_intrinsics.h
@@ -52,7 +52,7 @@
  *   to undefined result.
  */
 __rte_experimental
-static inline void rte_power_monitor(const volatile void *p,
+void rte_power_monitor(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz);
 
@@ -97,7 +97,7 @@ static inline void rte_power_monitor(const volatile void *p,
  *   wakes up.
  */
 __rte_experimental
-static inline void rte_power_monitor_sync(const volatile void *p,
+void rte_power_monitor_sync(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz,
 		rte_spinlock_t *lck);
@@ -118,6 +118,6 @@ static inline void rte_power_monitor_sync(const volatile void *p,
  *   architecture-dependent.
  */
 __rte_experimental
-static inline void rte_power_pause(const uint64_t tsc_timestamp);
+void rte_power_pause(const uint64_t tsc_timestamp);
 
 #endif /* _RTE_POWER_INTRINSIC_H_ */
diff --git a/lib/librte_eal/ppc/include/rte_power_intrinsics.h b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
index 4ed03d521f..4cb5560c02 100644
--- a/lib/librte_eal/ppc/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
@@ -16,7 +16,7 @@ extern "C" {
 /**
  * This function is not supported on PPC64.
  */
-static inline void
+void
 rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 		const uint64_t value_mask, const uint64_t tsc_timestamp,
 		const uint8_t data_sz)
@@ -31,7 +31,7 @@ rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 /**
  * This function is not supported on PPC64.
  */
-static inline void
+void
 rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 		const uint64_t value_mask, const uint64_t tsc_timestamp,
 		const uint8_t data_sz, rte_spinlock_t *lck)
@@ -47,7 +47,7 @@ rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 /**
  * This function is not supported on PPC64.
  */
-static inline void
+void
 rte_power_pause(const uint64_t tsc_timestamp)
 {
 	RTE_SET_USED(tsc_timestamp);
diff --git a/lib/librte_eal/version.map b/lib/librte_eal/version.map
index 354c068f31..31bf76ae81 100644
--- a/lib/librte_eal/version.map
+++ b/lib/librte_eal/version.map
@@ -403,6 +403,11 @@ EXPERIMENTAL {
 	rte_service_lcore_may_be_active;
 	rte_vect_get_max_simd_bitwidth;
 	rte_vect_set_max_simd_bitwidth;
+
+	# added in 21.02
+	rte_power_monitor;
+	rte_power_monitor_sync;
+	rte_power_pause;
 };
 
 INTERNAL {
diff --git a/lib/librte_eal/x86/include/rte_power_intrinsics.h b/lib/librte_eal/x86/include/rte_power_intrinsics.h
index c7d790c854..e4c2b87f73 100644
--- a/lib/librte_eal/x86/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/x86/include/rte_power_intrinsics.h
@@ -13,121 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-static inline uint64_t
-__rte_power_get_umwait_val(const volatile void *p, const uint8_t sz)
-{
-	switch (sz) {
-	case sizeof(uint8_t):
-		return *(const volatile uint8_t *)p;
-	case sizeof(uint16_t):
-		return *(const volatile uint16_t *)p;
-	case sizeof(uint32_t):
-		return *(const volatile uint32_t *)p;
-	case sizeof(uint64_t):
-		return *(const volatile uint64_t *)p;
-	default:
-		/* this is an intrinsic, so we can't have any error handling */
-		RTE_ASSERT(0);
-		return 0;
-	}
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	rte_spinlock_unlock(lck);
-
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-
-	rte_spinlock_lock(lck);
-}
-
-/**
- * This function uses TPAUSE instruction  and will enter C0.2 state. For more
- * information about usage of this instruction, please refer to Intel(R) 64 and
- * IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-
-	/* execute TPAUSE */
-	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
-		: /* ignore rflags */
-		: "D"(0), /* enter C0.2 */
-		  "a"(tsc_l), "d"(tsc_h));
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/x86/meson.build b/lib/librte_eal/x86/meson.build
index e78f29002e..dfd42dee0c 100644
--- a/lib/librte_eal/x86/meson.build
+++ b/lib/librte_eal/x86/meson.build
@@ -8,4 +8,5 @@ sources += files(
 	'rte_cycles.c',
 	'rte_hypervisor.c',
 	'rte_spinlock.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/x86/rte_power_intrinsics.c b/lib/librte_eal/x86/rte_power_intrinsics.c
new file mode 100644
index 0000000000..34c5fd9c3e
--- /dev/null
+++ b/lib/librte_eal/x86/rte_power_intrinsics.c
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+static inline uint64_t
+__get_umwait_val(const volatile void *p, const uint8_t sz)
+{
+	switch (sz) {
+	case sizeof(uint8_t):
+		return *(const volatile uint8_t *)p;
+	case sizeof(uint16_t):
+		return *(const volatile uint16_t *)p;
+	case sizeof(uint32_t):
+		return *(const volatile uint32_t *)p;
+	case sizeof(uint64_t):
+		return *(const volatile uint64_t *)p;
+	default:
+		/* this is an intrinsic, so we can't have any error handling */
+		RTE_ASSERT(0);
+		return 0;
+	}
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	rte_spinlock_unlock(lck);
+
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+
+	rte_spinlock_lock(lck);
+}
+
+/**
+ * This function uses TPAUSE instruction  and will enter C0.2 state. For more
+ * information about usage of this instruction, please refer to Intel(R) 64 and
+ * IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+
+	/* execute TPAUSE */
+	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			"a"(tsc_l), "d"(tsc_h));
+}
-- 
2.25.1

^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v12 00/11] Add PMD power management
  2020-12-17 16:12  3% ` [dpdk-dev] [PATCH v12 00/11] Add PMD power management David Marchand
@ 2021-01-08 16:42  0%   ` Burakov, Anatoly
  2021-01-11  8:44  0%     ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-01-08 16:42 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Thomas Monjalon, Ananyev, Konstantin, Gage Eads,
	Timothy McDaniel, David Hunt, Bruce Richardson, chris.macnamara,
	Ray Kinsella, Yigit, Ferruh

On 17-Dec-20 4:12 PM, David Marchand wrote:
> On Thu, Dec 17, 2020 at 3:06 PM Anatoly Burakov
> <anatoly.burakov@intel.com> wrote:
>>
>> This patchset proposes a simple API for Ethernet drivers to cause the
>> CPU to enter a power-optimized state while waiting for packets to
>> arrive. This is achieved through cooperation with the NIC driver that
>> will allow us to know address of wake up event, and wait for writes on
>> it.
>>
>> On IA, this is achieved through using UMONITOR/UMWAIT instructions. They
>> are used in their raw opcode form because there is no widespread
>> compiler support for them yet. Still, the API is made generic enough to
>> hopefully support other architectures, if they happen to implement
>> similar instructions.
>>
>> To achieve power savings, there is a very simple mechanism used: we're
>> counting empty polls, and if a certain threshold is reached, we get the
>> address of next RX ring descriptor from the NIC driver, arm the
>> monitoring hardware, and enter a power-optimized state. We will then
>> wake up when either a timeout happens, or a write happens (or generally
>> whenever CPU feels like waking up - this is platform-specific), and
>> proceed as normal. The empty poll counter is reset whenever we actually
>> get packets, so we only go to sleep when we know nothing is going on.
>> The mechanism is generic which can be used for any write back
>> descriptor.
>>
>> This patchset also introduces a few changes into existing power
>> management-related intrinsics, namely to provide a native way of waking
>> up a sleeping core without application being responsible for it, as well
>> as general robustness improvements. There's quite a bit of locking going
>> on, but these locks are per-thread and very little (if any) contention
>> is expected, so the performance impact shouldn't be that bad (and in any
>> case the locking happens when we're about to sleep anyway, not on a
>> hotpath).
>>
>> Why are we putting it into ethdev as opposed to leaving this up to the
>> application? Our customers specifically requested a way to do it wit
>> minimal changes to the application code. The current approach allows to
>> just flip a switch and automatically have power savings.
>>
>> - Only 1:1 core to queue mapping is supported, meaning that each lcore
>>    must at most handle RX on a single queue
>> - Support 3 type policies. Monitor/Pause/Frequency Scaling
>> - Power management is enabled per-queue
>> - The API doesn't extend to other device types
> 
> Fyi, ovsrobot Travis being KO, you probably missed that GHA CI caught this:
> https://github.com/ovsrobot/dpdk/runs/1571056574?check_suite_focus=true#step:13:16082
> 
> We will have to put an exception on driver only ABI.
> 
> 

Why does aarch64 build fail there? The functions in question are in the 
version map file, but the build complains that they aren't.

-- 
Thanks,
Anatoly

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-08 12:38  0%                   ` Kinsella, Ray
@ 2021-01-08 14:27  0%                     ` Ferruh Yigit
  2021-01-08 14:31  0%                       ` Kinsella, Ray
  2021-01-08 17:34  0%                       ` Kinsella, Ray
  0 siblings, 2 replies; 200+ results
From: Ferruh Yigit @ 2021-01-08 14:27 UTC (permalink / raw)
  To: Kinsella, Ray, Thomas Monjalon, Guo, Jia, Zhang, Qi Z
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli

On 1/8/2021 12:38 PM, Kinsella, Ray wrote:
> 
> 
>> -----Original Message-----
>> From: Thomas Monjalon <thomas@monjalon.net>
>> Sent: Friday 8 January 2021 10:24
>> To: Guo, Jia <jia.guo@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
>> Yigit, Ferruh <ferruh.yigit@intel.com>
>> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming
>> <qiming.yang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
>> dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
>> getelson@nvidia.com; Dodji Seketeli <dodji@redhat.com>; Kinsella, Ray
>> <ray.kinsella@intel.com>
>> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type
>> for ecpri
>>
>> 08/01/2021 10:22, Ferruh Yigit:
>>> On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
>>>> 07/01/2021 13:47, Zhang, Qi Z:
>>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>>> 07/01/2021 10:32, Guo, Jia:
>>>>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>>>>> Sorry, it is a nack.
>>>>>>>> BTW, it is probably breaking the ABI because of
>> RTE_TUNNEL_TYPE_MAX.
>>>>>
>>>>> Yes that may break the ABI but fortunately the checking-abi-
>> compatibility tool shows negative :) , thanks Ferruh' s guide.
>>>>> https://github.com/ferruhy/dpdk/actions/runs/468859673
>>>>
>>>> That's very strange. An enum value is changed.
>>>> Why it is not flagged by libabigail?
>>>
>>> As long as the enum values not sent to the application and kept
>> within
>>> the library, changing their values shouldn't be problem.
>>
>> But RTE_TUNNEL_TYPE_MAX is part of lib/librte_ethdev/rte_ethdev.h so it
>> is exposed to the application.
>> I think it is a case of ABI breakage.
>>
> 
> Really a lot depends on context, Thomas is right it is hard to predict how these _MAX values are used.
> 
> We have seen cases in the past where _MAX enumeration values have been used to size arrays the like - I don't immediately see that issue here. My understanding is that the only consumer of this enumeration is rte_eth_dev_udp_tunnel_port_add and rte_eth_dev_udp_tunnel_port_delete, right? On face value, impact looks negligible.
> 
> I will take a look at why libabigail doesn't complain.
> 

Applications can use the enum, including MAX, as they desire; we can't really
assume anything there.

In the previous case, the library was providing an enum value back to the
application, and the problem was that the application could use those values
blindly, so new, unexpected values might cause trouble.

For this case, even if the application creates a table of RTE_TUNNEL_TYPE_MAX
size, the library never sends a value of this enum back to the application to
cause any problem; at least abigail does not seem able to find any instance of it.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-08 14:07  0%                       ` Kinsella, Ray
@ 2021-01-08 14:10  0%                         ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-01-08 14:10 UTC (permalink / raw)
  To: Guo, Jia, Zhang, Qi Z, Yigit, Ferruh, Kinsella, Ray
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli

08/01/2021 15:07, Kinsella, Ray:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 08/01/2021 11:43, Ferruh Yigit:
> > > On 1/8/2021 10:23 AM, Thomas Monjalon wrote:
> > > > 08/01/2021 10:22, Ferruh Yigit:
> > > >> On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
> > > >>> 07/01/2021 13:47, Zhang, Qi Z:
> > > >>>> From: Thomas Monjalon <thomas@monjalon.net>
> > > >>>>> 07/01/2021 10:32, Guo, Jia:
> > > >>>>>> From: Thomas Monjalon <thomas@monjalon.net>
> > > >>>>>>> Sorry, it is a nack.
> > > >>>>>>> BTW, it is probably breaking the ABI because of
> > RTE_TUNNEL_TYPE_MAX.
> > > >>>>
> > > >>>> Yes that may break the ABI but fortunately the checking-abi-
> > compatibility tool shows negative :) , thanks Ferruh' s guide.
> > > >>>> https://github.com/ferruhy/dpdk/actions/runs/468859673
> > > >>>
> > > >>> That's very strange. An enum value is changed.
> > > >>> Why it is not flagged by libabigail?
> > > >>
> > > >> As long as the enum values not sent to the application and kept
> > > >> within the library, changing their values shouldn't be problem.
> > > >
> > > > But RTE_TUNNEL_TYPE_MAX is part of lib/librte_ethdev/rte_ethdev.h
> > so
> > > > it is exposed to the application.
> > > > I think it is a case of ABI breakage.
> > >
> > > Yes it is exposed to the application. But in runtime does it
> > exchanged
> > > between library and application is the issue I think.
> > > For this case it seems it is not, so not an ABI break.
> > 
> > If I create a table of size RTE_TUNNEL_TYPE_MAX with DPDK 20.11, I will
> > get an overflow when writing to the new ECPRI index.
> 
> I guess the question is - are you likely to do this?

As said below, no, I cannot think of such a case myself.

> > The question is: can I receive the ECPRI value dynamically from ethdev?
> > If yes, it is an ABI breakage. But I cannot think of such case now.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-08 10:43  3%                   ` Ferruh Yigit
@ 2021-01-08 14:06  3%                     ` Thomas Monjalon
  2021-01-08 14:07  0%                       ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-08 14:06 UTC (permalink / raw)
  To: Guo, Jia, Zhang, Qi Z, Ferruh Yigit
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli, ray.kinsella

08/01/2021 11:43, Ferruh Yigit:
> On 1/8/2021 10:23 AM, Thomas Monjalon wrote:
> > 08/01/2021 10:22, Ferruh Yigit:
> >> On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
> >>> 07/01/2021 13:47, Zhang, Qi Z:
> >>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>> 07/01/2021 10:32, Guo, Jia:
> >>>>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>>>> Sorry, it is a nack.
> >>>>>>> BTW, it is probably breaking the ABI because of RTE_TUNNEL_TYPE_MAX.
> >>>>
> >>>> Yes that may break the ABI but fortunately the checking-abi-compatibility tool shows negative :) , thanks Ferruh' s guide.
> >>>> https://github.com/ferruhy/dpdk/actions/runs/468859673
> >>>
> >>> That's very strange. An enum value is changed.
> >>> Why it is not flagged by libabigail?
> >>
> >> As long as the enum values not sent to the application and kept within the
> >> library, changing their values shouldn't be problem.
> > 
> > But RTE_TUNNEL_TYPE_MAX is part of lib/librte_ethdev/rte_ethdev.h
> > so it is exposed to the application.
> > I think it is a case of ABI breakage.
> 
> Yes it is exposed to the application. But in runtime does it exchanged between 
> library and application is the issue I think.
> For this case it seems it is not, so not an ABI break.

If I create a table of size RTE_TUNNEL_TYPE_MAX with DPDK 20.11,
I will get an overflow when writing to the new ECPRI index.
The question is: can I receive the ECPRI value dynamically from ethdev?
If yes, it is an ABI breakage. But I cannot think of such case now.
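
To make the overflow scenario concrete, here is a minimal sketch of
hypothetical application code (not taken from any real application), built
against the 20.11 headers:

#include <rte_ethdev.h>

/* Array sized with the 20.11 value of RTE_TUNNEL_TYPE_MAX. */
static uint64_t tunnel_stats[RTE_TUNNEL_TYPE_MAX];

static void
count_tunnel(enum rte_eth_tunnel_type type)
{
	/*
	 * If a newer ethdev ever handed back RTE_TUNNEL_TYPE_ECPRI (which
	 * reuses the old RTE_TUNNEL_TYPE_MAX value), this write would run
	 * past the array in the old binary. Today no ethdev API returns
	 * these values, which is why the check stays silent.
	 */
	tunnel_stats[type]++;
}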




^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-08 10:23  3%                 ` Thomas Monjalon
@ 2021-01-08 10:43  3%                   ` Ferruh Yigit
  2021-01-08 14:06  3%                     ` Thomas Monjalon
  2021-01-08 12:38  0%                   ` Kinsella, Ray
  1 sibling, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-01-08 10:43 UTC (permalink / raw)
  To: Thomas Monjalon, Guo, Jia, Zhang, Qi Z
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli, ray.kinsella

On 1/8/2021 10:23 AM, Thomas Monjalon wrote:
> 08/01/2021 10:22, Ferruh Yigit:
>> On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
>>> 07/01/2021 13:47, Zhang, Qi Z:
>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>> 07/01/2021 10:32, Guo, Jia:
>>>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>>>> Sorry, it is a nack.
>>>>>>> BTW, it is probably breaking the ABI because of RTE_TUNNEL_TYPE_MAX.
>>>>
>>>> Yes that may break the ABI but fortunately the checking-abi-compatibility tool shows negative :) , thanks Ferruh' s guide.
>>>> https://github.com/ferruhy/dpdk/actions/runs/468859673
>>>
>>> That's very strange. An enum value is changed.
>>> Why it is not flagged by libabigail?
>>
>> As long as the enum values not sent to the application and kept within the
>> library, changing their values shouldn't be problem.
> 
> But RTE_TUNNEL_TYPE_MAX is part of lib/librte_ethdev/rte_ethdev.h
> so it is exposed to the application.
> I think it is a case of ABI breakage.
> 

Yes, it is exposed to the application. But whether it is exchanged between
library and application at runtime is the issue, I think.
For this case it seems it is not, so it is not an ABI break.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-08  9:22  0%               ` Ferruh Yigit
@ 2021-01-08 10:23  3%                 ` Thomas Monjalon
  2021-01-08 10:43  3%                   ` Ferruh Yigit
  2021-01-08 12:38  0%                   ` Kinsella, Ray
  0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2021-01-08 10:23 UTC (permalink / raw)
  To: Guo, Jia, Zhang, Qi Z, Ferruh Yigit
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli, ray.kinsella

08/01/2021 10:22, Ferruh Yigit:
> On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
> > 07/01/2021 13:47, Zhang, Qi Z:
> >> From: Thomas Monjalon <thomas@monjalon.net>
> >>> 07/01/2021 10:32, Guo, Jia:
> >>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>> Sorry, it is a nack.
> >>>>> BTW, it is probably breaking the ABI because of RTE_TUNNEL_TYPE_MAX.
> >>
> >> Yes that may break the ABI but fortunately the checking-abi-compatibility tool shows negative :) , thanks Ferruh' s guide.
> >> https://github.com/ferruhy/dpdk/actions/runs/468859673
> > 
> > That's very strange. An enum value is changed.
> > Why it is not flagged by libabigail?
> 
> As long as the enum values not sent to the application and kept within the 
> library, changing their values shouldn't be problem.

But RTE_TUNNEL_TYPE_MAX is part of lib/librte_ethdev/rte_ethdev.h
so it is exposed to the application.
I think it is a case of ABI breakage.



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-07 13:33  0%             ` Thomas Monjalon
  2021-01-07 13:45  0%               ` David Marchand
  2021-01-07 15:24  0%               ` Zhang, Qi Z
@ 2021-01-08  9:22  0%               ` Ferruh Yigit
  2021-01-08 10:23  3%                 ` Thomas Monjalon
  2 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-01-08  9:22 UTC (permalink / raw)
  To: Thomas Monjalon, Guo, Jia, Zhang, Qi Z
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, andrew.rybchenko,
	orika, getelson, Dodji Seketeli

On 1/7/2021 1:33 PM, Thomas Monjalon wrote:
> 07/01/2021 13:47, Zhang, Qi Z:
>>
>>> -----Original Message-----
>>> From: Thomas Monjalon <thomas@monjalon.net>
>>> Sent: Thursday, January 7, 2021 6:12 PM
>>> To: Guo, Jia <jia.guo@intel.com>
>>> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
>>> Yang, Qiming <qiming.yang@intel.com>; Wang, Haiyue
>>> <haiyue.wang@intel.com>; dev@dpdk.org; Yigit, Ferruh
>>> <ferruh.yigit@intel.com>; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
>>> getelson@nvidia.com
>>> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
>>>
>>> 07/01/2021 10:32, Guo, Jia:
>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>> 24/12/2020 07:59, Jeff Guo:
>>>>>> Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel
>>>>> type.
>>>>>>
>>>>>> Signed-off-by: Jeff Guo <jia.guo@intel.com>
>>>>>> Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
>>>>> [...]
>>>>>> --- a/lib/librte_ethdev/rte_ethdev.h
>>>>>> +++ b/lib/librte_ethdev/rte_ethdev.h
>>>>>> @@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
>>>>>>   	RTE_TUNNEL_TYPE_IP_IN_GRE,
>>>>>>   	RTE_L2_TUNNEL_TYPE_E_TAG,
>>>>>>   	RTE_TUNNEL_TYPE_VXLAN_GPE,
>>>>>> +	RTE_TUNNEL_TYPE_ECPRI,
>>>>>>   	RTE_TUNNEL_TYPE_MAX,
>>>>>>   };
>>>>>
>>>>> We tried to remove all these legacy API in DPDK 20.11.
>>>>> Andrew decided to not remove this one because it is not yet
>>>>> completely replaced by rte_flow in all drivers.
>>>>> However, I am against continuing to update this API.
>>>>> The opposite work should be done: migrate to rte_flow.
>>>>
>>>> Agree but seems that the legacy api and driver legacy implementation
>>>> still keep in this release, and there is no a general way to replace
>>>> the legacy by rte_flow right now.
>>>
>>> I think rte_flow is a complete replacement with more features.
>>
>> Thomas, I may not agree with this.
>>
>> Actually the "enum rte_eth_tunnel_type" is used by rte_eth_dev_udp_tunnel_port_add
>> A packet with specific dst udp port will be recognized as a specific tunnel packet type (e.g. vxlan, vxlan-gpe, ecpri...)
>> In Intel NIC, the API actually changes the configuration of the packet parser in HW but not add a filter rule and I guess all other devices may enable it in a similar way.
>> so naturally it should be a device (port) level configuration but not a rte_flow rule for match, encap, decap...
> 
> I don't understand how it helps to identify an UDP port
> if there is no rule for this tunnel.
> What is the usage?
> 
>> So I think it's not a good idea to replace
>> the rte_eth_dev_udp_tunnel_port_add with rte_flow config
>> and also there is no existing rte_flow_action
>> can cover this requirement unless we introduce some new one.
>>
>>> You can match, encap, decap.
>>> There is even a new API to get tunnel infos after decap.
>>> What is missing?
> 
> I still don't see which use case is missing.
> 
> 
>>>>> Sorry, it is a nack.
>>>>> BTW, it is probably breaking the ABI because of RTE_TUNNEL_TYPE_MAX.
>>
>> Yes that may break the ABI but fortunately the checking-abi-compatibility tool shows negative :) , thanks Ferruh' s guide.
>> https://github.com/ferruhy/dpdk/actions/runs/468859673
> 
> That's very strange. An enum value is changed.
> Why it is not flagged by libabigail?
> 

As long as the enum values are not sent to the application and are kept within
the library, changing their values shouldn't be a problem.

> 
>>>> Oh, the ABI break should be a problem.
>>>>
>>>>> PS: please Cc ethdev maintainers for such patch, thanks.
>>>>> tip: use --cc-cmd devtools/get-maintainer.sh
>>>>
>>>> Thanks for your helpful tip.
> 
> 
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-07 13:33  0%             ` Thomas Monjalon
  2021-01-07 13:45  0%               ` David Marchand
@ 2021-01-07 15:24  0%               ` Zhang, Qi Z
  2021-01-08  9:22  0%               ` Ferruh Yigit
  2 siblings, 0 replies; 200+ results
From: Zhang, Qi Z @ 2021-01-07 15:24 UTC (permalink / raw)
  To: Thomas Monjalon, Guo, Jia
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, Yigit, Ferruh,
	andrew.rybchenko, orika, getelson, Dodji Seketeli



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, January 7, 2021 9:34 PM
> To: Guo, Jia <jia.guo@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>;
> andrew.rybchenko@oktetlabs.ru; orika@nvidia.com; getelson@nvidia.com;
> Dodji Seketeli <dodji@redhat.com>
> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
> 
> 07/01/2021 13:47, Zhang, Qi Z:
> >
> > > -----Original Message-----
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > Sent: Thursday, January 7, 2021 6:12 PM
> > > To: Guo, Jia <jia.guo@intel.com>
> > > Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>; Wang,
> > > Haiyue <haiyue.wang@intel.com>; dev@dpdk.org; Yigit, Ferruh
> > > <ferruh.yigit@intel.com>; andrew.rybchenko@oktetlabs.ru;
> > > orika@nvidia.com; getelson@nvidia.com
> > > Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel
> > > type for ecpri
> > >
> > > 07/01/2021 10:32, Guo, Jia:
> > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > 24/12/2020 07:59, Jeff Guo:
> > > > > > Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev
> > > > > > tunnel
> > > > > type.
> > > > > >
> > > > > > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > > > > > Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > [...]
> > > > > > --- a/lib/librte_ethdev/rte_ethdev.h
> > > > > > +++ b/lib/librte_ethdev/rte_ethdev.h
> > > > > > @@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
> > > > > >  	RTE_TUNNEL_TYPE_IP_IN_GRE,
> > > > > >  	RTE_L2_TUNNEL_TYPE_E_TAG,
> > > > > >  	RTE_TUNNEL_TYPE_VXLAN_GPE,
> > > > > > +	RTE_TUNNEL_TYPE_ECPRI,
> > > > > >  	RTE_TUNNEL_TYPE_MAX,
> > > > > >  };
> > > > >
> > > > > We tried to remove all these legacy API in DPDK 20.11.
> > > > > Andrew decided to not remove this one because it is not yet
> > > > > completely replaced by rte_flow in all drivers.
> > > > > However, I am against continuing to update this API.
> > > > > The opposite work should be done: migrate to rte_flow.
> > > >
> > > > Agree but seems that the legacy api and driver legacy
> > > > implementation still keep in this release, and there is no a
> > > > general way to replace the legacy by rte_flow right now.
> > >
> > > I think rte_flow is a complete replacement with more features.
> >
> > Thomas, I may not agree with this.
> >
> > Actually the "enum rte_eth_tunnel_type" is used by
> > rte_eth_dev_udp_tunnel_port_add A packet with specific dst udp port
> > will be recognized as a specific tunnel packet type (e.g. vxlan, vxlan-gpe,
> ecpri...) In Intel NIC, the API actually changes the configuration of the packet
> parser in HW but not add a filter rule and I guess all other devices may enable it
> in a similar way.
> > so naturally it should be a device (port) level configuration but not a rte_flow
> rule for match, encap, decap...
> 
> I don't understand how it helps to identify an UDP port if there is no rule for
> this tunnel.
> What is the usage?

Yes, in general it is a rule: it matches a UDP packet's dst port, and the action is "now the packet is identified as a VXLAN packet"; after that, all other rte_flow rules that match on a VXLAN pattern will take effect. But somehow I think they are not rules in the same domain; just as we have a dedicated API for MAC/VLAN filters, we'd better have a dedicated API for this as well. (The RFC for VXLAN explains why we need this: https://tools.ietf.org/html/rfc7348).

"Destination Port: IANA has assigned the value 4789 for the
VXLAN UDP port, and this value SHOULD be used by default as the
destination UDP port.  Some early implementations of VXLAN have
used other values for the destination port.  To enable
interoperability with these implementations, the destination
port SHOULD be configurable."
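
For illustration, a minimal sketch of this device-level configuration (the
port_id value and the non-default UDP port are arbitrary, not from any patch):

#include <rte_ethdev.h>

static int
register_vxlan_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = 4790,	/* non-default, per RFC 7348 */
		.prot_type = RTE_TUNNEL_TYPE_VXLAN,
	};

	/*
	 * Teach the HW parser that UDP dst port 4790 carries VXLAN; after
	 * this, rte_flow rules with a VXLAN pattern item match such packets.
	 */
	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}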

Thanks
Qi

> 
> > So I think it's not a good idea to replace the
> > rte_eth_dev_udp_tunnel_port_add with rte_flow config and also there is
> > no existing rte_flow_action can cover this requirement unless we
> > introduce some new one.
> >
> > > You can match, encap, decap.
> > > There is even a new API to get tunnel infos after decap.
> > > What is missing?
> 
> I still don't see which use case is missing.
> 
> 
> > > > > Sorry, it is a nack.
> > > > > BTW, it is probably breaking the ABI because of
> RTE_TUNNEL_TYPE_MAX.
> >
> > Yes that may break the ABI but fortunately the checking-abi-compatibility tool
> shows negative :) , thanks Ferruh' s guide.
> > https://github.com/ferruhy/dpdk/actions/runs/468859673
> 
> That's very strange. An enum value is changed.
> Why it is not flagged by libabigail?
> 
> 
> > > > Oh, the ABI break should be a problem.
> > > >
> > > > > PS: please Cc ethdev maintainers for such patch, thanks.
> > > > > tip: use --cc-cmd devtools/get-maintainer.sh
> > > >
> > > > Thanks for your helpful tip.
> 
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-07 13:45  0%               ` David Marchand
@ 2021-01-07 14:27  3%                 ` Dodji Seketeli
  0 siblings, 0 replies; 200+ results
From: Dodji Seketeli @ 2021-01-07 14:27 UTC (permalink / raw)
  To: David Marchand
  Cc: Thomas Monjalon, Guo, Jia, Zhang, Qi Z, Wu, Jingjing, Yang,
	Qiming, Wang, Haiyue, dev, Yigit, Ferruh, andrew.rybchenko,
	orika, getelson

David Marchand <david.marchand@redhat.com> writes:

> On Thu, Jan 7, 2021 at 2:33 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>> > Yes that may break the ABI but fortunately the checking-abi-compatibility tool shows negative :) , thanks Ferruh' s guide.
>> > https://github.com/ferruhy/dpdk/actions/runs/468859673
>>
>> That's very strange. An enum value is changed.
>> Why it is not flagged by libabigail?
>
> I suspect this is because the enum is not referenced in any object...
> all I see is an integer with a comment that it should be filled with
> values from the enum.

I am not sure about the full context but David is right in theory.

If the enum is not reachable from a publicly exported interface
(function or global variable) then it won't be considered as being part
of the ABI and changes to that enum won't be reported.

I am not sure if that is what is happening in this particular case,
though.
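
As a minimal illustration of the reachability rule (hypothetical library
code, not the ethdev case itself):

/* public header */
enum color { RED, BLUE, COLOR_MAX };

/* Case 1: no exported function or variable uses the enum type, so the enum
 * is unreachable and inserting GREEN before COLOR_MAX goes unreported. */
int get_level(int fd);

/* Case 2: the enum is part of an exported signature, so it is reachable and
 * an inserted enumerator would typically be flagged by abidiff. */
int get_color(enum color *c);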

Cheers,

-- 
		Dodji


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-07 13:33  0%             ` Thomas Monjalon
@ 2021-01-07 13:45  0%               ` David Marchand
  2021-01-07 14:27  3%                 ` Dodji Seketeli
  2021-01-07 15:24  0%               ` Zhang, Qi Z
  2021-01-08  9:22  0%               ` Ferruh Yigit
  2 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-07 13:45 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Guo, Jia, Zhang, Qi Z, Wu, Jingjing, Yang, Qiming, Wang, Haiyue,
	dev, Yigit, Ferruh, andrew.rybchenko, orika, getelson,
	Dodji Seketeli

On Thu, Jan 7, 2021 at 2:33 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > Yes that may break the ABI but fortunately the checking-abi-compatibility tool shows negative :) , thanks Ferruh' s guide.
> > https://github.com/ferruhy/dpdk/actions/runs/468859673
>
> That's very strange. An enum value is changed.
> Why it is not flagged by libabigail?

I suspect this is because the enum is not referenced in any object...
all I see is an integer with a comment that it should be filled with
values from the enum.

-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-07 12:47  4%           ` Zhang, Qi Z
@ 2021-01-07 13:33  0%             ` Thomas Monjalon
  2021-01-07 13:45  0%               ` David Marchand
                                 ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: Thomas Monjalon @ 2021-01-07 13:33 UTC (permalink / raw)
  To: Guo, Jia, Zhang, Qi Z
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, Yigit, Ferruh,
	andrew.rybchenko, orika, getelson, Dodji Seketeli

07/01/2021 13:47, Zhang, Qi Z:
> 
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Thursday, January 7, 2021 6:12 PM
> > To: Guo, Jia <jia.guo@intel.com>
> > Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> > Yang, Qiming <qiming.yang@intel.com>; Wang, Haiyue
> > <haiyue.wang@intel.com>; dev@dpdk.org; Yigit, Ferruh
> > <ferruh.yigit@intel.com>; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
> > getelson@nvidia.com
> > Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
> > 
> > 07/01/2021 10:32, Guo, Jia:
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > 24/12/2020 07:59, Jeff Guo:
> > > > > Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel
> > > > type.
> > > > >
> > > > > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > > > > Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > [...]
> > > > > --- a/lib/librte_ethdev/rte_ethdev.h
> > > > > +++ b/lib/librte_ethdev/rte_ethdev.h
> > > > > @@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
> > > > >  	RTE_TUNNEL_TYPE_IP_IN_GRE,
> > > > >  	RTE_L2_TUNNEL_TYPE_E_TAG,
> > > > >  	RTE_TUNNEL_TYPE_VXLAN_GPE,
> > > > > +	RTE_TUNNEL_TYPE_ECPRI,
> > > > >  	RTE_TUNNEL_TYPE_MAX,
> > > > >  };
> > > >
> > > > We tried to remove all these legacy API in DPDK 20.11.
> > > > Andrew decided to not remove this one because it is not yet
> > > > completely replaced by rte_flow in all drivers.
> > > > However, I am against continuing to update this API.
> > > > The opposite work should be done: migrate to rte_flow.
> > >
> > > Agree but seems that the legacy api and driver legacy implementation
> > > still keep in this release, and there is no a general way to replace
> > > the legacy by rte_flow right now.
> > 
> > I think rte_flow is a complete replacement with more features.
> 
> Thomas, I may not agree with this.
> 
> Actually the "enum rte_eth_tunnel_type" is used by rte_eth_dev_udp_tunnel_port_add 
> A packet with specific dst udp port will be recognized as a specific tunnel packet type (e.g. vxlan, vxlan-gpe, ecpri...)
> In Intel NIC, the API actually changes the configuration of the packet parser in HW but not add a filter rule and I guess all other devices may enable it in a similar way.
> so naturally it should be a device (port) level configuration but not a rte_flow rule for match, encap, decap...

I don't understand how it helps to identify an UDP port
if there is no rule for this tunnel.
What is the usage?

> So I think it's not a good idea to replace
> the rte_eth_dev_udp_tunnel_port_add with rte_flow config
> and also there is no existing rte_flow_action
> can cover this requirement unless we introduce some new one.
> 
> > You can match, encap, decap.
> > There is even a new API to get tunnel infos after decap.
> > What is missing?

I still don't see which use case is missing.


> > > > Sorry, it is a nack.
> > > > BTW, it is probably breaking the ABI because of RTE_TUNNEL_TYPE_MAX.
> 
> Yes that may break the ABI but fortunately the checking-abi-compatibility tool shows negative :) , thanks Ferruh' s guide.
> https://github.com/ferruhy/dpdk/actions/runs/468859673

That's very strange. An enum value is changed.
Why it is not flagged by libabigail?


> > > Oh, the ABI break should be a problem.
> > >
> > > > PS: please Cc ethdev maintainers for such patch, thanks.
> > > > tip: use --cc-cmd devtools/get-maintainer.sh
> > >
> > > Thanks for your helpful tip.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-07 10:11  0%         ` Thomas Monjalon
@ 2021-01-07 12:47  4%           ` Zhang, Qi Z
  2021-01-07 13:33  0%             ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Zhang, Qi Z @ 2021-01-07 12:47 UTC (permalink / raw)
  To: Thomas Monjalon, Guo, Jia
  Cc: Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev, Yigit, Ferruh,
	andrew.rybchenko, orika, getelson



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, January 7, 2021 6:12 PM
> To: Guo, Jia <jia.guo@intel.com>
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Yang, Qiming <qiming.yang@intel.com>; Wang, Haiyue
> <haiyue.wang@intel.com>; dev@dpdk.org; Yigit, Ferruh
> <ferruh.yigit@intel.com>; andrew.rybchenko@oktetlabs.ru; orika@nvidia.com;
> getelson@nvidia.com
> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
> 
> 07/01/2021 10:32, Guo, Jia:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 24/12/2020 07:59, Jeff Guo:
> > > > Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel
> > > type.
> > > >
> > > > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > > > Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
> > > [...]
> > > > --- a/lib/librte_ethdev/rte_ethdev.h
> > > > +++ b/lib/librte_ethdev/rte_ethdev.h
> > > > @@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
> > > >  	RTE_TUNNEL_TYPE_IP_IN_GRE,
> > > >  	RTE_L2_TUNNEL_TYPE_E_TAG,
> > > >  	RTE_TUNNEL_TYPE_VXLAN_GPE,
> > > > +	RTE_TUNNEL_TYPE_ECPRI,
> > > >  	RTE_TUNNEL_TYPE_MAX,
> > > >  };
> > >
> > > We tried to remove all these legacy API in DPDK 20.11.
> > > Andrew decided to not remove this one because it is not yet
> > > completely replaced by rte_flow in all drivers.
> > > However, I am against continuing to update this API.
> > > The opposite work should be done: migrate to rte_flow.
> >
> > Agree but seems that the legacy api and driver legacy implementation
> > still keep in this release, and there is no a general way to replace
> > the legacy by rte_flow right now.
> 
> I think rte_flow is a complete replacement with more features.

Thomas, I may not agree with this.

Actually the "enum rte_eth_tunnel_type" is used by rte_eth_dev_udp_tunnel_port_add 
A packet with specific dst udp port will be recognized as a specific tunnel packet type (e.g. vxlan, vxlan-gpe, ecpri...)
In Intel NIC, the API actually changes the configuration of the packet parser in HW but not add a filter rule and I guess all other devices may enable it in a similar way.
so naturally it should be a device (port) level configuration but not a rte_flow rule for match, encap, decap...
So I think it's not a good idea to replace the rte_eth_dev_udp_tunnel_port_add with rte_flow config
and also there is no existing rte_flow_action can cover this requirement unless we introduce some new one.

> You can match, encap, decap.
> There is even a new API to get tunnel infos after decap.
> What is missing?
> 
> 
> > > Sorry, it is a nack.
> > > BTW, it is probably breaking the ABI because of RTE_TUNNEL_TYPE_MAX.

Yes, that may break the ABI, but fortunately the checking-abi-compatibility tool reports no breakage :), thanks to Ferruh's guide.
https://github.com/ferruhy/dpdk/actions/runs/468859673

Thanks
Qi

> >
> > Oh, the ABI break should be a problem.
> >
> > > PS: please Cc ethdev maintainers for such patch, thanks.
> > > tip: use --cc-cmd devtools/get-maintainer.sh
> >
> > Thanks for your helpful tip.
> 
> 


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-07  9:32  3%       ` Guo, Jia
@ 2021-01-07 10:11  0%         ` Thomas Monjalon
  2021-01-07 12:47  4%           ` Zhang, Qi Z
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-07 10:11 UTC (permalink / raw)
  To: Guo, Jia
  Cc: Zhang, Qi Z, Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev,
	Yigit, Ferruh, andrew.rybchenko, orika, getelson

07/01/2021 10:32, Guo, Jia:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 24/12/2020 07:59, Jeff Guo:
> > > Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel
> > type.
> > >
> > > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > > Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
> > [...]
> > > --- a/lib/librte_ethdev/rte_ethdev.h
> > > +++ b/lib/librte_ethdev/rte_ethdev.h
> > > @@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
> > >  	RTE_TUNNEL_TYPE_IP_IN_GRE,
> > >  	RTE_L2_TUNNEL_TYPE_E_TAG,
> > >  	RTE_TUNNEL_TYPE_VXLAN_GPE,
> > > +	RTE_TUNNEL_TYPE_ECPRI,
> > >  	RTE_TUNNEL_TYPE_MAX,
> > >  };
> > 
> > We tried to remove all these legacy API in DPDK 20.11.
> > Andrew decided to not remove this one because it is not yet completely
> > replaced by rte_flow in all drivers.
> > However, I am against continuing to update this API.
> > The opposite work should be done: migrate to rte_flow.
> 
> Agree but seems that the legacy api and driver legacy implementation
> still keep in this release, and there is no a general way to replace
> the legacy by rte_flow right now.

I think rte_flow is a complete replacement with more features.
You can match, encap, decap.
There is even a new API to get tunnel infos after decap.
What is missing?
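
For reference, a rough sketch of matching and decapsulating VXLAN with
rte_flow (the queue index is arbitrary and error handling is omitted; this is
only an illustration, not code from any patch):

#include <rte_flow.h>

static struct rte_flow *
vxlan_decap_rule(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Match VXLAN traffic, strip the outer headers, steer it to queue 0. */
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}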


> > Sorry, it is a nack.
> > BTW, it is probably breaking the ABI because of RTE_TUNNEL_TYPE_MAX.
> 
> Oh, the ABI break should be a problem.
> 
> > PS: please Cc ethdev maintainers for such patch, thanks.
> > tip: use --cc-cmd devtools/get-maintainer.sh
> 
> Thanks for your helpful tip.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  2021-01-06 22:12  3%     ` Thomas Monjalon
@ 2021-01-07  9:32  3%       ` Guo, Jia
  2021-01-07 10:11  0%         ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Guo, Jia @ 2021-01-07  9:32 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Zhang, Qi Z, Wu, Jingjing, Yang, Qiming, Wang, Haiyue, dev,
	Yigit, Ferruh, andrew.rybchenko


> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, January 7, 2021 6:12 AM
> To: Guo, Jia <jia.guo@intel.com>
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>; Wang,
> Haiyue <haiyue.wang@intel.com>; dev@dpdk.org; Yigit, Ferruh
> <ferruh.yigit@intel.com>; andrew.rybchenko@oktetlabs.ru
> Subject: Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for
> ecpri
> 
> 24/12/2020 07:59, Jeff Guo:
> > Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel
> type.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
> [...]
> > --- a/lib/librte_ethdev/rte_ethdev.h
> > +++ b/lib/librte_ethdev/rte_ethdev.h
> > @@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
> >  	RTE_TUNNEL_TYPE_IP_IN_GRE,
> >  	RTE_L2_TUNNEL_TYPE_E_TAG,
> >  	RTE_TUNNEL_TYPE_VXLAN_GPE,
> > +	RTE_TUNNEL_TYPE_ECPRI,
> >  	RTE_TUNNEL_TYPE_MAX,
> >  };
> 
> We tried to remove all these legacy API in DPDK 20.11.
> Andrew decided to not remove this one because it is not yet completely
> replaced by rte_flow in all drivers.
> However, I am against continuing to update this API.
> The opposite work should be done: migrate to rte_flow.
> 

Agree, but it seems the legacy API and the drivers' legacy implementations are still kept in this release, and there is no general way to replace the legacy API with rte_flow right now.

> Sorry, it is a nack.
> BTW, it is probably breaking the ABI because of RTE_TUNNEL_TYPE_MAX.
> 

Oh, the ABI break should be a problem.

> PS: please Cc ethdev maintainers for such patch, thanks.
> tip: use --cc-cmd devtools/get-maintainer.sh
> 

Thanks for your helpful tip.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-dev v2 1/2] ethdev: add new tunnel type for ecpri
  @ 2021-01-06 22:12  3%     ` Thomas Monjalon
  2021-01-07  9:32  3%       ` Guo, Jia
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-01-06 22:12 UTC (permalink / raw)
  To: Jeff Guo
  Cc: qi.z.zhang, jingjing.wu, qiming.yang, haiyue.wang, dev,
	ferruh.yigit, andrew.rybchenko

24/12/2020 07:59, Jeff Guo:
> Add type of RTE_TUNNEL_TYPE_ECPRI into the enum of ethdev tunnel type.
> 
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
[...]
> --- a/lib/librte_ethdev/rte_ethdev.h
> +++ b/lib/librte_ethdev/rte_ethdev.h
> @@ -1219,6 +1219,7 @@ enum rte_eth_tunnel_type {
>  	RTE_TUNNEL_TYPE_IP_IN_GRE,
>  	RTE_L2_TUNNEL_TYPE_E_TAG,
>  	RTE_TUNNEL_TYPE_VXLAN_GPE,
> +	RTE_TUNNEL_TYPE_ECPRI,
>  	RTE_TUNNEL_TYPE_MAX,
>  };

We tried to remove all these legacy API in DPDK 20.11.
Andrew decided to not remove this one because it is
not yet completely replaced by rte_flow in all drivers.
However, I am against continuing to update this API.
The opposite work should be done: migrate to rte_flow.

Sorry, it is a nack.
BTW, it is probably breaking the ABI because of RTE_TUNNEL_TYPE_MAX.

PS: please Cc ethdev maintainers for such patch, thanks.
tip: use --cc-cmd devtools/get-maintainer.sh



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH 08/40] net/virtio: force IOVA as VA mode for Virtio-user
  2021-01-06  9:11  3%     ` Thomas Monjalon
  2021-01-06  9:22  0%       ` Maxime Coquelin
@ 2021-01-06 16:37  0%       ` Kinsella, Ray
  1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-01-06 16:37 UTC (permalink / raw)
  To: Thomas Monjalon, Maxime Coquelin, David Marchand
  Cc: dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata



On 06/01/2021 09:11, Thomas Monjalon wrote:
> 06/01/2021 10:06, David Marchand:
>> On Sun, Dec 20, 2020 at 10:14 PM Maxime Coquelin
>> <maxime.coquelin@redhat.com> wrote:
>>> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
>>> index 1f1f63a1a5..f4775ff141 100644
>>> --- a/drivers/net/virtio/virtio_user_ethdev.c
>>> +++ b/drivers/net/virtio/virtio_user_ethdev.c
>>> @@ -663,6 +663,17 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
>>>         char *mac_addr = NULL;
>>>         int ret = -1;
>>>
>>> +       /*
>>> +        * ToDo 1: Implement detection mechanism at vdev bus level as PCI, but
>>> +        * it implies API breakage.
>>
>> Extending rte_vdev_driver to implement this detection would be an ABI breakage.
>> This is a driver-only API (rte_vdev_driver is only used by the vdev
>> bus and drivers afaics).
>>
>> Doing this is allowed as per my understanding of the ABI policy which
>> guarantees ABI stability for applications.
>> We do not guarantee this stability for OOT drivers.
> 
> I agree.
> As a reminder, the A in ABI stands for Application.
> 

+1, as long as the binary interface remains the same, we are good.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 08/40] net/virtio: force IOVA as VA mode for Virtio-user
  2021-01-06  9:11  3%     ` Thomas Monjalon
@ 2021-01-06  9:22  0%       ` Maxime Coquelin
  2021-01-06 16:37  0%       ` Kinsella, Ray
  1 sibling, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-01-06  9:22 UTC (permalink / raw)
  To: Thomas Monjalon, David Marchand
  Cc: Ray Kinsella, dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata



On 1/6/21 10:11 AM, Thomas Monjalon wrote:
> 06/01/2021 10:06, David Marchand:
>> On Sun, Dec 20, 2020 at 10:14 PM Maxime Coquelin
>> <maxime.coquelin@redhat.com> wrote:
>>> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
>>> index 1f1f63a1a5..f4775ff141 100644
>>> --- a/drivers/net/virtio/virtio_user_ethdev.c
>>> +++ b/drivers/net/virtio/virtio_user_ethdev.c
>>> @@ -663,6 +663,17 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
>>>         char *mac_addr = NULL;
>>>         int ret = -1;
>>>
>>> +       /*
>>> +        * ToDo 1: Implement detection mechanism at vdev bus level as PCI, but
>>> +        * it implies API breakage.
>>
>> Extending rte_vdev_driver to implement this detection would be an ABI breakage.
>> This is a driver-only API (rte_vdev_driver is only used by the vdev
>> bus and drivers afaics).
>>
>> Doing this is allowed as per my understanding of the ABI policy which
>> guarantees ABI stability for applications.
>> We do not guarantee this stability for OOT drivers.
> 
> I agree.
> As a reminder, the A in ABI stands for Application.

Cool, so we're all good.

Thanks for the prompt reply!
Maxime


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 08/40] net/virtio: force IOVA as VA mode for Virtio-user
  2021-01-06  9:06  4%   ` David Marchand
  2021-01-06  9:11  3%     ` Thomas Monjalon
@ 2021-01-06  9:14  0%     ` Maxime Coquelin
  1 sibling, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-01-06  9:14 UTC (permalink / raw)
  To: David Marchand, Ray Kinsella, Thomas Monjalon
  Cc: dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata



On 1/6/21 10:06 AM, David Marchand wrote:
> On Sun, Dec 20, 2020 at 10:14 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
>> index 1f1f63a1a5..f4775ff141 100644
>> --- a/drivers/net/virtio/virtio_user_ethdev.c
>> +++ b/drivers/net/virtio/virtio_user_ethdev.c
>> @@ -663,6 +663,17 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
>>         char *mac_addr = NULL;
>>         int ret = -1;
>>
>> +       /*
>> +        * ToDo 1: Implement detection mechanism at vdev bus level as PCI, but
>> +        * it implies API breakage.
> 
> Extending rte_vdev_driver to implement this detection would be an ABI breakage.
> This is a driver-only API (rte_vdev_driver is only used by the vdev
> bus and drivers afaics).
> 
> Doing this is allowed as per my understanding of the ABI policy which
> guarantees ABI stability for applications.
> We do not guarantee this stability for OOT drivers.
> 

That would be good news, as it would avoid impacting the user by
requiring them to manually add --iova-mode=va in the EAL parameters.

I can change this in the v2 if this is confirmed. Ray, Thomas, is that
OK with you?

Thanks,
Maxime


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 08/40] net/virtio: force IOVA as VA mode for Virtio-user
  2021-01-06  9:06  4%   ` David Marchand
@ 2021-01-06  9:11  3%     ` Thomas Monjalon
  2021-01-06  9:22  0%       ` Maxime Coquelin
  2021-01-06 16:37  0%       ` Kinsella, Ray
  2021-01-06  9:14  0%     ` Maxime Coquelin
  1 sibling, 2 replies; 200+ results
From: Thomas Monjalon @ 2021-01-06  9:11 UTC (permalink / raw)
  To: Maxime Coquelin, David Marchand
  Cc: Ray Kinsella, dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata

06/01/2021 10:06, David Marchand:
> On Sun, Dec 20, 2020 at 10:14 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
> > diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> > index 1f1f63a1a5..f4775ff141 100644
> > --- a/drivers/net/virtio/virtio_user_ethdev.c
> > +++ b/drivers/net/virtio/virtio_user_ethdev.c
> > @@ -663,6 +663,17 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
> >         char *mac_addr = NULL;
> >         int ret = -1;
> >
> > +       /*
> > +        * ToDo 1: Implement detection mechanism at vdev bus level as PCI, but
> > +        * it implies API breakage.
> 
> Extending rte_vdev_driver to implement this detection would be an ABI breakage.
> This is a driver-only API (rte_vdev_driver is only used by the vdev
> bus and drivers afaics).
> 
> Doing this is allowed as per my understanding of the ABI policy which
> guarantees ABI stability for applications.
> We do not guarantee this stability for OOT drivers.

I agree.
As a reminder, the A in ABI stands for Application.




^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH 08/40] net/virtio: force IOVA as VA mode for Virtio-user
  @ 2021-01-06  9:06  4%   ` David Marchand
  2021-01-06  9:11  3%     ` Thomas Monjalon
  2021-01-06  9:14  0%     ` Maxime Coquelin
  0 siblings, 2 replies; 200+ results
From: David Marchand @ 2021-01-06  9:06 UTC (permalink / raw)
  To: Maxime Coquelin, Ray Kinsella, Thomas Monjalon
  Cc: dev, Xia, Chenbo, Olivier Matz, Adrian Moreno Zapata

On Sun, Dec 20, 2020 at 10:14 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> index 1f1f63a1a5..f4775ff141 100644
> --- a/drivers/net/virtio/virtio_user_ethdev.c
> +++ b/drivers/net/virtio/virtio_user_ethdev.c
> @@ -663,6 +663,17 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
>         char *mac_addr = NULL;
>         int ret = -1;
>
> +       /*
> +        * ToDo 1: Implement detection mechanism at vdev bus level as PCI, but
> +        * it implies API breakage.

Extending rte_vdev_driver to implement this detection would be an ABI breakage.
This is a driver-only API (rte_vdev_driver is only used by the vdev
bus and drivers afaics).

Doing this is allowed as per my understanding of the ABI policy which
guarantees ABI stability for applications.
We do not guarantee this stability for OOT drivers.

-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] ci: fix default ccache in GitHub Actions
  2021-01-05 12:16  5% [dpdk-dev] [PATCH] ci: fix default ccache in GitHub Actions David Marchand
@ 2021-01-05 14:09  0% ` Aaron Conole
  0 siblings, 0 replies; 200+ results
From: Aaron Conole @ 2021-01-05 14:09 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Michael Santana, Thomas Monjalon

David Marchand <david.marchand@redhat.com> writes:

> 'main' might not be the default branch name.
>
> Fixes: 87009585e293 ("ci: hook to GitHub Actions")
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> I found no other option but to call to the remote repository since github
> does not seem to expose a HEAD symbolic reference.

Ugh... I thought I had set it to 'main' during DPDK's transition, but
it seems I didn't (I guess it was just an oversight on my part - sorry).

> The other alternative would be to simply rename ovsrobot/dpdk default
> branch from 'master' to 'main'.

I will do that rename anyway - it should be consistent.

> Example: https://github.com/ovsrobot/dpdk/runs/1641274373?check_suite_focus=true#step:4:4
>
> ---
>  .github/workflows/build.yml | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
> index 0b72df0ebe..751eb82c16 100644
> --- a/.github/workflows/build.yml
> +++ b/.github/workflows/build.yml
> @@ -67,13 +67,15 @@ jobs:
>          echo 'libabigail-${{ matrix.config.os }}'
>          echo -n '::set-output name=abi::'
>          echo 'abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.LIBABIGAIL_VERSION }}-${{ env.REF_GIT_TAG }}'
> +        echo -n '::set-output name=default_branch::'
> +        git ls-remote --symref origin HEAD |awk '/^ref:/ {print $2}'
>      - name: Retrieve ccache cache
>        uses: actions/cache@v2
>        with:
>          path: ~/.ccache
>          key: ${{ steps.get_ref_keys.outputs.ccache }}-${{ github.ref }}
>          restore-keys: |
> -          ${{ steps.get_ref_keys.outputs.ccache }}-refs/heads/main
> +          ${{ steps.get_ref_keys.outputs.ccache }}-${{ steps.get_ref_keys.outputs.default_branch }}
>      - name: Retrieve libabigail cache
>        id: libabigail-cache
>        uses: actions/cache@v2


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] ci: fix default ccache in GitHub Actions
@ 2021-01-05 12:16  5% David Marchand
  2021-01-05 14:09  0% ` Aaron Conole
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-01-05 12:16 UTC (permalink / raw)
  To: dev, aconole; +Cc: Michael Santana, Thomas Monjalon

'main' might not be the default branch name.

Fixes: 87009585e293 ("ci: hook to GitHub Actions")

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
I found no other option but to call to the remote repository since github
does not seem to expose a HEAD symbolic reference.
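
For reference, "git ls-remote --symref origin HEAD" prints a first line like
"ref: refs/heads/main	HEAD", so the awk filter in the hunk below keeps only
the "refs/heads/main" part.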

The other alternative would be to simply rename ovsrobot/dpdk default
branch from 'master' to 'main'.
Example: https://github.com/ovsrobot/dpdk/runs/1641274373?check_suite_focus=true#step:4:4

---
 .github/workflows/build.yml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 0b72df0ebe..751eb82c16 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -67,13 +67,15 @@ jobs:
         echo 'libabigail-${{ matrix.config.os }}'
         echo -n '::set-output name=abi::'
         echo 'abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.LIBABIGAIL_VERSION }}-${{ env.REF_GIT_TAG }}'
+        echo -n '::set-output name=default_branch::'
+        git ls-remote --symref origin HEAD |awk '/^ref:/ {print $2}'
     - name: Retrieve ccache cache
       uses: actions/cache@v2
       with:
         path: ~/.ccache
         key: ${{ steps.get_ref_keys.outputs.ccache }}-${{ github.ref }}
         restore-keys: |
-          ${{ steps.get_ref_keys.outputs.ccache }}-refs/heads/main
+          ${{ steps.get_ref_keys.outputs.ccache }}-${{ steps.get_ref_keys.outputs.default_branch }}
     - name: Retrieve libabigail cache
       id: libabigail-cache
       uses: actions/cache@v2
-- 
2.23.0


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [RFC 3/7] devarg: change reprsentor ID to bitmap
  @ 2021-01-05  6:19  3%     ` Xueming(Steven) Li
  0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-01-05  6:19 UTC (permalink / raw)
  To: Andrew Rybchenko, Slava Ovsiienko, NBU-Contact-Thomas Monjalon,
	Ferruh Yigit, Olivier Matz, Matan Azrad
  Cc: dev, Asaf Penso

Hi Andrew,

>-----Original Message-----
>From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>Sent: Monday, December 28, 2020 9:37 PM
>To: Xueming(Steven) Li <xuemingl@nvidia.com>; Slava Ovsiienko
><viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
><thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Olivier Matz
><olivier.matz@6wind.com>; Matan Azrad <matan@nvidia.com>
>Cc: dev@dpdk.org; Asaf Penso <asafp@nvidia.com>
>Subject: Re: [RFC 3/7] devarg: change reprsentor ID to bitmap
>
>On 12/18/20 5:55 PM, Xueming Li wrote:
>> In eth representor comparer callback, ethdev was compared with devarg.
>
>comparer -> comparator?
>
>> Since ethdev representor port didn't contain controller(host) and
>> owner port information, callback only compared representor port and
>> returned representor port on other PF port.
>>
>> This patch changes representor port to bitmap encoding, expands and
>> updates representor port ID after parsing, when device representor ID
>> uses the same bitmap encoding, the eth representor comparer callback
>> returns correct ethdev.
>>
>> Representor port ID bitmap definition:
>>  Representor ID bitmap:
>>  xxxx xxxx xxxx xxxx
>>  |||| |LLL LLLL LLLL vf/sf id
>>  |||| L 1:sf, 0:vf
>>  ||LL pf id
>
>Just 2 bits for PF ID is definitely not future proof.

Yes, this is a valid concern; to keep ABI compatibility, we need to wait for the
next LTS to change it to u32 or u64.

>
>>  LL controller(host) id
>
>Same here.
>
>In general, I'm not sure that such approch with bitmap makes sense. I think
>we need a new API which returns information about available functions which
>could be represented and IDs there could be used as representor IDs.

Agreed, I will introduce rte_eth_representor_id_encode() and
rte_eth_representor_id_parse() in the next version.
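
A rough sketch of what such helpers could look like, reusing the 16-bit
layout from this RFC (names and final bit widths are still to be decided, so
this is purely illustrative):

#include <stdint.h>

static inline uint16_t
rte_eth_representor_id_encode(uint16_t controller, uint16_t pf, int sf,
			      uint16_t port)
{
	return ((controller & 3) << 14) | ((pf & 3) << 12) |
	       ((!!sf) << 11) | (port & 0x7ff);
}

static inline void
rte_eth_representor_id_parse(uint16_t id, uint16_t *controller, uint16_t *pf,
			     int *sf, uint16_t *port)
{
	*controller = (id >> 14) & 3;
	*pf = (id >> 12) & 3;
	*sf = (id >> 11) & 1;
	*port = id & 0x7ff;
}

/* e.g. encode(controller=1, pf=2, sf=1, port=5) == 0x6805 */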
>
>>
>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
>> ---
>>  0000-cover-letter.patch               | 44 +++++++++++++++++++++++++++
>
>I guess it should not be added to the changeset.
>
>>  lib/librte_ethdev/ethdev_private.c    | 42 ++++++++++++++++++++++++-
>>  lib/librte_ethdev/rte_ethdev_driver.h | 22 ++++++++++++++
>>  3 files changed, 107 insertions(+), 1 deletion(-)  create mode 100644
>> 0000-cover-letter.patch
>>
>> diff --git a/0000-cover-letter.patch b/0000-cover-letter.patch new
>> file mode 100644 index 0000000000..3f8ce2be72
>> --- /dev/null
>> +++ b/0000-cover-letter.patch
>> @@ -0,0 +1,44 @@
>> +From 4e1f8fc062fa6813e0b57f78ad72760601ca1d98 Mon Sep 17 00:00:00
>> +2001
>> +From: Xueming Li <xuemingl@nvidia.com>
>> +Date: Fri, 18 Dec 2020 22:31:53 +0800
>> +Subject: [RFC 0/7] *** SUBJECT HERE ***
>> +To: Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
>> +    Thomas Monjalon <thomas@monjalon.net>,
>> +    Ferruh Yigit <ferruh.yigit@intel.com>,
>> +    Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
>> +    Olivier Matz <olivier.matz@6wind.com>,
>> +    Matan Azrad <matan@nvidia.com>
>> +Cc: dev@dpdk.org,
>> +    xuemingl@nvidia.com,
>> +    Asaf Penso <asafp@nvidia.com>
>> +
>> +*** BLURB HERE ***
>> +
>> +Xueming Li (7):
>> +  ethdev: support sub function representor
>> +  ethdev: support multi-host representor
>> +  devarg: change reprsentor ID to bitmap
>> +  ethdev: capability for new representor syntax
>> +  kvargs: update parser for new representor syntax
>> +  common/mlx5: update representor name parsing
>> +  net/mlx5: support representor of sub function
>> +
>> + config/rte_config.h                        |   1 +
>> + drivers/common/mlx5/linux/mlx5_common_os.c |  32 ++--
>> + drivers/common/mlx5/linux/mlx5_nl.c        |   2 +
>> + drivers/common/mlx5/mlx5_common.h          |   2 +
>> + drivers/net/mlx5/linux/mlx5_ethdev_os.c    |   5 +
>> + drivers/net/mlx5/linux/mlx5_os.c           |  69 ++++++++-
>> + drivers/net/mlx5/mlx5_ethdev.c             |   2 +
>> + lib/librte_ethdev/ethdev_private.c         | 163 ++++++++++++++-------
>> + lib/librte_ethdev/ethdev_private.h         |   3 -
>> + lib/librte_ethdev/rte_class_eth.c          |   7 +-
>> + lib/librte_ethdev/rte_ethdev.c             |   5 +-
>> + lib/librte_ethdev/rte_ethdev.h             |   2 +
>> + lib/librte_ethdev/rte_ethdev_driver.h      |  35 +++++
>> + lib/librte_kvargs/rte_kvargs.c             |  82 +++++++----
>> + 14 files changed, 306 insertions(+), 104 deletions(-)
>> +
>> +--
>> +2.25.1
>> +
>> diff --git a/lib/librte_ethdev/ethdev_private.c
>> b/lib/librte_ethdev/ethdev_private.c
>> index 3e455acea9..a0fc187378 100644
>> --- a/lib/librte_ethdev/ethdev_private.c
>> +++ b/lib/librte_ethdev/ethdev_private.c
>> @@ -93,16 +93,20 @@ rte_eth_devargs_process_list(char *str, uint16_t
>> *list, uint16_t *len_list,  }
>>
>>  /*
>> - * representor format:
>> + * Parse representor ports, expand and update representor port ID.
>> + * Representor format:
>>   *   #: range or single number of VF representor - legacy
>>   *   [[c#]pf#]vf#: VF port representor/s
>>   *   [[c#]pf#]sf#: SF port representor/s
>> + *
>> + * See RTE_ETH_REPR() for representor ID format.
>>   */
>>  int
>>  rte_eth_devargs_parse_representor_ports(char *str, void *data)  {
>>  	struct rte_eth_devargs *eth_da = data;
>>  	int ret;
>> +	uint32_t c, p, f, i = 0;
>>
>>  	eth_da->type = RTE_ETH_REPRESENTOR_NONE;
>>  	if (str[0] == 'c') {
>> @@ -136,6 +140,42 @@ rte_eth_devargs_parse_representor_ports(char
>*str, void *data)
>>  	}
>>  	ret = rte_eth_devargs_process_list(str, eth_da->representor_ports,
>>  		&eth_da->nb_representor_ports, RTE_MAX_ETHPORTS);
>> +	if (ret < 0)
>> +		goto err;
>> +
>> +	/* Set default values, expand and update representor ID. */
>> +	if (!eth_da->nb_mh_controllers) {
>
>DPDK coding style requires to compare vs 0 expliticly.
>
>> +		eth_da->nb_mh_controllers = 1;
>> +		eth_da->mh_controllers[0] = 0;
>> +	}
>> +	if (!eth_da->nb_ports) {
>
>DPDK coding style requires to compare vs 0 expliticly.
>
>> +		eth_da->nb_ports = 1;
>> +		eth_da->ports[0] = 0;
>> +	}
>> +	if (!eth_da->nb_representor_ports) {
>
>DPDK coding style requires to compare vs 0 expliticly.
>
>> +		eth_da->nb_representor_ports = 1;
>> +		eth_da->representor_ports[0] = 0;
>> +	}
>> +	for (c = 0; c < eth_da->nb_mh_controllers; ++c) {
>> +		for (p = 0; p < eth_da->nb_ports; ++p) {
>> +			for (f = 0; f < eth_da->nb_representor_ports; ++f) {
>> +				i = c * eth_da->nb_ports *
>> +					eth_da->nb_representor_ports +
>> +				    p * eth_da->nb_representor_ports + f;
>> +				if (i >= RTE_DIM(eth_da->representor_ports))
>{
>> +					RTE_LOG(ERR, EAL, "too many
>representor specified: %s",
>
>Missing \n
>
>> +						str);
>> +					return -EINVAL;
>> +				}
>> +				eth_da->representor_ports[i] =
>RTE_ETH_REPR(
>> +					eth_da->mh_controllers[c],
>> +					eth_da->ports[p],
>> +					eth_da->type ==
>RTE_ETH_REPRESENTOR_SF,
>> +					eth_da->representor_ports[f]);
>> +			}
>> +		}
>> +	}
>> +	eth_da->nb_representor_ports = i + 1;
>>  err:
>>  	if (ret < 0)
>>  		RTE_LOG(ERR, EAL, "wrong representor format: %s", str); diff -
>-git
>> a/lib/librte_ethdev/rte_ethdev_driver.h
>> b/lib/librte_ethdev/rte_ethdev_driver.h
>> index a7969c9408..dbad55c704 100644
>> --- a/lib/librte_ethdev/rte_ethdev_driver.h
>> +++ b/lib/librte_ethdev/rte_ethdev_driver.h
>> @@ -1218,6 +1218,28 @@ struct rte_eth_devargs {
>>  	enum rte_eth_representor_type type; /* type of representor */  };
>>
>> +/**
>> + * Encoding representor port ID.
>> + *
>> + * The compact format is used for device iterator that comparing
>> + * ethdev representor ID with target devargs.
>> + *
>> + * xxxx xxxx xxxx xxxx
>> + * |||| |LLL LLLL LLLL vf/sf id
>> + * |||| L 1:sf, 0:vf
>> + * ||LL pf id
>> + * LL controller(host) id
>> + */
>> +#define RTE_ETH_REPR(c, pf, sf, port) \
>> +	((((c) & 3) << 14) |     \
>> +	(((pf) & 3) << 12) |     \
>> +	(!!(sf) << 11) |         \
>> +	((port) & 0x7ff))
>> +/** Get 'pf' port id from representor ID */ #define
>> +RTE_ETH_REPR_PF(repr) (((repr) >> 12) & 3)
>> +/** Get 'vf' or 'sf' port from representor ID */ #define
>> +RTE_ETH_REPR_PORT(repr) ((repr) & 0x7ff)
>> +
>>  /**
>>   * PMD helper function to parse ethdev arguments
>>   *
>>


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and dequeue callback functions
  2020-12-22 14:42  2% ` [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and dequeue callback functions Abhinandan Gujjar
@ 2021-01-04  6:59  0%   ` Gujjar, Abhinandan S
    1 sibling, 0 replies; 200+ results
From: Gujjar, Abhinandan S @ 2021-01-04  6:59 UTC (permalink / raw)
  To: dev, akhil.goyal, Ananyev, Konstantin

Hi Akhil,

Could you please review the patches?

Regards
Abhinandan

> -----Original Message-----
> From: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>
> Sent: Tuesday, December 22, 2020 8:13 PM
> To: dev@dpdk.org; akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>
> Subject: [PATCH v7 1/2] cryptodev: support enqueue and dequeue callback
> functions
> 
> This patch adds APIs to add/remove callback functions on crypto
> enqueue/dequeue burst. The callback function will be called for each burst of
> crypto ops received/sent on a given crypto device queue pair.
> 
> Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  config/rte_config.h                     |   1 +
>  doc/guides/prog_guide/cryptodev_lib.rst |  44 +++
>  doc/guides/rel_notes/release_21_02.rst  |   9 +
>  lib/librte_cryptodev/meson.build        |   2 +-
>  lib/librte_cryptodev/rte_cryptodev.c    | 398 +++++++++++++++++++++++-
>  lib/librte_cryptodev/rte_cryptodev.h    | 246 ++++++++++++++-
>  lib/librte_cryptodev/version.map        |   7 +
>  7 files changed, 702 insertions(+), 5 deletions(-)
> 
> diff --git a/config/rte_config.h b/config/rte_config.h index
> a0b5160ff..87f9786d7 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -62,6 +62,7 @@
>  /* cryptodev defines */
>  #define RTE_CRYPTO_MAX_DEVS 64
>  #define RTE_CRYPTODEV_NAME_LEN 64
> +#define RTE_CRYPTO_CALLBACKS 1
> 
>  /* compressdev defines */
>  #define RTE_COMPRESS_MAX_DEVS 64
> diff --git a/doc/guides/prog_guide/cryptodev_lib.rst
> b/doc/guides/prog_guide/cryptodev_lib.rst
> index 473b014a1..9b1cf8d49 100644
> --- a/doc/guides/prog_guide/cryptodev_lib.rst
> +++ b/doc/guides/prog_guide/cryptodev_lib.rst
> @@ -338,6 +338,50 @@ start of private data information. The offset is counted
> from the start of the  rte_crypto_op including other crypto information such as
> the IVs (since there can  be an IV also for authentication).
> 
> +User callback APIs
> +~~~~~~~~~~~~~~~~~~
> +The add APIs configures a user callback function to be called for each
> +burst of crypto ops received/sent on a given crypto device queue pair.
> +The return value is a pointer that can be used later to remove the
> +callback using remove API. Application is expected to register a
> +callback function of type ``rte_cryptodev_callback_fn``. Multiple
> +callback functions can be added for a given queue pair. API does not restrict
> on maximum number of callbacks.
> +
> +Callbacks registered by application would not survive
> +``rte_cryptodev_configure`` as it reinitializes the callback list. It
> +is user responsibility to remove all installed callbacks before calling
> ``rte_cryptodev_configure`` to avoid possible memory leakage.
> +
> +So, the application is expected to add user callback after
> ``rte_cryptodev_configure``.
> +The callbacks can also be added at the runtime. These callbacks get
> +executed when
> ``rte_cryptodev_enqueue_burst``/``rte_cryptodev_dequeue_burst`` is called.
> +
> +.. code-block:: c
> +
> +	struct rte_cryptodev_cb *
> +		rte_cryptodev_add_enq_callback(uint8_t dev_id, uint16_t
> qp_id,
> +					       rte_cryptodev_callback_fn cb_fn,
> +					       void *cb_arg);
> +
> +	struct rte_cryptodev_cb *
> +		rte_cryptodev_add_deq_callback(uint8_t dev_id, uint16_t
> qp_id,
> +					       rte_cryptodev_callback_fn cb_fn,
> +					       void *cb_arg);
> +
> +	uint16_t (* rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t qp_id,
> +					       struct rte_crypto_op **ops,
> +					       uint16_t nb_ops, void
> *user_param);
> +
> +The remove API removes a callback function added by
> +``rte_cryptodev_add_enq_callback``/``rte_cryptodev_add_deq_callback``.
> +
> +.. code-block:: c
> +
> +	int rte_cryptodev_remove_enq_callback(uint8_t dev_id, uint16_t
> qp_id,
> +					      struct rte_cryptodev_cb *cb);
> +
> +	int rte_cryptodev_remove_deq_callback(uint8_t dev_id, uint16_t
> qp_id,
> +					      struct rte_cryptodev_cb *cb);
> +
> 
>  Enqueue / Dequeue Burst APIs
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> diff --git a/doc/guides/rel_notes/release_21_02.rst
> b/doc/guides/rel_notes/release_21_02.rst
> index 638f98168..8c7866401 100644
> --- a/doc/guides/rel_notes/release_21_02.rst
> +++ b/doc/guides/rel_notes/release_21_02.rst
> @@ -55,6 +55,13 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =======================================================
> 
> +* **Added enqueue & dequeue callback APIs for cryptodev library.**
> +
> +  The cryptodev library has been extended with enqueue & dequeue callback
> +  APIs that enable applications to add/remove user callbacks, which get
> +  called for every enqueue/dequeue operation.
> +
> +
> 
>  Removed Items
>  -------------
> @@ -84,6 +91,8 @@ API Changes
>     Also, make sure to start the actual text at the margin.
>     =======================================================
> 
> +* cryptodev: The structure ``rte_cryptodev`` has been updated with
> +pointers
> +  for adding enqueue and dequeue callbacks.
> 
>  ABI Changes
>  -----------
> diff --git a/lib/librte_cryptodev/meson.build b/lib/librte_cryptodev/meson.build
> index c4c6b3b6a..8c5493f4c 100644
> --- a/lib/librte_cryptodev/meson.build
> +++ b/lib/librte_cryptodev/meson.build
> @@ -9,4 +9,4 @@ headers = files('rte_cryptodev.h',
>  	'rte_crypto.h',
>  	'rte_crypto_sym.h',
>  	'rte_crypto_asym.h')
> -deps += ['kvargs', 'mbuf']
> +deps += ['kvargs', 'mbuf', 'rcu']
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c
> b/lib/librte_cryptodev/rte_cryptodev.c
> index 3d95ac6ea..40f55a3cd 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -448,6 +448,122 @@
> rte_cryptodev_asym_xform_capability_check_modlen(
>  	return 0;
>  }
> 
> +/* spinlock for crypto device enq callbacks */ static rte_spinlock_t
> +rte_cryptodev_callback_lock = RTE_SPINLOCK_INITIALIZER;
> +
> +static void
> +cryptodev_cb_cleanup(struct rte_cryptodev *dev) {
> +	struct rte_cryptodev_cb_rcu *list;
> +	struct rte_cryptodev_cb *cb, *next;
> +	uint16_t qp_id;
> +
> +	if (dev->enq_cbs == NULL && dev->deq_cbs == NULL)
> +		return;
> +
> +	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> +		list = &dev->enq_cbs[qp_id];
> +		cb = list->next;
> +		while (cb != NULL) {
> +			next = cb->next;
> +			rte_free(cb);
> +			cb = next;
> +		}
> +
> +		rte_free(list->qsbr);
> +	}
> +
> +	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> +		list = &dev->deq_cbs[qp_id];
> +		cb = list->next;
> +		while (cb != NULL) {
> +			next = cb->next;
> +			rte_free(cb);
> +			cb = next;
> +		}
> +
> +		rte_free(list->qsbr);
> +	}
> +
> +	rte_free(dev->enq_cbs);
> +	dev->enq_cbs = NULL;
> +	rte_free(dev->deq_cbs);
> +	dev->deq_cbs = NULL;
> +}
> +
> +static int
> +cryptodev_cb_init(struct rte_cryptodev *dev) {
> +	struct rte_cryptodev_cb_rcu *list;
> +	struct rte_rcu_qsbr *qsbr;
> +	uint16_t qp_id;
> +	size_t size;
> +
> +	/* Max thread set to 1, as one DP thread accessing a queue-pair */
> +	const uint32_t max_threads = 1;
> +
> +	dev->enq_cbs = rte_zmalloc(NULL,
> +				   sizeof(struct rte_cryptodev_cb_rcu) *
> +				   dev->data->nb_queue_pairs, 0);
> +	if (dev->enq_cbs == NULL) {
> +		CDEV_LOG_ERR("Failed to allocate memory for enq
> callbacks");
> +		return -ENOMEM;
> +	}
> +
> +	dev->deq_cbs = rte_zmalloc(NULL,
> +				   sizeof(struct rte_cryptodev_cb_rcu) *
> +				   dev->data->nb_queue_pairs, 0);
> +	if (dev->deq_cbs == NULL) {
> +		CDEV_LOG_ERR("Failed to allocate memory for deq
> callbacks");
> +		rte_free(dev->enq_cbs);
> +		return -ENOMEM;
> +	}
> +
> +	/* Create RCU QSBR variable */
> +	size = rte_rcu_qsbr_get_memsize(max_threads);
> +
> +	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> +		list = &dev->enq_cbs[qp_id];
> +		qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
> +		if (qsbr == NULL) {
> +			CDEV_LOG_ERR("Failed to allocate memory for RCU
> on "
> +				"queue_pair_id=%d", qp_id);
> +			goto cb_init_err;
> +		}
> +
> +		if (rte_rcu_qsbr_init(qsbr, max_threads)) {
> +			CDEV_LOG_ERR("Failed to initialize for RCU on "
> +				"queue_pair_id=%d", qp_id);
> +			goto cb_init_err;
> +		}
> +
> +		list->qsbr = qsbr;
> +	}
> +
> +	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> +		list = &dev->deq_cbs[qp_id];
> +		qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
> +		if (qsbr == NULL) {
> +			CDEV_LOG_ERR("Failed to allocate memory for RCU
> on "
> +				"queue_pair_id=%d", qp_id);
> +			goto cb_init_err;
> +		}
> +
> +		if (rte_rcu_qsbr_init(qsbr, max_threads)) {
> +			CDEV_LOG_ERR("Failed to initialize for RCU on "
> +				"queue_pair_id=%d", qp_id);
> +			goto cb_init_err;
> +		}
> +
> +		list->qsbr = qsbr;
> +	}
> +
> +	return 0;
> +
> +cb_init_err:
> +	cryptodev_cb_cleanup(dev);
> +	return -ENOMEM;
> +}
> 
>  const char *
>  rte_cryptodev_get_feature_name(uint64_t flag) @@ -927,6 +1043,10 @@
> rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
> 
>  	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -
> ENOTSUP);
> 
> +	rte_spinlock_lock(&rte_cryptodev_callback_lock);
> +	cryptodev_cb_cleanup(dev);
> +	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> +
>  	/* Setup new number of queue pairs and reconfigure device. */
>  	diag = rte_cryptodev_queue_pairs_config(dev, config-
> >nb_queue_pairs,
>  			config->socket_id);
> @@ -936,11 +1056,18 @@ rte_cryptodev_configure(uint8_t dev_id, struct
> rte_cryptodev_config *config)
>  		return diag;
>  	}
> 
> +	rte_spinlock_lock(&rte_cryptodev_callback_lock);
> +	diag = cryptodev_cb_init(dev);
> +	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> +	if (diag) {
> +		CDEV_LOG_ERR("Callback init failed for dev_id=%d", dev_id);
> +		return diag;
> +	}
> +
>  	rte_cryptodev_trace_configure(dev_id, config);
>  	return (*dev->dev_ops->dev_configure)(dev, config);  }
> 
> -
>  int
>  rte_cryptodev_start(uint8_t dev_id)
>  {
> @@ -1136,6 +1263,275 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id,
> uint16_t queue_pair_id,
>  			socket_id);
>  }
> 
> +struct rte_cryptodev_cb *
> +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> +			       uint16_t qp_id,
> +			       rte_cryptodev_callback_fn cb_fn,
> +			       void *cb_arg)
> +{
> +	struct rte_cryptodev *dev;
> +	struct rte_cryptodev_cb_rcu *list;
> +	struct rte_cryptodev_cb *cb, *tail;
> +
> +	if (!cb_fn) {
> +		CDEV_LOG_ERR("Callback is NULL on dev_id=%d", dev_id);
> +		rte_errno = EINVAL;
> +		return NULL;
> +	}
> +
> +	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> +		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> +		rte_errno = ENODEV;
> +		return NULL;
> +	}
> +
> +	dev = &rte_crypto_devices[dev_id];
> +	if (qp_id >= dev->data->nb_queue_pairs) {
> +		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> +		rte_errno = ENODEV;
> +		return NULL;
> +	}
> +
> +	cb = rte_zmalloc(NULL, sizeof(*cb), 0);
> +	if (cb == NULL) {
> +		CDEV_LOG_ERR("Failed to allocate memory for callback on "
> +			     "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> +		rte_errno = ENOMEM;
> +		return NULL;
> +	}
> +
> +	rte_spinlock_lock(&rte_cryptodev_callback_lock);
> +
> +	cb->fn = cb_fn;
> +	cb->arg = cb_arg;
> +
> +	/* Add the callbacks in fifo order. */
> +	list = &dev->enq_cbs[qp_id];
> +	tail = list->next;
> +
> +	if (tail) {
> +		while (tail->next)
> +			tail = tail->next;
> +		/* Stores to cb->fn and cb->param should complete before
> +		 * cb is visible to data plane.
> +		 */
> +		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
> +	} else {
> +		/* Stores to cb->fn and cb->param should complete before
> +		 * cb is visible to data plane.
> +		 */
> +		__atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
> +	}
> +
> +	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> +
> +	return cb;
> +}
> +
> +int
> +rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> +				  uint16_t qp_id,
> +				  struct rte_cryptodev_cb *cb)
> +{
> +	struct rte_cryptodev *dev;
> +	struct rte_cryptodev_cb **prev_cb, *curr_cb;
> +	struct rte_cryptodev_cb_rcu *list;
> +	int ret;
> +
> +	ret = -EINVAL;
> +
> +	if (!cb) {
> +		CDEV_LOG_ERR("Callback is NULL");
> +		return -EINVAL;
> +	}
> +
> +	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> +		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> +		return -ENODEV;
> +	}
> +
> +	dev = &rte_crypto_devices[dev_id];
> +	if (qp_id >= dev->data->nb_queue_pairs) {
> +		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> +		return -ENODEV;
> +	}
> +
> +	rte_spinlock_lock(&rte_cryptodev_callback_lock);
> +	if (dev->enq_cbs == NULL) {
> +		CDEV_LOG_ERR("Callback not initialized");
> +		goto cb_err;
> +	}
> +
> +	list = &dev->enq_cbs[qp_id];
> +	if (list == NULL) {
> +		CDEV_LOG_ERR("Callback list is NULL");
> +		goto cb_err;
> +	}
> +
> +	if (list->qsbr == NULL) {
> +		CDEV_LOG_ERR("Rcu qsbr is NULL");
> +		goto cb_err;
> +	}
> +
> +	prev_cb = &list->next;
> +	for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
> +		curr_cb = *prev_cb;
> +		if (curr_cb == cb) {
> +			/* Remove the user cb from the callback list. */
> +			__atomic_store_n(prev_cb, curr_cb->next,
> +				__ATOMIC_RELAXED);
> +			ret = 0;
> +			break;
> +		}
> +	}
> +
> +	if (!ret) {
> +		/* Call sync with invalid thread id as this is part of
> +		 * control plane API
> +		 */
> +		rte_rcu_qsbr_synchronize(list->qsbr,
> RTE_QSBR_THRID_INVALID);
> +		rte_free(cb);
> +	}
> +
> +cb_err:
> +	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> +	return ret;
> +}
> +
> +struct rte_cryptodev_cb *
> +rte_cryptodev_add_deq_callback(uint8_t dev_id,
> +			       uint16_t qp_id,
> +			       rte_cryptodev_callback_fn cb_fn,
> +			       void *cb_arg)
> +{
> +	struct rte_cryptodev *dev;
> +	struct rte_cryptodev_cb_rcu *list;
> +	struct rte_cryptodev_cb *cb, *tail;
> +
> +	if (!cb_fn) {
> +		CDEV_LOG_ERR("Callback is NULL on dev_id=%d", dev_id);
> +		rte_errno = EINVAL;
> +		return NULL;
> +	}
> +
> +	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> +		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> +		rte_errno = ENODEV;
> +		return NULL;
> +	}
> +
> +	dev = &rte_crypto_devices[dev_id];
> +	if (qp_id >= dev->data->nb_queue_pairs) {
> +		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> +		rte_errno = ENODEV;
> +		return NULL;
> +	}
> +
> +	cb = rte_zmalloc(NULL, sizeof(*cb), 0);
> +	if (cb == NULL) {
> +		CDEV_LOG_ERR("Failed to allocate memory for callback on "
> +			     "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> +		rte_errno = ENOMEM;
> +		return NULL;
> +	}
> +
> +	rte_spinlock_lock(&rte_cryptodev_callback_lock);
> +
> +	cb->fn = cb_fn;
> +	cb->arg = cb_arg;
> +
> +	/* Add the callbacks in fifo order. */
> +	list = &dev->deq_cbs[qp_id];
> +	tail = list->next;
> +
> +	if (tail) {
> +		while (tail->next)
> +			tail = tail->next;
> +		/* Stores to cb->fn and cb->param should complete before
> +		 * cb is visible to data plane.
> +		 */
> +		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
> +	} else {
> +		/* Stores to cb->fn and cb->param should complete before
> +		 * cb is visible to data plane.
> +		 */
> +		__atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
> +	}
> +
> +	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> +
> +	return cb;
> +}
> +
> +int
> +rte_cryptodev_remove_deq_callback(uint8_t dev_id,
> +				  uint16_t qp_id,
> +				  struct rte_cryptodev_cb *cb)
> +{
> +	struct rte_cryptodev *dev;
> +	struct rte_cryptodev_cb **prev_cb, *curr_cb;
> +	struct rte_cryptodev_cb_rcu *list;
> +	int ret;
> +
> +	ret = -EINVAL;
> +
> +	if (!cb) {
> +		CDEV_LOG_ERR("Callback is NULL");
> +		return -EINVAL;
> +	}
> +
> +	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> +		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> +		return -ENODEV;
> +	}
> +
> +	dev = &rte_crypto_devices[dev_id];
> +	if (qp_id >= dev->data->nb_queue_pairs) {
> +		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> +		return -ENODEV;
> +	}
> +
> +	rte_spinlock_lock(&rte_cryptodev_callback_lock);
> +	if (dev->deq_cbs == NULL) {
> +		CDEV_LOG_ERR("Callback not initialized");
> +		goto cb_err;
> +	}
> +
> +	list = &dev->deq_cbs[qp_id];
> +	if (list == NULL) {
> +		CDEV_LOG_ERR("Callback list is NULL");
> +		goto cb_err;
> +	}
> +
> +	if (list->qsbr == NULL) {
> +		CDEV_LOG_ERR("Rcu qsbr is NULL");
> +		goto cb_err;
> +	}
> +
> +	prev_cb = &list->next;
> +	for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
> +		curr_cb = *prev_cb;
> +		if (curr_cb == cb) {
> +			/* Remove the user cb from the callback list. */
> +			__atomic_store_n(prev_cb, curr_cb->next,
> +				__ATOMIC_RELAXED);
> +			ret = 0;
> +			break;
> +		}
> +	}
> +
> +	if (!ret) {
> +		/* Call sync with invalid thread id as this is part of
> +		 * control plane API
> +		 */
> +		rte_rcu_qsbr_synchronize(list->qsbr,
> RTE_QSBR_THRID_INVALID);
> +		rte_free(cb);
> +	}
> +
> +cb_err:
> +	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> +	return ret;
> +}
> 
>  int
>  rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats) diff
> --git a/lib/librte_cryptodev/rte_cryptodev.h
> b/lib/librte_cryptodev/rte_cryptodev.h
> index 0935fd587..ae34f33f6 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -23,6 +23,7 @@ extern "C" {
>  #include "rte_dev.h"
>  #include <rte_common.h>
>  #include <rte_config.h>
> +#include <rte_rcu_qsbr.h>
> 
>  #include "rte_cryptodev_trace_fp.h"
> 
> @@ -522,6 +523,30 @@ struct rte_cryptodev_qp_conf {
>  	/**< The mempool for creating sess private data in sessionless mode
> */  };
> 
> +/**
> + * Function type used for processing crypto ops when enqueue/dequeue
> +burst is
> + * called.
> + *
> + * The callback function is called on enqueue/dequeue burst immediately.
> + *
> + * @param	dev_id		The identifier of the device.
> + * @param	qp_id		The index of the queue pair on which ops are
> + *				enqueued/dequeued. The value must be in the
> + *				range [0, nb_queue_pairs - 1] previously
> + *				supplied to *rte_cryptodev_configure*.
> + * @param	ops		The address of an array of *nb_ops* pointers
> + *				to *rte_crypto_op* structures which contain
> + *				the crypto operations to be processed.
> + * @param	nb_ops		The number of operations to process.
> + * @param	user_param	The arbitrary user parameter passed in by the
> + *				application when the callback was originally
> + *				registered.
> + * @return			The number of ops to be enqueued to the
> + *				crypto device.
> + */
> +typedef uint16_t (*rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t
> qp_id,
> +		struct rte_crypto_op **ops, uint16_t nb_ops, void
> *user_param);
> +
>  /**
>   * Typedef for application callback function to be registered by application
>   * software for notification of device events @@ -822,7 +847,6 @@
> rte_cryptodev_callback_unregister(uint8_t dev_id,
>  		enum rte_cryptodev_event_type event,
>  		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
> 
> -
>  typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
>  		struct rte_crypto_op **ops,	uint16_t nb_ops);
>  /**< Dequeue processed packets from queue pair of a device. */ @@ -839,6
> +863,30 @@ struct rte_cryptodev_callback;
>  /** Structure to keep track of registered callbacks */
> TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
> 
> +/**
> + * Structure used to hold information about the callbacks to be called
> +for a
> + * queue pair on enqueue/dequeue.
> + */
> +struct rte_cryptodev_cb {
> +	struct rte_cryptodev_cb *next;
> +	/**< Pointer to next callback */
> +	rte_cryptodev_callback_fn fn;
> +	/**< Pointer to callback function */
> +	void *arg;
> +	/**< Pointer to argument */
> +};
> +
> +/**
> + * @internal
> + * Structure used to hold information about the RCU for a queue pair.
> + */
> +struct rte_cryptodev_cb_rcu {
> +	struct rte_cryptodev_cb *next;
> +	/**< Pointer to next callback */
> +	struct rte_rcu_qsbr *qsbr;
> +	/**< RCU QSBR variable per queue pair */ };
> +
>  /** The data structure associated with each crypto device. */  struct
> rte_cryptodev {
>  	dequeue_pkt_burst_t dequeue_burst;
> @@ -867,6 +915,12 @@ struct rte_cryptodev {
>  	__extension__
>  	uint8_t attached : 1;
>  	/**< Flag indicating the device is attached */
> +
> +	struct rte_cryptodev_cb_rcu *enq_cbs;
> +	/**< User application callback for pre enqueue processing */
> +
> +	struct rte_cryptodev_cb_rcu *deq_cbs;
> +	/**< User application callback for post dequeue processing */
>  } __rte_cache_aligned;
> 
>  void *
> @@ -945,10 +999,33 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id,
> uint16_t qp_id,  {
>  	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> 
> +	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops,
> +nb_ops);
>  	nb_ops = (*dev->dequeue_burst)
>  			(dev->data->queue_pairs[qp_id], ops, nb_ops);
> -
> -	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops,
> nb_ops);
> +#ifdef RTE_CRYPTO_CALLBACKS
> +	if (unlikely(dev->deq_cbs != NULL)) {
> +		struct rte_cryptodev_cb_rcu *list;
> +		struct rte_cryptodev_cb *cb;
> +
> +		/* __ATOMIC_RELEASE memory order was used when the
> +		 * call back was inserted into the list.
> +		 * Since there is a clear dependency between loading
> +		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order
> is
> +		 * not required.
> +		 */
> +		list = &dev->deq_cbs[qp_id];
> +		rte_rcu_qsbr_thread_online(list->qsbr, 0);
> +		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
> +
> +		while (cb != NULL) {
> +			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
> +					cb->arg);
> +			cb = cb->next;
> +		};
> +
> +		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
> +	}
> +#endif
>  	return nb_ops;
>  }
> 
> @@ -989,6 +1066,31 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id,
> uint16_t qp_id,  {
>  	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> 
> +#ifdef RTE_CRYPTO_CALLBACKS
> +	if (unlikely(dev->enq_cbs != NULL)) {
> +		struct rte_cryptodev_cb_rcu *list;
> +		struct rte_cryptodev_cb *cb;
> +
> +		/* __ATOMIC_RELEASE memory order was used when the
> +		 * call back was inserted into the list.
> +		 * Since there is a clear dependency between loading
> +		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order
> is
> +		 * not required.
> +		 */
> +		list = &dev->enq_cbs[qp_id];
> +		rte_rcu_qsbr_thread_online(list->qsbr, 0);
> +		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
> +
> +		while (cb != NULL) {
> +			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
> +					cb->arg);
> +			cb = cb->next;
> +		};
> +
> +		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
> +	}
> +#endif
> +
>  	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops,
> nb_ops);
>  	return (*dev->enqueue_burst)(
>  			dev->data->queue_pairs[qp_id], ops, nb_ops); @@ -
> 1730,6 +1832,144 @@ int  rte_cryptodev_raw_dequeue_done(struct
> rte_crypto_raw_dp_ctx *ctx,
>  		uint32_t n);
> 
> +/**
> + * Add a user callback for a given crypto device and queue pair which
> +will be
> + * called on crypto ops enqueue.
> + *
> + * This API configures a function to be called for each burst of crypto
> +ops
> + * received on a given crypto device queue pair. The return value is a
> +pointer
> + * that can be used later to remove the callback using
> + * rte_cryptodev_remove_enq_callback().
> + *
> + * Callbacks registered by application would not survive
> + * rte_cryptodev_configure() as it reinitializes the callback list.
> + * It is user responsibility to remove all installed callbacks before
> + * calling rte_cryptodev_configure() to avoid possible memory leakage.
> + * Application is expected to call add API after rte_cryptodev_configure().
> + *
> + * Multiple functions can be registered per queue pair & they are
> +called
> + * in the order they were added. The API does not restrict on maximum
> +number
> + * of callbacks.
> + *
> + * @param	dev_id		The identifier of the device.
> + * @param	qp_id		The index of the queue pair on which ops are
> + *				to be enqueued for processing. The value
> + *				must be in the range [0, nb_queue_pairs - 1]
> + *				previously supplied to
> + *				*rte_cryptodev_configure*.
> + * @param	cb_fn		The callback function
> + * @param	cb_arg		A generic pointer parameter which will be
> passed
> + *				to each invocation of the callback function on
> + *				this crypto device and queue pair.
> + *
> + * @return
> + *  - NULL on error & rte_errno will contain the error code.
> + *  - On success, a pointer value which can later be used to remove the
> + *    callback.
> + */
> +
> +__rte_experimental
> +struct rte_cryptodev_cb *
> +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> +			       uint16_t qp_id,
> +			       rte_cryptodev_callback_fn cb_fn,
> +			       void *cb_arg);
> +
> +/**
> + * Remove a user callback function for given crypto device and queue pair.
> + *
> + * This function is used to remove enqueue callbacks that were added to
> +a
> + * crypto device queue pair using rte_cryptodev_add_enq_callback().
> + *
> + *
> + *
> + * @param	dev_id		The identifier of the device.
> + * @param	qp_id		The index of the queue pair on which ops are
> + *				to be enqueued. The value must be in the
> + *				range [0, nb_queue_pairs - 1] previously
> + *				supplied to *rte_cryptodev_configure*.
> + * @param	cb		Pointer to user supplied callback created via
> + *				rte_cryptodev_add_enq_callback().
> + *
> + * @return
> + *   -  0: Success. Callback was removed.
> + *   - <0: The dev_id or the qp_id is out of range, or the callback
> + *         is NULL or not found for the crypto device queue pair.
> + */
> +
> +__rte_experimental
> +int rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> +				      uint16_t qp_id,
> +				      struct rte_cryptodev_cb *cb);
> +
> +/**
> + * Add a user callback for a given crypto device and queue pair which
> +will be
> + * called on crypto ops dequeue.
> + *
> + * This API configures a function to be called for each burst of crypto
> +ops
> + * received on a given crypto device queue pair. The return value is a
> +pointer
> + * that can be used later to remove the callback using
> + * rte_cryptodev_remove_deq_callback().
> + *
> + * Callbacks registered by application would not survive
> + * rte_cryptodev_configure() as it reinitializes the callback list.
> + * It is user responsibility to remove all installed callbacks before
> + * calling rte_cryptodev_configure() to avoid possible memory leakage.
> + * Application is expected to call add API after rte_cryptodev_configure().
> + *
> + * Multiple functions can be registered per queue pair & they are
> +called
> + * in the order they were added. The API does not restrict on maximum
> +number
> + * of callbacks.
> + *
> + * @param	dev_id		The identifier of the device.
> + * @param	qp_id		The index of the queue pair on which ops are
> + *				to be dequeued. The value must be in the
> + *				range [0, nb_queue_pairs - 1] previously
> + *				supplied to *rte_cryptodev_configure*.
> + * @param	cb_fn		The callback function
> + * @param	cb_arg		A generic pointer parameter which will be
> passed
> + *				to each invocation of the callback function on
> + *				this crypto device and queue pair.
> + *
> + * @return
> + *   - NULL on error & rte_errno will contain the error code.
> + *   - On success, a pointer value which can later be used to remove the
> + *     callback.
> + */
> +
> +__rte_experimental
> +struct rte_cryptodev_cb *
> +rte_cryptodev_add_deq_callback(uint8_t dev_id,
> +			       uint16_t qp_id,
> +			       rte_cryptodev_callback_fn cb_fn,
> +			       void *cb_arg);
> +
> +/**
> + * Remove a user callback function for given crypto device and queue pair.
> + *
> + * This function is used to remove dequeue callbacks that were added to
> +a
> + * crypto device queue pair using rte_cryptodev_add_deq_callback().
> + *
> + *
> + *
> + * @param	dev_id		The identifier of the device.
> + * @param	qp_id		The index of the queue pair on which ops are
> + *				to be dequeued. The value must be in the
> + *				range [0, nb_queue_pairs - 1] previously
> + *				supplied to *rte_cryptodev_configure*.
> + * @param	cb		Pointer to user supplied callback created via
> + *				rte_cryptodev_add_deq_callback().
> + *
> + * @return
> + *   -  0: Success. Callback was removed.
> + *   - <0: The dev_id or the qp_id is out of range, or the callback
> + *         is NULL or not found for the crypto device queue pair.
> + */
> +__rte_experimental
> +int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
> +				      uint16_t qp_id,
> +				      struct rte_cryptodev_cb *cb);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_cryptodev/version.map b/lib/librte_cryptodev/version.map
> index 7e4360ff0..9f04737ae 100644
> --- a/lib/librte_cryptodev/version.map
> +++ b/lib/librte_cryptodev/version.map
> @@ -109,4 +109,11 @@ EXPERIMENTAL {
>  	rte_cryptodev_raw_enqueue;
>  	rte_cryptodev_raw_enqueue_burst;
>  	rte_cryptodev_raw_enqueue_done;
> +
> +	# added in 21.02
> +	rte_cryptodev_add_deq_callback;
> +	rte_cryptodev_add_enq_callback;
> +	rte_cryptodev_remove_deq_callback;
> +	rte_cryptodev_remove_enq_callback;
> +
>  };
> --
> 2.25.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] mem_debug add more log
  2020-12-21 18:44  3%     ` Stephen Hemminger
@ 2020-12-25  7:20  3%       ` Peng, ZhihongX
  0 siblings, 0 replies; 200+ results
From: Peng, ZhihongX @ 2020-12-25  7:20 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Wang, Haiyue, Zhang, Qi Z, Xing, Beilei, dev, Lin, Xueqin, Yu, PingX

The performance of our simple scheme is better than ASan's. We are trying the ASan solution.

Regards,
Peng,Zhihong

-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org> 
Sent: Tuesday, December 22, 2020 2:44 AM
To: Peng, ZhihongX <zhihongx.peng@intel.com>
Cc: Wang, Haiyue <haiyue.wang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>; dev@dpdk.org; Lin, Xueqin <xueqin.lin@intel.com>; Yu, PingX <pingx.yu@intel.com>
Subject: Re: [dpdk-dev] [RFC] mem_debug add more log

On Mon, 21 Dec 2020 07:35:10 +0000
"Peng, ZhihongX" <zhihongx.peng@intel.com> wrote:

> 1. I think this implementation doesn't add significant overhead. Overhead will only occur in rte_malloc and rte_free.
> 
> 2. The existing address sanitizer infrastructure only supports libc malloc.
> 
> Regards,
> Peng,Zhihong
> 
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Saturday, December 19, 2020 2:54 AM
> To: Peng, ZhihongX <zhihongx.peng@intel.com>
> Cc: Wang, Haiyue <haiyue.wang@intel.com>; Zhang, Qi Z 
> <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>; 
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [RFC] mem_debug add more log
> 
> On Fri, 18 Dec 2020 14:21:09 -0500
> Peng Zhihong <zhihongx.peng@intel.com> wrote:
> 
> > 1. The debugging log in current DPDK RTE_MALLOC_DEBUG mode is insufficient,
> >    which makes it difficult to locate the issues, such as:
> >    a) When a memory overflow occurs in rte_free, there is little log
> >       information. Even if we abort here, we can find which API core
> >       dumped, but we still need to read the source code to find out where
> >       the requested memory was overflowed.
> >    b) Current DPDK can NOT find the overflow if the memory has been
> >       used and not released.
> >    c) If there are two pieces of contiguous memory, when the first block
> >       is not released and an overflow occurs that also covers the second
> >       block of memory, a memory overflow will be detected once the second
> >       block of memory is released. However, current DPDK can not find the
> >       correct point of memory overflow. It only detects the memory overflow
> >       of the second block but should detect the one of the first block.
> >       ----------------------------------------------------------------------------------
> >       | header cookie | data1 | trailer cookie | header cookie | data2 |trailer cookie |
> >       ----------------------------------------------------------------------------------
> > 2. To fix the above issues, we can store the request information when DPDK
> >    requests memory, including the requested address and the requested
> >    memory's file, function and line number, and then put it into a list.
> >    --------------------     ----------------------     ----------------------
> >    | struct list_head |---->| struct malloc_info |---->| struct malloc_info |
> >    --------------------     ----------------------     ----------------------
> >    The above 3 problems can be solved through this implementation:
> >    a) If there is a memory overflow in rte_free, you can traverse the
> >       list to find the information of overflow memory and print the
> >       overflow memory information. like this:
> >       code:
> >       37         char *p = rte_zmalloc(NULL, 64, 0);
> >       38         memset(p, 0, 65);
> >       39         rte_free(p);
> >       40         //rte_malloc_validate_all_memory();
> >       memory error:
> >       EAL: Error: Invalid memory
> >       malloc memory address 0x17ff2c340 overflow in \
> >       file:../examples/helloworld/main.c function:main line:37
> >    b)c) Provide an interface to check all memory overflows in the function
> >       rte_malloc_validate_all_memory; this function will check all
> >       memory on the list. By calling this function manually at the exit
> >       point of the business logic, we can find all overflow points in time.
> > 
> > Signed-off-by: Peng Zhihong <zhihongx.peng@intel.com>
> 
> Good concept, but doesn't this add significant overhead?
> 
> Maybe we could make rte_malloc work with the existing address sanitizer infrastructure in gcc/clang?  That would provide faster, more immediate and better diagnostic info.

Everybody builds their own custom debug hooks, and some of these are worth sharing.
But a lot of the time debug code becomes technical debt, creates API/ABI issues and causes more trouble than it is worth.

Therefore my desire is for DPDK to be better supported by standard tools such as valgrind and address sanitizer. The standard tools catch more errors faster and do not create project maintenance workload.

See:
https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm




^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v7 1/2] cryptodev: support enqueue and dequeue callback functions
  @ 2020-12-22 14:42  2% ` Abhinandan Gujjar
  2021-01-04  6:59  0%   ` Gujjar, Abhinandan S
    0 siblings, 2 replies; 200+ results
From: Abhinandan Gujjar @ 2020-12-22 14:42 UTC (permalink / raw)
  To: dev, akhil.goyal, konstantin.ananyev; +Cc: abhinandan.gujjar

This patch adds APIs to add/remove callback functions on crypto
enqueue/dequeue burst. The callback function will be called for
each burst of crypto ops received/sent on a given crypto device
queue pair.

Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 config/rte_config.h                     |   1 +
 doc/guides/prog_guide/cryptodev_lib.rst |  44 +++
 doc/guides/rel_notes/release_21_02.rst  |   9 +
 lib/librte_cryptodev/meson.build        |   2 +-
 lib/librte_cryptodev/rte_cryptodev.c    | 398 +++++++++++++++++++++++-
 lib/librte_cryptodev/rte_cryptodev.h    | 246 ++++++++++++++-
 lib/librte_cryptodev/version.map        |   7 +
 7 files changed, 702 insertions(+), 5 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index a0b5160ff..87f9786d7 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -62,6 +62,7 @@
 /* cryptodev defines */
 #define RTE_CRYPTO_MAX_DEVS 64
 #define RTE_CRYPTODEV_NAME_LEN 64
+#define RTE_CRYPTO_CALLBACKS 1
 
 /* compressdev defines */
 #define RTE_COMPRESS_MAX_DEVS 64
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 473b014a1..9b1cf8d49 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -338,6 +338,50 @@ start of private data information. The offset is counted from the start of the
 rte_crypto_op including other crypto information such as the IVs (since there can
 be an IV also for authentication).
 
+User callback APIs
+~~~~~~~~~~~~~~~~~~
+The add APIs configure a user callback function to be called for each burst of crypto
+ops received/sent on a given crypto device queue pair. The return value is a pointer
+that can be used later to remove the callback using the remove API. The application is
+expected to register a callback function of type ``rte_cryptodev_callback_fn``. Multiple
+callback functions can be added for a given queue pair. The API does not restrict the
+maximum number of callbacks.
+
+Callbacks registered by the application do not survive ``rte_cryptodev_configure`` as it
+reinitializes the callback list. It is the user's responsibility to remove all installed
+callbacks before calling ``rte_cryptodev_configure`` to avoid possible memory leaks.
+
+So, the application is expected to add user callbacks after ``rte_cryptodev_configure``.
+The callbacks can also be added at runtime. These callbacks get executed when
+``rte_cryptodev_enqueue_burst``/``rte_cryptodev_dequeue_burst`` is called.
+
+.. code-block:: c
+
+	struct rte_cryptodev_cb *
+		rte_cryptodev_add_enq_callback(uint8_t dev_id, uint16_t qp_id,
+					       rte_cryptodev_callback_fn cb_fn,
+					       void *cb_arg);
+
+	struct rte_cryptodev_cb *
+		rte_cryptodev_add_deq_callback(uint8_t dev_id, uint16_t qp_id,
+					       rte_cryptodev_callback_fn cb_fn,
+					       void *cb_arg);
+
+	uint16_t (* rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t qp_id,
+					       struct rte_crypto_op **ops,
+					       uint16_t nb_ops, void *user_param);
+
+The remove API removes a callback function added by
+``rte_cryptodev_add_enq_callback``/``rte_cryptodev_add_deq_callback``.
+
+.. code-block:: c
+
+	int rte_cryptodev_remove_enq_callback(uint8_t dev_id, uint16_t qp_id,
+					      struct rte_cryptodev_cb *cb);
+
+	int rte_cryptodev_remove_deq_callback(uint8_t dev_id, uint16_t qp_id,
+					      struct rte_cryptodev_cb *cb);
+
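+A minimal usage sketch is shown below. The callback, counter and error handling
+are illustrative only; an application supplies its own logic and passes any
+private context through the ``cb_arg``/``user_param`` pointer. Dequeue callbacks
+are added and removed in the same way using the ``*_deq_*`` variants of the APIs.
+
+.. code-block:: c
+
+	/* Example enqueue callback: counts the crypto ops seen on a queue
+	 * pair and passes all of them through to the PMD unchanged.
+	 */
+	static uint64_t enq_op_count;
+
+	static uint16_t
+	count_enq_cb(uint16_t dev_id, uint16_t qp_id,
+		     struct rte_crypto_op **ops, uint16_t nb_ops,
+		     void *user_param)
+	{
+		RTE_SET_USED(dev_id);
+		RTE_SET_USED(qp_id);
+		RTE_SET_USED(ops);
+		RTE_SET_USED(user_param);
+
+		enq_op_count += nb_ops;
+
+		/* Number of ops to be enqueued to the crypto device. */
+		return nb_ops;
+	}
+
+	/* After rte_cryptodev_configure() and queue pair setup. */
+	struct rte_cryptodev_cb *cb;
+
+	cb = rte_cryptodev_add_enq_callback(dev_id, qp_id, count_enq_cb, NULL);
+	if (cb == NULL) {
+		/* rte_errno holds the error code. */
+		return -1;
+	}
+
+	/* rte_cryptodev_enqueue_burst() now invokes count_enq_cb(). */
+
+	if (rte_cryptodev_remove_enq_callback(dev_id, qp_id, cb) < 0) {
+		/* Callback not found or invalid device/queue pair. */
+		return -1;
+	}
+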
 
 Enqueue / Dequeue Burst APIs
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 638f98168..8c7866401 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -55,6 +55,13 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added enqueue & dequeue callback APIs for cryptodev library.**
+
+  The cryptodev library has been extended with enqueue & dequeue callback
+  APIs that enable applications to add/remove user callbacks, which get
+  called for every enqueue/dequeue operation.
+
+
 
 Removed Items
 -------------
@@ -84,6 +91,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* cryptodev: The structure ``rte_cryptodev`` has been updated with pointers
+  for adding enqueue and dequeue callbacks.
 
 ABI Changes
 -----------
diff --git a/lib/librte_cryptodev/meson.build b/lib/librte_cryptodev/meson.build
index c4c6b3b6a..8c5493f4c 100644
--- a/lib/librte_cryptodev/meson.build
+++ b/lib/librte_cryptodev/meson.build
@@ -9,4 +9,4 @@ headers = files('rte_cryptodev.h',
 	'rte_crypto.h',
 	'rte_crypto_sym.h',
 	'rte_crypto_asym.h')
-deps += ['kvargs', 'mbuf']
+deps += ['kvargs', 'mbuf', 'rcu']
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 3d95ac6ea..40f55a3cd 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -448,6 +448,122 @@ rte_cryptodev_asym_xform_capability_check_modlen(
 	return 0;
 }
 
+/* spinlock for crypto device enq/deq callbacks */
+static rte_spinlock_t rte_cryptodev_callback_lock = RTE_SPINLOCK_INITIALIZER;
+
+static void
+cryptodev_cb_cleanup(struct rte_cryptodev *dev)
+{
+	struct rte_cryptodev_cb_rcu *list;
+	struct rte_cryptodev_cb *cb, *next;
+	uint16_t qp_id;
+
+	if (dev->enq_cbs == NULL && dev->deq_cbs == NULL)
+		return;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		list = &dev->enq_cbs[qp_id];
+		cb = list->next;
+		while (cb != NULL) {
+			next = cb->next;
+			rte_free(cb);
+			cb = next;
+		}
+
+		rte_free(list->qsbr);
+	}
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		list = &dev->deq_cbs[qp_id];
+		cb = list->next;
+		while (cb != NULL) {
+			next = cb->next;
+			rte_free(cb);
+			cb = next;
+		}
+
+		rte_free(list->qsbr);
+	}
+
+	rte_free(dev->enq_cbs);
+	dev->enq_cbs = NULL;
+	rte_free(dev->deq_cbs);
+	dev->deq_cbs = NULL;
+}
+
+static int
+cryptodev_cb_init(struct rte_cryptodev *dev)
+{
+	struct rte_cryptodev_cb_rcu *list;
+	struct rte_rcu_qsbr *qsbr;
+	uint16_t qp_id;
+	size_t size;
+
+	/* Max thread set to 1, as one DP thread accessing a queue-pair */
+	const uint32_t max_threads = 1;
+
+	dev->enq_cbs = rte_zmalloc(NULL,
+				   sizeof(struct rte_cryptodev_cb_rcu) *
+				   dev->data->nb_queue_pairs, 0);
+	if (dev->enq_cbs == NULL) {
+		CDEV_LOG_ERR("Failed to allocate memory for enq callbacks");
+		return -ENOMEM;
+	}
+
+	dev->deq_cbs = rte_zmalloc(NULL,
+				   sizeof(struct rte_cryptodev_cb_rcu) *
+				   dev->data->nb_queue_pairs, 0);
+	if (dev->deq_cbs == NULL) {
+		CDEV_LOG_ERR("Failed to allocate memory for deq callbacks");
+		rte_free(dev->enq_cbs);
+		return -ENOMEM;
+	}
+
+	/* Create RCU QSBR variable */
+	size = rte_rcu_qsbr_get_memsize(max_threads);
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		list = &dev->enq_cbs[qp_id];
+		qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+		if (qsbr == NULL) {
+			CDEV_LOG_ERR("Failed to allocate memory for RCU on "
+				"queue_pair_id=%d", qp_id);
+			goto cb_init_err;
+		}
+
+		if (rte_rcu_qsbr_init(qsbr, max_threads)) {
+			CDEV_LOG_ERR("Failed to initialize for RCU on "
+				"queue_pair_id=%d", qp_id);
+			goto cb_init_err;
+		}
+
+		list->qsbr = qsbr;
+	}
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		list = &dev->deq_cbs[qp_id];
+		qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+		if (qsbr == NULL) {
+			CDEV_LOG_ERR("Failed to allocate memory for RCU on "
+				"queue_pair_id=%d", qp_id);
+			goto cb_init_err;
+		}
+
+		if (rte_rcu_qsbr_init(qsbr, max_threads)) {
+			CDEV_LOG_ERR("Failed to initialize for RCU on "
+				"queue_pair_id=%d", qp_id);
+			goto cb_init_err;
+		}
+
+		list->qsbr = qsbr;
+	}
+
+	return 0;
+
+cb_init_err:
+	cryptodev_cb_cleanup(dev);
+	return -ENOMEM;
+}
 
 const char *
 rte_cryptodev_get_feature_name(uint64_t flag)
@@ -927,6 +1043,10 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
 
+	rte_spinlock_lock(&rte_cryptodev_callback_lock);
+	cryptodev_cb_cleanup(dev);
+	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+
 	/* Setup new number of queue pairs and reconfigure device. */
 	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
 			config->socket_id);
@@ -936,11 +1056,18 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
 		return diag;
 	}
 
+	rte_spinlock_lock(&rte_cryptodev_callback_lock);
+	diag = cryptodev_cb_init(dev);
+	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+	if (diag) {
+		CDEV_LOG_ERR("Callback init failed for dev_id=%d", dev_id);
+		return diag;
+	}
+
 	rte_cryptodev_trace_configure(dev_id, config);
 	return (*dev->dev_ops->dev_configure)(dev, config);
 }
 
-
 int
 rte_cryptodev_start(uint8_t dev_id)
 {
@@ -1136,6 +1263,275 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 			socket_id);
 }
 
+struct rte_cryptodev_cb *
+rte_cryptodev_add_enq_callback(uint8_t dev_id,
+			       uint16_t qp_id,
+			       rte_cryptodev_callback_fn cb_fn,
+			       void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_cb_rcu *list;
+	struct rte_cryptodev_cb *cb, *tail;
+
+	if (!cb_fn) {
+		CDEV_LOG_ERR("Callback is NULL on dev_id=%d", dev_id);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		rte_errno = ENODEV;
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (qp_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+		rte_errno = ENODEV;
+		return NULL;
+	}
+
+	cb = rte_zmalloc(NULL, sizeof(*cb), 0);
+	if (cb == NULL) {
+		CDEV_LOG_ERR("Failed to allocate memory for callback on "
+			     "dev=%d, queue_pair_id=%d", dev_id, qp_id);
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	rte_spinlock_lock(&rte_cryptodev_callback_lock);
+
+	cb->fn = cb_fn;
+	cb->arg = cb_arg;
+
+	/* Add the callbacks in fifo order. */
+	list = &dev->enq_cbs[qp_id];
+	tail = list->next;
+
+	if (tail) {
+		while (tail->next)
+			tail = tail->next;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
+	} else {
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+
+	return cb;
+}
+
+int
+rte_cryptodev_remove_enq_callback(uint8_t dev_id,
+				  uint16_t qp_id,
+				  struct rte_cryptodev_cb *cb)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_cb **prev_cb, *curr_cb;
+	struct rte_cryptodev_cb_rcu *list;
+	int ret;
+
+	ret = -EINVAL;
+
+	if (!cb) {
+		CDEV_LOG_ERR("Callback is NULL");
+		return -EINVAL;
+	}
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return -ENODEV;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (qp_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+		return -ENODEV;
+	}
+
+	rte_spinlock_lock(&rte_cryptodev_callback_lock);
+	if (dev->enq_cbs == NULL) {
+		CDEV_LOG_ERR("Callback not initialized");
+		goto cb_err;
+	}
+
+	list = &dev->enq_cbs[qp_id];
+	if (list == NULL) {
+		CDEV_LOG_ERR("Callback list is NULL");
+		goto cb_err;
+	}
+
+	if (list->qsbr == NULL) {
+		CDEV_LOG_ERR("Rcu qsbr is NULL");
+		goto cb_err;
+	}
+
+	prev_cb = &list->next;
+	for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
+		curr_cb = *prev_cb;
+		if (curr_cb == cb) {
+			/* Remove the user cb from the callback list. */
+			__atomic_store_n(prev_cb, curr_cb->next,
+				__ATOMIC_RELAXED);
+			ret = 0;
+			break;
+		}
+	}
+
+	if (!ret) {
+		/* Call sync with invalid thread id as this is part of
+		 * control plane API
+		 */
+		rte_rcu_qsbr_synchronize(list->qsbr, RTE_QSBR_THRID_INVALID);
+		rte_free(cb);
+	}
+
+cb_err:
+	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+	return ret;
+}
+
+struct rte_cryptodev_cb *
+rte_cryptodev_add_deq_callback(uint8_t dev_id,
+			       uint16_t qp_id,
+			       rte_cryptodev_callback_fn cb_fn,
+			       void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_cb_rcu *list;
+	struct rte_cryptodev_cb *cb, *tail;
+
+	if (!cb_fn) {
+		CDEV_LOG_ERR("Callback is NULL on dev_id=%d", dev_id);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		rte_errno = ENODEV;
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (qp_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+		rte_errno = ENODEV;
+		return NULL;
+	}
+
+	cb = rte_zmalloc(NULL, sizeof(*cb), 0);
+	if (cb == NULL) {
+		CDEV_LOG_ERR("Failed to allocate memory for callback on "
+			     "dev=%d, queue_pair_id=%d", dev_id, qp_id);
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	rte_spinlock_lock(&rte_cryptodev_callback_lock);
+
+	cb->fn = cb_fn;
+	cb->arg = cb_arg;
+
+	/* Add the callbacks in fifo order. */
+	list = &dev->deq_cbs[qp_id];
+	tail = list->next;
+
+	if (tail) {
+		while (tail->next)
+			tail = tail->next;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
+	} else {
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+
+	return cb;
+}
+
+int
+rte_cryptodev_remove_deq_callback(uint8_t dev_id,
+				  uint16_t qp_id,
+				  struct rte_cryptodev_cb *cb)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_cb **prev_cb, *curr_cb;
+	struct rte_cryptodev_cb_rcu *list;
+	int ret;
+
+	ret = -EINVAL;
+
+	if (!cb) {
+		CDEV_LOG_ERR("Callback is NULL");
+		return -EINVAL;
+	}
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return -ENODEV;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (qp_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+		return -ENODEV;
+	}
+
+	rte_spinlock_lock(&rte_cryptodev_callback_lock);
+	if (dev->deq_cbs == NULL) {
+		CDEV_LOG_ERR("Callback not initialized");
+		goto cb_err;
+	}
+
+	list = &dev->deq_cbs[qp_id];
+	if (list == NULL) {
+		CDEV_LOG_ERR("Callback list is NULL");
+		goto cb_err;
+	}
+
+	if (list->qsbr == NULL) {
+		CDEV_LOG_ERR("Rcu qsbr is NULL");
+		goto cb_err;
+	}
+
+	prev_cb = &list->next;
+	for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
+		curr_cb = *prev_cb;
+		if (curr_cb == cb) {
+			/* Remove the user cb from the callback list. */
+			__atomic_store_n(prev_cb, curr_cb->next,
+				__ATOMIC_RELAXED);
+			ret = 0;
+			break;
+		}
+	}
+
+	if (!ret) {
+		/* Call sync with invalid thread id as this is part of
+		 * control plane API
+		 */
+		rte_rcu_qsbr_synchronize(list->qsbr, RTE_QSBR_THRID_INVALID);
+		rte_free(cb);
+	}
+
+cb_err:
+	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+	return ret;
+}
 
 int
 rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 0935fd587..ae34f33f6 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -23,6 +23,7 @@ extern "C" {
 #include "rte_dev.h"
 #include <rte_common.h>
 #include <rte_config.h>
+#include <rte_rcu_qsbr.h>
 
 #include "rte_cryptodev_trace_fp.h"
 
@@ -522,6 +523,30 @@ struct rte_cryptodev_qp_conf {
 	/**< The mempool for creating sess private data in sessionless mode */
 };
 
+/**
+ * Function type used for processing crypto ops when enqueue/dequeue burst is
+ * called.
+ *
+ * The callback function is called on enqueue/dequeue burst immediately.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which ops are
+ *				enqueued/dequeued. The value must be in the
+ *				range [0, nb_queue_pairs - 1] previously
+ *				supplied to *rte_cryptodev_configure*.
+ * @param	ops		The address of an array of *nb_ops* pointers
+ *				to *rte_crypto_op* structures which contain
+ *				the crypto operations to be processed.
+ * @param	nb_ops		The number of operations to process.
+ * @param	user_param	The arbitrary user parameter passed in by the
+ *				application when the callback was originally
+ *				registered.
+ * @return			The number of ops to be enqueued to the
+ *				crypto device.
+ */
+typedef uint16_t (*rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops, void *user_param);
+
 /**
  * Typedef for application callback function to be registered by application
  * software for notification of device events
@@ -822,7 +847,6 @@ rte_cryptodev_callback_unregister(uint8_t dev_id,
 		enum rte_cryptodev_event_type event,
 		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
 
-
 typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
 		struct rte_crypto_op **ops,	uint16_t nb_ops);
 /**< Dequeue processed packets from queue pair of a device. */
@@ -839,6 +863,30 @@ struct rte_cryptodev_callback;
 /** Structure to keep track of registered callbacks */
 TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
 
+/**
+ * Structure used to hold information about the callbacks to be called for a
+ * queue pair on enqueue/dequeue.
+ */
+struct rte_cryptodev_cb {
+	struct rte_cryptodev_cb *next;
+	/**< Pointer to next callback */
+	rte_cryptodev_callback_fn fn;
+	/**< Pointer to callback function */
+	void *arg;
+	/**< Pointer to argument */
+};
+
+/**
+ * @internal
+ * Structure used to hold information about the RCU for a queue pair.
+ */
+struct rte_cryptodev_cb_rcu {
+	struct rte_cryptodev_cb *next;
+	/**< Pointer to next callback */
+	struct rte_rcu_qsbr *qsbr;
+	/**< RCU QSBR variable per queue pair */
+};
+
 /** The data structure associated with each crypto device. */
 struct rte_cryptodev {
 	dequeue_pkt_burst_t dequeue_burst;
@@ -867,6 +915,12 @@ struct rte_cryptodev {
 	__extension__
 	uint8_t attached : 1;
 	/**< Flag indicating the device is attached */
+
+	struct rte_cryptodev_cb_rcu *enq_cbs;
+	/**< User application callback for pre enqueue processing */
+
+	struct rte_cryptodev_cb_rcu *deq_cbs;
+	/**< User application callback for post dequeue processing */
 } __rte_cache_aligned;
 
 void *
@@ -945,10 +999,33 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 {
 	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
 
+	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
 	nb_ops = (*dev->dequeue_burst)
 			(dev->data->queue_pairs[qp_id], ops, nb_ops);
-
-	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
+#ifdef RTE_CRYPTO_CALLBACKS
+	if (unlikely(dev->deq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->deq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		};
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
 	return nb_ops;
 }
 
@@ -989,6 +1066,31 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 {
 	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
 
+#ifdef RTE_CRYPTO_CALLBACKS
+	if (unlikely(dev->enq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->enq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		};
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+
 	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
 	return (*dev->enqueue_burst)(
 			dev->data->queue_pairs[qp_id], ops, nb_ops);
@@ -1730,6 +1832,144 @@ int
 rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
 		uint32_t n);
 
+/**
+ * Add a user callback for a given crypto device and queue pair which will be
+ * called on crypto ops enqueue.
+ *
+ * This API configures a function to be called for each burst of crypto ops
+ * received on a given crypto device queue pair. The return value is a pointer
+ * that can be used later to remove the callback using
+ * rte_cryptodev_remove_enq_callback().
+ *
+ * Callbacks registered by the application do not survive
+ * rte_cryptodev_configure() as it reinitializes the callback list.
+ * It is the user's responsibility to remove all installed callbacks before
+ * calling rte_cryptodev_configure() to avoid possible memory leaks.
+ * The application is expected to call the add API after rte_cryptodev_configure().
+ *
+ * Multiple functions can be registered per queue pair and they are called
+ * in the order they were added. The API does not restrict the maximum number
+ * of callbacks.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which ops are
+ *				to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				*rte_cryptodev_configure*.
+ * @param	cb_fn		The callback function
+ * @param	cb_arg		A generic pointer parameter which will be passed
+ *				to each invocation of the callback function on
+ *				this crypto device and queue pair.
+ *
+ * @return
+ *  - NULL on error & rte_errno will contain the error code.
+ *  - On success, a pointer value which can later be used to remove the
+ *    callback.
+ */
+
+__rte_experimental
+struct rte_cryptodev_cb *
+rte_cryptodev_add_enq_callback(uint8_t dev_id,
+			       uint16_t qp_id,
+			       rte_cryptodev_callback_fn cb_fn,
+			       void *cb_arg);
+
+/**
+ * Remove a user callback function for given crypto device and queue pair.
+ *
+ * This function is used to remove enqueue callbacks that were added to a
+ * crypto device queue pair using rte_cryptodev_add_enq_callback().
+ *
+ *
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which ops are
+ *				to be enqueued. The value must be in the
+ *				range [0, nb_queue_pairs - 1] previously
+ *				supplied to *rte_cryptodev_configure*.
+ * @param	cb		Pointer to user supplied callback created via
+ *				rte_cryptodev_add_enq_callback().
+ *
+ * @return
+ *   -  0: Success. Callback was removed.
+ *   - <0: The dev_id or the qp_id is out of range, or the callback
+ *         is NULL or not found for the crypto device queue pair.
+ */
+
+__rte_experimental
+int rte_cryptodev_remove_enq_callback(uint8_t dev_id,
+				      uint16_t qp_id,
+				      struct rte_cryptodev_cb *cb);
+
+/**
+ * Add a user callback for a given crypto device and queue pair which will be
+ * called on crypto ops dequeue.
+ *
+ * This API configures a function to be called for each burst of crypto ops
+ * received on a given crypto device queue pair. The return value is a pointer
+ * that can be used later to remove the callback using
+ * rte_cryptodev_remove_deq_callback().
+ *
+ * Callbacks registered by the application do not survive
+ * rte_cryptodev_configure() as it reinitializes the callback list.
+ * It is the user's responsibility to remove all installed callbacks before
+ * calling rte_cryptodev_configure() to avoid possible memory leaks.
+ * The application is expected to call the add API after rte_cryptodev_configure().
+ *
+ * Multiple functions can be registered per queue pair and they are called
+ * in the order they were added. The API does not restrict the maximum number
+ * of callbacks.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which ops are
+ *				to be dequeued. The value must be in the
+ *				range [0, nb_queue_pairs - 1] previously
+ *				supplied to *rte_cryptodev_configure*.
+ * @param	cb_fn		The callback function
+ * @param	cb_arg		A generic pointer parameter which will be passed
+ *				to each invocation of the callback function on
+ *				this crypto device and queue pair.
+ *
+ * @return
+ *   - NULL on error & rte_errno will contain the error code.
+ *   - On success, a pointer value which can later be used to remove the
+ *     callback.
+ */
+
+__rte_experimental
+struct rte_cryptodev_cb *
+rte_cryptodev_add_deq_callback(uint8_t dev_id,
+			       uint16_t qp_id,
+			       rte_cryptodev_callback_fn cb_fn,
+			       void *cb_arg);
+
+/**
+ * Remove a user callback function for given crypto device and queue pair.
+ *
+ * This function is used to remove dequeue callbacks that were added to a
+ * crypto device queue pair using rte_cryptodev_add_deq_callback().
+ *
+ *
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which ops are
+ *				to be dequeued. The value must be in the
+ *				range [0, nb_queue_pairs - 1] previously
+ *				supplied to *rte_cryptodev_configure*.
+ * @param	cb		Pointer to user supplied callback created via
+ *				rte_cryptodev_add_deq_callback().
+ *
+ * @return
+ *   -  0: Success. Callback was removed.
+ *   - <0: The dev_id or the qp_id is out of range, or the callback
+ *         is NULL or not found for the crypto device queue pair.
+ */
+__rte_experimental
+int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
+				      uint16_t qp_id,
+				      struct rte_cryptodev_cb *cb);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_cryptodev/version.map b/lib/librte_cryptodev/version.map
index 7e4360ff0..9f04737ae 100644
--- a/lib/librte_cryptodev/version.map
+++ b/lib/librte_cryptodev/version.map
@@ -109,4 +109,11 @@ EXPERIMENTAL {
 	rte_cryptodev_raw_enqueue;
 	rte_cryptodev_raw_enqueue_burst;
 	rte_cryptodev_raw_enqueue_done;
+
+	# added in 21.02
+	rte_cryptodev_add_deq_callback;
+	rte_cryptodev_add_enq_callback;
+	rte_cryptodev_remove_deq_callback;
+	rte_cryptodev_remove_enq_callback;
+
 };
-- 
2.25.1


^ permalink raw reply	[relevance 2%]
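For illustration, here is a minimal sketch of how an application might use the
enqueue callback API added by the patch above. The rte_cryptodev_callback_fn
prototype is assumed to take (dev_id, qp_id, ops, nb_ops, user_param); the
device id, queue pair id and callback body are illustrative, not part of the
patch.

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_cryptodev.h>
    #include <rte_errno.h>

    /* Illustrative callback: count how many ops each enqueue burst carries. */
    static uint16_t
    count_enq_cb(uint16_t dev_id, uint16_t qp_id, struct rte_crypto_op **ops,
                 uint16_t nb_ops, void *user_param)
    {
        uint64_t *counter = user_param;

        RTE_SET_USED(dev_id);
        RTE_SET_USED(qp_id);
        RTE_SET_USED(ops);
        *counter += nb_ops;
        return nb_ops; /* assumed: number of ops to pass on to the enqueue */
    }

    static uint64_t enq_count;
    static struct rte_cryptodev_cb *enq_cb;

    /* Register after rte_cryptodev_configure() and queue pair setup. */
    static int
    register_enq_counter(uint8_t dev_id, uint16_t qp_id)
    {
        enq_cb = rte_cryptodev_add_enq_callback(dev_id, qp_id,
                        count_enq_cb, &enq_count);
        if (enq_cb == NULL) {
            printf("add_enq_callback failed: %d\n", rte_errno);
            return -1;
        }
        return 0;
    }

    /* Must run before the next rte_cryptodev_configure() call or at teardown. */
    static void
    unregister_enq_counter(uint8_t dev_id, uint16_t qp_id)
    {
        rte_cryptodev_remove_enq_callback(dev_id, qp_id, enq_cb);
    }

The dequeue side (rte_cryptodev_add_deq_callback()/rte_cryptodev_remove_deq_callback())
follows the same pattern.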

* Re: [dpdk-dev] [RFC] mem_debug add more log
  @ 2020-12-21 18:44  3%     ` Stephen Hemminger
  2020-12-25  7:20  3%       ` Peng, ZhihongX
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2020-12-21 18:44 UTC (permalink / raw)
  To: Peng, ZhihongX
  Cc: Wang, Haiyue, Zhang, Qi Z, Xing, Beilei, dev, Lin, Xueqin, Yu, PingX

On Mon, 21 Dec 2020 07:35:10 +0000
"Peng, ZhihongX" <zhihongx.peng@intel.com> wrote:

> 1. I think this implementation doesn't add significant overhead. Overhead will only occur in rte_malloc and rte_free.
> 
> 2. The existing address sanitizer infrastructure only supports libc malloc.
> 
> Regards,
> Peng,Zhihong
> 
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org> 
> Sent: Saturday, December 19, 2020 2:54 AM
> To: Peng, ZhihongX <zhihongx.peng@intel.com>
> Cc: Wang, Haiyue <haiyue.wang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [RFC] mem_debug add more log
> 
> On Fri, 18 Dec 2020 14:21:09 -0500
> Peng Zhihong <zhihongx.peng@intel.com> wrote:
> 
> > 1. The debugging log in current DPDK RTE_MALLOC_DEBUG mode is insufficient,
> >    which makes it difficult to locate the issues, such as:
> >    a) When a memory overflow occurs in rte_free, there is little log
> >       information. Even if we abort here, we can find which API caused
> >       the core dump, but we still need to read the source code to find
> >       out where the requested memory was overflowed.
> >    b) Current DPDK can NOT detect the overflow if the memory has been
> >       used and not yet released.
> >    c) If there are two pieces of contiguous memory, when the first block
> >       is not released and an overflow occurs that also covers the second
> >       block of memory, a memory overflow will be detected once the second
> >       block of memory is released. However, current DPDK can not find the
> >       correct point of the memory overflow. It only detects the overflow
> >       on the second block but should detect the one on the first block.
> >       ----------------------------------------------------------------------------------
> >       | header cookie | data1 | trailer cookie | header cookie | data2 |trailer cookie |
> >       ----------------------------------------------------------------------------------
> > 2. To fix the above issues, we can store the request information when DPDK
> >    requests memory, including the requested address and the requesting
> >    file, function and line number, and then put it into a list.
> >    --------------------     ----------------------     ----------------------
> >    | struct list_head |---->| struct malloc_info |---->| struct malloc_info |
> >    --------------------     ----------------------     ----------------------
> >    The above 3 problems can be solved through this implementation:
> >    a) If there is a memory overflow in rte_free, you can traverse the
> >       list to find the information of the overflowed memory and print
> >       it, like this:
> >       code:
> >       37         char *p = rte_zmalloc(NULL, 64, 0);
> >       38         memset(p, 0, 65);
> >       39         rte_free(p);
> >       40         //rte_malloc_validate_all_memory();
> >       memory error:
> >       EAL: Error: Invalid memory
> >       malloc memory address 0x17ff2c340 overflow in \
> >       file:../examples/helloworld/main.c function:main line:37
> >    b)c) Provide an interface to check all memory overflows in the function
> >       rte_malloc_validate_all_memory; this function will check all
> >       memory on the list. By calling this function manually at the exit
> >       point of the business logic, we can find all overflow points in time.
> > 
> > Signed-off-by: Peng Zhihong <zhihongx.peng@intel.com>  
> 
> Good concept, but doesn't this add significant overhead?
> 
> Maybe we could make rte_malloc work with existing address sanitizer infrastructure in gcc/clang?  That would provide faster and more immediate better diagnostic info.

Everybody builds their own custom debug hooks, and some of these are worth sharing.
But a lot of the time debug code becomes technical debt, creates API/ABI issues and
causes more trouble than it is worth.

Therefore my desire is for DPDK to be better supported by standard tools such
as valgrind and address sanitizer. The standard tools catch more errors faster and
do not create project maintenance workload.

See:
https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm
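For instance, building with the compiler's address sanitizer only needs the
standard meson toggle (illustrative invocation; how cleanly DPDK runs under it
is a separate question):

    meson setup build -Db_sanitize=address
    ninja -C build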




^ permalink raw reply	[relevance 3%]
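As a rough illustration of the bookkeeping the RFC above proposes (a list of
per-allocation records walked in rte_free() or on demand), a minimal sketch
follows. The structure layout, cookie value and function names here are
hypothetical, not the proposed implementation:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/queue.h>

    #define TRAILER_COOKIE 0xdeadbeefdeadbeefULL /* hypothetical marker value */

    /* Hypothetical per-allocation record, as described in the RFC. */
    struct malloc_info {
        TAILQ_ENTRY(malloc_info) next;
        void *addr;          /* address handed back to the caller */
        size_t size;         /* requested size */
        const char *file;    /* requesting file */
        const char *func;    /* requesting function */
        int line;            /* requesting line */
    };

    static TAILQ_HEAD(, malloc_info) info_list =
        TAILQ_HEAD_INITIALIZER(info_list);

    /* Walk every live allocation and check its trailer cookie, so an overflow
     * is reported against the block that was overrun, not against the
     * neighbouring block that happens to be freed later. */
    static void
    validate_all_memory(void)
    {
        struct malloc_info *info;

        TAILQ_FOREACH(info, &info_list, next) {
            uint64_t trailer;

            memcpy(&trailer, (char *)info->addr + info->size, sizeof(trailer));
            if (trailer != TRAILER_COOKIE)
                printf("overflow at %p from %s:%s():%d\n",
                       info->addr, info->file, info->func, info->line);
        }
    }

Each rte_malloc()/rte_zmalloc() wrapper would append such a record, and
rte_free() would both check the freed block and drop its record from the list.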

* Re: [dpdk-dev] [PATCH 00/40] net/virtio: Virtio PMD rework
  2020-12-20 21:13  2% [dpdk-dev] [PATCH 00/40] net/virtio: Virtio PMD rework Maxime Coquelin
  @ 2020-12-21 10:58  0% ` Maxime Coquelin
  1 sibling, 0 replies; 200+ results
From: Maxime Coquelin @ 2020-12-21 10:58 UTC (permalink / raw)
  To: dev, chenbo.xia, olivier.matz, amorenoz, david.marchand



On 12/20/20 10:13 PM, Maxime Coquelin wrote:
> This series significantly reworks the Virtio PMD to improve
> the Virtio-user PMD and its backends integration.
> 
> First part of the series (first 21 patches) removes the
> dependency of Virtio-user ethdev on Virtio PCI, by
> creating generic files, adding per-bus meta data, ...
> 
> The main (if not only) functional change of this first
> part is to remove the hack for Virtio-user to work in
> IOVA as PA mode, this hack being very fragile. Now, the
> user has to manually pass --iova-mode=va in EAL
> parameters, otherwise vdev probe will fail. In v21.11,
> when ABI/API can be changed, I will add vdev driver
> flags so that the Virtio-user PMD can request IOVA as VA
> mode to be used.
> 
> The second part of the series reworks the Virtio-user internals,
> by reworking the requests handling so that vDPA and Kernel
> backends no longer hack into being a Vhost-user backend. It
> implies implementing new ops for all the request types.
> Also, all the backend specific actions are moved from the
> virtio_user_dev.c and virtio_user_ethdev.c to their
> backend files.
> 
> The only functional change in this second part is making the
> Vhost-user server mode blocking at init time, as long as
> a client is not connected. The goal of this change is to
> make the Vhost-user support much more robust, as without
> blocking, the driver has to assume features that are going
> to be supported by the client, which is very fragile and
> error prone. As a side-effect, it also simplifies the
> logic in several places of the virtio-user PMD.
> 
> Please note that I haven't tested the last 5 patches yet,
> I will conduct more testing early next week.

I forgot to add the remaining things to do in next release:
1. More testing
2. Rebase on top of Vhost-vDPA batching support
3. Rebase on top of Olivier's protocol features fix
> 4. (Maybe) Loosen restrictions on IOVA as VA mode, by making the Vhost
>    backend use IOVAs instead of VAs directly, but still warn that IOVA
>    as VA mode is advised to ensure init won't fail.

Regards,
Maxime


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH 00/40] net/virtio: Virtio PMD rework
@ 2020-12-20 21:13  2% Maxime Coquelin
    2020-12-21 10:58  0% ` [dpdk-dev] [PATCH 00/40] net/virtio: Virtio PMD rework Maxime Coquelin
  0 siblings, 2 replies; 200+ results
From: Maxime Coquelin @ 2020-12-20 21:13 UTC (permalink / raw)
  To: dev, chenbo.xia, olivier.matz, amorenoz, david.marchand; +Cc: Maxime Coquelin

This series significantly reworks the Virtio PMD to improve
the Virtio-user PMD and its backends integration.

First part of the series (first 21 patches) removes the
dependency of Virtio-user ethdev on Virtio PCI, by
creating generic files, adding per-bus meta data, ...

The main (if not only) functional change of this first
part is to remove the hack for Virtio-user to work in
IOVA as PA mode, this hack being very fragile. Now, the
user has to manually pass --iova-mode=va in EAL
parameters, otherwise vdev probe will fail. In v21.11,
when ABI/API can be changed, I will add vdev driver
flags so that the Virtio-user PMD can request IOVA as VA
mode to be used.

The second part of the series reworks the Virtio-user internals,
by reworking the requests handling so that vDPA and Kernel
backends no longer hack into being a Vhost-user backend. It
implies implementing new ops for all the request types.
Also, all the backend specific actions are moved from the
virtio_user_dev.c and virtio_user_ethdev.c to their
backend files.

The only functional change in this second part is making the
Vhost-user server mode blocking at init time, as long as
a client is not connected. The goal of this change is to
make the Vhost-user support much more robust, as without
blocking, the driver has to assume features that are going
to be supported by the client, which is very fragile and
error prone. As a side-effect, it also simplifies the
logic in several places of the virtio-user PMD.

Please note that I haven't tested the last 5 patches yet,
I will conduct more testing early next week.

Maxime Coquelin (40):
  bus/vdev: add helper to get vdev from eth dev
  net/virtio: Introduce Virtio bus type
  net/virtio: refactor virtio-user device
  net/virtio: introduce PCI device metadata
  net/virtio: move PCI device init in dedicated file
  net/virtio: move PCI specific dev init to PCI ethdev init
  net/virtio: move MSIX detection to PCI ethdev
  net/virtio: force IOVA as VA mode for Virtio-user
  net/virtio: store PCI type in Virtio device metadata
  net/virtio: add callback for device closing
  net/virtio: validate features at bus level
  net/virtio: remove bus type enum
  net/virtio: move PCI-specific fields to PCI device
  net/virtio: pack virtio HW struct
  net/virtio: move legacy IO to Virtio PCI
  net/virtio: introduce generic virtio header
  net/virtio: move features definition to generic header
  net/virtio: move virtqueue defines in generic header
  net/virtio: move config definitions to generic header
  net/virtio: make interrupt handling more generic
  net/virtio: move vring alignment to generic header
  net/virtio: remove last PCI refs in non-PCI code
  net/virtio: make Vhost-user req sender consistent
  net/virtio: add Virtio-user ops to set owner
  net/virtio: add Virtio-user features ops
  net/virtio: add Virtio-user protocol features ops
  net/virtio: add Virtio-user memory tables ops
  net/virtio: add Virtio-user vring setting ops
  net/virtio: add Virtio-user vring file ops
  net/virtio: add Virtio-user vring address ops
  net/virtio: add Virtio-user status ops
  net/virtio: remove useless request ops
  net/virtio: improve Virtio-user errors handling
  net/virtio: move Vhost-user reqs to Vhost-user backend
  net/virtio: make server mode blocking
  net/virtio: move protocol features to Vhost-user
  net/virtio: introduce backend data
  net/virtio: move Vhost-user specifics to its backend
  net/virtio: move Vhost-kernel data to its backend
  net/virtio: move Vhost-vDPA data to its backend

 drivers/bus/vdev/rte_bus_vdev.h               |   2 +
 drivers/net/virtio/meson.build                |   6 +-
 drivers/net/virtio/virtio.c                   |  71 ++
 drivers/net/virtio/virtio.h                   | 247 ++++++
 drivers/net/virtio/virtio_ethdev.c            | 441 +++------
 drivers/net/virtio/virtio_ethdev.h            |   5 +-
 drivers/net/virtio/virtio_pci.c               | 399 +++++----
 drivers/net/virtio/virtio_pci.h               | 286 +-----
 drivers/net/virtio/virtio_pci_ethdev.c        | 225 +++++
 drivers/net/virtio/virtio_ring.h              |   2 +-
 drivers/net/virtio/virtio_rxtx.c              |  90 +-
 drivers/net/virtio/virtio_rxtx_packed_avx.c   |  18 +-
 drivers/net/virtio/virtio_rxtx_simple.h       |   3 +-
 drivers/net/virtio/virtio_user/vhost.h        |  80 +-
 drivers/net/virtio/virtio_user/vhost_kernel.c | 435 ++++++---
 .../net/virtio/virtio_user/vhost_kernel_tap.c |  25 +-
 .../net/virtio/virtio_user/vhost_kernel_tap.h |   1 +
 drivers/net/virtio/virtio_user/vhost_user.c   | 835 ++++++++++++++----
 drivers/net/virtio/virtio_user/vhost_vdpa.c   | 257 ++++--
 .../net/virtio/virtio_user/virtio_user_dev.c  | 490 +++++-----
 .../net/virtio/virtio_user/virtio_user_dev.h  |  22 +-
 drivers/net/virtio/virtio_user_ethdev.c       | 304 ++-----
 drivers/net/virtio/virtqueue.c                |   6 +-
 drivers/net/virtio/virtqueue.h                |  41 +-
 24 files changed, 2481 insertions(+), 1810 deletions(-)
 create mode 100644 drivers/net/virtio/virtio.c
 create mode 100644 drivers/net/virtio/virtio.h
 create mode 100644 drivers/net/virtio/virtio_pci_ethdev.c

-- 
2.29.2


^ permalink raw reply	[relevance 2%]
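A note on the IOVA mode change described in the cover letter above: with this
series, starting a virtio-user port requires requesting VA IOVA mode
explicitly, for example (the vdev arguments below are illustrative and depend
on the chosen backend):

    dpdk-testpmd --iova-mode=va \
        --vdev=virtio_user0,path=/tmp/vhost-user.sock,server=1 -- -i

Without --iova-mode=va, the vdev probe is expected to fail as stated in the
cover letter.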

* [dpdk-dev] [PATCH] ci: fix package installation in GitHub Actions
@ 2020-12-19  8:26  4% David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-12-19  8:26 UTC (permalink / raw)
  To: dev; +Cc: Aaron Conole, Michael Santana, Thomas Monjalon

APT cache must be updated to avoid trying to install an unavailable
version of a package.

Fixes: 87009585e293 ("ci: hook to GitHub Actions")

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
I did not find a way for the update to be done by GHA itself,
so adding an explicit step.

The robot hits this issue on all 32-bit builds at the moment.
I will apply this quickly.

---
 .github/workflows/build.yml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 05eb59527f..0b72df0ebe 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -87,6 +87,8 @@ jobs:
       with:
         path: reference
         key: ${{ steps.get_ref_keys.outputs.abi }}
+    - name: Update APT cache
+      run: sudo apt update
     - name: Install packages
       run: sudo apt install -y ccache libnuma-dev python3-setuptools
         python3-wheel python3-pip ninja-build libbsd-dev libpcap-dev
-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v12 00/11] Add PMD power management
      @ 2020-12-17 16:12  3% ` David Marchand
  2021-01-08 16:42  0%   ` Burakov, Anatoly
    3 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-12-17 16:12 UTC (permalink / raw)
  To: Anatoly Burakov
  Cc: dev, Thomas Monjalon, Ananyev, Konstantin, Gage Eads,
	Timothy McDaniel, David Hunt, Bruce Richardson, chris.macnamara,
	Ray Kinsella, Yigit, Ferruh

On Thu, Dec 17, 2020 at 3:06 PM Anatoly Burakov
<anatoly.burakov@intel.com> wrote:
>
> This patchset proposes a simple API for Ethernet drivers to cause the
> CPU to enter a power-optimized state while waiting for packets to
> arrive. This is achieved through cooperation with the NIC driver, which
> allows us to know the address of the wake-up event, and to wait for writes
> on it.
>
> On IA, this is achieved through using UMONITOR/UMWAIT instructions. They
> are used in their raw opcode form because there is no widespread
> compiler support for them yet. Still, the API is made generic enough to
> hopefully support other architectures, if they happen to implement
> similar instructions.
>
> To achieve power savings, there is a very simple mechanism used: we're
> counting empty polls, and if a certain threshold is reached, we get the
> address of the next RX ring descriptor from the NIC driver, arm the
> monitoring hardware, and enter a power-optimized state. We will then
> wake up when either a timeout happens, or a write happens (or generally
> whenever CPU feels like waking up - this is platform-specific), and
> proceed as normal. The empty poll counter is reset whenever we actually
> get packets, so we only go to sleep when we know nothing is going on.
> The mechanism is generic which can be used for any write back
> descriptor.
>
> This patchset also introduces a few changes into existing power
> management-related intrinsics, namely to provide a native way of waking
> up a sleeping core without application being responsible for it, as well
> as general robustness improvements. There's quite a bit of locking going
> on, but these locks are per-thread and very little (if any) contention
> is expected, so the performance impact shouldn't be that bad (and in any
> case the locking happens when we're about to sleep anyway, not on a
> hotpath).
>
> Why are we putting it into ethdev as opposed to leaving this up to the
> application? Our customers specifically requested a way to do it with
> minimal changes to the application code. The current approach allows us to
> just flip a switch and automatically have power savings.
>
> - Only 1:1 core to queue mapping is supported, meaning that each lcore
>   must at most handle RX on a single queue
> - Supports 3 types of policies: Monitor/Pause/Frequency Scaling
> - Power management is enabled per-queue
> - The API doesn't extend to other device types

Fyi, ovsrobot Travis being KO, you probably missed that GHA CI caught this:
https://github.com/ovsrobot/dpdk/runs/1571056574?check_suite_focus=true#step:13:16082

We will have to put an exception on driver only ABI.


-- 
David Marchand


^ permalink raw reply	[relevance 3%]
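To make the mechanism described in the quoted cover letter more concrete, here
is a rough sketch of the per-queue logic. Apart from rte_power_monitor() and
rte_get_tsc_cycles(), every name below (the structure, EMPTYPOLL_MAX,
after_rx_burst) is a placeholder, not the actual API of the series:

    #include <stdint.h>
    #include <rte_cycles.h>
    #include <rte_power_intrinsics.h>

    #define EMPTYPOLL_MAX 512 /* placeholder threshold */

    struct q_pwr_state {
        uint64_t empty_polls;   /* consecutive polls that returned no packets */
        volatile void *addr;    /* next descriptor write-back address (from the PMD) */
        uint64_t expected;      /* value of *addr meaning "work available" */
        uint64_t mask;          /* bits of *addr to compare */
        uint8_t data_sz;        /* size of the monitored word */
        uint64_t max_sleep;     /* maximum sleep duration, in TSC cycles */
    };

    static void
    after_rx_burst(struct q_pwr_state *s, uint16_t nb_rx)
    {
        if (nb_rx != 0) {
            s->empty_polls = 0;               /* traffic present, stay awake */
            return;
        }
        if (++s->empty_polls < EMPTYPOLL_MAX) /* not idle for long enough yet */
            return;

        /* Arm the monitor on the descriptor address and sleep until the NIC
         * writes it, the value already matches, or the timeout expires. */
        rte_power_monitor(s->addr, s->expected, s->mask,
                rte_get_tsc_cycles() + s->max_sleep, s->data_sz);
        s->empty_polls = 0;
    }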

* [dpdk-dev] [PATCH v12 01/11] eal: uninline power intrinsics
  @ 2020-12-17 14:05  2%   ` Anatoly Burakov
  0 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2020-12-17 14:05 UTC (permalink / raw)
  To: dev
  Cc: Jan Viktorin, Ruifeng Wang, Jerin Jacob, David Christensen,
	Ray Kinsella, Neil Horman, Bruce Richardson, Konstantin Ananyev,
	thomas, gage.eads, timothy.mcdaniel, david.hunt, chris.macnamara

Currently, power intrinsics are inline functions. Make them part of the
ABI so that we can have various internal data associated with them
without exposing said data to the outside world.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 .../arm/include/rte_power_intrinsics.h        |   6 +-
 .../include/generic/rte_power_intrinsics.h    |   6 +-
 .../ppc/include/rte_power_intrinsics.h        |   6 +-
 lib/librte_eal/version.map                    |   5 +
 .../x86/include/rte_power_intrinsics.h        | 115 -----------------
 lib/librte_eal/x86/meson.build                |   1 +
 lib/librte_eal/x86/rte_power_intrinsics.c     | 120 ++++++++++++++++++
 7 files changed, 135 insertions(+), 124 deletions(-)
 create mode 100644 lib/librte_eal/x86/rte_power_intrinsics.c

diff --git a/lib/librte_eal/arm/include/rte_power_intrinsics.h b/lib/librte_eal/arm/include/rte_power_intrinsics.h
index a4a1bc1159..5e384d380e 100644
--- a/lib/librte_eal/arm/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/arm/include/rte_power_intrinsics.h
@@ -16,7 +16,7 @@ extern "C" {
 /**
  * This function is not supported on ARM.
  */
-static inline void
+void
 rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 		const uint64_t value_mask, const uint64_t tsc_timestamp,
 		const uint8_t data_sz)
@@ -31,7 +31,7 @@ rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 /**
  * This function is not supported on ARM.
  */
-static inline void
+void
 rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 		const uint64_t value_mask, const uint64_t tsc_timestamp,
 		const uint8_t data_sz, rte_spinlock_t *lck)
@@ -47,7 +47,7 @@ rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 /**
  * This function is not supported on ARM.
  */
-static inline void
+void
 rte_power_pause(const uint64_t tsc_timestamp)
 {
 	RTE_SET_USED(tsc_timestamp);
diff --git a/lib/librte_eal/include/generic/rte_power_intrinsics.h b/lib/librte_eal/include/generic/rte_power_intrinsics.h
index dd520d90fa..67977bd511 100644
--- a/lib/librte_eal/include/generic/rte_power_intrinsics.h
+++ b/lib/librte_eal/include/generic/rte_power_intrinsics.h
@@ -52,7 +52,7 @@
  *   to undefined result.
  */
 __rte_experimental
-static inline void rte_power_monitor(const volatile void *p,
+void rte_power_monitor(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz);
 
@@ -97,7 +97,7 @@ static inline void rte_power_monitor(const volatile void *p,
  *   wakes up.
  */
 __rte_experimental
-static inline void rte_power_monitor_sync(const volatile void *p,
+void rte_power_monitor_sync(const volatile void *p,
 		const uint64_t expected_value, const uint64_t value_mask,
 		const uint64_t tsc_timestamp, const uint8_t data_sz,
 		rte_spinlock_t *lck);
@@ -118,6 +118,6 @@ static inline void rte_power_monitor_sync(const volatile void *p,
  *   architecture-dependent.
  */
 __rte_experimental
-static inline void rte_power_pause(const uint64_t tsc_timestamp);
+void rte_power_pause(const uint64_t tsc_timestamp);
 
 #endif /* _RTE_POWER_INTRINSIC_H_ */
diff --git a/lib/librte_eal/ppc/include/rte_power_intrinsics.h b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
index 4ed03d521f..4cb5560c02 100644
--- a/lib/librte_eal/ppc/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/ppc/include/rte_power_intrinsics.h
@@ -16,7 +16,7 @@ extern "C" {
 /**
  * This function is not supported on PPC64.
  */
-static inline void
+void
 rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 		const uint64_t value_mask, const uint64_t tsc_timestamp,
 		const uint8_t data_sz)
@@ -31,7 +31,7 @@ rte_power_monitor(const volatile void *p, const uint64_t expected_value,
 /**
  * This function is not supported on PPC64.
  */
-static inline void
+void
 rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 		const uint64_t value_mask, const uint64_t tsc_timestamp,
 		const uint8_t data_sz, rte_spinlock_t *lck)
@@ -47,7 +47,7 @@ rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
 /**
  * This function is not supported on PPC64.
  */
-static inline void
+void
 rte_power_pause(const uint64_t tsc_timestamp)
 {
 	RTE_SET_USED(tsc_timestamp);
diff --git a/lib/librte_eal/version.map b/lib/librte_eal/version.map
index 354c068f31..31bf76ae81 100644
--- a/lib/librte_eal/version.map
+++ b/lib/librte_eal/version.map
@@ -403,6 +403,11 @@ EXPERIMENTAL {
 	rte_service_lcore_may_be_active;
 	rte_vect_get_max_simd_bitwidth;
 	rte_vect_set_max_simd_bitwidth;
+
+	# added in 21.02
+	rte_power_monitor;
+	rte_power_monitor_sync;
+	rte_power_pause;
 };
 
 INTERNAL {
diff --git a/lib/librte_eal/x86/include/rte_power_intrinsics.h b/lib/librte_eal/x86/include/rte_power_intrinsics.h
index c7d790c854..e4c2b87f73 100644
--- a/lib/librte_eal/x86/include/rte_power_intrinsics.h
+++ b/lib/librte_eal/x86/include/rte_power_intrinsics.h
@@ -13,121 +13,6 @@ extern "C" {
 
 #include "generic/rte_power_intrinsics.h"
 
-static inline uint64_t
-__rte_power_get_umwait_val(const volatile void *p, const uint8_t sz)
-{
-	switch (sz) {
-	case sizeof(uint8_t):
-		return *(const volatile uint8_t *)p;
-	case sizeof(uint16_t):
-		return *(const volatile uint16_t *)p;
-	case sizeof(uint32_t):
-		return *(const volatile uint32_t *)p;
-	case sizeof(uint64_t):
-		return *(const volatile uint64_t *)p;
-	default:
-		/* this is an intrinsic, so we can't have any error handling */
-		RTE_ASSERT(0);
-		return 0;
-	}
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-}
-
-/**
- * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
- * For more information about usage of these instructions, please refer to
- * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
-		const uint64_t value_mask, const uint64_t tsc_timestamp,
-		const uint8_t data_sz, rte_spinlock_t *lck)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-	/*
-	 * we're using raw byte codes for now as only the newest compiler
-	 * versions support this instruction natively.
-	 */
-
-	/* set address for UMONITOR */
-	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
-			:
-			: "D"(p));
-
-	if (value_mask) {
-		const uint64_t cur_value = __rte_power_get_umwait_val(p, data_sz);
-		const uint64_t masked = cur_value & value_mask;
-
-		/* if the masked value is already matching, abort */
-		if (masked == expected_value)
-			return;
-	}
-	rte_spinlock_unlock(lck);
-
-	/* execute UMWAIT */
-	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
-			: /* ignore rflags */
-			: "D"(0), /* enter C0.2 */
-			  "a"(tsc_l), "d"(tsc_h));
-
-	rte_spinlock_lock(lck);
-}
-
-/**
- * This function uses TPAUSE instruction  and will enter C0.2 state. For more
- * information about usage of this instruction, please refer to Intel(R) 64 and
- * IA-32 Architectures Software Developer's Manual.
- */
-static inline void
-rte_power_pause(const uint64_t tsc_timestamp)
-{
-	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
-	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
-
-	/* execute TPAUSE */
-	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
-		: /* ignore rflags */
-		: "D"(0), /* enter C0.2 */
-		  "a"(tsc_l), "d"(tsc_h));
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/x86/meson.build b/lib/librte_eal/x86/meson.build
index e78f29002e..dfd42dee0c 100644
--- a/lib/librte_eal/x86/meson.build
+++ b/lib/librte_eal/x86/meson.build
@@ -8,4 +8,5 @@ sources += files(
 	'rte_cycles.c',
 	'rte_hypervisor.c',
 	'rte_spinlock.c',
+	'rte_power_intrinsics.c',
 )
diff --git a/lib/librte_eal/x86/rte_power_intrinsics.c b/lib/librte_eal/x86/rte_power_intrinsics.c
new file mode 100644
index 0000000000..34c5fd9c3e
--- /dev/null
+++ b/lib/librte_eal/x86/rte_power_intrinsics.c
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include "rte_power_intrinsics.h"
+
+static inline uint64_t
+__get_umwait_val(const volatile void *p, const uint8_t sz)
+{
+	switch (sz) {
+	case sizeof(uint8_t):
+		return *(const volatile uint8_t *)p;
+	case sizeof(uint16_t):
+		return *(const volatile uint16_t *)p;
+	case sizeof(uint32_t):
+		return *(const volatile uint32_t *)p;
+	case sizeof(uint64_t):
+		return *(const volatile uint64_t *)p;
+	default:
+		/* this is an intrinsic, so we can't have any error handling */
+		RTE_ASSERT(0);
+		return 0;
+	}
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+}
+
+/**
+ * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
+ * For more information about usage of these instructions, please refer to
+ * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_monitor_sync(const volatile void *p, const uint64_t expected_value,
+		const uint64_t value_mask, const uint64_t tsc_timestamp,
+		const uint8_t data_sz, rte_spinlock_t *lck)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+	/*
+	 * we're using raw byte codes for now as only the newest compiler
+	 * versions support this instruction natively.
+	 */
+
+	/* set address for UMONITOR */
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
+			:
+			: "D"(p));
+
+	if (value_mask) {
+		const uint64_t cur_value = __get_umwait_val(p, data_sz);
+		const uint64_t masked = cur_value & value_mask;
+
+		/* if the masked value is already matching, abort */
+		if (masked == expected_value)
+			return;
+	}
+	rte_spinlock_unlock(lck);
+
+	/* execute UMWAIT */
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			  "a"(tsc_l), "d"(tsc_h));
+
+	rte_spinlock_lock(lck);
+}
+
+/**
+ * This function uses TPAUSE instruction  and will enter C0.2 state. For more
+ * information about usage of this instruction, please refer to Intel(R) 64 and
+ * IA-32 Architectures Software Developer's Manual.
+ */
+void
+rte_power_pause(const uint64_t tsc_timestamp)
+{
+	const uint32_t tsc_l = (uint32_t)tsc_timestamp;
+	const uint32_t tsc_h = (uint32_t)(tsc_timestamp >> 32);
+
+	/* execute TPAUSE */
+	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf7;"
+			: /* ignore rflags */
+			: "D"(0), /* enter C0.2 */
+			"a"(tsc_l), "d"(tsc_h));
+}
-- 
2.17.1

^ permalink raw reply	[relevance 2%]
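As a small illustration of the intrinsic being moved out of line in the patch
above, the snippet below sleeps until a hypothetical flag variable is written
or a roughly 1 ms TSC timeout expires; since wake-ups can be spurious, the
caller should re-check the condition afterwards:

    #include <stdint.h>
    #include <rte_cycles.h>
    #include <rte_power_intrinsics.h>

    /* Illustrative helper: sleep until *flag is written (or ~1 ms elapses),
     * unless it already holds the awaited value 1. */
    static void
    wait_for_flag(volatile uint32_t *flag)
    {
        const uint64_t tmo = rte_get_tsc_cycles() + rte_get_tsc_hz() / 1000;

        rte_power_monitor(flag, 1 /* expected value */,
                UINT32_MAX /* value mask */, tmo, sizeof(*flag));
    }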

* [dpdk-dev] [PATCH v2 1/1] devtools: adjust verbosity of ABI check
  2020-12-07 17:32 36% [dpdk-dev] [PATCH 1/1] devtools: adjust verbosity of ABI check Thomas Monjalon
  2020-12-08 15:22  9% ` Kinsella, Ray
  2020-12-08 15:31  4% ` David Marchand
@ 2020-12-17  9:05 36% ` Thomas Monjalon
  2021-01-13  9:21  4%   ` Thomas Monjalon
  2 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-12-17  9:05 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, bruce.richardson, Ray Kinsella, Neil Horman

The scripts gen-abi.sh and check-abi.sh are updated
to print error messages to stderr so they are less likely to be ignored.

When called from test-meson-builds.sh, the standard messages on stdout
can be more quiet depending on the verbosity settings.
The beginning of the ABI check is announced in verbose mode.
The commands are printed in very verbose mode.
The check result details are available in verbose mode.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
v2: remove abidiff command from stdout (already printed on error)
---
 devtools/check-abi.sh         | 20 ++++++++++----------
 devtools/gen-abi.sh           |  4 ++--
 devtools/test-meson-builds.sh |  9 +++++++--
 3 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ab6748cfbc..9835e346da 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -3,7 +3,7 @@
 # Copyright (c) 2019 Red Hat, Inc.
 
 if [ $# != 2 ] && [ $# != 3 ]; then
-	echo "Usage: $0 refdir newdir [warnonly]"
+	echo "Usage: $0 refdir newdir [warnonly]" >&2
 	exit 1
 fi
 
@@ -13,23 +13,23 @@ warnonly=${3:-}
 ABIDIFF_OPTIONS="--suppr $(dirname $0)/libabigail.abignore --no-added-syms"
 
 if [ ! -d $refdir ]; then
-	echo "Error: reference directory '$refdir' does not exist."
+	echo "Error: reference directory '$refdir' does not exist." >&2
 	exit 1
 fi
 incdir=$(find $refdir -type d -a -name include)
 if [ -z "$incdir" ] || [ ! -e "$incdir" ]; then
-	echo "WARNING: could not identify a include directory for $refdir, expect false positives..."
+	echo "WARNING: could not identify an include directory for $refdir, expect false positives..." >&2
 else
 	ABIDIFF_OPTIONS="$ABIDIFF_OPTIONS --headers-dir1 $incdir"
 fi
 
 if [ ! -d $newdir ]; then
-	echo "Error: directory to check '$newdir' does not exist."
+	echo "Error: directory to check '$newdir' does not exist." >&2
 	exit 1
 fi
 incdir2=$(find $newdir -type d -a -name include)
 if [ -z "$incdir2" ] || [ ! -e "$incdir2" ]; then
-	echo "WARNING: could not identify a include directory for $newdir, expect false positives..."
+	echo "WARNING: could not identify an include directory for $newdir, expect false positives..." >&2
 else
 	ABIDIFF_OPTIONS="$ABIDIFF_OPTIONS --headers-dir2 $incdir2"
 fi
@@ -46,23 +46,23 @@ for dump in $(find $refdir -name "*.dump"); do
 	fi
 	dump2=$(find $newdir -name $name)
 	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
-		echo "Error: can't find $name in $newdir"
+		echo "Error: cannot find $name in $newdir" >&2
 		error=1
 		continue
 	fi
 	abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
 		abiret=$?
-		echo "Error: ABI issue reported for 'abidiff $ABIDIFF_OPTIONS $dump $dump2'"
+		echo "Error: ABI issue reported for 'abidiff $ABIDIFF_OPTIONS $dump $dump2'" >&2
 		error=1
 		echo
 		if [ $(($abiret & 3)) -ne 0 ]; then
-			echo "ABIDIFF_ERROR|ABIDIFF_USAGE_ERROR, this could be a script or environment issue."
+			echo "ABIDIFF_ERROR|ABIDIFF_USAGE_ERROR, this could be a script or environment issue." >&2
 		fi
 		if [ $(($abiret & 4)) -ne 0 ]; then
-			echo "ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue)."
+			echo "ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue)." >&2
 		fi
 		if [ $(($abiret & 8)) -ne 0 ]; then
-			echo "ABIDIFF_ABI_INCOMPATIBLE_CHANGE, this change breaks the ABI."
+			echo "ABIDIFF_ABI_INCOMPATIBLE_CHANGE, this change breaks the ABI." >&2
 		fi
 		echo
 	}
diff --git a/devtools/gen-abi.sh b/devtools/gen-abi.sh
index c44b0e228a..f15a3b9aaf 100755
--- a/devtools/gen-abi.sh
+++ b/devtools/gen-abi.sh
@@ -3,13 +3,13 @@
 # Copyright (c) 2019 Red Hat, Inc.
 
 if [ $# != 1 ]; then
-	echo "Usage: $0 installdir"
+	echo "Usage: $0 installdir" >&2
 	exit 1
 fi
 
 installdir=$1
 if [ ! -d $installdir ]; then
-	echo "Error: install directory '$installdir' does not exist."
+	echo "Error: install directory '$installdir' does not exist." >&2
 	exit 1
 fi
 
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index ed44d4ffb1..16a81b6241 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -194,10 +194,15 @@ build () # <directory> <target compiler | cross file> <meson options>
 
 		install_target $builds_dir/$targetdir \
 			$(readlink -f $builds_dir/$targetdir/install)
+		echo "Checking ABI compatibility of $targetdir" >&$verbose
+		echo $srcdir/devtools/gen-abi.sh \
+			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
 		$srcdir/devtools/gen-abi.sh \
-			$(readlink -f $builds_dir/$targetdir/install)
+			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
+		echo $srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
+			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
 		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
-			$(readlink -f $builds_dir/$targetdir/install)
+			$(readlink -f $builds_dir/$targetdir/install) >&$verbose
 	fi
 }
 
-- 
2.29.2


^ permalink raw reply	[relevance 36%]
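For context, the two scripts touched by the patch above are typically run back
to back, roughly as below (paths are illustrative): gen-abi.sh generates the
ABI dump files inside an install tree, and check-abi.sh compares two such
trees with abidiff:

    ./devtools/gen-abi.sh /path/to/reference/install
    ./devtools/gen-abi.sh /path/to/new/install
    ./devtools/check-abi.sh /path/to/reference/install /path/to/new/install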

* Re: [dpdk-dev] [PATCH v2 2/2] ci: enable v21 ABI checks
  2020-12-04 17:36 20%   ` [dpdk-dev] [PATCH v2 2/2] ci: enable v21 ABI checks David Marchand
@ 2020-12-14 14:13  4%     ` Aaron Conole
  0 siblings, 0 replies; 200+ results
From: Aaron Conole @ 2020-12-14 14:13 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Michael Santana

David Marchand <david.marchand@redhat.com> writes:

> v21 ABI will be maintained until v21.11.
>
> Let's use the latest released libabigail 1.8.
>
> In GitHub Actions, libabigail binaries and the ABI reference are stored
> in two shared caches as all branches can use the same.
>
> While at it, we can reproduce changes from the commit 0b8086ce3fe7
> ("devtools: remove useless files from ABI reference").
> This will save some space in the CI caches.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---

Acked-by: Aaron Conole <aconole@redhat.com>


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions
  2020-12-04 17:36  4% ` [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions David Marchand
  2020-12-04 17:36 20%   ` [dpdk-dev] [PATCH v2 2/2] ci: enable v21 ABI checks David Marchand
  2020-12-11 20:07  3%   ` [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions Ferruh Yigit
@ 2020-12-14 14:12  0%   ` Aaron Conole
  2 siblings, 0 replies; 200+ results
From: Aaron Conole @ 2020-12-14 14:12 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Michael Santana, Thomas Monjalon

David Marchand <david.marchand@redhat.com> writes:

> With the recent changes in terms of free access to the Travis CI, let's
> offer an alternative with GitHub Actions.
> Running jobs on ARM is not supported unless using external runners, so
> this commit only adds builds for x86_64 and cross compiling for i386 and
> aarch64.
>
> Differences with the Travis CI integration:
> - Error logs are not dumped to the console when something goes wrong.
>   Instead, they are gathered in a "catch-all" step and attached as
>   artifacts.
> - A cache entry is stored once and for all, but if no cache is found you
>   can inherit from the default branch cache. The cache is 5GB large, for
>   the whole git repository.
> - The maximum retention of logs and artifacts is 3 months.
> - /home/runner is world writable, so a workaround has been added for
>   starting dpdk processes.
> - Ilya, working on OVS GHA support, noticed that jobs can run with
>   processors that don't have the same capabilities. For DPDK, this
>   impacts the ccache content since everything was built with
>   -march=native so far, and we will end up with binaries that can't run
>   in a later build. The problem has not been seen in Travis CI (?) but
>   it is safer to use a fixed "-Dmachine=default" in any case.

That's because the build machine and test machine are the same, but I
think GHA uses a different model, and will spawn a new environment for
the steps.  I'm not 100% sure, because it's all supposed to be a
black-box.

> - Scheduling jobs is part of the configuration and takes the form of a
>   crontab. A build is scheduled every Monday at 0:00 (UTC) to provide a
>   default ccache for the week (useful for the ovsrobot).
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---

Acked-by: Aaron Conole <aconole@redhat.com>

> Changelog since v1:
> - changed shell variables value in CI scripts and Travis configuration
>   (s/=[^\$]*1/=\1true), this makes it easier for GHA,
> - forced compilation as 'default' to avoid random unit tests issues in
>   GHA,
> - scheduled a run per week on Monday at 0:00 UTC,
> - updated the ccache key:
>   - no need to depend on the default-library parameter since this
>     parameter only impacts the linking of dpdk binaries,
>   - the week when the cache is generated is added so that jobs in
>     other branches can benefit from a recent cache (mimicking what we had
>     for the robot in Travis),
> - realigned documentation generation with what is done in Travis:
>   generating the doc in all jobs was a waste of resources,
>
> ---
>  .ci/linux-build.sh          |  17 +++---
>  .github/workflows/build.yml | 100 ++++++++++++++++++++++++++++++++++++
>  .travis.yml                 |  24 ++++-----
>  MAINTAINERS                 |   1 +
>  4 files changed, 123 insertions(+), 19 deletions(-)
>  create mode 100644 .github/workflows/build.yml
>
> diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
> index d079801d78..ee8d07f865 100755
> --- a/.ci/linux-build.sh
> +++ b/.ci/linux-build.sh
> @@ -12,7 +12,9 @@ on_error() {
>          fi
>      done
>  }
> -trap on_error EXIT
> +# We capture the error logs as artifacts in Github Actions, no need to dump
> +# them via a EXIT handler.
> +[ -n "$GITHUB_WORKFLOW" ] || trap on_error EXIT
>  
>  install_libabigail() {
>      version=$1
> @@ -28,16 +30,16 @@ install_libabigail() {
>      rm ${version}.tar.gz
>  }
>  
> -if [ "$AARCH64" = "1" ]; then
> +if [ "$AARCH64" = "true" ]; then
>      # convert the arch specifier
>      OPTS="$OPTS --cross-file config/arm/arm64_armv8_linux_gcc"
>  fi
>  
> -if [ "$BUILD_DOCS" = "1" ]; then
> +if [ "$BUILD_DOCS" = "true" ]; then
>      OPTS="$OPTS -Denable_docs=true"
>  fi
>  
> -if [ "$BUILD_32BIT" = "1" ]; then
> +if [ "$BUILD_32BIT" = "true" ]; then
>      OPTS="$OPTS -Dc_args=-m32 -Dc_link_args=-m32"
>      export PKG_CONFIG_LIBDIR="/usr/lib32/pkgconfig"
>  fi
> @@ -48,16 +50,17 @@ else
>      OPTS="$OPTS -Dexamples=all"
>  fi
>  
> +OPTS="$OPTS -Dmachine=default"
>  OPTS="$OPTS --default-library=$DEF_LIB"
>  OPTS="$OPTS --buildtype=debugoptimized"
>  meson build --werror $OPTS
>  ninja -C build
>  
> -if [ "$AARCH64" != "1" ]; then
> +if [ "$AARCH64" != "true" ]; then
>      devtools/test-null.sh
>  fi
>  
> -if [ "$ABI_CHECKS" = "1" ]; then
> +if [ "$ABI_CHECKS" = "true" ]; then
>      LIBABIGAIL_VERSION=${LIBABIGAIL_VERSION:-libabigail-1.6}
>  
>      if [ "$(cat libabigail/VERSION 2>/dev/null)" != "$LIBABIGAIL_VERSION" ]; then
> @@ -95,6 +98,6 @@ if [ "$ABI_CHECKS" = "1" ]; then
>      devtools/check-abi.sh reference install ${ABI_CHECKS_WARN_ONLY:-}
>  fi
>  
> -if [ "$RUN_TESTS" = "1" ]; then
> +if [ "$RUN_TESTS" = "true" ]; then
>      sudo meson test -C build --suite fast-tests -t 3
>  fi
> diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
> new file mode 100644
> index 0000000000..bef6e52372
> --- /dev/null
> +++ b/.github/workflows/build.yml
> @@ -0,0 +1,100 @@
> +name: build
> +
> +on:
> +  push:
> +  schedule:
> +    - cron: '0 0 * * 1'
> +
> +defaults:
> +  run:
> +    shell: bash --noprofile --norc -exo pipefail {0}
> +
> +jobs:
> +  build:
> +    name: ${{ join(matrix.config.*, '-') }}
> +    runs-on: ${{ matrix.config.os }}
> +    env:
> +      AARCH64: ${{ matrix.config.cross == 'aarch64' }}
> +      BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
> +      BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
> +      CC: ccache ${{ matrix.config.compiler }}
> +      DEF_LIB: ${{ matrix.config.library }}
> +      RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
> +
> +    strategy:
> +      fail-fast: false
> +      matrix:
> +        config:
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: static
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: shared
> +            checks: doc+tests
> +          - os: ubuntu-18.04
> +            compiler: clang
> +            library: static
> +          - os: ubuntu-18.04
> +            compiler: clang
> +            library: shared
> +            checks: doc+tests
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: static
> +            cross: i386
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: static
> +            cross: aarch64
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: shared
> +            cross: aarch64
> +
> +    steps:
> +    - name: Checkout sources
> +      uses: actions/checkout@v2
> +    - name: Generate cache keys
> +      id: get_ref_keys
> +      run: |
> +        echo -n '::set-output name=ccache::'
> +        echo 'ccache-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-'$(date -u +%Y-w%W)
> +    - name: Retrieve ccache cache
> +      uses: actions/cache@v2
> +      with:
> +        path: ~/.ccache
> +        key: ${{ steps.get_ref_keys.outputs.ccache }}-${{ github.ref }}
> +        restore-keys: |
> +          ${{ steps.get_ref_keys.outputs.ccache }}-refs/heads/main
> +    - name: Install packages
> +      run: sudo apt install -y ccache libnuma-dev python3-setuptools
> +        python3-wheel python3-pip ninja-build libbsd-dev libpcap-dev
> +        libibverbs-dev libcrypto++-dev libfdt-dev libjansson-dev
> +    - name: Install i386 cross compiling packages
> +      if: env.BUILD_32BIT == 'true'
> +      run: sudo apt install -y gcc-multilib
> +    - name: Install aarch64 cross compiling packages
> +      if: env.AARCH64 == 'true'
> +      run: sudo apt install -y gcc-aarch64-linux-gnu libc6-dev-arm64-cross
> +        pkg-config-aarch64-linux-gnu
> +    - name: Install doc generation packages
> +      if: env.BUILD_DOCS == 'true'
> +      run: sudo apt install -y doxygen graphviz python3-sphinx
> +        python3-sphinx-rtd-theme
> +    - name: Run setup
> +      run: |
> +        .ci/linux-setup.sh
> +        # Workaround on $HOME permissions as EAL checks them for plugin loading
> +        chmod o-w $HOME
> +    - name: Build and test
> +      run: .ci/linux-build.sh
> +    - name: Upload logs on failure
> +      if: failure()
> +      uses: actions/upload-artifact@v2
> +      with:
> +        name: meson-logs-${{ join(matrix.config.*, '-') }}
> +        path: |
> +          build/meson-logs/testlog.txt
> +          build/.ninja_log
> +          build/meson-logs/meson-log.txt
> diff --git a/.travis.yml b/.travis.yml
> index 5e12db23b5..d655e286c3 100644
> --- a/.travis.yml
> +++ b/.travis.yml
> @@ -34,10 +34,10 @@ jobs:
>    - env: DEF_LIB="static"
>      arch: amd64
>      compiler: gcc
> -  - env: DEF_LIB="shared" RUN_TESTS=1
> +  - env: DEF_LIB="shared" RUN_TESTS=true
>      arch: amd64
>      compiler: gcc
> -  - env: DEF_LIB="shared" BUILD_DOCS=1
> +  - env: DEF_LIB="shared" BUILD_DOCS=true
>      arch: amd64
>      compiler: gcc
>      addons:
> @@ -49,10 +49,10 @@ jobs:
>    - env: DEF_LIB="static"
>      arch: amd64
>      compiler: clang
> -  - env: DEF_LIB="shared" RUN_TESTS=1
> +  - env: DEF_LIB="shared" RUN_TESTS=true
>      arch: amd64
>      compiler: clang
> -  - env: DEF_LIB="shared" BUILD_DOCS=1
> +  - env: DEF_LIB="shared" BUILD_DOCS=true
>      arch: amd64
>      compiler: clang
>      addons:
> @@ -61,7 +61,7 @@ jobs:
>            - *required_packages
>            - *doc_packages
>    # x86_64 cross-compiling 32-bits jobs
> -  - env: DEF_LIB="static" BUILD_32BIT=1
> +  - env: DEF_LIB="static" BUILD_32BIT=true
>      arch: amd64
>      compiler: gcc
>      addons:
> @@ -69,14 +69,14 @@ jobs:
>          packages:
>            - *build_32b_packages
>    # x86_64 cross-compiling aarch64 jobs
> -  - env: DEF_LIB="static" AARCH64=1
> +  - env: DEF_LIB="static" AARCH64=true
>      arch: amd64
>      compiler: gcc
>      addons:
>        apt:
>          packages:
>            - *aarch64_packages
> -  - env: DEF_LIB="shared" AARCH64=1
> +  - env: DEF_LIB="shared" AARCH64=true
>      arch: amd64
>      compiler: gcc
>      addons:
> @@ -87,16 +87,16 @@ jobs:
>    - env: DEF_LIB="static"
>      arch: arm64
>      compiler: gcc
> -  - env: DEF_LIB="shared" RUN_TESTS=1
> +  - env: DEF_LIB="shared" RUN_TESTS=true
>      arch: arm64
>      compiler: gcc
> -  - env: DEF_LIB="shared" RUN_TESTS=1
> +  - env: DEF_LIB="shared" RUN_TESTS=true
>      dist: focal
>      arch: arm64-graviton2
>      virt: vm
>      group: edge
>      compiler: gcc
> -  - env: DEF_LIB="shared" BUILD_DOCS=1
> +  - env: DEF_LIB="shared" BUILD_DOCS=true
>      arch: arm64
>      compiler: gcc
>      addons:
> @@ -108,10 +108,10 @@ jobs:
>    - env: DEF_LIB="static"
>      arch: arm64
>      compiler: clang
> -  - env: DEF_LIB="shared" RUN_TESTS=1
> +  - env: DEF_LIB="shared" RUN_TESTS=true
>      arch: arm64
>      compiler: clang
> -  - env: DEF_LIB="shared" RUN_TESTS=1
> +  - env: DEF_LIB="shared" RUN_TESTS=true
>      dist: focal
>      arch: arm64-graviton2
>      virt: vm
> diff --git a/MAINTAINERS b/MAINTAINERS
> index eafe9f8c46..f45c8c1b13 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -109,6 +109,7 @@ Public CI
>  M: Aaron Conole <aconole@redhat.com>
>  M: Michael Santana <maicolgabriel@hotmail.com>
>  F: .travis.yml
> +F: .github/workflows/build.yml
>  F: .ci/
>  
>  ABI Policy & Versioning


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions
  2020-12-11 20:07  3%   ` [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions Ferruh Yigit
@ 2020-12-14 10:44  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-12-14 10:44 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, aconole, Michael Santana, Ferruh Yigit

11/12/2020 21:07, Ferruh Yigit:
> On 12/4/2020 5:36 PM, David Marchand wrote:
> > With the recent changes in terms of free access to the Travis CI, let's
> > offer an alternative with GitHub Actions.
> > Running jobs on ARM is not supported unless using external runners, so
> > this commit only adds builds for x86_64 and cross compiling for i386 and
> > aarch64.
> > 
> > Differences with the Travis CI integration:
> > - Error logs are not dumped to the console when something goes wrong.
> >    Instead, they are gathered in a "catch-all" step and attached as
> >    artifacts.
> > - A cache entry is stored once and for all, but if no cache is found you
> >    can inherit from the default branch cache. The cache is 5GB large, for
> >    the whole git repository.
> > - The maximum retention of logs and artifacts is 3 months.
> > - /home/runner is world writable, so a workaround has been added for
> >    starting dpdk processes.
> > - Ilya, working on OVS GHA support, noticed that jobs can run with
> >    processors that don't have the same capabilities. For DPDK, this
> >    impacts the ccache content since everything was built with
> >    -march=native so far, and we will end up with binaries that can't run
> >    in a later build. The problem has not been seen in Travis CI (?) but
> >    it is safer to use a fixed "-Dmachine=default" in any case.
> > - Scheduling jobs is part of the configuration and takes the form of a
> >    crontab. A build is scheduled every Monday at 0:00 (UTC) to provide a
> >    default ccache for the week (useful for the ovsrobot).
> > 
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > ---
> > Changelog since v1:
> > - changed shell variables value in CI scripts and Travis configuration
> >    (s/=[^\$]*1/=\1true), this makes it easier for GHA,
> > - forced compilation as 'default' to avoid random unit tests issues in
> >    GHA,
> > - scheduled a run per week on Monday at 0:00 UTC,
> > - updated the ccache key:
> >    - no need to depend on the default-library parameter since this
> >      parameter only impacts the linking of dpdk binaries,
> >    - the week when the cache is generated is added so that jobs in
> >      other branches can benefit from a recent cache (mimicking what we had
> >      for the robot in Travis),
> > - realigned documentation generation with what is done in Travis:
> >    generating the doc in all jobs was a waste of resources,
> > 
> 
> For series,
> Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
> 
> Confirmed that ABI check script is detecting issues, in the absence of the 
> Travis checks I am for having this alternative.

Thanks for offering an interesting CI alternative.
For the series,
Acked-by: Thomas Monjalon <thomas@monjalon.net>




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions
  2020-12-04 17:36  4% ` [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions David Marchand
  2020-12-04 17:36 20%   ` [dpdk-dev] [PATCH v2 2/2] ci: enable v21 ABI checks David Marchand
@ 2020-12-11 20:07  3%   ` Ferruh Yigit
  2020-12-14 10:44  0%     ` Thomas Monjalon
  2020-12-14 14:12  0%   ` Aaron Conole
  2 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-12-11 20:07 UTC (permalink / raw)
  To: David Marchand, dev; +Cc: aconole, Michael Santana, Thomas Monjalon

On 12/4/2020 5:36 PM, David Marchand wrote:
> With the recent changes in terms of free access to the Travis CI, let's
> offer an alternative with GitHub Actions.
> Running jobs on ARM is not supported unless using external runners, so
> this commit only adds builds for x86_64 and cross compiling for i386 and
> aarch64.
> 
> Differences with the Travis CI integration:
> - Error logs are not dumped to the console when something goes wrong.
>    Instead, they are gathered in a "catch-all" step and attached as
>    artifacts.
> - A cache entry is stored once and for all, but if no cache is found you
>    can inherit from the default branch cache. The cache is 5GB large, for
>    the whole git repository.
> - The maximum retention of logs and artifacts is 3 months.
> - /home/runner is world writable, so a workaround has been added for
>    starting dpdk processes.
> - Ilya, working on OVS GHA support, noticed that jobs can run with
>    processors that don't have the same capabilities. For DPDK, this
>    impacts the ccache content since everything was built with
>    -march=native so far, and we will end up with binaries that can't run
>    in a later build. The problem has not been seen in Travis CI (?) but
>    it is safer to use a fixed "-Dmachine=default" in any case.
> - Scheduling jobs is part of the configuration and takes the form of a
>    crontab. A build is scheduled every Monday at 0:00 (UTC) to provide a
>    default ccache for the week (useful for the ovsrobot).
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> Changelog since v1:
> - changed shell variables value in CI scripts and Travis configuration
>    (s/=[^\$]*1/=\1true), this makes it easier for GHA,
> - forced compilation as 'default' to avoid random unit tests issues in
>    GHA,
> - scheduled a run per week on Monday at 0:00 UTC,
> - updated the ccache key:
>    - no need to depend on the default-library parameter since this
>      parameter only impacts the linking of dpdk binaries,
>    - the week when the cache is generated is added so that jobs in
>      other branches can benefit from a recent cache (mimicking what we had
>      for the robot in Travis),
> - realigned documentation generation with what is done in Travis:
>    generating the doc in all jobs was a waste of resources,
> 

For series,
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>

Confirmed that ABI check script is detecting issues, in the absence of the 
Travis checks I am for having this alternative.

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] DPDK Release Status Meeting 10/12/2020
@ 2020-12-10 10:54  3% Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-12-10 10:54 UTC (permalink / raw)
  To: dev; +Cc: Thomas Monjalon

Meeting minutes of 10 December 2020
-----------------------------------

Agenda:
* Release Dates
* Subtrees
* LTS
* OvS
* Opens

Participants:
* Arm
* Broadcom
* Debian/Microsoft
* Intel
* Nvidia
* Red Hat


Release Dates
-------------

* v21.02 dates
   * Proposal/V1:    Sunday, 20 December 2020
   * -rc1:           Friday, 15 January 2021
   * Release:        Friday, 5 February 2021

   * Please send roadmaps, preferably before beginning of the release
     * Thanks to NTT for sending roadmap


Subtrees
--------

* main
   * Tooling/testing patches in the backlog
   * There are patches deferred from previous release
   * It is preferred to get risky patches early, to give them enough time to be
     tested
   * Travis not working is a concern for ABI checks
     * David sent a patch to enable GitHub Actions for checks
       * https://patches.dpdk.org/project/dpdk/list/?series=14188
         * Please test
   * Many people will be taking holidays in the last two weeks of the year

* next-net
   * A few patches reviewed & merged
   * No big backlog as of now, nothing interesting

* next-crypto
   * No update

* next-eventdev
   * No update

* next-virtio
   * There are some patches deferred from previous release

* next-net-mlx, next-net-brcm
   * Some patches merged and can be pulled

* next-net-intel, next-net-mrvl
   * No update


LTS
---

* v19.11.6-rc1 is out, please test
   * http://inbox.dpdk.org/dev/20201203093856.299103-1-luca.boccassi@gmail.com/
   * Waiting for test reports
   * Target release date is 17 December

* v18.11.10 work is going on
   * waiting for backports, request emails sent
     * Some backports received, waiting for some others
   * target is to close the -rc1 before holidays


OvS
---

* OvS side reported a build error related to the pkgconfig flag
   * Bruce is investigating it


Opens
-----

* Asaf shared a draft release process documentation
   * After review there were no objections to the document; it will be shared
     publicly



DPDK Release Status Meetings
============================

The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
status of the master tree and sub-trees, and for project managers to track
progress or milestone dates.

The meeting occurs every Thursday at 8:30 UTC on https://meet.jit.si/DPDK

If you wish to attend, just send an email to
"John McNamara <john.mcnamara@intel.com>" for the invite.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH 1/1] devtools: avoid installing static binaries
  2020-12-08 15:37  4% ` David Marchand
@ 2020-12-08 15:52  5%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-12-08 15:52 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Bruce Richardson

08/12/2020 16:37, David Marchand:
> On Mon, Dec 7, 2020 at 6:33 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > When testing compilation and checking ABI compatibility,
> > there is no real need of static binaries eating disks.
> > The static linkage of applications is tested with GCC and Clang,
> > plus some examples are statically linked.
> > The after-installation build test is limited to "helloworld" example.
> > Note the meson static build test was already limited to "l3fwd" example.
> >
> > The ABI compatibility is checked on shared libraries, so no need
> > running this test a second time on builds intended for static linking.
> > However, limiting ABI check to "shared builds" means all test cases
> > must have a "shared build" occurrence.
> > As a consequence the 32-bit build test is switched to shared linking.
> 
> I see no reason to tie the ABI check to default-library.

The only reason is that ABI check triggers binary installation,
which is big when statically linked.

> What about the mingw target?

ABI check is not required for Windows.
BTW there are issues with DLL support.

> What you want is to avoid doing duplicate ABI checks.
> This happens for the gcc/clang x86 builds, so I'd rather control the
> ABI checks out of the build() function (passing a new parameter?).

Yes, it would be cleaner to separate ABI check requirement
and static linking.
In v2, ABI check will be enabled explicitly when calling "build" function
for shared builds.
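
As a rough sketch of that direction (function layout and argument order are
invented here, this is not the actual v2 patch):

    # build() no longer infers anything from --default-library: the caller
    # states whether this configuration is the one used for the ABI check.
    build () # <directory> <run ABI check: true|false> <meson options...>
    {
        builddir=$1; shift
        abicheck=$1; shift
        echo "configuring and compiling $builddir with: $*"  # stand-in for config/compile
        if [ -n "$DPDK_ABI_REF_VERSION" ] && [ "$abicheck" = "true" ]; then
            echo "generating and checking ABI dumps for $builddir"
        fi
    }

    build build-gcc-shared true --default-library=shared
    build build-gcc-static false --default-library=static  # same toolchain, no duplicate check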



^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH 1/1] devtools: avoid installing static binaries
  2020-12-07 17:33 10% [dpdk-dev] [PATCH 1/1] devtools: avoid installing static binaries Thomas Monjalon
  2020-12-07 17:47  3% ` Bruce Richardson
@ 2020-12-08 15:37  4% ` David Marchand
  2020-12-08 15:52  5%   ` Thomas Monjalon
  2021-01-13 19:05 13% ` [dpdk-dev] [PATCH v2 " Thomas Monjalon
  2 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-12-08 15:37 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Bruce Richardson

On Mon, Dec 7, 2020 at 6:33 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> When testing compilation and checking ABI compatibility,
> there is no real need of static binaries eating disks.
> The static linkage of applications is tested with GCC and Clang,
> plus some examples are statically linked.
> The after-installation build test is limited to "helloworld" example.
> Note the meson static build test was already limited to "l3fwd" example.
>
> The ABI compatibility is checked on shared libraries, so no need
> running this test a second time on builds intended for static linking.
> However, limiting ABI check to "shared builds" means all test cases
> must have a "shared build" occurrence.
> As a consequence the 32-bit build test is switched to shared linking.

I see no reason to tie the ABI check to default-library.

What about the mingw target?


What you want is to avoid doing duplicate ABI checks.
This happens for the gcc/clang x86 builds, so I'd rather control the
ABI checks out of the build() function (passing a new parameter?).

-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 1/1] devtools: adjust verbosity of ABI check
  2020-12-08 15:22  9% ` Kinsella, Ray
@ 2020-12-08 15:32  4%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-12-08 15:32 UTC (permalink / raw)
  To: dev, Kinsella, Ray; +Cc: david.marchand, bruce.richardson, Neil Horman

08/12/2020 16:22, Kinsella, Ray:
> 
> On 07/12/2020 17:32, Thomas Monjalon wrote:
> > The scripts gen-abi.sh and check-abi.sh are updated
> > to print error messages to stderr so they are likely never ignored.
> > 
> > When called from test-meson-builds.sh, the standard messages on stdout
> > can be more quiet depending on the verbosity settings.
> > The beginning of the ABI check is announced in verbose mode.
> > The commands are printed in very verbose mode.
> > The check result details are available in verbose mode.
> 
> So there is a bit of a disconnect here - you change gen-abi/check-abi to
> correctly direct errors to stderr.
> 
> You then however provide a method to ignore them in test-meson-builds.sh.
> I think giving people a way of ignoring the indicated lines below
> is a bad plan.
> 
> No problem with the changes to check-abi/gen-abi - but I think the changes
> to test-meson-builds.sh are a bad idea.

No the errors are not ignored.
Only stdout (report details) is redirected.

> >  		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
> > -			$(readlink -f $builds_dir/$targetdir/install)
> > +			$(readlink -f $builds_dir/$targetdir/install) >&$verbose
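
To illustrate the mechanism with a standalone sketch (this is not the actual
test-meson-builds.sh; the fd number and variable handling are simplified):

    # $verbose holds a file descriptor number, so ">&$verbose" sends stdout
    # either to the terminal or to a sink, while stderr (fd 2) is untouched.
    exec 3>/dev/null                  # fd 3 discards output in quiet mode
    if [ "${VERBOSE:-}" = "1" ]; then
        verbose=1                     # verbose: report details stay on stdout
    else
        verbose=3                     # quiet: report details are dropped
    fi

    check () {
        echo "abidiff report details"         # only visible in verbose mode
        echo "Error: ABI issue reported" >&2  # always printed, never redirected
    }

    check >&$verbose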




^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 1/1] devtools: adjust verbosity of ABI check
  2020-12-07 17:32 36% [dpdk-dev] [PATCH 1/1] devtools: adjust verbosity of ABI check Thomas Monjalon
  2020-12-08 15:22  9% ` Kinsella, Ray
@ 2020-12-08 15:31  4% ` David Marchand
  2020-12-17  9:05 36% ` [dpdk-dev] [PATCH v2 " Thomas Monjalon
  2 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-12-08 15:31 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Bruce Richardson, Ray Kinsella, Neil Horman

On Mon, Dec 7, 2020 at 6:33 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
> index ab6748cfbc..381db2cdd1 100755
> --- a/devtools/check-abi.sh
> +++ b/devtools/check-abi.sh

[snip]

> @@ -46,23 +46,24 @@ for dump in $(find $refdir -name "*.dump"); do
>         fi
>         dump2=$(find $newdir -name $name)
>         if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
> -               echo "Error: can't find $name in $newdir"
> +               echo "Error: cannot find $name in $newdir" >&2
>                 error=1
>                 continue
>         fi
> +       echo abidiff $ABIDIFF_OPTIONS $dump $dump2

On error, this same command is repeated below, so I don't see the need
for this new debug message.


>         abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
>                 abiret=$?
> -               echo "Error: ABI issue reported for 'abidiff $ABIDIFF_OPTIONS $dump $dump2'"
> +               echo "Error: ABI issue reported for 'abidiff $ABIDIFF_OPTIONS $dump $dump2'" >&2
>                 error=1
>                 echo


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 1/1] devtools: adjust verbosity of ABI check
  2020-12-07 17:32 36% [dpdk-dev] [PATCH 1/1] devtools: adjust verbosity of ABI check Thomas Monjalon
@ 2020-12-08 15:22  9% ` Kinsella, Ray
  2020-12-08 15:32  4%   ` Thomas Monjalon
  2020-12-08 15:31  4% ` David Marchand
  2020-12-17  9:05 36% ` [dpdk-dev] [PATCH v2 " Thomas Monjalon
  2 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-12-08 15:22 UTC (permalink / raw)
  To: Thomas Monjalon, dev; +Cc: david.marchand, bruce.richardson, Neil Horman



On 07/12/2020 17:32, Thomas Monjalon wrote:
> The scripts gen-abi.sh and check-abi.sh are updated
> to print error messages to stderr so they are likely never ignored.
> 
> When called from test-meson-builds.sh, the standard messages on stdout
> can be more quiet depending on the verbosity settings.
> The beginning of the ABI check is announced in verbose mode.
> The commands are printed in very verbose mode.
> The check result details are available in verbose mode.

So there is a bit of a disconnect here - you change gen-abi/check-abi to
correctly direct errors to stderr.

You then however provide a method to ignore them in test-meson-builds.sh.
I think giving people a way of ignoring the indicated lines below
is a bad plan.

No problem with the changes to check-abi/gen-abi - but I think the changes
to test-meson-builds.sh are a bad idea.

> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>  devtools/check-abi.sh         | 21 +++++++++++----------
>  devtools/gen-abi.sh           |  4 ++--
>  devtools/test-meson-builds.sh |  9 +++++++--
>  3 files changed, 20 insertions(+), 14 deletions(-)
> 
> diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
> index ab6748cfbc..381db2cdd1 100755
> --- a/devtools/check-abi.sh
> +++ b/devtools/check-abi.sh
> @@ -3,7 +3,7 @@
>  # Copyright (c) 2019 Red Hat, Inc.
>  
>  if [ $# != 2 ] && [ $# != 3 ]; then
> -	echo "Usage: $0 refdir newdir [warnonly]"
> +	echo "Usage: $0 refdir newdir [warnonly]" >&2
>  	exit 1
>  fi
>  
> @@ -13,23 +13,23 @@ warnonly=${3:-}
>  ABIDIFF_OPTIONS="--suppr $(dirname $0)/libabigail.abignore --no-added-syms"
>  
>  if [ ! -d $refdir ]; then
> -	echo "Error: reference directory '$refdir' does not exist."
> +	echo "Error: reference directory '$refdir' does not exist." >&2
>  	exit 1
>  fi
>  incdir=$(find $refdir -type d -a -name include)
>  if [ -z "$incdir" ] || [ ! -e "$incdir" ]; then
> -	echo "WARNING: could not identify a include directory for $refdir, expect false positives..."
> +	echo "WARNING: could not identify an include directory for $refdir, expect false positives..." >&2
>  else
>  	ABIDIFF_OPTIONS="$ABIDIFF_OPTIONS --headers-dir1 $incdir"
>  fi
>  
>  if [ ! -d $newdir ]; then
> -	echo "Error: directory to check '$newdir' does not exist."
> +	echo "Error: directory to check '$newdir' does not exist." >&2
>  	exit 1
>  fi
>  incdir2=$(find $newdir -type d -a -name include)
>  if [ -z "$incdir2" ] || [ ! -e "$incdir2" ]; then
> -	echo "WARNING: could not identify a include directory for $newdir, expect false positives..."
> +	echo "WARNING: could not identify an include directory for $newdir, expect false positives..." >&2
>  else
>  	ABIDIFF_OPTIONS="$ABIDIFF_OPTIONS --headers-dir2 $incdir2"
>  fi
> @@ -46,23 +46,24 @@ for dump in $(find $refdir -name "*.dump"); do
>  	fi
>  	dump2=$(find $newdir -name $name)
>  	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
> -		echo "Error: can't find $name in $newdir"
> +		echo "Error: cannot find $name in $newdir" >&2
>  		error=1
>  		continue
>  	fi
> +	echo abidiff $ABIDIFF_OPTIONS $dump $dump2
>  	abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
>  		abiret=$?
> -		echo "Error: ABI issue reported for 'abidiff $ABIDIFF_OPTIONS $dump $dump2'"
> +		echo "Error: ABI issue reported for 'abidiff $ABIDIFF_OPTIONS $dump $dump2'" >&2
>  		error=1
>  		echo
>  		if [ $(($abiret & 3)) -ne 0 ]; then
> -			echo "ABIDIFF_ERROR|ABIDIFF_USAGE_ERROR, this could be a script or environment issue."
> +			echo "ABIDIFF_ERROR|ABIDIFF_USAGE_ERROR, this could be a script or environment issue." >&2
>  		fi
>  		if [ $(($abiret & 4)) -ne 0 ]; then
> -			echo "ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue)."
> +			echo "ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue)." >&2
>  		fi
>  		if [ $(($abiret & 8)) -ne 0 ]; then
> -			echo "ABIDIFF_ABI_INCOMPATIBLE_CHANGE, this change breaks the ABI."
> +			echo "ABIDIFF_ABI_INCOMPATIBLE_CHANGE, this change breaks the ABI." >&2
>  		fi
>  		echo
>  	}
> diff --git a/devtools/gen-abi.sh b/devtools/gen-abi.sh
> index c44b0e228a..f15a3b9aaf 100755
> --- a/devtools/gen-abi.sh
> +++ b/devtools/gen-abi.sh
> @@ -3,13 +3,13 @@
>  # Copyright (c) 2019 Red Hat, Inc.
>  
>  if [ $# != 1 ]; then
> -	echo "Usage: $0 installdir"
> +	echo "Usage: $0 installdir" >&2
>  	exit 1
>  fi
>  
>  installdir=$1
>  if [ ! -d $installdir ]; then
> -	echo "Error: install directory '$installdir' does not exist."
> +	echo "Error: install directory '$installdir' does not exist." >&2
>  	exit 1
>  fi
>  
> diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
> index ed44d4ffb1..16a81b6241 100755
> --- a/devtools/test-meson-builds.sh
> +++ b/devtools/test-meson-builds.sh
> @@ -194,10 +194,15 @@ build () # <directory> <target compiler | cross file> <meson options>
>  
>  		install_target $builds_dir/$targetdir \
>  			$(readlink -f $builds_dir/$targetdir/install)
> +		echo "Checking ABI compatibility of $targetdir" >&$verbose
> +		echo $srcdir/devtools/gen-abi.sh \
> +			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
>  		$srcdir/devtools/gen-abi.sh \
> -			$(readlink -f $builds_dir/$targetdir/install)
> +			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
> +		echo $srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
> +			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
>  		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
> -			$(readlink -f $builds_dir/$targetdir/install)
> +			$(readlink -f $builds_dir/$targetdir/install) >&$verbose
>  	fi
>  }
>  
> 

^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH] ci: hook to Github Actions
  @ 2020-12-08 14:08  4%           ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-12-08 14:08 UTC (permalink / raw)
  To: Honnappa Nagarahalli, Aaron Conole
  Cc: dev, Michael Santana, thomas, nd, Ruifeng Wang, Juraj Linkeš

On Thu, Nov 26, 2020 at 6:01 PM Honnappa Nagarahalli
<Honnappa.Nagarahalli@arm.com> wrote:
> > > Is there any guarantee that GitHub actions will be free forever?
> >
> > There is no "forever".
> I think we are spending our efforts on things that will not work for the community in the long run (unless the project spends money to buy credits)

That was not the initial goal of the patch, but GHA can be used by
developers who work on their github forks too, like for testing before
submitting to the public ml.
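
For instance, a run on a fork only needs a push (remote name and branch are
examples), since the workflow triggers on every push:

    git remote add myfork https://github.com/$USER/dpdk.git
    git push myfork my-feature-branch
    # results then appear under the fork's "Actions" tab on GitHub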

On spending efforts, the lab should be the priority.
My main concern was to get ABI checks back quickly since 21.02
proposals started to hit the list.
A ticket has been opened for the lab to handle this, but this can take
some time so in the interim we have GHA support.

I sent the v2 with ABI checks.


--
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 1/1] devtools: avoid installing static binaries
  2020-12-07 18:12  0%   ` Thomas Monjalon
@ 2020-12-08  9:33  0%     ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-12-08  9:33 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, david.marchand

On Mon, Dec 07, 2020 at 07:12:32PM +0100, Thomas Monjalon wrote:
> 07/12/2020 18:47, Bruce Richardson:
> > On Mon, Dec 07, 2020 at 06:33:19PM +0100, Thomas Monjalon wrote:
> > > When testing compilation and checking ABI compatibility,
> > > there is no real need of static binaries eating disks.
> > > The static linkage of applications is tested with GCC and Clang,
> > > plus some examples are statically linked.
> > > The after-installation build test is limited to "helloworld" example.
> > > Note the meson static build test was already limited to "l3fwd" example.
> > > 
> > > The ABI compatibility is checked on shared libraries, so no need
> > > running this test a second time on builds intended for static linking.
> > > However, limiting ABI check to "shared builds" means all test cases
> > > must have a "shared build" occurrence.
> > > As a consequence the 32-bit build test is switched to shared linking.
> > > 
> > > Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> > > ---
> > >  devtools/test-meson-builds.sh | 8 ++++++--
> > >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > I think this might be better as two patches - one for the ABI check changes
> > and a second for the example build changes with installed DPDK.
> 
> Yes could be 2 patches.
> 
> 
> > >  	for example in $examples; do
> > >  		echo "## Building $example"
> > > +		[ $example = helloworld ] && static=static || static= # save disk space
> > >  		$MAKE -C $DESTDIR/usr/local/share/dpdk/examples/$example \
> > > -			clean shared static >&$veryverbose
> > > +			clean shared $static >&$veryverbose
> > >  	done
> > >  fi
> > 
> > Just wonder are we likely to miss things with this change? Would changing
> > the order to do a clean at the end to free back up the disk space not
> > achieve much the same result while still saving disk space?
> 
> Not building the static flavour of most examples is also faster.
> Ideally we should not rebuild an example if the libs did not change.
> 
> To the question "will we miss something", the difference between static
> and shared examples is just the pkg-config call in the Makefile.
> I think the risk is small.
> 
Yes, for the majority of the apps that is the case. However, the only
concern I have is for a number of the apps which link directly against a
driver or two. Looking at vm_power_manager example, which links against 3
drivers, I see that the extra flags are only added for shared builds so we
should be ok for that one anyway.

Therefore ok with this change exactly as you suggest.

/Bruce

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/1] devtools: avoid installing static binaries
  2020-12-07 17:47  3% ` Bruce Richardson
@ 2020-12-07 18:12  0%   ` Thomas Monjalon
  2020-12-08  9:33  0%     ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-12-07 18:12 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, david.marchand

07/12/2020 18:47, Bruce Richardson:
> On Mon, Dec 07, 2020 at 06:33:19PM +0100, Thomas Monjalon wrote:
> > When testing compilation and checking ABI compatibility,
> > there is no real need of static binaries eating disks.
> > The static linkage of applications is tested with GCC and Clang,
> > plus some examples are statically linked.
> > The after-installation build test is limited to "helloworld" example.
> > Note the meson static build test was already limited to "l3fwd" example.
> > 
> > The ABI compatibility is checked on shared libraries, so no need
> > running this test a second time on builds intended for static linking.
> > However, limiting ABI check to "shared builds" means all test cases
> > must have a "shared build" occurrence.
> > As a consequence the 32-bit build test is switched to shared linking.
> > 
> > Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> > ---
> >  devtools/test-meson-builds.sh | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> I think this might be better as two patches - one for the ABI check changes
> and a second for the example build changes with installed DPDK.

Yes could be 2 patches.


> >  	for example in $examples; do
> >  		echo "## Building $example"
> > +		[ $example = helloworld ] && static=static || static= # save disk space
> >  		$MAKE -C $DESTDIR/usr/local/share/dpdk/examples/$example \
> > -			clean shared static >&$veryverbose
> > +			clean shared $static >&$veryverbose
> >  	done
> >  fi
> 
> Just wonder are we likely to miss things with this change? Would changing
> the order to do a clean at the end to free back up the disk space not
> achieve much the same result while still saving disk space?

Not building the static flavour of most examples is also faster.
Ideally we should not rebuild an example if the libs did not change.

To the question "will we miss something", the difference between static
and shared examples is just the pkg-config call in the Makefile.
I think the risk is small.
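
For the record, that difference boils down to something like this (a
simplified sketch expressed as shell commands rather than the Makefile, with
flags coming from the installed libdpdk.pc; assumes a main.c and an installed
DPDK):

    PKGCONF="pkg-config --define-prefix"
    CFLAGS=$($PKGCONF --cflags libdpdk)
    # shared link: plain --libs
    cc $CFLAGS -o helloworld-shared main.c $($PKGCONF --libs libdpdk)
    # static link: --static --libs pulls in the .a libraries and their
    # private dependencies
    cc $CFLAGS -o helloworld-static main.c $($PKGCONF --static --libs libdpdk)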



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/1] devtools: avoid installing static binaries
  2020-12-07 17:33 10% [dpdk-dev] [PATCH 1/1] devtools: avoid installing static binaries Thomas Monjalon
@ 2020-12-07 17:47  3% ` Bruce Richardson
  2020-12-07 18:12  0%   ` Thomas Monjalon
  2020-12-08 15:37  4% ` David Marchand
  2021-01-13 19:05 13% ` [dpdk-dev] [PATCH v2 " Thomas Monjalon
  2 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2020-12-07 17:47 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, david.marchand

On Mon, Dec 07, 2020 at 06:33:19PM +0100, Thomas Monjalon wrote:
> When testing compilation and checking ABI compatibility,
> there is no real need of static binaries eating disks.
> The static linkage of applications is tested with GCC and Clang,
> plus some examples are statically linked.
> The after-installation build test is limited to "helloworld" example.
> Note the meson static build test was already limited to "l3fwd" example.
> 
> The ABI compatibility is checked on shared libraries, so no need
> running this test a second time on builds intended for static linking.
> However, limiting ABI check to "shared builds" means all test cases
> must have a "shared build" occurrence.
> As a consequence the 32-bit build test is switched to shared linking.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>  devtools/test-meson-builds.sh | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)

I think this might be better as two patches - one for the ABI check changes
and a second for the example build changes with installed DPDK.

> 
> diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
> index 7280b7a93d..ed44d4ffb1 100755
> --- a/devtools/test-meson-builds.sh
> +++ b/devtools/test-meson-builds.sh
> @@ -166,6 +166,9 @@ build () # <directory> <target compiler | cross file> <meson options>
>  	config $srcdir $builds_dir/$targetdir $cross --werror $*
>  	compile $builds_dir/$targetdir
>  	if [ -n "$DPDK_ABI_REF_VERSION" ]; then
> +		if echo $* | grep -qw -- '--default-library=static' ; then
> +			return # skip ABI check for static build
> +		fi
>  		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
>  		if [ ! -d $abirefdir/$targetdir ]; then
>  			# clone current sources
> @@ -230,7 +233,7 @@ if check_cc_flags '-m32' ; then
>  		export PKG_CONFIG_LIBDIR='/usr/lib/pkgconfig'
>  	fi
>  	target_override='i386-pc-linux-gnu'
> -	build build-32b cc -Dc_args='-m32' -Dc_link_args='-m32'
> +	build build-32b cc -Dc_args='-m32' -Dc_link_args='-m32' $use_shared
>  	target_override=
>  	unset PKG_CONFIG_LIBDIR
>  fi
> @@ -274,7 +277,8 @@ if pkg-config --define-prefix libdpdk >/dev/null 2>&1; then
>  	export PKGCONF="pkg-config --define-prefix"
>  	for example in $examples; do
>  		echo "## Building $example"
> +		[ $example = helloworld ] && static=static || static= # save disk space
>  		$MAKE -C $DESTDIR/usr/local/share/dpdk/examples/$example \
> -			clean shared static >&$veryverbose
> +			clean shared $static >&$veryverbose
>  	done
>  fi

Just wonder are we likely to miss things with this change? Would changing
the order to do a clean at the end to free back up the disk space not
achieve much the same result while still saving disk space?

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH 1/1] devtools: avoid installing static binaries
@ 2020-12-07 17:33 10% Thomas Monjalon
  2020-12-07 17:47  3% ` Bruce Richardson
                   ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: Thomas Monjalon @ 2020-12-07 17:33 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, bruce.richardson

When testing compilation and checking ABI compatibility,
there is no real need of static binaries eating disks.
The static linkage of applications is tested with GCC and Clang,
plus some examples are statically linked.
The after-installation build test is limited to "helloworld" example.
Note the meson static build test was already limited to "l3fwd" example.

The ABI compatibility is checked on shared libraries, so no need
running this test a second time on builds intended for static linking.
However, limiting ABI check to "shared builds" means all test cases
must have a "shared build" occurrence.
As a consequence the 32-bit build test is switched to shared linking.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 devtools/test-meson-builds.sh | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 7280b7a93d..ed44d4ffb1 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -166,6 +166,9 @@ build () # <directory> <target compiler | cross file> <meson options>
 	config $srcdir $builds_dir/$targetdir $cross --werror $*
 	compile $builds_dir/$targetdir
 	if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+		if echo $* | grep -qw -- '--default-library=static' ; then
+			return # skip ABI check for static build
+		fi
 		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
 		if [ ! -d $abirefdir/$targetdir ]; then
 			# clone current sources
@@ -230,7 +233,7 @@ if check_cc_flags '-m32' ; then
 		export PKG_CONFIG_LIBDIR='/usr/lib/pkgconfig'
 	fi
 	target_override='i386-pc-linux-gnu'
-	build build-32b cc -Dc_args='-m32' -Dc_link_args='-m32'
+	build build-32b cc -Dc_args='-m32' -Dc_link_args='-m32' $use_shared
 	target_override=
 	unset PKG_CONFIG_LIBDIR
 fi
@@ -274,7 +277,8 @@ if pkg-config --define-prefix libdpdk >/dev/null 2>&1; then
 	export PKGCONF="pkg-config --define-prefix"
 	for example in $examples; do
 		echo "## Building $example"
+		[ $example = helloworld ] && static=static || static= # save disk space
 		$MAKE -C $DESTDIR/usr/local/share/dpdk/examples/$example \
-			clean shared static >&$veryverbose
+			clean shared $static >&$veryverbose
 	done
 fi
-- 
2.29.2


^ permalink raw reply	[relevance 10%]

* [dpdk-dev] [PATCH 1/1] devtools: adjust verbosity of ABI check
@ 2020-12-07 17:32 36% Thomas Monjalon
  2020-12-08 15:22  9% ` Kinsella, Ray
                   ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: Thomas Monjalon @ 2020-12-07 17:32 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, bruce.richardson, Ray Kinsella, Neil Horman

The scripts gen-abi.sh and check-abi.sh are updated
to print error messages to stderr so they are likely never ignored.

When called from test-meson-builds.sh, the standard messages on stdout
can be more quiet depending on the verbosity settings.
The beginning of the ABI check is announced in verbose mode.
The commands are printed in very verbose mode.
The check result details are available in verbose mode.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 devtools/check-abi.sh         | 21 +++++++++++----------
 devtools/gen-abi.sh           |  4 ++--
 devtools/test-meson-builds.sh |  9 +++++++--
 3 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ab6748cfbc..381db2cdd1 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -3,7 +3,7 @@
 # Copyright (c) 2019 Red Hat, Inc.
 
 if [ $# != 2 ] && [ $# != 3 ]; then
-	echo "Usage: $0 refdir newdir [warnonly]"
+	echo "Usage: $0 refdir newdir [warnonly]" >&2
 	exit 1
 fi
 
@@ -13,23 +13,23 @@ warnonly=${3:-}
 ABIDIFF_OPTIONS="--suppr $(dirname $0)/libabigail.abignore --no-added-syms"
 
 if [ ! -d $refdir ]; then
-	echo "Error: reference directory '$refdir' does not exist."
+	echo "Error: reference directory '$refdir' does not exist." >&2
 	exit 1
 fi
 incdir=$(find $refdir -type d -a -name include)
 if [ -z "$incdir" ] || [ ! -e "$incdir" ]; then
-	echo "WARNING: could not identify a include directory for $refdir, expect false positives..."
+	echo "WARNING: could not identify an include directory for $refdir, expect false positives..." >&2
 else
 	ABIDIFF_OPTIONS="$ABIDIFF_OPTIONS --headers-dir1 $incdir"
 fi
 
 if [ ! -d $newdir ]; then
-	echo "Error: directory to check '$newdir' does not exist."
+	echo "Error: directory to check '$newdir' does not exist." >&2
 	exit 1
 fi
 incdir2=$(find $newdir -type d -a -name include)
 if [ -z "$incdir2" ] || [ ! -e "$incdir2" ]; then
-	echo "WARNING: could not identify a include directory for $newdir, expect false positives..."
+	echo "WARNING: could not identify an include directory for $newdir, expect false positives..." >&2
 else
 	ABIDIFF_OPTIONS="$ABIDIFF_OPTIONS --headers-dir2 $incdir2"
 fi
@@ -46,23 +46,24 @@ for dump in $(find $refdir -name "*.dump"); do
 	fi
 	dump2=$(find $newdir -name $name)
 	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
-		echo "Error: can't find $name in $newdir"
+		echo "Error: cannot find $name in $newdir" >&2
 		error=1
 		continue
 	fi
+	echo abidiff $ABIDIFF_OPTIONS $dump $dump2
 	abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
 		abiret=$?
-		echo "Error: ABI issue reported for 'abidiff $ABIDIFF_OPTIONS $dump $dump2'"
+		echo "Error: ABI issue reported for 'abidiff $ABIDIFF_OPTIONS $dump $dump2'" >&2
 		error=1
 		echo
 		if [ $(($abiret & 3)) -ne 0 ]; then
-			echo "ABIDIFF_ERROR|ABIDIFF_USAGE_ERROR, this could be a script or environment issue."
+			echo "ABIDIFF_ERROR|ABIDIFF_USAGE_ERROR, this could be a script or environment issue." >&2
 		fi
 		if [ $(($abiret & 4)) -ne 0 ]; then
-			echo "ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue)."
+			echo "ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue)." >&2
 		fi
 		if [ $(($abiret & 8)) -ne 0 ]; then
-			echo "ABIDIFF_ABI_INCOMPATIBLE_CHANGE, this change breaks the ABI."
+			echo "ABIDIFF_ABI_INCOMPATIBLE_CHANGE, this change breaks the ABI." >&2
 		fi
 		echo
 	}
diff --git a/devtools/gen-abi.sh b/devtools/gen-abi.sh
index c44b0e228a..f15a3b9aaf 100755
--- a/devtools/gen-abi.sh
+++ b/devtools/gen-abi.sh
@@ -3,13 +3,13 @@
 # Copyright (c) 2019 Red Hat, Inc.
 
 if [ $# != 1 ]; then
-	echo "Usage: $0 installdir"
+	echo "Usage: $0 installdir" >&2
 	exit 1
 fi
 
 installdir=$1
 if [ ! -d $installdir ]; then
-	echo "Error: install directory '$installdir' does not exist."
+	echo "Error: install directory '$installdir' does not exist." >&2
 	exit 1
 fi
 
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index ed44d4ffb1..16a81b6241 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -194,10 +194,15 @@ build () # <directory> <target compiler | cross file> <meson options>
 
 		install_target $builds_dir/$targetdir \
 			$(readlink -f $builds_dir/$targetdir/install)
+		echo "Checking ABI compatibility of $targetdir" >&$verbose
+		echo $srcdir/devtools/gen-abi.sh \
+			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
 		$srcdir/devtools/gen-abi.sh \
-			$(readlink -f $builds_dir/$targetdir/install)
+			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
+		echo $srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
+			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
 		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
-			$(readlink -f $builds_dir/$targetdir/install)
+			$(readlink -f $builds_dir/$targetdir/install) >&$verbose
 	fi
 }
 
-- 
2.29.2


^ permalink raw reply	[relevance 36%]

* [dpdk-dev] [PATCH v2 2/2] ci: enable v21 ABI checks
  2020-12-04 17:36  4% ` [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions David Marchand
@ 2020-12-04 17:36 20%   ` David Marchand
  2020-12-14 14:13  4%     ` Aaron Conole
  2020-12-11 20:07  3%   ` [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions Ferruh Yigit
  2020-12-14 14:12  0%   ` Aaron Conole
  2 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-12-04 17:36 UTC (permalink / raw)
  To: dev; +Cc: aconole, Michael Santana

v21 ABI will be maintained until v21.11.

Let's use the latest released libabigail 1.8.

In GitHub Actions, libabigail binaries and the ABI reference are stored
in two shared caches as all branches can use the same.

While at it, we can reproduce changes from the commit 0b8086ce3fe7
("devtools: remove useless files from ABI reference").
This will save some space in the CI caches.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 .ci/linux-build.sh          |  5 ++++-
 .github/workflows/build.yml | 26 +++++++++++++++++++++++++-
 .travis.yml                 | 27 +++++++++++++++++++++++++++
 3 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index ee8d07f865..d2c821adf3 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -86,10 +86,13 @@ if [ "$ABI_CHECKS" = "true" ]; then
     if [ ! -d reference ]; then
         refsrcdir=$(readlink -f $(pwd)/../dpdk-$REF_GIT_TAG)
         git clone --single-branch -b $REF_GIT_TAG $REF_GIT_REPO $refsrcdir
-        meson --werror $OPTS $refsrcdir $refsrcdir/build
+        meson $OPTS -Dexamples= $refsrcdir $refsrcdir/build
         ninja -C $refsrcdir/build
         DESTDIR=$(pwd)/reference ninja -C $refsrcdir/build install
         devtools/gen-abi.sh reference
+        find reference/usr/local -name '*.a' -delete
+        rm -rf reference/usr/local/bin
+        rm -rf reference/usr/local/share
         echo $REF_GIT_TAG > reference/VERSION
     fi
 
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index bef6e52372..05eb59527f 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -15,10 +15,13 @@ jobs:
     runs-on: ${{ matrix.config.os }}
     env:
       AARCH64: ${{ matrix.config.cross == 'aarch64' }}
+      ABI_CHECKS: ${{ contains(matrix.config.checks, 'abi') }}
       BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
       BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
       CC: ccache ${{ matrix.config.compiler }}
       DEF_LIB: ${{ matrix.config.library }}
+      LIBABIGAIL_VERSION: libabigail-1.8
+      REF_GIT_TAG: v20.11
       RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
 
     strategy:
@@ -31,7 +34,7 @@ jobs:
           - os: ubuntu-18.04
             compiler: gcc
             library: shared
-            checks: doc+tests
+            checks: abi+doc+tests
           - os: ubuntu-18.04
             compiler: clang
             library: static
@@ -60,6 +63,10 @@ jobs:
       run: |
         echo -n '::set-output name=ccache::'
         echo 'ccache-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-'$(date -u +%Y-w%W)
+        echo -n '::set-output name=libabigail::'
+        echo 'libabigail-${{ matrix.config.os }}'
+        echo -n '::set-output name=abi::'
+        echo 'abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.LIBABIGAIL_VERSION }}-${{ env.REF_GIT_TAG }}'
     - name: Retrieve ccache cache
       uses: actions/cache@v2
       with:
@@ -67,10 +74,27 @@ jobs:
         key: ${{ steps.get_ref_keys.outputs.ccache }}-${{ github.ref }}
         restore-keys: |
           ${{ steps.get_ref_keys.outputs.ccache }}-refs/heads/main
+    - name: Retrieve libabigail cache
+      id: libabigail-cache
+      uses: actions/cache@v2
+      if: env.ABI_CHECKS == 'true'
+      with:
+        path: libabigail
+        key: ${{ steps.get_ref_keys.outputs.libabigail }}
+    - name: Retrieve ABI reference cache
+      uses: actions/cache@v2
+      if: env.ABI_CHECKS == 'true'
+      with:
+        path: reference
+        key: ${{ steps.get_ref_keys.outputs.abi }}
     - name: Install packages
       run: sudo apt install -y ccache libnuma-dev python3-setuptools
         python3-wheel python3-pip ninja-build libbsd-dev libpcap-dev
         libibverbs-dev libcrypto++-dev libfdt-dev libjansson-dev
+    - name: Install libabigail build dependencies if no cache is available
+      if: env.ABI_CHECKS == 'true' && steps.libabigail-cache.outputs.cache-hit != 'true'
+      run: sudo apt install -y autoconf automake libtool pkg-config libxml2-dev
+          libdw-dev
     - name: Install i386 cross compiling packages
       if: env.BUILD_32BIT == 'true'
       run: sudo apt install -y gcc-multilib
diff --git a/.travis.yml b/.travis.yml
index d655e286c3..5aa7ad49f1 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -2,6 +2,9 @@
 language: c
 cache:
   ccache: true
+  directories:
+    - libabigail
+    - reference
 
 dist: bionic
 
@@ -18,6 +21,9 @@ _aarch64_packages: &aarch64_packages
   - *required_packages
   - [gcc-aarch64-linux-gnu, libc6-dev-arm64-cross, pkg-config-aarch64-linux-gnu]
 
+_libabigail_build_packages: &libabigail_build_packages
+  - [autoconf, automake, libtool, pkg-config, libxml2-dev, libdw-dev]
+
 _build_32b_packages: &build_32b_packages
   - *required_packages
   - [gcc-multilib]
@@ -28,6 +34,11 @@ _doc_packages: &doc_packages
 before_install: ./.ci/${TRAVIS_OS_NAME}-setup.sh
 script: ./.ci/${TRAVIS_OS_NAME}-build.sh
 
+env:
+  global:
+    - LIBABIGAIL_VERSION=libabigail-1.8
+    - REF_GIT_TAG=v20.11
+
 jobs:
   include:
   # x86_64 gcc jobs
@@ -45,6 +56,14 @@ jobs:
         packages:
           - *required_packages
           - *doc_packages
+  - env: DEF_LIB="shared" ABI_CHECKS=true
+    arch: amd64
+    compiler: gcc
+    addons:
+      apt:
+        packages:
+          - *required_packages
+          - *libabigail_build_packages
   # x86_64 clang jobs
   - env: DEF_LIB="static"
     arch: amd64
@@ -104,6 +123,14 @@ jobs:
         packages:
           - *required_packages
           - *doc_packages
+  - env: DEF_LIB="shared" ABI_CHECKS=true
+    arch: arm64
+    compiler: gcc
+    addons:
+      apt:
+        packages:
+          - *required_packages
+          - *libabigail_build_packages
   # aarch64 clang jobs
   - env: DEF_LIB="static"
     arch: arm64
-- 
2.23.0


^ permalink raw reply	[relevance 20%]

* [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions
  2020-11-24 21:57  4% [dpdk-dev] [PATCH] ci: hook to Github Actions David Marchand
  2020-11-25 13:44  0% ` Aaron Conole
@ 2020-12-04 17:36  4% ` David Marchand
  2020-12-04 17:36 20%   ` [dpdk-dev] [PATCH v2 2/2] ci: enable v21 ABI checks David Marchand
                     ` (2 more replies)
  1 sibling, 3 replies; 200+ results
From: David Marchand @ 2020-12-04 17:36 UTC (permalink / raw)
  To: dev; +Cc: aconole, Michael Santana, Thomas Monjalon

With the recent changes in terms of free access to the Travis CI, let's
offer an alternative with GitHub Actions.
Running jobs on ARM is not supported unless using external runners, so
this commit only adds builds for x86_64 and cross compiling for i386 and
aarch64.

Differences with the Travis CI integration:
- Error logs are not dumped to the console when something goes wrong.
  Instead, they are gathered in a "catch-all" step and attached as
  artifacts.
- A cache entry is stored once and for all, but if no cache is found you
  can inherit from the default branch cache. The cache is 5GB large, for
  the whole git repository.
- The maximum retention of logs and artifacts is 3 months.
- /home/runner is world writable, so a workaround has been added for
  starting dpdk processes.
- Ilya, working on OVS GHA support, noticed that jobs can run with
  processors that don't have the same capabilities. For DPDK, this
  impacts the ccache content since everything was built with
  -march=native so far, and we will end up with binaries that can't run
  in a later build. The problem has not been seen in Travis CI (?) but
  it is safer to use a fixed "-Dmachine=default" in any case.
- Scheduling jobs is part of the configuration and takes the form of a
  crontab. A build is scheduled every Monday at 0:00 (UTC) to provide a
  default ccache for the week (useful for the ovsrobot).

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changelog since v1:
- changed shell variables value in CI scripts and Travis configuration
  (s/=[^\$]*1/=\1true), this makes it easier for GHA,
- forced compilation as 'default' to avoid random unit tests issues in
  GHA,
- scheduled a run per week on Monday at 0:00 UTC,
- updated the ccache key:
  - no need to depend on the default-library parameter since this
    parameter only impacts the linking of dpdk binaries,
  - the week when the cache is generated is added so that jobs in
    other branches can benefit from a recent cache (mimicking what we had
    for the robot in Travis),
- realigned documentation generation with what is done in Travis:
  generating the doc in all jobs was a waste of resources,

---
 .ci/linux-build.sh          |  17 +++---
 .github/workflows/build.yml | 100 ++++++++++++++++++++++++++++++++++++
 .travis.yml                 |  24 ++++-----
 MAINTAINERS                 |   1 +
 4 files changed, 123 insertions(+), 19 deletions(-)
 create mode 100644 .github/workflows/build.yml

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index d079801d78..ee8d07f865 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -12,7 +12,9 @@ on_error() {
         fi
     done
 }
-trap on_error EXIT
+# We capture the error logs as artifacts in Github Actions, no need to dump
+# them via a EXIT handler.
+[ -n "$GITHUB_WORKFLOW" ] || trap on_error EXIT
 
 install_libabigail() {
     version=$1
@@ -28,16 +30,16 @@ install_libabigail() {
     rm ${version}.tar.gz
 }
 
-if [ "$AARCH64" = "1" ]; then
+if [ "$AARCH64" = "true" ]; then
     # convert the arch specifier
     OPTS="$OPTS --cross-file config/arm/arm64_armv8_linux_gcc"
 fi
 
-if [ "$BUILD_DOCS" = "1" ]; then
+if [ "$BUILD_DOCS" = "true" ]; then
     OPTS="$OPTS -Denable_docs=true"
 fi
 
-if [ "$BUILD_32BIT" = "1" ]; then
+if [ "$BUILD_32BIT" = "true" ]; then
     OPTS="$OPTS -Dc_args=-m32 -Dc_link_args=-m32"
     export PKG_CONFIG_LIBDIR="/usr/lib32/pkgconfig"
 fi
@@ -48,16 +50,17 @@ else
     OPTS="$OPTS -Dexamples=all"
 fi
 
+OPTS="$OPTS -Dmachine=default"
 OPTS="$OPTS --default-library=$DEF_LIB"
 OPTS="$OPTS --buildtype=debugoptimized"
 meson build --werror $OPTS
 ninja -C build
 
-if [ "$AARCH64" != "1" ]; then
+if [ "$AARCH64" != "true" ]; then
     devtools/test-null.sh
 fi
 
-if [ "$ABI_CHECKS" = "1" ]; then
+if [ "$ABI_CHECKS" = "true" ]; then
     LIBABIGAIL_VERSION=${LIBABIGAIL_VERSION:-libabigail-1.6}
 
     if [ "$(cat libabigail/VERSION 2>/dev/null)" != "$LIBABIGAIL_VERSION" ]; then
@@ -95,6 +98,6 @@ if [ "$ABI_CHECKS" = "1" ]; then
     devtools/check-abi.sh reference install ${ABI_CHECKS_WARN_ONLY:-}
 fi
 
-if [ "$RUN_TESTS" = "1" ]; then
+if [ "$RUN_TESTS" = "true" ]; then
     sudo meson test -C build --suite fast-tests -t 3
 fi
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
new file mode 100644
index 0000000000..bef6e52372
--- /dev/null
+++ b/.github/workflows/build.yml
@@ -0,0 +1,100 @@
+name: build
+
+on:
+  push:
+  schedule:
+    - cron: '0 0 * * 1'
+
+defaults:
+  run:
+    shell: bash --noprofile --norc -exo pipefail {0}
+
+jobs:
+  build:
+    name: ${{ join(matrix.config.*, '-') }}
+    runs-on: ${{ matrix.config.os }}
+    env:
+      AARCH64: ${{ matrix.config.cross == 'aarch64' }}
+      BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
+      BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
+      CC: ccache ${{ matrix.config.compiler }}
+      DEF_LIB: ${{ matrix.config.library }}
+      RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
+
+    strategy:
+      fail-fast: false
+      matrix:
+        config:
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: static
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: shared
+            checks: doc+tests
+          - os: ubuntu-18.04
+            compiler: clang
+            library: static
+          - os: ubuntu-18.04
+            compiler: clang
+            library: shared
+            checks: doc+tests
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: static
+            cross: i386
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: static
+            cross: aarch64
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: shared
+            cross: aarch64
+
+    steps:
+    - name: Checkout sources
+      uses: actions/checkout@v2
+    - name: Generate cache keys
+      id: get_ref_keys
+      run: |
+        echo -n '::set-output name=ccache::'
+        echo 'ccache-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-'$(date -u +%Y-w%W)
+    - name: Retrieve ccache cache
+      uses: actions/cache@v2
+      with:
+        path: ~/.ccache
+        key: ${{ steps.get_ref_keys.outputs.ccache }}-${{ github.ref }}
+        restore-keys: |
+          ${{ steps.get_ref_keys.outputs.ccache }}-refs/heads/main
+    - name: Install packages
+      run: sudo apt install -y ccache libnuma-dev python3-setuptools
+        python3-wheel python3-pip ninja-build libbsd-dev libpcap-dev
+        libibverbs-dev libcrypto++-dev libfdt-dev libjansson-dev
+    - name: Install i386 cross compiling packages
+      if: env.BUILD_32BIT == 'true'
+      run: sudo apt install -y gcc-multilib
+    - name: Install aarch64 cross compiling packages
+      if: env.AARCH64 == 'true'
+      run: sudo apt install -y gcc-aarch64-linux-gnu libc6-dev-arm64-cross
+        pkg-config-aarch64-linux-gnu
+    - name: Install doc generation packages
+      if: env.BUILD_DOCS == 'true'
+      run: sudo apt install -y doxygen graphviz python3-sphinx
+        python3-sphinx-rtd-theme
+    - name: Run setup
+      run: |
+        .ci/linux-setup.sh
+        # Workaround on $HOME permissions as EAL checks them for plugin loading
+        chmod o-w $HOME
+    - name: Build and test
+      run: .ci/linux-build.sh
+    - name: Upload logs on failure
+      if: failure()
+      uses: actions/upload-artifact@v2
+      with:
+        name: meson-logs-${{ join(matrix.config.*, '-') }}
+        path: |
+          build/meson-logs/testlog.txt
+          build/.ninja_log
+          build/meson-logs/meson-log.txt
diff --git a/.travis.yml b/.travis.yml
index 5e12db23b5..d655e286c3 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -34,10 +34,10 @@ jobs:
   - env: DEF_LIB="static"
     arch: amd64
     compiler: gcc
-  - env: DEF_LIB="shared" RUN_TESTS=1
+  - env: DEF_LIB="shared" RUN_TESTS=true
     arch: amd64
     compiler: gcc
-  - env: DEF_LIB="shared" BUILD_DOCS=1
+  - env: DEF_LIB="shared" BUILD_DOCS=true
     arch: amd64
     compiler: gcc
     addons:
@@ -49,10 +49,10 @@ jobs:
   - env: DEF_LIB="static"
     arch: amd64
     compiler: clang
-  - env: DEF_LIB="shared" RUN_TESTS=1
+  - env: DEF_LIB="shared" RUN_TESTS=true
     arch: amd64
     compiler: clang
-  - env: DEF_LIB="shared" BUILD_DOCS=1
+  - env: DEF_LIB="shared" BUILD_DOCS=true
     arch: amd64
     compiler: clang
     addons:
@@ -61,7 +61,7 @@ jobs:
           - *required_packages
           - *doc_packages
   # x86_64 cross-compiling 32-bits jobs
-  - env: DEF_LIB="static" BUILD_32BIT=1
+  - env: DEF_LIB="static" BUILD_32BIT=true
     arch: amd64
     compiler: gcc
     addons:
@@ -69,14 +69,14 @@ jobs:
         packages:
           - *build_32b_packages
   # x86_64 cross-compiling aarch64 jobs
-  - env: DEF_LIB="static" AARCH64=1
+  - env: DEF_LIB="static" AARCH64=true
     arch: amd64
     compiler: gcc
     addons:
       apt:
         packages:
           - *aarch64_packages
-  - env: DEF_LIB="shared" AARCH64=1
+  - env: DEF_LIB="shared" AARCH64=true
     arch: amd64
     compiler: gcc
     addons:
@@ -87,16 +87,16 @@ jobs:
   - env: DEF_LIB="static"
     arch: arm64
     compiler: gcc
-  - env: DEF_LIB="shared" RUN_TESTS=1
+  - env: DEF_LIB="shared" RUN_TESTS=true
     arch: arm64
     compiler: gcc
-  - env: DEF_LIB="shared" RUN_TESTS=1
+  - env: DEF_LIB="shared" RUN_TESTS=true
     dist: focal
     arch: arm64-graviton2
     virt: vm
     group: edge
     compiler: gcc
-  - env: DEF_LIB="shared" BUILD_DOCS=1
+  - env: DEF_LIB="shared" BUILD_DOCS=true
     arch: arm64
     compiler: gcc
     addons:
@@ -108,10 +108,10 @@ jobs:
   - env: DEF_LIB="static"
     arch: arm64
     compiler: clang
-  - env: DEF_LIB="shared" RUN_TESTS=1
+  - env: DEF_LIB="shared" RUN_TESTS=true
     arch: arm64
     compiler: clang
-  - env: DEF_LIB="shared" RUN_TESTS=1
+  - env: DEF_LIB="shared" RUN_TESTS=true
     dist: focal
     arch: arm64-graviton2
     virt: vm
diff --git a/MAINTAINERS b/MAINTAINERS
index eafe9f8c46..f45c8c1b13 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -109,6 +109,7 @@ Public CI
 M: Aaron Conole <aconole@redhat.com>
 M: Michael Santana <maicolgabriel@hotmail.com>
 F: .travis.yml
+F: .github/workflows/build.yml
 F: .ci/
 
 ABI Policy & Versioning
-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH] version: 21.02-rc0
@ 2020-11-30  9:23 10% David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-11-30  9:23 UTC (permalink / raw)
  To: dev; +Cc: thomas

Start a new release cycle with empty release notes.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Here we go again!

Patches have been archived in patchwork, all deferred patches are back in
the NEW state.

I'll send a separate patch to re-enable the ABI checks in the CI.

Please maintainers, rebase your trees.

The current dates for this release:
- Proposal deadline (RFC/v1 patches): December 20, 2020
- API freeze (-rc1): January 15, 2021
- Release: February 5, 2021

---
 ABI_VERSION                            |   2 +-
 VERSION                                |   2 +-
 doc/guides/rel_notes/index.rst         |   1 +
 doc/guides/rel_notes/release_21_02.rst | 139 +++++++++++++++++++++++++
 5 files changed, 143 insertions(+), 3 deletions(-)
 create mode 100644 doc/guides/rel_notes/release_21_02.rst

diff --git a/ABI_VERSION b/ABI_VERSION
index 204da679a1..a9ac8dacb0 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-21.0
+21.1
diff --git a/VERSION b/VERSION
index 4e3f998d00..30bbcd61a4 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-20.11.0
+21.02.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 31278d2a8a..05c9d837a4 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
     :maxdepth: 1
     :numbered:
 
+    release_21_02
     release_20_11
     release_20_08
     release_20_05
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
new file mode 100644
index 0000000000..39064afbe9
--- /dev/null
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -0,0 +1,139 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2020 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 21.02
+==================
+
+.. **Read this first.**
+
+   The text in the sections below explains how to update the release notes.
+
+   Use proper spelling, capitalization and punctuation in all sections.
+
+   Variable and config names should be quoted as fixed width text:
+   ``LIKE_THIS``.
+
+   Build the docs and view the output file to ensure the changes are correct::
+
+      make doc-guides-html
+
+      xdg-open build/doc/html/guides/rel_notes/release_21_02.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+   Sample format:
+
+   * **Add a title in the past tense with a full stop.**
+
+     Add a short 1-2 sentence description in the past tense.
+     The description should be enough to allow someone scanning
+     the release notes to understand the new feature.
+
+     If the feature adds a lot of sub-features you can use a bullet list
+     like this:
+
+     * Added feature foo to do something.
+     * Enhanced feature bar to do something else.
+
+     Refer to the previous release notes for examples.
+
+     Suggested order in release notes items:
+     * Core libs (EAL, mempool, ring, mbuf, buses)
+     * Device abstraction libs and PMDs
+       - ethdev (lib, PMDs)
+       - cryptodev (lib, PMDs)
+       - eventdev (lib, PMDs)
+       - etc
+     * Other libs
+     * Apps, Examples, Tools (if significant)
+
+     This section is a comment. Do not overwrite or remove it.
+     Also, make sure to start the actual text at the margin.
+     =========================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+   * Add a short 1-2 sentence description of the removed item
+     in the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the API change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the ABI change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
+
+* No ABI change that would break compatibility with 20.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+   * **Add title in present tense with full stop.**
+
+     Add a short 1-2 sentence description of the known issue
+     in the present tense. Add information on any known workarounds.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+   with this release.
+
+   The format is:
+
+   * <vendor> platform with <vendor> <type of devices> combinations
+
+     * List of CPU
+     * List of OS
+     * List of devices
+     * Other relevant details...
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
-- 
2.23.0


^ permalink raw reply	[relevance 10%]

* [dpdk-dev] [dpdk-announce] DPDK 20.11 released
@ 2020-11-27 21:37  4% Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-11-27 21:37 UTC (permalink / raw)
  To: announce

A new major release is available:
	https://fast.dpdk.org/rel/dpdk-20.11.tar.xz

Our Thanksgiving gift is the biggest DPDK release ever:
	2195 commits from 214 authors
	2665 files changed, 269546 insertions(+), 107426 deletions(-)

The branch 20.11 should be supported for at least two years,
making it recommended for system integration and deployment.
The maintainer of this new LTS is Kevin Traynor.

The new major ABI version is 21.
The next releases 21.02, 21.05 and 21.08 will be ABI compatible with 20.11.


Below are some new features, grouped by category.
* General
	- mbuf dynamic area increased from 16 to 36 bytes
	- ring zero copy
	- SIMD bitwidth limit API
	- Windows PCI netuio
	- moved igb_uio to dpdk-kmods/linux
	- removed Python 2 support
	- removed Make support
* Networking
	- FEC API
	- Rx buffer split
	- thread safety in flow API
	- shared action in flow API
	- flow sampling and mirroring
	- tunnel offload API
	- multi-port hairpin
	- Solarflare EF100 architecture
	- Wangxun txgbe driver
	- vhost-vDPA backend in virtio-user
	- removed vhost dequeue zero-copy
	- removed legacy ethdev filtering
	- SWX pipeline aligned with P4
* Baseband
	- Intel ACC100 driver
* Cryptography
	- raw datapath API
	- Broadcom BCMFS symmetric crypto driver
* RegEx
	- Marvell OCTEON TX2 regex driver
* Others
	- Intel DLB/DLB2 drivers
	- Intel DSA support in IOAT driver

More details in the release notes:
	https://doc.dpdk.org/guides/rel_notes/release_20_11.html


There are 64 new contributors (including authors, reviewers and testers).
Welcome to Aidan Goddard, Amit Bernstein, Andrey Vesnovaty, Artur Rojek,
Benoît Ganne, Brandon Lo, Brian Johnson, Brian Poole, Christophe Grosse,
Churchill Khangar, Conor Walsh, David Liu, Dawid Lukwinski,
Diogo Behrens, Dongdong Liu, Franck Lenormand, Galazka Krzysztof,
Guoyang Zhou, Haggai Eran, Harshitha Ramamurthy, Ibtisam Tariq,
Ido Segev, Jay Jayatheerthan, Jiawen Wu, Jie Zhou, John Alexander,
Julien Massonneau, Jørgen Østergaard Sloth, Khoa To, Li Zhang,
Lingli Chen, Liu Tianjiao, Maciej Rabeda, Marcel Cornu, Mike Ximing Chen,
Muthurajan Jayakumar, Nan Chen, Nick Connolly, Norbert Ciosek,
Omkar Maslekar, Padraig Connolly, Piotr Bronowski, Przemyslaw Ciesielski,
Qin Sun, Radha Mohan Chintakuntla, Rani Sharoni, Raveendra Padasalagi,
Robin Zhang, RongQing Li, Shay Amir, Steve Yang, Steven Lariau, Tom Rix,
Venkata Suresh Kumar P, Vijay Kumar Srivastava, Vikas Gupta,
Vimal Chungath, Vipul Ashri, Wei Huang, Wei Ling, Weqaar Janjua, Yi Yang,
Yogesh Jangra and Zhenghua Zhou.

Below is the number of commits per employer (with authors count):
	687     Intel (72)
	439     Nvidia (34)
	150     Huawei (11)
	123     Broadcom (15)
	116     Solarflare (1)
	104     Red Hat (6)
	 96     OKTET Labs (3)
	 79     Marvell (16)
	 71     Arm (7)
	 59     Trustnet (1)
	 52     Microsoft (3)
	 51     NXP (12)
	 27     Semihalf (1)
	 27     Samsung (2)
	 19     6WIND (5)
	 17     Cisco (4)
	 13     BIFIT (1)
	 11     Emumba (2)
	  7     Xilinx (1)
	  7     Chelsio (2)
	  5     Inspur (1)
	  4     MayaData (1)

Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
	128     Ferruh Yigit <ferruh.yigit@intel.com>
	 68     Bruce Richardson <bruce.richardson@intel.com>
	 63     Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
	 62     David Marchand <david.marchand@redhat.com>
	 53     Ruifeng Wang <ruifeng.wang@arm.com>
	 40     Konstantin Ananyev <konstantin.ananyev@intel.com>
	 38     Ajit Khaparde <ajit.khaparde@broadcom.com>
	 37     Ori Kam <orika@nvidia.com>
	 33     Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>


The new features for 21.02 may be submitted during the next 3 weeks,
in order to be reviewed and integrated before mid-January.
DPDK 21.02 should be small in order to be released in early February:
	https://core.dpdk.org/roadmap#dates
Please share your roadmap.


Thanks everyone, enjoy a well-deserved rest.



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] doc: add sample for ABI checks in contribution guide
  @ 2020-11-27 14:37  4% ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-11-27 14:37 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: John McNamara, Marko Kovacevic, dev

On Fri, Jul 3, 2020 at 7:15 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

"Sample" sounds odd to me, but applied as is.
Thanks.


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] doc: clarify abi reference version to use in patches
  @ 2020-11-27 14:37  4% ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-11-27 14:37 UTC (permalink / raw)
  To: Ray Kinsella; +Cc: dev, Yigit, Ferruh, John McNamara, Marko Kovacevic

On Mon, Aug 10, 2020 at 11:24 AM Ray Kinsella <mdr@ashroe.eu> wrote:
>
> Clarify the ABI reference version (DPDK_ABI_REF_VERSION) tag, to use
> when testing builds with devtools/test_build[_meson].sh before

devtools/test-meson-builds.sh*

> submitting patches.
>
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Reviewed-by: David Marchand <david.marchand@redhat.com>

Applied, thanks.


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] eal: fix errno on service cores init failure
  2020-11-26 16:37  0%   ` Olivier Matz
@ 2020-11-26 16:42  0%     ` Van Haaren, Harry
  0 siblings, 0 replies; 200+ results
From: Van Haaren, Harry @ 2020-11-26 16:42 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Richardson, Bruce, Jerin Jacob, stable

> -----Original Message-----
> From: Olivier Matz <olivier.matz@6wind.com>
> Sent: Thursday, November 26, 2020 4:37 PM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>; stable@dpdk.org
> Subject: Re: [PATCH] eal: fix errno on service cores init failure
> 
> Hi Harry,
> 
> On Thu, Nov 26, 2020 at 02:46:30PM +0000, Van Haaren, Harry wrote:
> > > -----Original Message-----
> > > From: Olivier Matz <olivier.matz@6wind.com>
> > > Sent: Thursday, November 26, 2020 2:25 PM
> > > To: dev@dpdk.org
> > > Cc: Richardson, Bruce <bruce.richardson@intel.com>; Jerin Jacob
> > > <jerin.jacob@caviumnetworks.com>; Van Haaren, Harry
> > > <harry.van.haaren@intel.com>; stable@dpdk.org
> > > Subject: [PATCH] eal: fix errno on service cores init failure
> > >
> > > Currently, when rte_service_init() fails at initialization, we
> > > see the following message:
> > >
> > >   Cannot init EAL: Exec format error
> > >
> > > This error code does describe the real issue. Instead, use the error
> > > code returned by the function.
> >
> > Should the above read as "does NOT describe" .. ?
> >
> > > Fixes: e39824500825 ("service: initialize with EAL")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> >
> > A few comments below, assuming we agree on those, add my Ack on v2?
> >
> > Checked, -ENOMEM and -EALREADY are returned today, which seem
> > better descriptive terms. Thanks for fixing,
> >
> > Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
> >
> >
> > > ---
> > >  lib/librte_eal/freebsd/eal.c | 4 ++--
> > >  lib/librte_eal/linux/eal.c   | 4 ++--
> > >  2 files changed, 4 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
> > > index d6ea023750..51478358c7 100644
> > > --- a/lib/librte_eal/freebsd/eal.c
> > > +++ b/lib/librte_eal/freebsd/eal.c
> > > @@ -906,7 +906,7 @@ rte_eal_init(int argc, char **argv)
> > >  	ret = rte_service_init();
> > >  	if (ret) {
> > >  		rte_eal_init_alert("rte_service_init() failed");
> > > -		rte_errno = ENOEXEC;
> > > +		rte_errno = -ret;
> > >  		return -1;
> > >  	}
> >
> > Here we set   rte_errno   as   -ret,  as in rte_service_init() we return the negative,
> e.g.   -ENOMEM.
> > Perhaps it is cleaner to return ENOMEM from rte_service_init(), and avoid the
> duplicate negation?
> >
> > rte_service_init() is not exported publicly in the .map file, so is internal only, and
> hence not an ABI break.
> 
> I think returning -errno is common in dpdk, so I'll keep it like
> this. Or it can eventually return -1 and set rte_errno.

OK, fine with as is too, minor thing, thanks!

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] eal: fix errno on service cores init failure
  2020-11-26 14:46  3% ` Van Haaren, Harry
@ 2020-11-26 16:37  0%   ` Olivier Matz
  2020-11-26 16:42  0%     ` Van Haaren, Harry
  0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2020-11-26 16:37 UTC (permalink / raw)
  To: Van Haaren, Harry; +Cc: dev, Richardson, Bruce, Jerin Jacob, stable

Hi Harry,

On Thu, Nov 26, 2020 at 02:46:30PM +0000, Van Haaren, Harry wrote:
> > -----Original Message-----
> > From: Olivier Matz <olivier.matz@6wind.com>
> > Sent: Thursday, November 26, 2020 2:25 PM
> > To: dev@dpdk.org
> > Cc: Richardson, Bruce <bruce.richardson@intel.com>; Jerin Jacob
> > <jerin.jacob@caviumnetworks.com>; Van Haaren, Harry
> > <harry.van.haaren@intel.com>; stable@dpdk.org
> > Subject: [PATCH] eal: fix errno on service cores init failure
> > 
> > Currently, when rte_service_init() fails at initialization, we
> > see the following message:
> > 
> >   Cannot init EAL: Exec format error
> > 
> > This error code does describe the real issue. Instead, use the error
> > code returned by the function.
> 
> Should the above read as "does NOT describe" .. ?
> 
> > Fixes: e39824500825 ("service: initialize with EAL")
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> 
> A few comments below, assuming we agree on those, add my Ack on v2?
> 
> Checked, -ENOMEM and -EALREADY are returned today, which seem
> better descriptive terms. Thanks for fixing,
> 
> Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
> 
> 
> > ---
> >  lib/librte_eal/freebsd/eal.c | 4 ++--
> >  lib/librte_eal/linux/eal.c   | 4 ++--
> >  2 files changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
> > index d6ea023750..51478358c7 100644
> > --- a/lib/librte_eal/freebsd/eal.c
> > +++ b/lib/librte_eal/freebsd/eal.c
> > @@ -906,7 +906,7 @@ rte_eal_init(int argc, char **argv)
> >  	ret = rte_service_init();
> >  	if (ret) {
> >  		rte_eal_init_alert("rte_service_init() failed");
> > -		rte_errno = ENOEXEC;
> > +		rte_errno = -ret;
> >  		return -1;
> >  	}
> 
> Here we set   rte_errno   as   -ret,  as in rte_service_init() we return the negative, e.g.   -ENOMEM.
> Perhaps it is cleaner to return ENOMEM from rte_service_init(), and avoid the duplicate negation?
> 
> rte_service_init() is not exported publicly in the .map file, so is internal only, and hence not an ABI break.

I think returning -errno is common in dpdk, so I'll keep it like
this. Or it can eventually return -1 and set rte_errno.

> 
> 
> > @@ -922,7 +922,7 @@ rte_eal_init(int argc, char **argv)
> >  	 */
> >  	ret = rte_service_start_with_defaults();
> >  	if (ret < 0 && ret != -ENOTSUP) {
> > -		rte_errno = ENOEXEC;
> > +		rte_errno = -ret;
> >  		return -1;
> >  	}
> > 
> > diff --git a/lib/librte_eal/linux/eal.c b/lib/librte_eal/linux/eal.c
> > index a4161be630..32b48c3de9 100644
> > --- a/lib/librte_eal/linux/eal.c
> > +++ b/lib/librte_eal/linux/eal.c
> > @@ -1273,7 +1273,7 @@ rte_eal_init(int argc, char **argv)
> >  	ret = rte_service_init();
> >  	if (ret) {
> >  		rte_eal_init_alert("rte_service_init() failed");
> > -		rte_errno = ENOEXEC;
> > +		rte_errno = -ret;
> >  		return -1;
> >  	}
> > 
> > @@ -1295,7 +1295,7 @@ rte_eal_init(int argc, char **argv)
> >  	 */
> >  	ret = rte_service_start_with_defaults();
> >  	if (ret < 0 && ret != -ENOTSUP) {
> > -		rte_errno = ENOEXEC;
> > +		rte_errno = -ret;
> >  		return -1;
> >  	}
> > 
> > --
> > 2.25.1
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] eal: fix errno on service cores init failure
  @ 2020-11-26 14:46  3% ` Van Haaren, Harry
  2020-11-26 16:37  0%   ` Olivier Matz
  0 siblings, 1 reply; 200+ results
From: Van Haaren, Harry @ 2020-11-26 14:46 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: Richardson, Bruce, Jerin Jacob, stable

> -----Original Message-----
> From: Olivier Matz <olivier.matz@6wind.com>
> Sent: Thursday, November 26, 2020 2:25 PM
> To: dev@dpdk.org
> Cc: Richardson, Bruce <bruce.richardson@intel.com>; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; stable@dpdk.org
> Subject: [PATCH] eal: fix errno on service cores init failure
> 
> Currently, when rte_service_init() fails at initialization, we
> see the following message:
> 
>   Cannot init EAL: Exec format error
> 
> This error code does describe the real issue. Instead, use the error
> code returned by the function.

Should the above read as "does NOT describe" .. ?

> Fixes: e39824500825 ("service: initialize with EAL")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>

A few comments below, assuming we agree on those, add my Ack on v2?

Checked, -ENOMEM and -EALREADY are returned today, which seem
better descriptive terms. Thanks for fixing,

Acked-by: Harry van Haaren <harry.van.haaren@intel.com>


> ---
>  lib/librte_eal/freebsd/eal.c | 4 ++--
>  lib/librte_eal/linux/eal.c   | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
> index d6ea023750..51478358c7 100644
> --- a/lib/librte_eal/freebsd/eal.c
> +++ b/lib/librte_eal/freebsd/eal.c
> @@ -906,7 +906,7 @@ rte_eal_init(int argc, char **argv)
>  	ret = rte_service_init();
>  	if (ret) {
>  		rte_eal_init_alert("rte_service_init() failed");
> -		rte_errno = ENOEXEC;
> +		rte_errno = -ret;
>  		return -1;
>  	}

Here we set   rte_errno   as   -ret,  as in rte_service_init() we return the negative, e.g.   -ENOMEM.
Perhaps it is cleaner to return ENOMEM from rte_service_init(), and avoid the duplicate negation?

rte_service_init() is not exported publicly in the .map file, so is internal only, and hence not an ABI break.
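
For illustration, a minimal sketch of the pattern the patch follows; the
helper and caller below are hypothetical and only stand in for
rte_service_init() and rte_eal_init():

#include <errno.h>
#include <stdlib.h>

#include <rte_errno.h>

/* Hypothetical init helper using the "negative errno on failure,
 * 0 on success" return style that rte_service_init() follows.
 */
static int
example_init(void)
{
        void *obj = malloc(64);

        if (obj == NULL)
                return -ENOMEM; /* negative errno value on failure */
        free(obj);
        return 0;
}

static int
example_caller(void)
{
        int ret = example_init();

        if (ret != 0) {
                rte_errno = -ret; /* flip the sign back before storing */
                return -1;
        }
        return 0;
}

Returning a plain ENOMEM from the helper would indeed avoid the double
negation, but it would depart from the negative-errno return style that
is common across DPDK internal functions.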


> @@ -922,7 +922,7 @@ rte_eal_init(int argc, char **argv)
>  	 */
>  	ret = rte_service_start_with_defaults();
>  	if (ret < 0 && ret != -ENOTSUP) {
> -		rte_errno = ENOEXEC;
> +		rte_errno = -ret;
>  		return -1;
>  	}
> 
> diff --git a/lib/librte_eal/linux/eal.c b/lib/librte_eal/linux/eal.c
> index a4161be630..32b48c3de9 100644
> --- a/lib/librte_eal/linux/eal.c
> +++ b/lib/librte_eal/linux/eal.c
> @@ -1273,7 +1273,7 @@ rte_eal_init(int argc, char **argv)
>  	ret = rte_service_init();
>  	if (ret) {
>  		rte_eal_init_alert("rte_service_init() failed");
> -		rte_errno = ENOEXEC;
> +		rte_errno = -ret;
>  		return -1;
>  	}
> 
> @@ -1295,7 +1295,7 @@ rte_eal_init(int argc, char **argv)
>  	 */
>  	ret = rte_service_start_with_defaults();
>  	if (ret < 0 && ret != -ENOTSUP) {
> -		rte_errno = ENOEXEC;
> +		rte_errno = -ret;
>  		return -1;
>  	}
> 
> --
> 2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] ci: hook to Github Actions
  2020-11-24 21:57  4% [dpdk-dev] [PATCH] ci: hook to Github Actions David Marchand
@ 2020-11-25 13:44  0% ` Aaron Conole
    2020-12-04 17:36  4% ` [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions David Marchand
  1 sibling, 1 reply; 200+ results
From: Aaron Conole @ 2020-11-25 13:44 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Michael Santana, Thomas Monjalon

David Marchand <david.marchand@redhat.com> writes:

> With the recent changes in terms of free access to the Travis CI, let's
> offer an alternative with Github Actions.
> Running jobs on ARM is not supported unless using external runners, so
> this commit only adds builds for x86_64 and cross compiling for i386 and
> aarch64.
>
> Differences with the Travis CI integration:
> - All jobs generate documentation.
>   This is not that heavy and the default timeout on actions is never
>   reached so no reason splitting this into multiple jobs.
> - Error logs are not dumped to the console when something goes wrong.
>   Instead, they are gathered in a "catch-all" step and attached as
>   artifacts.
> - A cache entry is stored once and for all, but if no cache is found you
>   can inherit from the default branch cache. The cache is 5GB large, for
>   the whole git repository.
> - The maximum retention of logs and artifacts is 3 months.
> - /home/runner is world writable, so a workaround has been added for
>   starting dpdk processes.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---

Thanks for working on this.  Sadly, I think we will have to abandon
Travis soon - given the new changes it is looking very awful. The robot
is already starved for job time.

Since we don't have ARM test runs, I guess we will have to rely on
something else for that coverage now, but I like that there is coverage
included at least to compile.

I will need to update the robot to pull information from github actions,
so for now it will need to be manually checked (but here's an example of
a run: https://github.com/ovsrobot/dpdk/actions/runs/382073265).  What's
nice is the robot is already primed to run the jobs, so that's good.

Acked-by: Aaron Conole <aconole@redhat.com>

>  .ci/linux-build.sh          |  4 +-
>  .github/workflows/build.yml | 98 +++++++++++++++++++++++++++++++++++++
>  MAINTAINERS                 |  1 +
>  3 files changed, 102 insertions(+), 1 deletion(-)
>  create mode 100644 .github/workflows/build.yml
>
> diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
> index d079801d78..a2a0e5bf42 100755
> --- a/.ci/linux-build.sh
> +++ b/.ci/linux-build.sh
> @@ -12,7 +12,9 @@ on_error() {
>          fi
>      done
>  }
> -trap on_error EXIT
> +# We capture the error logs as artifacts in Github Actions, no need to dump
> +# them via a EXIT handler.
> +[ -n "$GITHUB_WORKFLOW" ] || trap on_error EXIT
>  
>  install_libabigail() {
>      version=$1
> diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
> new file mode 100644
> index 0000000000..e0a8f1ed52
> --- /dev/null
> +++ b/.github/workflows/build.yml
> @@ -0,0 +1,98 @@
> +name: build
> +
> +on: push
> +
> +defaults:
> +  run:
> +    shell: bash --noprofile --norc -exo pipefail {0}
> +
> +jobs:
> +  build:
> +    name: ${{ join(matrix.config.*, '-') }}
> +    runs-on: ${{ matrix.config.os }}
> +    env:
> +      PKGS: |
> +        ccache libnuma-dev python3-setuptools python3-wheel python3-pip \
> +        ninja-build libbsd-dev libpcap-dev libibverbs-dev libcrypto++-dev \
> +        libfdt-dev libjansson-dev doxygen graphviz python3-sphinx \
> +        python3-sphinx-rtd-theme
> +      CC: ccache ${{ matrix.config.compiler }}
> +      JOBNAME: ${{ join(matrix.config.*, '-') }}
> +
> +    strategy:
> +      fail-fast: false
> +      matrix:
> +        config:
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: static
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: shared
> +          - os: ubuntu-18.04
> +            compiler: clang
> +            library: static
> +          - os: ubuntu-18.04
> +            compiler: clang
> +            library: shared
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: static
> +            cross: i386
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: static
> +            cross: aarch64
> +          - os: ubuntu-18.04
> +            compiler: gcc
> +            library: shared
> +            cross: aarch64
> +
> +    steps:
> +    - uses: actions/checkout@v2
> +    - uses: actions/cache@v2
> +      with:
> +        path: ~/.ccache
> +        key: ${{ env.JOBNAME }}-${{ github.ref }}
> +        restore-keys: |
> +          ${{ env.JOBNAME }}-refs/heads/main
> +    - name: Install packages
> +      run: sudo apt install -y ${{ env.PKGS }}
> +    - name: Install i386 cross compiling packages
> +      if: matrix.config.cross == 'i386'
> +      run: sudo apt install -y gcc-multilib
> +    - name: Install aarch64 cross compiling packages
> +      if: matrix.config.cross == 'aarch64'
> +      run: |
> +        sudo apt install -y gcc-aarch64-linux-gnu libc6-dev-arm64-cross \
> +          pkg-config-aarch64-linux-gnu
> +    - name: Prepare environment
> +      run: |
> +         .ci/linux-setup.sh
> +         # Workaround on $HOME permissions as EAL checks them for plugin loading
> +         chmod o-w $HOME
> +    - name: Build and test
> +      run: |
> +        export DEF_LIB=${{ matrix.config.library }}
> +        export BUILD_DOCS=1
> +        case '${{ matrix.config.cross }}' in
> +        'i386')
> +            export BUILD_32BIT=1
> +        ;;
> +        'aarch64')
> +            export AARCH64=1
> +        ;;
> +        '')
> +            export RUN_TESTS=1
> +        ;;
> +        esac
> +        .ci/linux-build.sh
> +    - name: Upload logs on failure
> +      if: failure()
> +      uses: actions/upload-artifact@v2
> +      with:
> +        name: meson-logs-${{ env.JOBNAME }}
> +        path: |
> +          build/meson-logs/testlog.txt
> +          build/.ninja_log
> +          build/meson-logs/meson-log.txt
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 214515060a..95b61085b7 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -109,6 +109,7 @@ Public CI
>  M: Aaron Conole <aconole@redhat.com>
>  M: Michael Santana <maicolgabriel@hotmail.com>
>  F: .travis.yml
> +F: .github/workflows/build.yml
>  F: .ci/
>  
>  ABI Policy & Versioning


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] ci: hook to Github Actions
@ 2020-11-24 21:57  4% David Marchand
  2020-11-25 13:44  0% ` Aaron Conole
  2020-12-04 17:36  4% ` [dpdk-dev] [PATCH v2 1/2] ci: hook to GitHub Actions David Marchand
  0 siblings, 2 replies; 200+ results
From: David Marchand @ 2020-11-24 21:57 UTC (permalink / raw)
  To: dev; +Cc: Aaron Conole, Michael Santana, Thomas Monjalon

With the recent changes in terms of free access to the Travis CI, let's
offer an alternative with Github Actions.
Running jobs on ARM is not supported unless using external runners, so
this commit only adds builds for x86_64 and cross compiling for i386 and
aarch64.

Differences with the Travis CI integration:
- All jobs generate documentation.
  This is not that heavy and the default timeout on actions is never
  reached so no reason splitting this into multiple jobs.
- Error logs are not dumped to the console when something goes wrong.
  Instead, they are gathered in a "catch-all" step and attached as
  artifacts.
- A cache entry is stored once and for all, but if no cache is found you
  can inherit from the default branch cache. The cache is 5GB large, for
  the whole git repository.
- The maximum retention of logs and artifacts is 3 months.
- /home/runner is world writable, so a workaround has been added for
  starting dpdk processes.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 .ci/linux-build.sh          |  4 +-
 .github/workflows/build.yml | 98 +++++++++++++++++++++++++++++++++++++
 MAINTAINERS                 |  1 +
 3 files changed, 102 insertions(+), 1 deletion(-)
 create mode 100644 .github/workflows/build.yml

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index d079801d78..a2a0e5bf42 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -12,7 +12,9 @@ on_error() {
         fi
     done
 }
-trap on_error EXIT
+# We capture the error logs as artifacts in Github Actions, no need to dump
+# them via a EXIT handler.
+[ -n "$GITHUB_WORKFLOW" ] || trap on_error EXIT
 
 install_libabigail() {
     version=$1
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
new file mode 100644
index 0000000000..e0a8f1ed52
--- /dev/null
+++ b/.github/workflows/build.yml
@@ -0,0 +1,98 @@
+name: build
+
+on: push
+
+defaults:
+  run:
+    shell: bash --noprofile --norc -exo pipefail {0}
+
+jobs:
+  build:
+    name: ${{ join(matrix.config.*, '-') }}
+    runs-on: ${{ matrix.config.os }}
+    env:
+      PKGS: |
+        ccache libnuma-dev python3-setuptools python3-wheel python3-pip \
+        ninja-build libbsd-dev libpcap-dev libibverbs-dev libcrypto++-dev \
+        libfdt-dev libjansson-dev doxygen graphviz python3-sphinx \
+        python3-sphinx-rtd-theme
+      CC: ccache ${{ matrix.config.compiler }}
+      JOBNAME: ${{ join(matrix.config.*, '-') }}
+
+    strategy:
+      fail-fast: false
+      matrix:
+        config:
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: static
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: shared
+          - os: ubuntu-18.04
+            compiler: clang
+            library: static
+          - os: ubuntu-18.04
+            compiler: clang
+            library: shared
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: static
+            cross: i386
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: static
+            cross: aarch64
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: shared
+            cross: aarch64
+
+    steps:
+    - uses: actions/checkout@v2
+    - uses: actions/cache@v2
+      with:
+        path: ~/.ccache
+        key: ${{ env.JOBNAME }}-${{ github.ref }}
+        restore-keys: |
+          ${{ env.JOBNAME }}-refs/heads/main
+    - name: Install packages
+      run: sudo apt install -y ${{ env.PKGS }}
+    - name: Install i386 cross compiling packages
+      if: matrix.config.cross == 'i386'
+      run: sudo apt install -y gcc-multilib
+    - name: Install aarch64 cross compiling packages
+      if: matrix.config.cross == 'aarch64'
+      run: |
+        sudo apt install -y gcc-aarch64-linux-gnu libc6-dev-arm64-cross \
+          pkg-config-aarch64-linux-gnu
+    - name: Prepare environment
+      run: |
+         .ci/linux-setup.sh
+         # Workaround on $HOME permissions as EAL checks them for plugin loading
+         chmod o-w $HOME
+    - name: Build and test
+      run: |
+        export DEF_LIB=${{ matrix.config.library }}
+        export BUILD_DOCS=1
+        case '${{ matrix.config.cross }}' in
+        'i386')
+            export BUILD_32BIT=1
+        ;;
+        'aarch64')
+            export AARCH64=1
+        ;;
+        '')
+            export RUN_TESTS=1
+        ;;
+        esac
+        .ci/linux-build.sh
+    - name: Upload logs on failure
+      if: failure()
+      uses: actions/upload-artifact@v2
+      with:
+        name: meson-logs-${{ env.JOBNAME }}
+        path: |
+          build/meson-logs/testlog.txt
+          build/.ninja_log
+          build/meson-logs/meson-log.txt
diff --git a/MAINTAINERS b/MAINTAINERS
index 214515060a..95b61085b7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -109,6 +109,7 @@ Public CI
 M: Aaron Conole <aconole@redhat.com>
 M: Michael Santana <maicolgabriel@hotmail.com>
 F: .travis.yml
+F: .github/workflows/build.yml
 F: .ci/
 
 ABI Policy & Versioning
-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v1] doc: update release notes for 20.11
@ 2020-11-24 20:40  7% John McNamara
  0 siblings, 0 replies; 200+ results
From: John McNamara @ 2020-11-24 20:40 UTC (permalink / raw)
  To: dev; +Cc: thomas, John McNamara

Fix grammar, spelling and formatting of DPDK 20.11 release notes.

Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst | 178 +++++++++++++------------
 1 file changed, 94 insertions(+), 84 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index ea70289af..2ce47614c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -59,7 +59,7 @@ New Features
 
   Added ``rte_write32_wc`` and ``rte_write32_wc_relaxed`` APIs
   that enable write combining stores (depending on architecture).
-  The functions are provided as a generic stubs and
+  The functions are provided as a generic stub and
   x86 specific implementation.
 
 * **Added prefetch with intention to write APIs.**
@@ -108,45 +108,50 @@ New Features
 * **Added the FEC API, for a generic FEC query and config.**
 
   Added the FEC API which provides functions for query FEC capabilities and
-  current FEC mode from device. Also, API for configuring FEC mode is also provided.
+  current FEC mode from device. An API for configuring FEC mode is also provided.
 
 * **Added thread safety to rte_flow functions.**
 
-  Added ``RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE`` device flag to indicate
-  whether PMD supports thread safe operations. If PMD doesn't set the flag,
-  rte_flow API level functions will protect the flow operations with mutex.
+  Added the ``RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE`` device flag to indicate
+  whether a PMD supports thread safe operations. If the PMD doesn't set the flag,
+  the rte_flow API level functions will protect the flow operations with a mutex.
 
 * **Added flow-based traffic sampling support.**
 
-  Added new action: ``RTE_FLOW_ACTION_TYPE_SAMPLE`` to duplicate the matching
-  packets with specified ratio, and apply with own set of actions with a fate
-  action. When the ratio is set to 1 then the packets will be 100% mirrored.
+  Added a new action ``RTE_FLOW_ACTION_TYPE_SAMPLE`` that will sample the
+  incoming traffic and send a duplicated traffic with the specified ratio to
+  the application, while the original packet will continue to the target
+  destination.
+
+  The packets sampling is '1/ratio'. A ratio value set to 1 means that the
+  packets will be completely mirrored. The sample packet can be assigned with
+  a different set of actions than the original packet.
 
 * **Added support of shared action in flow API.**
 
-  Added shared action support to utilize single flow action in multiple flow
-  rules. An update of shared action configuration alters the behavior of all
+  Added shared action support to use single flow actions in multiple flow
+  rules. An update to the shared action configuration alters the behavior of all
   flow rules using it.
 
-  * Added new action: ``RTE_FLOW_ACTION_TYPE_SHARED`` to use shared action
-    as flow action.
-  * Added new flow APIs to create/update/destroy/query shared action.
+  * Added a new action: ``RTE_FLOW_ACTION_TYPE_SHARED`` to use shared action
+    as a flow action.
+  * Added new flow APIs to create/update/destroy/query shared actions.
 
-* **Flow rules allowed to use private PMD items / actions.**
+* **Added support to flow rules to allow private PMD items/actions.**
 
-  * Flow rule verification was updated to accept private PMD
+  * Flow rule verification has been  updated to accept private PMD
     items and actions.
 
-* **Added generic API to offload tunneled traffic and restore missed packet.**
+* **Added a generic API to offload tunneled traffic and restore missed packets.**
 
-  * Added a new hardware independent helper to flow API that
+  * Added a new hardware independent helper to the flow API that
     offloads tunneled traffic and restores missed packets.
 
 * **Updated the ethdev library to support hairpin between two ports.**
 
-  New APIs are introduced to support binding / unbinding 2 ports hairpin.
-  Hairpin Tx part flow rules can be inserted explicitly.
-  New API is added to get the hairpin peer ports list.
+  New APIs have been introduced to support binding / unbinding of 2 ports in a
+  hairpin configuration. The hairpin Tx part flow rules can be inserted
+  explicitly. A new API has been added to get the hairpin peer ports list.
 
 * **Updated the Amazon ena driver.**
 
@@ -175,12 +180,12 @@ New Features
 
 * **Added hns3 FEC PMD, for supporting query and config FEC mode.**
 
-  Added the FEC PMD which provides functions for query FEC capabilities and
-  current FEC mode from device. Also, PMD for configuring FEC mode is also provided.
+  Added the FEC PMD which provides functions for querying FEC capabilities and
+  current FEC mode from a device. A PMD for configuring FEC mode is also provided.
 
-* **Updated Intel iavf driver.**
+* **Updated the Intel iavf driver.**
 
-  Updated iavf PMD with new features and improvements, including:
+  Updated the iavf PMD with new features and improvements, including:
 
   * Added support for flexible descriptor metadata extraction.
   * Added support for outer IP hash of GTPC and GTPU.
@@ -189,12 +194,12 @@ New Features
 
 * **Updated Intel ice driver.**
 
-  * Used write combining stores.
-  * Added ACL filter support for Intel DCF.
+  * Added support for write combining stores.
+  * Added ACL filter support for the Intel DCF.
 
-* **Updated Mellanox mlx5 driver.**
+* **Updated the Mellanox mlx5 driver.**
 
-  Updated Mellanox mlx5 driver with new features and improvements, including:
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
 
   * Added vectorized Multi-Packet Rx Queue burst.
   * Added support for 2 new miniCQE formats: Flow Tag and L3/L4 header.
@@ -204,9 +209,9 @@ New Features
   * Added support for the new VLAN fields ``has_vlan`` in the Ethernet item
     and ``has_more_vlan`` in the VLAN item.
   * Updated the supported timeout for Age action to the maximal value supported
-    by rte_flow API.
-  * Added support of Age action query.
-  * Added support of multi-ports hairpin.
+    by the rte_flow API.
+  * Added support for Age action query.
+  * Added support for multi-ports hairpin.
   * Allow unknown link speed.
 
   Updated Mellanox mlx5 vDPA driver:
@@ -221,7 +226,7 @@ New Features
   * Added Alveo SN1000 SmartNICs (EF100 architecture) support including
     flow API transfer rules for switch HW offload
   * Added ARMv8 support
-  * Claimed flow API native thread safety
+  * Added flow API native thread safety
 
 * **Added Wangxun txgbe PMD.**
 
@@ -231,9 +236,9 @@ New Features
 
 * **Updated Virtio driver.**
 
-  * Added support for Vhost-vDPA backend to Virtio-user PMD.
+  * Added support for Vhost-vDPA backend to the Virtio-user PMD.
   * Changed default link speed to unknown.
-  * Added support for 200G link speed.
+  * Added support for the 200G link speed.
 
 * **Updated Intel i40e driver.**
 
@@ -249,40 +254,40 @@ New Features
 
 * **Updated Memif PMD.**
 
-  * Added support for abstract socket address.
+  * Added support for abstract socket addresses.
   * Changed default socket address type to abstract.
 
 * **Added Ice Lake (Gen4) support for Intel NTB.**
 
-  Added NTB device support (4th generation) for Intel Ice Lake platform.
+  Added NTB device support (4th generation) for the Intel Ice Lake platform.
 
 * **Added UDP/IPv4 GRO support for VxLAN and non-VxLAN packets.**
 
   For VxLAN packets, added inner UDP/IPv4 support.
   For non-VxLAN packets, added UDP/IPv4 support.
 
-* **Extended flow-perf application.**
+* **Extended the flow-perf application.**
 
-  * Started supporting user order instead of bit mask:
+  * Added support for user order instead of bit mask.
     Now the user can create any structure of rte_flow
-    using flow performance application with any order,
-    moreover the app also now starts to support inner
+    using the flow performance application with any order.
+    Moreover the app also now starts to support inner
     items matching as well.
   * Added header modify actions.
   * Added flag action.
   * Added raw encap/decap actions.
   * Added VXLAN encap/decap actions.
-  * Added ICMP(code/type/identifier/sequence number) and ICMP6(code/type) matching items.
+  * Added ICMP (code/type/identifier/sequence number) and ICMP6 (code/type) matching items.
   * Added option to set port mask for insertion/deletion:
     ``--portmask=N``
-    where N represents the hexadecimal bitmask of ports used.
+    where N represents the hexadecimal bitmask of the ports used.
 
 * **Added raw data-path APIs for cryptodev library.**
 
-  Cryptodev is added with raw data-path APIs to accelerate external
-  libraries or applications which need to avail fast cryptodev
-  enqueue/dequeue operations but does not necessarily depends on
-  mbufs and cryptodev operation mempools.
+  Added raw data-path APIs to Cryptodev to help accelerate external libraries
+  or applications which need to avail of fast cryptodev enqueue/dequeue
+  operations but which do not necessarily need to depend on mbufs and
+  cryptodev operation mempools.
 
 * **Updated the aesni_mb crypto PMD.**
 
@@ -319,7 +324,7 @@ New Features
   * Updated the OCTEON TX2 crypto PMD lookaside protocol offload for IPsec with
     IPv6 support.
 
-* **Updated QAT crypto PMD.**
+* **Updated the QAT crypto PMD.**
 
   * Added Raw Data-path APIs support.
 
@@ -332,18 +337,18 @@ New Features
 * **Updated rte_security library to support SDAP.**
 
   ``rte_security_pdcp_xform`` in ``rte_security`` lib is updated to enable
-  5G NR processing of SDAP header in PMDs.
+  5G NR processing of SDAP headers in PMDs.
 
 * **Added Marvell OCTEON TX2 regex PMD.**
 
-  Added a new PMD driver for hardware regex offload block for OCTEON TX2 SoC.
+  Added a new PMD driver for the hardware regex offload block for OCTEON TX2 SoC.
 
   See the :doc:`../regexdevs/octeontx2` for more details.
 
 * **Updated Software Eventdev driver.**
 
   Added performance tuning arguments to allow tuning the scheduler for
-  better throughtput in high core count use cases.
+  better throughput in high core count use cases.
 
 * **Added a new driver for the Intel Dynamic Load Balancer v1.0 device.**
 
@@ -355,12 +360,14 @@ New Features
   Added the new ``dlb2`` eventdev driver for the Intel DLB V2.0 device. See the
   :doc:`../eventdevs/dlb2` eventdev guide for more details on this new driver.
 
-* **Updated ioat rawdev driver**
+* **Updated ioat rawdev driver.**
 
   The ioat rawdev driver has been updated and enhanced. Changes include:
 
-  * Added support for Intel\ |reg| Data Streaming Accelerator hardware.
-    For more information, see https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
+  * Added support for Intel\ |reg| Data Streaming Accelerator hardware.  For
+    more information, see `Introducing the Intel Data Streaming Accelerator
+    (Intel DSA)
+    <https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator>`_.
   * Added support for the fill operation via the API ``rte_ioat_enqueue_fill()``,
     where the hardware fills an area of memory with a repeating pattern.
   * Added a per-device configuration flag to disable management
@@ -369,7 +376,7 @@ New Features
     and renamed the ``rte_ioat_completed_copies()`` API to ``rte_ioat_completed_ops()``
     to better reflect the APIs' purposes, and remove the implication that
     they are limited to copy operations only.
-    [Note: The old API is still provided but marked as deprecated in the code]
+    Note: The old API is still provided but marked as deprecated in the code.
   * Added a new API ``rte_ioat_fence()`` to add a fence between operations.
     This API replaces the ``fence`` flag parameter in the ``rte_ioat_enqueue_copies()`` function,
     and is clearer as there is no ambiguity as to whether the flag should be
@@ -377,11 +384,12 @@ New Features
 
 * **Updated the pipeline library for alignment with the P4 language.**
 
-  Added new Software Switch (SWX) pipeline type that provides more
-  flexibility through API and feature alignment with the P4 language.
+  Added a new Software Switch (SWX) pipeline type that provides more
+  flexibility through APIs and feature alignment with the P4 language.
+  Some enhancements are:
 
   * The packet headers, meta-data, actions, tables and pipelines are
-    dynamically defined instead of selected from pre-defined set.
+    dynamically defined instead of selected from a pre-defined set.
   * The actions and the pipeline are defined with instructions.
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
@@ -401,9 +409,9 @@ New Features
 * **Added support to update subport bandwidth dynamically.**
 
    * Added new API ``rte_sched_port_subport_profile_add`` to add new
-     subport bandwidth profile to subport porfile table at runtime.
+     subport bandwidth profiles to the subport profile table at runtime.
 
-   * Added support to update subport rate dynamically.
+   * Added support to update the subport rate dynamically.
 
 * **Updated FIPS validation sample application.**
 
@@ -420,8 +428,8 @@ New Features
 
 * **Updated vhost sample application.**
 
-  Added vhost asynchronous APIs support, which demonstrated how the application
-  leverage IOAT DMA channel with vhost asynchronous APIs.
+  Added vhost asynchronous APIs support, which demonstrates how the application
+  can leverage IOAT DMA channels with vhost asynchronous APIs.
   See the :doc:`../sample_app_ug/vhost` for more details.
 
 
@@ -437,16 +445,18 @@ Removed Items
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* build: Support for the Make build system was removed for compiling DPDK,
+* build: Support for the Make build system has been removed from DPDK.
   Meson is now the primary build system.
   Sample applications can still be built with Make standalone, using pkg-config.
 
 * vhost: Dequeue zero-copy support has been removed.
 
 * kernel: The module ``igb_uio`` has been moved to the git repository
-  ``dpdk-kmods`` in a new directory ``linux/igb_uio``.
+  `dpdk-kmods <https://git.dpdk.org/dpdk-kmods/>`_ in a new directory
+  ``linux/igb_uio``.
 
-* Removed Python 2 support since it was EOL'd in January 2020.
+* Removed Python 2 support since it was sunsetted in January 2020. See
+  `Sunsetting Python 2 <https://www.python.org/doc/sunset-python-2/>`_
 
 * Removed TEP termination sample application.
 
@@ -466,11 +476,11 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* build macros: The macros defining ``RTE_MACHINE_CPUFLAG_*`` are removed.
-  The information provided by these macros is available through standard
+* build macros: The macros defining ``RTE_MACHINE_CPUFLAG_*`` have been removed.
+  The information provided by these macros is now available through standard
   compiler macros.
 
-* eal: Replaced the function ``rte_get_master_lcore()`` to
+* eal: Replaced the function ``rte_get_master_lcore()`` with
   ``rte_get_main_lcore()``. The old function is deprecated.
 
   The iterator for worker lcores is also changed:
@@ -478,7 +488,7 @@ API Changes
   ``RTE_LCORE_FOREACH_WORKER``.
 
 * eal: The definitions related to including and excluding devices
-  has been changed from blacklist/whitelist to block/allow list.
+  have been changed from blacklist/whitelist to block/allow list.
   There are compatibility macros and command line mapping to accept
   the old values but applications and scripts are strongly encouraged
   to migrate to the new names.
@@ -494,11 +504,11 @@ API Changes
 
 * mem: Removed the unioned field ``phys_addr`` from
   the structures ``rte_memseg`` and ``rte_memzone``.
-  The field ``iova`` is remaining from the old unions.
+  The field ``iova`` remains from the old unions.
 
 * mempool: Removed the unioned fields ``phys_addr`` and ``physaddr`` from
   the structures ``rte_mempool_memhdr`` and ``rte_mempool_objhdr``.
-  The field ``iova`` is remaining from the old unions.
+  The field ``iova`` remains from the old unions.
   The flag name ``MEMPOOL_F_NO_PHYS_CONTIG`` is removed,
   while the aliased flag ``MEMPOOL_F_NO_IOVA_CONTIG`` is kept.
 
@@ -508,11 +518,11 @@ API Changes
   having ``iova`` in their names instead of ``dma_addr`` or ``mtophys``.
 
 * mbuf: Removed the unioned field ``buf_physaddr`` from ``rte_mbuf``.
-  The field ``buf_iova`` is remaining from the old union.
+  The field ``buf_iova`` remains from the old union.
 
 * mbuf: Removed the unioned field ``refcnt_atomic`` from
   the structures ``rte_mbuf`` and ``rte_mbuf_ext_shared_info``.
-  The field ``refcnt`` is remaining from the old unions.
+  The field ``refcnt`` remains from the old unions.
 
 * mbuf: Removed the unioned fields ``userdata`` and ``udata64``
   from the structure ``rte_mbuf``. It is replaced with dynamic fields.
@@ -558,7 +568,7 @@ API Changes
 
 * ethdev: Modified field type of ``base`` and ``nb_queue`` in struct
   ``rte_eth_dcb_tc_queue_mapping`` from ``uint8_t`` to ``uint16_t``.
-  As the data of ``uint8_t`` will be truncated when queue number under
+  As the data of ``uint8_t`` will be truncated when queue number in
   a TC is greater than 256.
 
 * ethdev: Removed the legacy filter API, including
@@ -574,7 +584,7 @@ API Changes
   instead of ``rte_vhost_driver_start`` by crypto applications.
 
 * cryptodev: The structure ``rte_crypto_sym_vec`` is updated to support both
-  cpu_crypto synchrounous operation and asynchronous raw data-path APIs.
+  cpu_crypto synchronous operations and asynchronous raw data-path APIs.
 
 * cryptodev: ``RTE_CRYPTO_AEAD_LIST_END`` from ``enum rte_crypto_aead_algorithm``,
   ``RTE_CRYPTO_CIPHER_LIST_END`` from ``enum rte_crypto_cipher_algorithm`` and
@@ -592,12 +602,12 @@ API Changes
   ``RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES`` to
   ``RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS``.
 
-* security: ``hfn_ovrd`` field in ``rte_security_pdcp_xform`` is changed from
+* security: The ``hfn_ovrd`` field in ``rte_security_pdcp_xform`` is changed from
   ``uint32_t`` to ``uint8_t`` so that a new field ``sdap_enabled`` can be added
   to support SDAP.
 
 * security: The API ``rte_security_session_create`` is updated to take two
-  mempool objects one for session and other for session private data.
+  mempool objects: one for session and other for session private data.
   So the application need to create two mempools and get the size of session
   private data using API ``rte_security_session_get_size`` for private session
   mempool.
@@ -645,10 +655,10 @@ API Changes
   * ``pkt`` is not freed, no matter whether it is GSOed, leaving to the caller.
 
 * acl: ``RTE_ACL_CLASSIFY_NUM`` enum value has been removed.
-  This enum value was not used inside DPDK, while it prevented to add new
+  This enum value was not used inside DPDK, while it prevented the addition of new
   classify algorithms without causing an ABI breakage.
 
-* sched: Added ``subport_profile_id`` as argument
+* sched: Added ``subport_profile_id`` as an argument
   to function ``rte_sched_subport_config``.
 
 * sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
@@ -670,11 +680,11 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* eal: Removed the not implemented function ``rte_dump_registers()``.
+* eal: Removed the unimplemented function ``rte_dump_registers()``.
 
 * ``ethdev`` changes
 
-  * Following device operation function pointers moved
+  * The following device operation function pointers moved
     from ``struct eth_dev_ops`` to ``struct rte_eth_dev``:
 
     * ``eth_rx_queue_count_t       rx_queue_count;``
@@ -682,8 +692,8 @@ ABI Changes
     * ``eth_rx_descriptor_status_t rx_descriptor_status;``
     * ``eth_tx_descriptor_status_t tx_descriptor_status;``
 
-  * ``struct eth_dev_ops`` is no more accessible by applications,
-    which was already internal data structure.
+  * ``struct eth_dev_ops`` is no longer accessible by applications,
+    which was already an internal data structure.
 
   * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
@@ -704,11 +714,11 @@ ABI Changes
   * Added new field ``has_vlan`` to structure ``rte_flow_item_eth``,
     indicating that packet header contains at least one VLAN.
 
-  * Added new field ``has_more_vlan`` to structure
+  * Added new field ``has_more_vlan`` to the structure
     ``rte_flow_item_vlan``, indicating that packet header contains
     at least one more VLAN, after this VLAN.
 
-* eventdev: Following structures are modified to support DLB/DLB2 PMDs
+* eventdev: The following structures are modified to support DLB/DLB2 PMDs
   and future extensions:
 
   * ``rte_event_dev_info``
-- 
2.25.1


^ permalink raw reply	[relevance 7%]

* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
  2020-11-24 13:00  4%             ` Andrew Rybchenko
@ 2020-11-24 13:01  0%               ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2020-11-24 13:01 UTC (permalink / raw)
  To: Ferruh Yigit, Ori Kam, Ray Kinsella, Neil Horman
  Cc: dev, NBU-Contact-Thomas Monjalon

On 11/24/20 4:00 PM, Andrew Rybchenko wrote:
> On 11/24/20 3:56 PM, Ferruh Yigit wrote:
>> On 11/24/2020 11:43 AM, Ori Kam wrote:
>>> Hi
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Sent: Monday, November 23, 2020 5:51 PM
>>>> Subject: Re: [PATCH] doc: announce flow API matching pattern struct
>>>> changes
>>>>
>>>> On 11/23/2020 2:25 PM, Andrew Rybchenko wrote:
>>>>> On 11/23/20 5:17 PM, Ferruh Yigit wrote:
>>>>>> On 11/23/2020 1:50 PM, Andrew Rybchenko wrote:
>>>>>>> On 11/23/20 4:40 PM, Ferruh Yigit wrote:
>>>>>>>> Proposing to replace protocol header fields in the
>>>>>>>> ``rte_flow_item_*``
>>>>>>>> structures with the protocol structs, like:
>>>>>>>>
>>>>>>>> Current ``struct rte_flow_item_eth``,
>>>>>>>>
>>>>>>>> struct rte_flow_item_eth {
>>>>>>>>       struct rte_ether_addr dst;
>>>>>>>>       struct rte_ether_addr src;
>>>>>>>>       rte_be16_t type;
>>>>>>>>       uint32_t has_vlan:1;
>>>>>>>>       uint32_t reserved:31;
>>>>>>>> }
>>>>>>>>
>>>>>>>> will become
>>>>>>>>
>>>>>>>> struct rte_flow_item_eth {
>>>>>>>>       struct rte_ether_hdr hdr;
>>>>>>>>       uint32_t has_vlan:1;
>>>>>>>>       uint32_t reserved:31;
>>>>>>>> }
>>>>>>>>
>>>>>>>> This is both for documenting the intention and to be sure
>>>>>>>> ``rte_flow_item_*`` always starts with complete protocol header.
>>>>>>>>
>>>>>>>> Already many ``rte_flow_item_*`` structs implemented to have
>>>>>>>> protocol
>>>>>>>> struct, target is convert all to this usage.
>>>>>>>>
>>>>>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>>>
>>>>>>> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>>>
>>>>>>> a minor note below
>>>>>>>
>>>>>>>> ---
>>>>>>>> Cc: Thomas Monjalon <thomas@monjalon.net>
>>>>>>>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>>>> Cc: Ori Kam <orika@nvidia.com>
>>>>>>>> ---
>>>>>>>>     doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>>>>>>     1 file changed, 7 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>>> index 96986fabd598..a2fa0c196472 100644
>>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>>> @@ -88,6 +88,13 @@ Deprecation Notices
>>>>>>>>       will be limited to maximum 256 queues.
>>>>>>>>       Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS``
>>>>>>>> will be
>>>>>>>> removed.
>>>>>>>>     +* ethdev: The flow API matching pattern structures, ``struct
>>>>>>>> rte_flow_item_*``,
>>>>>>>> +  should start with relevant protocol header.
>>>>>>>> +  Some matching pattern structures implements this by duplicating
>>>>>>>> protocol header
>>>>>>>> +  fields in the struct. To clarify the intention and to be sure
>>>>>>>> protocol header
>>>>>>>> +  is intact, will replace those fields with relevant protocol
>>>>>>>> header struct.
>>>>>>>> +  Target is v21.02 release and this should not change the ABI.
>>>>>>>> +
>>>>>>>>     * sched: To allow more traffic classes, flexible mapping of
>>>>>>>> pipe
>>>>>>>> queues to
>>>>>>>>       traffic classes, and subport level configuration of pipes and
>>>>>>>> queues
>>>>>>>>       changes will be made to macros, data structures and API
>>>>>>>> functions defined
>>>>>>>>
>>>>>>>
>>>>>>> Just want to highlight that even API could be kept using
>>>>>>> unnamed union for hdr and unnamed structure for existing
>>>>>>> protocol header fields.
>>>>>>>
>>>>>>
>>>>>> Then we may never clean the protocol header fields out of it,
>>>>>> yes this will impact the user but I believe the impact is small and
>>>>>> trivial,
>>>>>> I prefer replacing fields with protocol struct.
>>>>>
>>>> The problem is that API breakages are bad and, for example, OvS uses
>>>> these
>>>> fields.
>>>>>
>>>>> May be API breakage should be postponed to 21.11?
>>>>>
>>>>
>>>> Agree but it is not as bad as an ABI break, if the user is already
>>>> compiling their
>>>> code, it is not too bad to adjust the struct for changes, and the
>>>> changes are
>>>> straightforward.
>>>>
>>> I'm not sure which is worse ABI or API, API is more straightforward
>>> but all apps must be modified,
>>> while ABI is hidden and happens only in rare cases.
>>> In addition it may result in a large number of changes (simple but
>>> large number)
>>>
>>>> But if, somehow, an application needs to support multiple versions of
>>>> the DPDK it
>>>> can be a headache.
>>>>
>>>
>>> Agree,
>>>
>>>> We may go with your suggestion until 21.11, and do the cleanup on
>>>> 21.11, will
>>>> it
>>>> work?
>>> +1 also when considering my next line,
>>>
>> One more point to consider is what happens to structs that are not
>> according to spec,
>> for example mpls and geneve, where the struct is different than the item.
>>>
>>
>> At least for mpls & geneve, the ABI still looks the same so the change is
>> still possible, but a few fields seem merged which means the change
>> will require more updates in the user application and the drivers.
>> Anyway, agree to postpone the change to 21.11.
>>
>> I will send a v2.
> 
> I hope it is still possible to add hdr fields without API/ABI breakage
> in 20.02.
> 

21.02 of course


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
  2020-11-24 12:56  3%           ` Ferruh Yigit
@ 2020-11-24 13:00  4%             ` Andrew Rybchenko
  2020-11-24 13:01  0%               ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-11-24 13:00 UTC (permalink / raw)
  To: Ferruh Yigit, Ori Kam, Ray Kinsella, Neil Horman
  Cc: dev, NBU-Contact-Thomas Monjalon

On 11/24/20 3:56 PM, Ferruh Yigit wrote:
> On 11/24/2020 11:43 AM, Ori Kam wrote:
>> Hi
>>
>>> -----Original Message-----
>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>> Sent: Monday, November 23, 2020 5:51 PM
>>> Subject: Re: [PATCH] doc: announce flow API matching pattern struct
>>> changes
>>>
>>> On 11/23/2020 2:25 PM, Andrew Rybchenko wrote:
>>>> On 11/23/20 5:17 PM, Ferruh Yigit wrote:
>>>>> On 11/23/2020 1:50 PM, Andrew Rybchenko wrote:
>>>>>> On 11/23/20 4:40 PM, Ferruh Yigit wrote:
>>>>>>> Proposing to replace protocol header fields in the
>>>>>>> ``rte_flow_item_*``
>>>>>>> structures with the protocol structs, like:
>>>>>>>
>>>>>>> Current ``struct rte_flow_item_eth``,
>>>>>>>
>>>>>>> struct rte_flow_item_eth {
>>>>>>>       struct rte_ether_addr dst;
>>>>>>>       struct rte_ether_addr src;
>>>>>>>       rte_be16_t type;
>>>>>>>       uint32_t has_vlan:1;
>>>>>>>       uint32_t reserved:31;
>>>>>>> }
>>>>>>>
>>>>>>> will become
>>>>>>>
>>>>>>> struct rte_flow_item_eth {
>>>>>>>       struct rte_ether_hdr hdr;
>>>>>>>       uint32_t has_vlan:1;
>>>>>>>       uint32_t reserved:31;
>>>>>>> }
>>>>>>>
>>>>>>> This is both for documenting the intention and to be sure
>>>>>>> ``rte_flow_item_*`` always starts with complete protocol header.
>>>>>>>
>>>>>>> Already many ``rte_flow_item_*`` structs implemented to have
>>>>>>> protocol
>>>>>>> struct, target is convert all to this usage.
>>>>>>>
>>>>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>>
>>>>>> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>>
>>>>>> a minor note below
>>>>>>
>>>>>>> ---
>>>>>>> Cc: Thomas Monjalon <thomas@monjalon.net>
>>>>>>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>>> Cc: Ori Kam <orika@nvidia.com>
>>>>>>> ---
>>>>>>>     doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>>>>>     1 file changed, 7 insertions(+)
>>>>>>>
>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>> index 96986fabd598..a2fa0c196472 100644
>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>> @@ -88,6 +88,13 @@ Deprecation Notices
>>>>>>>       will be limited to maximum 256 queues.
>>>>>>>       Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS``
>>>>>>> will be
>>>>>>> removed.
>>>>>>>     +* ethdev: The flow API matching pattern structures, ``struct
>>>>>>> rte_flow_item_*``,
>>>>>>> +  should start with relevant protocol header.
>>>>>>> +  Some matching pattern structures implements this by duplicating
>>>>>>> protocol header
>>>>>>> +  fields in the struct. To clarify the intention and to be sure
>>>>>>> protocol header
>>>>>>> +  is intact, will replace those fields with relevant protocol
>>>>>>> header struct.
>>>>>>> +  Target is v21.02 release and this should not change the ABI.
>>>>>>> +
>>>>>>>     * sched: To allow more traffic classes, flexible mapping of
>>>>>>> pipe
>>>>>>> queues to
>>>>>>>       traffic classes, and subport level configuration of pipes and
>>>>>>> queues
>>>>>>>       changes will be made to macros, data structures and API
>>>>>>> functions defined
>>>>>>>
>>>>>>
>>>>>> Just want to highlight that even API could be kept using
>>>>>> unnamed union for hdr and unnamed structure for existing
>>>>>> protocol header fields.
>>>>>>
>>>>>
>>>>> Then we may never clean the protocol header fields out of it,
>>>>> yes this will impact the user but I believe the impact is small and
>>>>> trivial,
>>>>> I prefer replacing fields with protocol struct.
>>>>
>>>> The problem that API breakages are bad and, for example, OvS uses
>>>> these
>>>> fields.
>>>>
>>>> May be API breakage should be postponed to 21.11?
>>>>
>>>
>>> Agree but it is not as bad as ABI break, if user is already
>>> compiling their
>>> code, it is not too bad to adjust the struct for changes, and the
>>> changes are
>>> straightforward.
>>>
>> I'm not sure which is worse ABI or API, API is more straight forward
>> but all apps must be modified,
>> while ABI is hidden and happens only in rare cases.
>> In a addition it may result in large number of changes (simple but
>> large number)
>>
>>> But if, somehow, application needs to support multiple version of
>>> the DPDK it
>>> can be headache.
>>>
>>
>> Agree,
>>
>>> We may go with your suggestion until 21.11, and do the cleanup on
>>> 21.11, will
>>> it
>>> work?
>> +1 also when considering my next line,
>>
>> One more point to consider what happens to struct that are not
>> according to spec,
>> for example mpls, geneve where the struct is different than the item.
>>
>
> At least for mpls & geneve, the ABI still looks same so change is
> still possible, but a few fields seems merged which means the change
> will require more updates in the user application and the drivers.
> Anyway, agree to postpone change to the 21.11.
>
> I will send a v2.

I hope it is still possible to add hdr fields without API/ABI breakage
in 21.02.


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
  2020-11-24 11:43  4%         ` Ori Kam
@ 2020-11-24 12:56  3%           ` Ferruh Yigit
  2020-11-24 13:00  4%             ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-11-24 12:56 UTC (permalink / raw)
  To: Ori Kam, Andrew Rybchenko, Ray Kinsella, Neil Horman
  Cc: dev, NBU-Contact-Thomas Monjalon

On 11/24/2020 11:43 AM, Ori Kam wrote:
> Hi
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Monday, November 23, 2020 5:51 PM
>> Subject: Re: [PATCH] doc: announce flow API matching pattern struct changes
>>
>> On 11/23/2020 2:25 PM, Andrew Rybchenko wrote:
>>> On 11/23/20 5:17 PM, Ferruh Yigit wrote:
>>>> On 11/23/2020 1:50 PM, Andrew Rybchenko wrote:
>>>>> On 11/23/20 4:40 PM, Ferruh Yigit wrote:
>>>>>> Proposing to replace protocol header fields in the ``rte_flow_item_*``
>>>>>> structures with the protocol structs, like:
>>>>>>
>>>>>> Current ``struct rte_flow_item_eth``,
>>>>>>
>>>>>> struct rte_flow_item_eth {
>>>>>>       struct rte_ether_addr dst;
>>>>>>       struct rte_ether_addr src;
>>>>>>       rte_be16_t type;
>>>>>>       uint32_t has_vlan:1;
>>>>>>       uint32_t reserved:31;
>>>>>> }
>>>>>>
>>>>>> will become
>>>>>>
>>>>>> struct rte_flow_item_eth {
>>>>>>       struct rte_ether_hdr hdr;
>>>>>>       uint32_t has_vlan:1;
>>>>>>       uint32_t reserved:31;
>>>>>> }
>>>>>>
>>>>>> This is both for documenting the intention and to be sure
>>>>>> ``rte_flow_item_*`` always starts with complete protocol header.
>>>>>>
>>>>>> Already many ``rte_flow_item_*`` structs implemented to have protocol
>>>>>> struct, target is convert all to this usage.
>>>>>>
>>>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>
>>>>> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>
>>>>> a minor note below
>>>>>
>>>>>> ---
>>>>>> Cc: Thomas Monjalon <thomas@monjalon.net>
>>>>>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>> Cc: Ori Kam <orika@nvidia.com>
>>>>>> ---
>>>>>>     doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>>>>     1 file changed, 7 insertions(+)
>>>>>>
>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>> index 96986fabd598..a2fa0c196472 100644
>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>> @@ -88,6 +88,13 @@ Deprecation Notices
>>>>>>       will be limited to maximum 256 queues.
>>>>>>       Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be
>>>>>> removed.
>>>>>>     +* ethdev: The flow API matching pattern structures, ``struct
>>>>>> rte_flow_item_*``,
>>>>>> +  should start with relevant protocol header.
>>>>>> +  Some matching pattern structures implements this by duplicating
>>>>>> protocol header
>>>>>> +  fields in the struct. To clarify the intention and to be sure
>>>>>> protocol header
>>>>>> +  is intact, will replace those fields with relevant protocol
>>>>>> header struct.
>>>>>> +  Target is v21.02 release and this should not change the ABI.
>>>>>> +
>>>>>>     * sched: To allow more traffic classes, flexible mapping of pipe
>>>>>> queues to
>>>>>>       traffic classes, and subport level configuration of pipes and
>>>>>> queues
>>>>>>       changes will be made to macros, data structures and API
>>>>>> functions defined
>>>>>>
>>>>>
>>>>> Just want to highlight that even API could be kept using
>>>>> unnamed union for hdr and unnamed structure for existing
>>>>> protocol header fields.
>>>>>
>>>>
>>>> Then we may never clean the protocol header fields out of it,
>>>> yes this will impact the user but I believe the impact is small and
>>>> trivial,
>>>> I prefer replacing fields with protocol struct.
>>>
>>> The problem that API breakages are bad and, for example, OvS uses these
>>> fields.
>>>
>>> May be API breakage should be postponed to 21.11?
>>>
>>
>> Agree but it is not as bad as ABI break, if user is already compiling their
>> code, it is not too bad to adjust the struct for changes, and the changes are
>> straightforward.
>>
> I'm not sure which is worse ABI or API, API is more straight forward but all apps must be modified,
> while ABI is hidden and happens only in rare cases.
> In a addition it may result in large number of changes (simple but large number)
> 
>> But if, somehow, application needs to support multiple version of the DPDK it
>> can be headache.
>>
> 
> Agree,
> 
>> We may go with your suggestion until 21.11, and do the cleanup on 21.11, will
>> it
>> work?
> +1 also when considering my next line,
> 
> One more point to consider what happens to struct that are not according to spec,
> for example mpls, geneve where the struct is different than the item.
> 

At least for mpls & geneve, the ABI still looks the same so the change is still
possible, but a few fields seem merged, which means the change will require more
updates in the user application and the drivers. Anyway, agree to postpone the
change to 21.11.

I will send a v2.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
  2020-11-23 15:51  3%       ` Ferruh Yigit
@ 2020-11-24 11:43  4%         ` Ori Kam
  2020-11-24 12:56  3%           ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2020-11-24 11:43 UTC (permalink / raw)
  To: Ferruh Yigit, Andrew Rybchenko, Ray Kinsella, Neil Horman
  Cc: dev, NBU-Contact-Thomas Monjalon

Hi

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Monday, November 23, 2020 5:51 PM
> Subject: Re: [PATCH] doc: announce flow API matching pattern struct changes
> 
> On 11/23/2020 2:25 PM, Andrew Rybchenko wrote:
> > On 11/23/20 5:17 PM, Ferruh Yigit wrote:
> >> On 11/23/2020 1:50 PM, Andrew Rybchenko wrote:
> >>> On 11/23/20 4:40 PM, Ferruh Yigit wrote:
> >>>> Proposing to replace protocol header fields in the ``rte_flow_item_*``
> >>>> structures with the protocol structs, like:
> >>>>
> >>>> Current ``struct rte_flow_item_eth``,
> >>>>
> >>>> struct rte_flow_item_eth {
> >>>>      struct rte_ether_addr dst;
> >>>>      struct rte_ether_addr src;
> >>>>      rte_be16_t type;
> >>>>      uint32_t has_vlan:1;
> >>>>      uint32_t reserved:31;
> >>>> }
> >>>>
> >>>> will become
> >>>>
> >>>> struct rte_flow_item_eth {
> >>>>      struct rte_ether_hdr hdr;
> >>>>      uint32_t has_vlan:1;
> >>>>      uint32_t reserved:31;
> >>>> }
> >>>>
> >>>> This is both for documenting the intention and to be sure
> >>>> ``rte_flow_item_*`` always starts with complete protocol header.
> >>>>
> >>>> Already many ``rte_flow_item_*`` structs implemented to have protocol
> >>>> struct, target is convert all to this usage.
> >>>>
> >>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> >>>
> >>> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>
> >>> a minor note below
> >>>
> >>>> ---
> >>>> Cc: Thomas Monjalon <thomas@monjalon.net>
> >>>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>> Cc: Ori Kam <orika@nvidia.com>
> >>>> ---
> >>>>    doc/guides/rel_notes/deprecation.rst | 7 +++++++
> >>>>    1 file changed, 7 insertions(+)
> >>>>
> >>>> diff --git a/doc/guides/rel_notes/deprecation.rst
> >>>> b/doc/guides/rel_notes/deprecation.rst
> >>>> index 96986fabd598..a2fa0c196472 100644
> >>>> --- a/doc/guides/rel_notes/deprecation.rst
> >>>> +++ b/doc/guides/rel_notes/deprecation.rst
> >>>> @@ -88,6 +88,13 @@ Deprecation Notices
> >>>>      will be limited to maximum 256 queues.
> >>>>      Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be
> >>>> removed.
> >>>>    +* ethdev: The flow API matching pattern structures, ``struct
> >>>> rte_flow_item_*``,
> >>>> +  should start with relevant protocol header.
> >>>> +  Some matching pattern structures implements this by duplicating
> >>>> protocol header
> >>>> +  fields in the struct. To clarify the intention and to be sure
> >>>> protocol header
> >>>> +  is intact, will replace those fields with relevant protocol
> >>>> header struct.
> >>>> +  Target is v21.02 release and this should not change the ABI.
> >>>> +
> >>>>    * sched: To allow more traffic classes, flexible mapping of pipe
> >>>> queues to
> >>>>      traffic classes, and subport level configuration of pipes and
> >>>> queues
> >>>>      changes will be made to macros, data structures and API
> >>>> functions defined
> >>>>
> >>>
> >>> Just want to highlight that even API could be kept using
> >>> unnamed union for hdr and unnamed structure for existing
> >>> protocol header fields.
> >>>
> >>
> >> Then we may never clean the protocol header fields out of it,
> >> yes this will impact the user but I believe the impact is small and
> >> trivial,
> >> I prefer replacing fields with protocol struct.
> >
> > The problem that API breakages are bad and, for example, OvS uses these
> > fields.
> >
> > May be API breakage should be postponed to 21.11?
> >
> 
> Agree but it is not as bad as ABI break, if user is already compiling their
> code, it is not too bad to adjust the struct for changes, and the changes are
> straightforward.
> 
I'm not sure which is worse, ABI or API: API is more straightforward but all apps must be modified,
while ABI is hidden and happens only in rare cases.
In addition it may result in a large number of changes (simple but large in number)

> But if, somehow, application needs to support multiple version of the DPDK it
> can be headache.
> 

Agree, 

> We may go with your suggestion until 21.11, and do the cleanup on 21.11, will
> it
> work?
+1 also when considering my next line,

One more point to consider is what happens to structs that are not according to the spec,
for example mpls and geneve, where the struct is different from the item.
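To make this concrete, the MPLS item and the MPLS header cover the same four
bytes but break the fields down differently (definitions quoted approximately
here, as a sketch):

struct rte_flow_item_mpls {
	uint8_t label_tc_s[3]; /* label, TC and bottom-of-stack bits packed together */
	uint8_t ttl;
};

struct rte_mpls_hdr {
	rte_be16_t tag_msb; /* label (msb) */
	uint8_t tag_lsb:4;  /* label (lsb) */
	uint8_t tc:3;       /* traffic class */
	uint8_t bs:1;       /* bottom of stack */
	uint8_t ttl;        /* time to live */
} __rte_packed;

so simply swapping the item fields for the header struct changes how an
application addresses the label bits.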



^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v2] build: alias default build as generic
  2020-11-20 12:27  3% [dpdk-dev] [PATCH v1 1/1] build: alias default build as generic Juraj Linkeš
@ 2020-11-24  7:52  3% ` Juraj Linkeš
  0 siblings, 0 replies; 200+ results
From: Juraj Linkeš @ 2020-11-24  7:52 UTC (permalink / raw)
  To: thomas, bruce.richardson, Honnappa.Nagarahalli; +Cc: dev, Juraj Linkeš

The current machine='default' build name is not descriptive. The actual
default build is machine='native'. Add an alternative string which does
the same build and better describes what we're building:
machine='generic'. Leave machine='default' for backwards compatibility.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
 config/arm/meson.build                    |  5 +++--
 config/meson.build                        | 13 +++++++------
 devtools/test-meson-builds.sh             | 12 ++++++------
 doc/guides/prog_guide/build-sdk-meson.rst |  4 ++--
 meson_options.txt                         |  2 +-
 5 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 42b4e43c7..d4066ade8 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -1,12 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation.
 # Copyright(c) 2017 Cavium, Inc
+# Copyright(c) 2020 PANTHEON.tech s.r.o.
 
 # for checking defines we need to use the correct compiler flags
 march_opt = '-march=@0@'.format(machine)
 
 arm_force_native_march = false
-arm_force_default_march = (machine == 'default')
+arm_force_generic_march = (machine == 'generic')
 
 flags_common_default = [
 	# Accelarate rte_memcpy. Be sure to run unit test (memcpy_perf_autotest)
@@ -148,7 +149,7 @@ else
 	cmd_generic = ['generic', '', '', 'default', '']
 	cmd_output = cmd_generic # Set generic by default
 	machine_args = [] # Clear previous machine args
-	if arm_force_default_march and not meson.is_cross_build()
+	if arm_force_generic_march and not meson.is_cross_build()
 		machine = impl_generic
 		impl_pn = 'default'
 	elif not meson.is_cross_build()
diff --git a/config/meson.build b/config/meson.build
index a29693b88..3db2f55e0 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -70,21 +70,22 @@ else
 	machine = get_option('machine')
 endif
 
-# machine type 'default' is special, it defaults to the per arch agreed common
-# minimal baseline needed for DPDK.
+# machine type 'generic' is special, it selects the per arch agreed common
+# minimal baseline needed for DPDK. Machine type 'default' is also supported
+# with the same meaning for backwards compatibility.
 # That might not be the most optimized, but the most portable version while
 # still being able to support the CPU features required for DPDK.
 # This can be bumped up by the DPDK project, but it can never be an
 # invariant like 'native'
-if machine == 'default'
+if machine == 'default' or machine == 'generic'
 	if host_machine.cpu_family().startswith('x86')
-		# matches the old pre-meson build systems default
+		# matches the old pre-meson build systems generic machine
 		machine = 'corei7'
 	elif host_machine.cpu_family().startswith('arm')
 		machine = 'armv7-a'
 	elif host_machine.cpu_family().startswith('aarch')
-		# arm64 manages defaults in config/arm/meson.build
-		machine = 'default'
+		# arm64 manages generic config in config/arm/meson.build
+		machine = 'generic'
 	elif host_machine.cpu_family().startswith('ppc')
 		machine = 'power8'
 	endif
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 3ce49368c..11aa9bf11 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -209,11 +209,11 @@ done
 # test compilation with minimal x86 instruction set
 # Set the install path for libraries to "lib" explicitly to prevent problems
 # with pkg-config prefixes if installed in "lib/x86_64-linux-gnu" later.
-default_machine='nehalem'
-if ! check_cc_flags "-march=$default_machine" ; then
-	default_machine='corei7'
+generic_machine='nehalem'
+if ! check_cc_flags "-march=$generic_machine" ; then
+	generic_machine='corei7'
 fi
-build build-x86-default cc -Dlibdir=lib -Dmachine=$default_machine $use_shared
+build build-x86-generic cc -Dlibdir=lib -Dmachine=$generic_machine $use_shared
 
 # 32-bit with default compiler
 if check_cc_flags '-m32' ; then
@@ -253,10 +253,10 @@ for f in $srcdir/config/ppc/ppc* ; do
 	build build-$(basename $f | cut -d'-' -f-2) $f $use_shared
 done
 
-# Test installation of the x86-default target, to be used for checking
+# Test installation of the x86-generic target, to be used for checking
 # the sample apps build using the pkg-config file for cflags and libs
 load_env cc
-build_path=$(readlink -f $builds_dir/build-x86-default)
+build_path=$(readlink -f $builds_dir/build-x86-generic)
 export DESTDIR=$build_path/install
 # No need to reinstall if ABI checks are enabled
 if [ -z "$DPDK_ABI_REF_VERSION" ]; then
diff --git a/doc/guides/prog_guide/build-sdk-meson.rst b/doc/guides/prog_guide/build-sdk-meson.rst
index 3429e2647..c7e12eedf 100644
--- a/doc/guides/prog_guide/build-sdk-meson.rst
+++ b/doc/guides/prog_guide/build-sdk-meson.rst
@@ -85,7 +85,7 @@ Project-specific options are passed used -Doption=value::
 
 	meson -Denable_docs=true fullbuild  # build and install docs
 
-	meson -Dmachine=default  # use builder-independent baseline -march
+	meson -Dmachine=generic  # use builder-independent baseline -march
 
 	meson -Ddisable_drivers=event/*,net/tap  # disable tap driver and all
 					# eventdev PMDs for a smaller build
@@ -114,7 +114,7 @@ Examples of setting some of the same options using meson configure::
         re-scan from meson.
 
 .. note::
-        machine=default uses a config that works on all supported architectures
+        machine=generic uses a config that works on all supported architectures
         regardless of the capabilities of the machine where the build is happening.
 
 As well as those settings taken from ``meson configure``, other options
diff --git a/meson_options.txt b/meson_options.txt
index e384e6dbb..ebd28d8b8 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -21,7 +21,7 @@ option('kernel_dir', type: 'string', value: '',
 option('lib_musdk_dir', type: 'string', value: '',
 	description: 'path to the MUSDK library installation directory')
 option('machine', type: 'string', value: 'native',
-	description: 'set the target machine type')
+	description: 'set the target machine type. Special values: "generic" is a build usable on all machines of the build machine architecture, "native" lets the compiler pick the architecture of the build machine.')
 option('max_ethports', type: 'integer', value: 32,
 	description: 'maximum number of Ethernet devices')
 option('max_lcores', type: 'integer', value: 128,
-- 
2.20.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
  2020-11-23 14:25  0%     ` Andrew Rybchenko
@ 2020-11-23 15:51  3%       ` Ferruh Yigit
  2020-11-24 11:43  4%         ` Ori Kam
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-11-23 15:51 UTC (permalink / raw)
  To: Andrew Rybchenko, Ray Kinsella, Neil Horman; +Cc: dev, Thomas Monjalon, Ori Kam

On 11/23/2020 2:25 PM, Andrew Rybchenko wrote:
> On 11/23/20 5:17 PM, Ferruh Yigit wrote:
>> On 11/23/2020 1:50 PM, Andrew Rybchenko wrote:
>>> On 11/23/20 4:40 PM, Ferruh Yigit wrote:
>>>> Proposing to replace protocol header fields in the ``rte_flow_item_*``
>>>> structures with the protocol structs, like:
>>>>
>>>> Current ``struct rte_flow_item_eth``,
>>>>
>>>> struct rte_flow_item_eth {
>>>>      struct rte_ether_addr dst;
>>>>      struct rte_ether_addr src;
>>>>      rte_be16_t type;
>>>>      uint32_t has_vlan:1;
>>>>      uint32_t reserved:31;
>>>> }
>>>>
>>>> will become
>>>>
>>>> struct rte_flow_item_eth {
>>>>      struct rte_ether_hdr hdr;
>>>>      uint32_t has_vlan:1;
>>>>      uint32_t reserved:31;
>>>> }
>>>>
>>>> This is both for documenting the intention and to be sure
>>>> ``rte_flow_item_*`` always starts with complete protocol header.
>>>>
>>>> Already many ``rte_flow_item_*`` structs implemented to have protocol
>>>> struct, target is convert all to this usage.
>>>>
>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>
>>> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>
>>> a minor note below
>>>
>>>> ---
>>>> Cc: Thomas Monjalon <thomas@monjalon.net>
>>>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> Cc: Ori Kam <orika@nvidia.com>
>>>> ---
>>>>    doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>>    1 file changed, 7 insertions(+)
>>>>
>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>> b/doc/guides/rel_notes/deprecation.rst
>>>> index 96986fabd598..a2fa0c196472 100644
>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>> @@ -88,6 +88,13 @@ Deprecation Notices
>>>>      will be limited to maximum 256 queues.
>>>>      Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be
>>>> removed.
>>>>    +* ethdev: The flow API matching pattern structures, ``struct
>>>> rte_flow_item_*``,
>>>> +  should start with relevant protocol header.
>>>> +  Some matching pattern structures implements this by duplicating
>>>> protocol header
>>>> +  fields in the struct. To clarify the intention and to be sure
>>>> protocol header
>>>> +  is intact, will replace those fields with relevant protocol
>>>> header struct.
>>>> +  Target is v21.02 release and this should not change the ABI.
>>>> +
>>>>    * sched: To allow more traffic classes, flexible mapping of pipe
>>>> queues to
>>>>      traffic classes, and subport level configuration of pipes and
>>>> queues
>>>>      changes will be made to macros, data structures and API
>>>> functions defined
>>>>
>>>
>>> Just want to highlight that even API could be kept using
>>> unnamed union for hdr and unnamed structure for existing
>>> protocol header fields.
>>>
>>
>> Then we may never clean the protocol header fields out of it,
>> yes this will impact the user but I believe the impact is small and
>> trivial,
>> I prefer replacing fields with protocol struct.
> 
> The problem that API breakages are bad and, for example, OvS uses these
> fields.
> 
> May be API breakage should be postponed to 21.11?
> 

Agree, but it is not as bad as an ABI break; if the user is already compiling
their code, it is not too bad to adjust the struct for the changes, and the
changes are straightforward.

But if, somehow, an application needs to support multiple versions of DPDK, it
can be a headache.

We may go with your suggestion until 21.11 and do the cleanup in 21.11; will it
work?

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
  2020-11-23 14:17  0%   ` Ferruh Yigit
@ 2020-11-23 14:25  0%     ` Andrew Rybchenko
  2020-11-23 15:51  3%       ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-11-23 14:25 UTC (permalink / raw)
  To: Ferruh Yigit, Ray Kinsella, Neil Horman; +Cc: dev, Thomas Monjalon, Ori Kam

On 11/23/20 5:17 PM, Ferruh Yigit wrote:
> On 11/23/2020 1:50 PM, Andrew Rybchenko wrote:
>> On 11/23/20 4:40 PM, Ferruh Yigit wrote:
>>> Proposing to replace protocol header fields in the ``rte_flow_item_*``
>>> structures with the protocol structs, like:
>>>
>>> Current ``struct rte_flow_item_eth``,
>>>
>>> struct rte_flow_item_eth {
>>>     struct rte_ether_addr dst;
>>>     struct rte_ether_addr src;
>>>     rte_be16_t type;
>>>     uint32_t has_vlan:1;
>>>     uint32_t reserved:31;
>>> }
>>>
>>> will become
>>>
>>> struct rte_flow_item_eth {
>>>     struct rte_ether_hdr hdr;
>>>     uint32_t has_vlan:1;
>>>     uint32_t reserved:31;
>>> }
>>>
>>> This is both for documenting the intention and to be sure
>>> ``rte_flow_item_*`` always starts with complete protocol header.
>>>
>>> Already many ``rte_flow_item_*`` structs implemented to have protocol
>>> struct, target is convert all to this usage.
>>>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>
>> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>
>> a minor note below
>>
>>> ---
>>> Cc: Thomas Monjalon <thomas@monjalon.net>
>>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>> Cc: Ori Kam <orika@nvidia.com>
>>> ---
>>>   doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>   1 file changed, 7 insertions(+)
>>>
>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>> b/doc/guides/rel_notes/deprecation.rst
>>> index 96986fabd598..a2fa0c196472 100644
>>> --- a/doc/guides/rel_notes/deprecation.rst
>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>> @@ -88,6 +88,13 @@ Deprecation Notices
>>>     will be limited to maximum 256 queues.
>>>     Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be
>>> removed.
>>>   +* ethdev: The flow API matching pattern structures, ``struct
>>> rte_flow_item_*``,
>>> +  should start with relevant protocol header.
>>> +  Some matching pattern structures implements this by duplicating
>>> protocol header
>>> +  fields in the struct. To clarify the intention and to be sure
>>> protocol header
>>> +  is intact, will replace those fields with relevant protocol
>>> header struct.
>>> +  Target is v21.02 release and this should not change the ABI.
>>> +
>>>   * sched: To allow more traffic classes, flexible mapping of pipe
>>> queues to
>>>     traffic classes, and subport level configuration of pipes and
>>> queues
>>>     changes will be made to macros, data structures and API
>>> functions defined
>>>
>>
>> Just want to highlight that even API could be kept using
>> unnamed union for hdr and unnamed structure for existing
>> protocol header fields.
>>
>
> Then we may never clean the protocol header fields out of it,
> yes this will impact the user but I believe the impact is small and
> trivial,
> I prefer replacing fields with protocol struct.

The problem is that API breakages are bad and, for example, OvS uses these
fields.

Maybe the API breakage should be postponed to 21.11?


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
  2020-11-23 13:50  0% ` Andrew Rybchenko
@ 2020-11-23 14:17  0%   ` Ferruh Yigit
  2020-11-23 14:25  0%     ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-11-23 14:17 UTC (permalink / raw)
  To: Andrew Rybchenko, Ray Kinsella, Neil Horman; +Cc: dev, Thomas Monjalon, Ori Kam

On 11/23/2020 1:50 PM, Andrew Rybchenko wrote:
> On 11/23/20 4:40 PM, Ferruh Yigit wrote:
>> Proposing to replace protocol header fields in the ``rte_flow_item_*``
>> structures with the protocol structs, like:
>>
>> Current ``struct rte_flow_item_eth``,
>>
>> struct rte_flow_item_eth {
>> 	struct rte_ether_addr dst;
>> 	struct rte_ether_addr src;
>> 	rte_be16_t type;
>> 	uint32_t has_vlan:1;
>> 	uint32_t reserved:31;
>> }
>>
>> will become
>>
>> struct rte_flow_item_eth {
>> 	struct rte_ether_hdr hdr;
>> 	uint32_t has_vlan:1;
>> 	uint32_t reserved:31;
>> }
>>
>> This is both for documenting the intention and to be sure
>> ``rte_flow_item_*`` always starts with complete protocol header.
>>
>> Already many ``rte_flow_item_*`` structs implemented to have protocol
>> struct, target is convert all to this usage.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> 
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> a minor note below
> 
>> ---
>> Cc: Thomas Monjalon <thomas@monjalon.net>
>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Cc: Ori Kam <orika@nvidia.com>
>> ---
>>   doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>   1 file changed, 7 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index 96986fabd598..a2fa0c196472 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -88,6 +88,13 @@ Deprecation Notices
>>     will be limited to maximum 256 queues.
>>     Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
>>   
>> +* ethdev: The flow API matching pattern structures, ``struct rte_flow_item_*``,
>> +  should start with relevant protocol header.
>> +  Some matching pattern structures implements this by duplicating protocol header
>> +  fields in the struct. To clarify the intention and to be sure protocol header
>> +  is intact, will replace those fields with relevant protocol header struct.
>> +  Target is v21.02 release and this should not change the ABI.
>> +
>>   * sched: To allow more traffic classes, flexible mapping of pipe queues to
>>     traffic classes, and subport level configuration of pipes and queues
>>     changes will be made to macros, data structures and API functions defined
>>
> 
> Just want to highlight that even API could be kept using
> unnamed union for hdr and unnamed structure for existing
> protocol header fields.
> 

Then we may never clean the protocol header fields out of it.
Yes, this will impact the user, but I believe the impact is small and trivial;
I prefer replacing the fields with the protocol struct.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
  2020-11-23 13:40  5% [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes Ferruh Yigit
@ 2020-11-23 13:50  0% ` Andrew Rybchenko
  2020-11-23 14:17  0%   ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-11-23 13:50 UTC (permalink / raw)
  To: Ferruh Yigit, Ray Kinsella, Neil Horman; +Cc: dev, Thomas Monjalon, Ori Kam

On 11/23/20 4:40 PM, Ferruh Yigit wrote:
> Proposing to replace protocol header fields in the ``rte_flow_item_*``
> structures with the protocol structs, like:
> 
> Current ``struct rte_flow_item_eth``,
> 
> struct rte_flow_item_eth {
> 	struct rte_ether_addr dst;
> 	struct rte_ether_addr src;
> 	rte_be16_t type;
> 	uint32_t has_vlan:1;
> 	uint32_t reserved:31;
> }
> 
> will become
> 
> struct rte_flow_item_eth {
> 	struct rte_ether_hdr hdr;
> 	uint32_t has_vlan:1;
> 	uint32_t reserved:31;
> }
> 
> This is both for documenting the intention and to be sure
> ``rte_flow_item_*`` always starts with complete protocol header.
> 
> Already many ``rte_flow_item_*`` structs implemented to have protocol
> struct, target is convert all to this usage.
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

a minor note below

> ---
> Cc: Thomas Monjalon <thomas@monjalon.net>
> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Cc: Ori Kam <orika@nvidia.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 96986fabd598..a2fa0c196472 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -88,6 +88,13 @@ Deprecation Notices
>    will be limited to maximum 256 queues.
>    Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
>  
> +* ethdev: The flow API matching pattern structures, ``struct rte_flow_item_*``,
> +  should start with relevant protocol header.
> +  Some matching pattern structures implements this by duplicating protocol header
> +  fields in the struct. To clarify the intention and to be sure protocol header
> +  is intact, will replace those fields with relevant protocol header struct.
> +  Target is v21.02 release and this should not change the ABI.
> +
>  * sched: To allow more traffic classes, flexible mapping of pipe queues to
>    traffic classes, and subport level configuration of pipes and queues
>    changes will be made to macros, data structures and API functions defined
> 

Just want to highlight that even the API could be kept by using an
unnamed union for hdr and an unnamed structure for the existing
protocol header fields.
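For illustration, such a backward-compatible layout could look like the sketch
below (existing field names assumed to be kept; not part of the patch):

struct rte_flow_item_eth {
	union {
		struct {
			/* Legacy view: the existing fields, so current users keep compiling. */
			struct rte_ether_addr dst;
			struct rte_ether_addr src;
			rte_be16_t type;
		};
		/* New view: the complete protocol header over the same bytes. */
		struct rte_ether_hdr hdr;
	};
	uint32_t has_vlan:1;
	uint32_t reserved:31;
};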

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
@ 2020-11-23 13:40  5% Ferruh Yigit
  2020-11-23 13:50  0% ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-11-23 13:40 UTC (permalink / raw)
  To: Ray Kinsella, Neil Horman
  Cc: Ferruh Yigit, dev, Thomas Monjalon, Andrew Rybchenko, Ori Kam

Proposing to replace protocol header fields in the ``rte_flow_item_*``
structures with the protocol structs, like:

Current ``struct rte_flow_item_eth``,

struct rte_flow_item_eth {
	struct rte_ether_addr dst;
	struct rte_ether_addr src;
	rte_be16_t type;
	uint32_t has_vlan:1;
	uint32_t reserved:31;
}

will become

struct rte_flow_item_eth {
	struct rte_ether_hdr hdr;
	uint32_t has_vlan:1;
	uint32_t reserved:31;
}

This is both for documenting the intention and to be sure
``rte_flow_item_*`` always starts with complete protocol header.

Already many ``rte_flow_item_*`` structs implemented to have protocol
struct, target is convert all to this usage.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Thomas Monjalon <thomas@monjalon.net>
Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/deprecation.rst | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 96986fabd598..a2fa0c196472 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -88,6 +88,13 @@ Deprecation Notices
   will be limited to maximum 256 queues.
   Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
 
+* ethdev: The flow API matching pattern structures, ``struct rte_flow_item_*``,
+  should start with relevant protocol header.
+  Some matching pattern structures implements this by duplicating protocol header
+  fields in the struct. To clarify the intention and to be sure protocol header
+  is intact, will replace those fields with relevant protocol header struct.
+  Target is v21.02 release and this should not change the ABI.
+
 * sched: To allow more traffic classes, flexible mapping of pipe queues to
   traffic classes, and subport level configuration of pipes and queues
   changes will be made to macros, data structures and API functions defined
-- 
2.26.2


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [dpdk-techboard] Minutes of Technical Board Meeting, 2020-11-18
  2020-11-23 10:00  0%   ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
@ 2020-11-23 11:16  0%     ` Morten Brørup
  0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2020-11-23 11:16 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Bruce Richardson, dev, techboard

> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Monday, November 23, 2020 11:00 AM
> 
> 23/11/2020 10:30, Morten Brørup:
> > Bruce,
> >
> > Here's my input as a developer of hardware appliances. It is my
> opinion, and as such may contradict the trend towards making DPDK a
> library, rather than a development kit.
> >
> > > DPDK build configuration - future enhancements
> > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > There are multiple requests (sometimes controversial) for new
> abilities
> > > to add into DPDK build system.
> > > In particular, request from few different teams:
> > >   - add ability to enable/disable individual apps/libs
> > >   - override some build settings for specific libs/drivers
> >
> > My wish list, in prioritized order:
> >
> > 1. The ability to remove features to reduce complexity - and thus the
> likelihood of bugs!
> >
> > Remember to consider this in application context.
> >
> > Background: Our previous firmware used the Linux kernel, and some
> loadable modules. We ran into a lot of extremely rare and unexpected
> cases where the Linux kernel network stack did something completely
> unusual, and our firmware needed to consider all these exceptional
> cases. This is one of the key reasons we switched to DPDK - the fast
> path libraries are clean and simple, and don't do anything we didn't
> ask them to do.
> >
> > DPDK example: If support for segmented packets are considered
> "required" by DPDK libraries and drivers, is it also required for
> applications to support segmented packet? If the application doesn’t
> need segmented packets, can it safely assume that no DPDK libraries or
> drivers create segmented packets under any circumstances? If support
> for segmented packets is a compile time option, there is an implicit
> guarantee that they don't appear.
> 
> The primary rule in DPDK is that the application remains in control.
> If the application does not call the API function for a feature,
> it won't be enabled. So no need to remove the unused libraries.

I think that this principle - the application remaining in control - is extremely important for DPDK, and we must always remember this principle when adding features to DPDK.

However, being able to disable some features at compile time elevates the assurance that these features are not being used unexpectedly from "trust" to "absolute certainty".

The DPDK core and libraries are growing in complexity, and I am starting to worry about this. Once bitten twice shy.

By the way, I consider the Dynamic MBUF concept a great enhancement in this area. The cleanup part of the Dynamic MBUF patch set made non-essential fields in the mbuf truly optional.
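As a minimal sketch of what that opt-in looks like from the application side
(the field name below is hypothetical):

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

/* Describe the per-packet state this application wants room for. */
static const struct rte_mbuf_dynfield app_state_desc = {
	.name = "example_app_state",
	.size = sizeof(uint32_t),
	.align = __alignof__(uint32_t),
};
static int app_state_offset = -1;

static int app_state_init(void)
{
	/* Space is reserved in the mbuf only because the application asks for it. */
	app_state_offset = rte_mbuf_dynfield_register(&app_state_desc);
	return app_state_offset < 0 ? -1 : 0;
}

static inline uint32_t *app_state(struct rte_mbuf *m)
{
	return RTE_MBUF_DYNFIELD(m, app_state_offset, uint32_t *);
}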

> 
> 
> > 2. The ability to remove/tweak features to improve *application*
> performance in specific environments would be good.
> >
> > E.g. removing support for multiple mbuf pools would free up an mbuf
> field (m->pool) for application use.
> > So would removing support for segmented packets (m->nb_segs, m-
> >next).
> >
> > Both of these modifications would also reduce complexity, although
> they would increase source code complexity in all the libraries and
> drivers needing to support a multidimensional matrix of features. (I
> highly doubt that all libraries support the combination of all features
> today... I remember having to argue strongly for the DPDK eBPF library
> to support reading data inside segmented packets.)
> 
> Because code must remain simple, the mbuf layout is fixed
> (except dynamic fields).

The mbuf layout could remain fixed (so vector implementations can rely on the layout), but the removed fields would become unused and available for application use instead, thus improving the application performance.

In the example of removing support for multiple mbuf pools, the functions free()'ing mbufs in the DPDK mbuf library and the DPDK drivers would be simpler, thus improving the performance. Removing support for segmented packets would also allow simpler (and thus higher performing) implementations of a few DPDK core functions.

It is somewhat difficult to formulate in writing, but I will try rephrasing my original point: Tweaking DPDK can provide performance improvements in the application itself, not only in the DPDK libraries/drivers.

> 
> 
> > 3. Removing cruft that has no effect on performance or similar, is
> "nice to have".
> >
> > E.g. drivers for hardware that we do not use.
> >
> > > As a first step to move forward - produce design doc of current
> build
> > > system.
> > > Discuss further enhancements based on that doc.
> >
> > > While planning changes to the build system backward compatibility
> > > with 20.11 should be considered.
> >
> > Backward compatibility is not a high priority for us. It is an
> extremely rare event for us to upgrade to a new version of any external
> software (Linux Kernel, DPDK and other libraries) or build tools,
> because we consider switching any of it to another version high effort
> (e.g. it requires extensive testing). In this perspective, having to
> change some details in the build system is a relatively small effort.
> >
> > With this said, the documentation of each DPDK release should include
> a chapter describing what an application developer should do different
> than with the previous release. E.g. the Release Note enumerates the
> key modifications as bullet points, but it is not always obvious how
> that affects an application being developed. (DPDK generally has great
> documentation, but is somewhat lacking in this area.)
> >
> > I know that ABI Stability is supposed to make much of this go away,
> but DPDK is clearly not there yet.
> >
> > > AR to Bruce to create initial version of the DD.
> > >
> >
> > The following may be scope creep, so just consider it me thinking out
> loud:
> >
> > Consider a general design documents in the form of a "life of an
> mbuf" document, describing how mbufs are pre-allocated for driver RX
> descriptors, and then handed over to the application trough the receive
> function, and then possibly going through defragmentation and
> reordering libraries, and then handed over to another driver's transmit
> function, which uses the mbufs to set up TX descriptors, and after
> transmission frees the mbufs to their original pool, where they are
> ultimately allocated again by a driver to refill its RX descriptor
> pool.
> >
> > The document can start off with the simple case with a single non-
> segmented, non-fragmented, in-order packet. And then it can be extended
> with variations, e.g. adding the description of segmented packets would
> explain how the m->nb_segs and m->next are being used when the packet
> is handled by the drivers and libraries.
> >
> > In the context of being able to enable/disable libraries and
> features, the purpose of this document would be to help showing
> interdependencies.
> 
> I agree we need this kind of doc.
> It could be part of the prog guide.
> Feel free to draft a skeleton.
> 
> 
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-techboard] Minutes of Technical Board Meeting, 2020-11-18
  2020-11-23  9:30  2% ` Morten Brørup
@ 2020-11-23 10:00  0%   ` Thomas Monjalon
  2020-11-23 11:16  0%     ` Morten Brørup
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-11-23 10:00 UTC (permalink / raw)
  To: Morten Brørup; +Cc: Bruce Richardson, dev, techboard

23/11/2020 10:30, Morten Brørup:
> Bruce,
> 
> Here's my input as a developer of hardware appliances. It is my opinion, and as such may contradict the trend towards making DPDK a library, rather than a development kit.
> 
> > DPDK build configuration - future enhancements
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > There are multiple requests (sometimes controversial) for new abilities
> > to add into DPDK build system.
> > In particular, request from few different teams:
> >   - add ability to enable/disable individual apps/libs
> >   - override some build settings for specific libs/drivers
> 
> My wish list, in prioritized order:
> 
> 1. The ability to remove features to reduce complexity - and thus the likelihood of bugs!
> 
> Remember to consider this in application context.
> 
> Background: Our previous firmware used the Linux kernel, and some loadable modules. We ran into a lot of extremely rare and unexpected cases where the Linux kernel network stack did something completely unusual, and our firmware needed to consider all these exceptional cases. This is one of the key reasons we switched to DPDK - the fast path libraries are clean and simple, and don't do anything we didn't ask them to do.
> 
> DPDK example: If support for segmented packets are considered "required" by DPDK libraries and drivers, is it also required for applications to support segmented packet? If the application doesn’t need segmented packets, can it safely assume that no DPDK libraries or drivers create segmented packets under any circumstances? If support for segmented packets is a compile time option, there is an implicit guarantee that they don't appear.

The primary rule in DPDK is that the application remains in control.
If the application does not call the API function for a feature,
it won't be enabled. So no need to remove the unused libraries.


> 2. The ability to remove/tweak features to improve *application* performance in specific environments would be good.
> 
> E.g. removing support for multiple mbuf pools would free up an mbuf field (m->pool) for application use.
> So would removing support for segmented packets (m->nb_segs, m->next).
> 
> Both of these modifications would also reduce complexity, although they would increase source code complexity in all the libraries and drivers needing to support a multidimensional matrix of features. (I highly doubt that all libraries support the combination of all features today... I remember having to argue strongly for the DPDK eBPF library to support reading data inside segmented packets.)

Because code must remain simple, the mbuf layout is fixed
(except dynamic fields).


> 3. Removing cruft that has no effect on performance or similar, is "nice to have".
> 
> E.g. drivers for hardware that we do not use.
> 
> > As a first step to move forward - produce design doc of current build
> > system.
> > Discuss further enhancements based on that doc.
> 
> > While planning changes to the build system backward compatibility
> > with 20.11 should be considered.
> 
> Backward compatibility is not a high priority for us. It is an extremely rare event for us to upgrade to a new version of any external software (Linux Kernel, DPDK and other libraries) or build tools, because we consider switching any of it to another version high effort (e.g. it requires extensive testing). In this perspective, having to change some details in the build system is a relatively small effort.
> 
> With this said, the documentation of each DPDK release should include a chapter describing what an application developer should do different than with the previous release. E.g. the Release Note enumerates the key modifications as bullet points, but it is not always obvious how that affects an application being developed. (DPDK generally has great documentation, but is somewhat lacking in this area.)
> 
> I know that ABI Stability is supposed to make much of this go away, but DPDK is clearly not there yet.
> 
> > AR to Bruce to create initial version of the DD.
> > 
> 
> The following may be scope creep, so just consider it me thinking out loud:
> 
> Consider a general design documents in the form of a "life of an mbuf" document, describing how mbufs are pre-allocated for driver RX descriptors, and then handed over to the application trough the receive function, and then possibly going through defragmentation and reordering libraries, and then handed over to another driver's transmit function, which uses the mbufs to set up TX descriptors, and after transmission frees the mbufs to their original pool, where they are ultimately allocated again by a driver to refill its RX descriptor pool.
> 
> The document can start off with the simple case with a single non-segmented, non-fragmented, in-order packet. And then it can be extended with variations, e.g. adding the description of segmented packets would explain how the m->nb_segs and m->next are being used when the packet is handled by the drivers and libraries.
> 
> In the context of being able to enable/disable libraries and features, the purpose of this document would be to help showing interdependencies.

I agree we need this kind of doc.
It could be part of the prog guide.
Feel free to draft a skeleton.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] Minutes of Technical Board Meeting, 2020-11-18
  @ 2020-11-23  9:30  2% ` Morten Brørup
  2020-11-23 10:00  0%   ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-23  9:30 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: techboard

Bruce,

Here's my input as a developer of hardware appliances. It is my opinion, and as such may contradict the trend towards making DPDK a library, rather than a development kit.

> DPDK build configuration - future enhancements
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> There are multiple requests (sometimes controversial) for new abilities
> to add into DPDK build system.
> In particular, request from few different teams:
>   - add ability to enable/disable individual apps/libs
>   - override some build settings for specific libs/drivers

My wish list, in prioritized order:

1. The ability to remove features to reduce complexity - and thus the likelihood of bugs!

Remember to consider this in application context.

Background: Our previous firmware used the Linux kernel, and some loadable modules. We ran into a lot of extremely rare and unexpected cases where the Linux kernel network stack did something completely unusual, and our firmware needed to consider all these exceptional cases. This is one of the key reasons we switched to DPDK - the fast path libraries are clean and simple, and don't do anything we didn't ask them to do.

DPDK example: If support for segmented packets is considered "required" by DPDK libraries and drivers, is it also required for applications to support segmented packets? If the application doesn’t need segmented packets, can it safely assume that no DPDK libraries or drivers create segmented packets under any circumstances? If support for segmented packets is a compile-time option, there is an implicit guarantee that they don't appear.

2. The ability to remove/tweak features to improve *application* performance in specific environments would be good.

E.g. removing support for multiple mbuf pools would free up an mbuf field (m->pool) for application use.
So would removing support for segmented packets (m->nb_segs, m->next).

Both of these modifications would also reduce complexity, although they would increase source code complexity in all the libraries and drivers needing to support a multidimensional matrix of features. (I highly doubt that all libraries support the combination of all features today... I remember having to argue strongly for the DPDK eBPF library to support reading data inside segmented packets.)

3. Removing cruft that has no effect on performance or similar, is "nice to have".

E.g. drivers for hardware that we do not use.

> As a first step to move forward - produce design doc of current build
> system.
> Discuss further enhancements based on that doc.

> While planning changes to the build system backward compatibility
> with 20.11 should be considered.

Backward compatibility is not a high priority for us. It is an extremely rare event for us to upgrade to a new version of any external software (Linux Kernel, DPDK and other libraries) or build tools, because we consider switching any of it to another version high effort (e.g. it requires extensive testing). From this perspective, having to change some details in the build system is a relatively small effort.

With this said, the documentation of each DPDK release should include a chapter describing what an application developer should do differently than with the previous release. E.g. the Release Notes enumerate the key modifications as bullet points, but it is not always obvious how that affects an application being developed. (DPDK generally has great documentation, but is somewhat lacking in this area.)

I know that ABI Stability is supposed to make much of this go away, but DPDK is clearly not there yet.

> AR to Bruce to create initial version of the DD.
> 

The following may be scope creep, so just consider it me thinking out loud:

Consider a general design document in the form of a "life of an mbuf" document, describing how mbufs are pre-allocated for driver RX descriptors, and then handed over to the application through the receive function, and then possibly going through defragmentation and reordering libraries, and then handed over to another driver's transmit function, which uses the mbufs to set up TX descriptors, and after transmission frees the mbufs to their original pool, where they are ultimately allocated again by a driver to refill its RX descriptor pool.

The document can start off with the simple case with a single non-segmented, non-fragmented, in-order packet. And then it can be extended with variations, e.g. adding the description of segmented packets would explain how the m->nb_segs and m->next are being used when the packet is handled by the drivers and libraries.

In the context of being able to enable/disable libraries and features, the purpose of this document would be to help show interdependencies.


Med venlig hilsen / kind regards
- Morten Brørup




^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH 4/5] net/iavf: fix protocol size for virtchnl copy
  2020-11-16 16:23  3%   ` Ferruh Yigit
@ 2020-11-22 13:28  0%     ` Jack Min
  0 siblings, 0 replies; 200+ results
From: Jack Min @ 2020-11-22 13:28 UTC (permalink / raw)
  To: Ferruh Yigit, Xiaoyu Min, Jingjing Wu, Beilei Xing
  Cc: dev, NBU-Contact-Thomas Monjalon, Andrew Rybchenko, Ori Kam, Dekel Peled

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Tuesday, November 17, 2020 00:23
> To: Xiaoyu Min <jackmin@mellanox.com>; Jingjing Wu <jingjing.wu@intel.com>;
> Beilei Xing <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Jack Min <jackmin@nvidia.com>; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> <arybchenko@solarflare.com>; Ori Kam <orika@nvidia.com>; Dekel Peled
> <dekelp@nvidia.com>
> Subject: Re: [dpdk-dev] [PATCH 4/5] net/iavf: fix protocol size for virtchnl copy
> 
> On 11/16/2020 7:55 AM, Xiaoyu Min wrote:
> > From: Xiaoyu Min <jackmin@nvidia.com>
> >
> > The rte_flow_item_vlan items are refined.
> > The structs do not exactly represent the packet bits captured on the
> > wire anymore so should only copy real header instead of the whole struct.
> >
> > Replace the rte_flow_item_* with the existing corresponding rte_*_hdr.
> >
> > Fixes: 09315fc83861 ("ethdev: add VLAN attributes to ethernet and VLAN
> items")
> >
> > Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
> > ---
> >   drivers/net/iavf/iavf_fdir.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
> > index d683a468c1..7054bde0b9 100644
> > --- a/drivers/net/iavf/iavf_fdir.c
> > +++ b/drivers/net/iavf/iavf_fdir.c
> > @@ -541,7 +541,7 @@ iavf_fdir_parse_pattern(__rte_unused struct
> iavf_adapter *ad,
> >   				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr,
> ETH, ETHERTYPE);
> >
> >   				rte_memcpy(hdr->buffer,
> > -					eth_spec, sizeof(*eth_spec));
> > +					eth_spec, sizeof(struct rte_ether_hdr));
> 
> This requires 'struct rte_flow_item_eth' should have 'struct rte_ether_hdr' as
> first element, and I suspect this usage exists in a few more locations, but I
> wonder if this assumption is real and documented in somewhere?
> I am not just talking about 'struct rte_flow_item_eth', but for all
> 'rte_flow_item_*'...
> 
I think this is not documented and the assumption is not real.
I've created a ticket on Bugzilla (https://bugs.dpdk.org/show_bug.cgi?id=581) to track it.
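For illustration only, a hypothetical compile-time guard a driver could carry
until the layout is formally documented (a sketch, not from any patch here):

#include <stddef.h>

#include <rte_ether.h>
#include <rte_flow.h>

/* Fail the build if the flow item stops mirroring the on-wire Ethernet
 * header starting at offset 0. */
_Static_assert(offsetof(struct rte_flow_item_eth, type) ==
	       offsetof(struct rte_ether_hdr, ether_type),
	       "rte_flow_item_eth is expected to mirror rte_ether_hdr");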

> 
> 
> btw, while checking for the 'struct rte_flow_item_eth', pahole shows it is using
> 20 bytes, and I suspect this is not the intention with the reserved field:
> 
> struct rte_flow_item_eth {
> 	struct rte_ether_addr      dst;                  /*     0     6 */
> 	struct rte_ether_addr      src;                  /*     6     6 */
> 	uint16_t                   type;                 /*    12     2 */
> 
> 	/* Bitfield combined with previous fields */
> 
> 	uint32_t                   has_vlan:1;           /*    12:15  4 */
> 
> 	/* XXX 31 bits hole, try to pack */
> 
> 	uint32_t                   reserved:31;          /*    16: 1  4 */
> 
> 	/* size: 20, cachelines: 1, members: 5 */
> 	/* bit holes: 1, sum bit holes: 31 bits */
> 	/* bit_padding: 1 bits */
> 	/* last cacheline: 20 bytes */
> };
> 
> 'has_vlan' seems combined with previous field to make together 32 bits. So the
> 'reserved' field is occupying a new 32 bits all by itself.
> 
> What about changing the struct as following, while we can change the ABI:
> struct rte_flow_item_eth {
> 	struct rte_ether_addr      dst;                  /*     0     6 */
> 	struct rte_ether_addr      src;                  /*     6     6 */
> 	uint16_t                   type;                 /*    12     2 */
> 	uint16_t                   has_vlan:1;           /*    14:15  2 */
> 	uint16_t                   reserved:15;          /*    14: 0  2 */
> 
> 	/* size: 16, cachelines: 1, members: 5 */
> 	/* last cacheline: 16 bytes */
> };
> 

Well, we probably need to discuss this in the next release.
It's too late to change this API at this moment.

-Jack


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v1 1/1] build: alias default build as generic
@ 2020-11-20 12:27  3% Juraj Linkeš
  2020-11-24  7:52  3% ` [dpdk-dev] [PATCH v2] " Juraj Linkeš
  0 siblings, 1 reply; 200+ results
From: Juraj Linkeš @ 2020-11-20 12:27 UTC (permalink / raw)
  To: thomas, bruce.richardson, Honnappa.Nagarahalli; +Cc: dev, Juraj Linkeš

The current machine='default' build name is not descriptive. The actual
default build is machine='native'. Add an alternative string which does
the same build and better describes what we're building:
machine='generic'. Leave machine='default' for backwards compatibility.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
 config/arm/meson.build                    |  5 +++--
 config/meson.build                        | 13 +++++++------
 devtools/test-meson-builds.sh             | 12 ++++++------
 doc/guides/prog_guide/build-sdk-meson.rst |  4 ++--
 meson_options.txt                         |  2 +-
 5 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 42b4e43c7..d4066ade8 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -1,12 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation.
 # Copyright(c) 2017 Cavium, Inc
+# Copyright(c) 2020 PANTHEON.tech s.r.o.
 
 # for checking defines we need to use the correct compiler flags
 march_opt = '-march=@0@'.format(machine)
 
 arm_force_native_march = false
-arm_force_default_march = (machine == 'default')
+arm_force_generic_march = (machine == 'generic')
 
 flags_common_default = [
 	# Accelarate rte_memcpy. Be sure to run unit test (memcpy_perf_autotest)
@@ -148,7 +149,7 @@ else
 	cmd_generic = ['generic', '', '', 'default', '']
 	cmd_output = cmd_generic # Set generic by default
 	machine_args = [] # Clear previous machine args
-	if arm_force_default_march and not meson.is_cross_build()
+	if arm_force_generic_march and not meson.is_cross_build()
 		machine = impl_generic
 		impl_pn = 'default'
 	elif not meson.is_cross_build()
diff --git a/config/meson.build b/config/meson.build
index a29693b88..3db2f55e0 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -70,21 +70,22 @@ else
 	machine = get_option('machine')
 endif
 
-# machine type 'default' is special, it defaults to the per arch agreed common
-# minimal baseline needed for DPDK.
+# machine type 'generic' is special, it selects the per arch agreed common
+# minimal baseline needed for DPDK. Machine type 'default' is also supported
+# with the same meaning for backwards compatibility.
 # That might not be the most optimized, but the most portable version while
 # still being able to support the CPU features required for DPDK.
 # This can be bumped up by the DPDK project, but it can never be an
 # invariant like 'native'
-if machine == 'default'
+if machine == 'default' or machine == 'generic'
 	if host_machine.cpu_family().startswith('x86')
-		# matches the old pre-meson build systems default
+		# matches the old pre-meson build systems generic machine
 		machine = 'corei7'
 	elif host_machine.cpu_family().startswith('arm')
 		machine = 'armv7-a'
 	elif host_machine.cpu_family().startswith('aarch')
-		# arm64 manages defaults in config/arm/meson.build
-		machine = 'default'
+		# arm64 manages generic config in config/arm/meson.build
+		machine = 'generic'
 	elif host_machine.cpu_family().startswith('ppc')
 		machine = 'power8'
 	endif
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 3ce49368c..11aa9bf11 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -209,11 +209,11 @@ done
 # test compilation with minimal x86 instruction set
 # Set the install path for libraries to "lib" explicitly to prevent problems
 # with pkg-config prefixes if installed in "lib/x86_64-linux-gnu" later.
-default_machine='nehalem'
-if ! check_cc_flags "-march=$default_machine" ; then
-	default_machine='corei7'
+generic_machine='nehalem'
+if ! check_cc_flags "-march=$generic_machine" ; then
+	generic_machine='corei7'
 fi
-build build-x86-default cc -Dlibdir=lib -Dmachine=$default_machine $use_shared
+build build-x86-generic cc -Dlibdir=lib -Dmachine=$generic_machine $use_shared
 
 # 32-bit with default compiler
 if check_cc_flags '-m32' ; then
@@ -253,10 +253,10 @@ for f in $srcdir/config/ppc/ppc* ; do
 	build build-$(basename $f | cut -d'-' -f-2) $f $use_shared
 done
 
-# Test installation of the x86-default target, to be used for checking
+# Test installation of the x86-generic target, to be used for checking
 # the sample apps build using the pkg-config file for cflags and libs
 load_env cc
-build_path=$(readlink -f $builds_dir/build-x86-default)
+build_path=$(readlink -f $builds_dir/build-x86-generic)
 export DESTDIR=$build_path/install
 # No need to reinstall if ABI checks are enabled
 if [ -z "$DPDK_ABI_REF_VERSION" ]; then
diff --git a/doc/guides/prog_guide/build-sdk-meson.rst b/doc/guides/prog_guide/build-sdk-meson.rst
index 3429e2647..c7e12eedf 100644
--- a/doc/guides/prog_guide/build-sdk-meson.rst
+++ b/doc/guides/prog_guide/build-sdk-meson.rst
@@ -85,7 +85,7 @@ Project-specific options are passed used -Doption=value::
 
 	meson -Denable_docs=true fullbuild  # build and install docs
 
-	meson -Dmachine=default  # use builder-independent baseline -march
+	meson -Dmachine=generic  # use builder-independent baseline -march
 
 	meson -Ddisable_drivers=event/*,net/tap  # disable tap driver and all
 					# eventdev PMDs for a smaller build
@@ -114,7 +114,7 @@ Examples of setting some of the same options using meson configure::
         re-scan from meson.
 
 .. note::
-        machine=default uses a config that works on all supported architectures
+        machine=generic uses a config that works on all supported architectures
         regardless of the capabilities of the machine where the build is happening.
 
 As well as those settings taken from ``meson configure``, other options
diff --git a/meson_options.txt b/meson_options.txt
index e384e6dbb..bb4c0279e 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -21,7 +21,7 @@ option('kernel_dir', type: 'string', value: '',
 option('lib_musdk_dir', type: 'string', value: '',
 	description: 'path to the MUSDK library installation directory')
 option('machine', type: 'string', value: 'native',
-	description: 'set the target machine type')
+	description: 'set the target machine type. Special values: 'generic' is a build usable on all machines of the build machine architecture, 'native' lets the compiler pick the architecture of the build machine.')
 option('max_ethports', type: 'integer', value: 32,
 	description: 'maximum number of Ethernet devices')
 option('max_lcores', type: 'integer', value: 128,
-- 
2.20.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [RFC] remove unused functions
@ 2020-11-19  3:52  1% Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-11-19  3:52 UTC (permalink / raw)
  To: Jerin Jacob, Cristian Dumitrescu, Hemant Agrawal, Sachin Saxena,
	Ray Kinsella, Neil Horman, Rosen Xu, Jingjing Wu, Beilei Xing,
	Nithin Dabilpuram, Ajit Khaparde, Raveendra Padasalagi,
	Vikas Gupta, Gagandeep Singh, Somalapuram Amaranath, Akhil Goyal,
	Jay Zhou, Timothy McDaniel, Liang Ma, Peter Mccarthy,
	Shepard Siegel, Ed Czeck, John Miller, Igor Russkikh,
	Pavel Belous, Rasesh Mody, Shahed Shaikh, Somnath Kotur,
	Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Jeff Guo, Haiyue Wang, Marcin Wojtas,
	Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin, Igor Chauskin,
	Qi Zhang, Xiao Wang, Qiming Yang, Alfredo Cardigliano,
	Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko, Zyta Szpak,
	Liron Himi, Stephen Hemminger, K. Y. Srinivasan, Haiyang Zhang,
	Long Li, Heinrich Kuhn, Harman Kalra, Kiran Kumar K,
	Andrew Rybchenko, Jasvinder Singh, Jiawen Wu, Jian Wang,
	Tianfei zhang, Ori Kam, Guy Kaneti, Anatoly Burakov,
	Maxime Coquelin, Chenbo Xia
  Cc: Ferruh Yigit, dev

Remove unused functions, as reported by cppcheck.

This is an easy way to remove clutter; since the code remains in the git
history, the functions can be added back when needed.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 app/test-eventdev/parser.c                    |   88 -
 app/test-eventdev/parser.h                    |    6 -
 app/test/test_table_pipeline.c                |   36 -
 drivers/bus/dpaa/base/fman/fman_hw.c          |  182 -
 drivers/bus/dpaa/base/fman/netcfg_layer.c     |   11 -
 drivers/bus/dpaa/base/qbman/bman.c            |   34 -
 drivers/bus/dpaa/base/qbman/bman_driver.c     |   16 -
 drivers/bus/dpaa/base/qbman/process.c         |   94 -
 drivers/bus/dpaa/base/qbman/qman.c            |  778 ----
 drivers/bus/dpaa/base/qbman/qman_priv.h       |    9 -
 drivers/bus/dpaa/dpaa_bus.c                   |   20 -
 drivers/bus/dpaa/include/fsl_bman.h           |   15 -
 drivers/bus/dpaa/include/fsl_fman.h           |   28 -
 drivers/bus/dpaa/include/fsl_qman.h           |  307 --
 drivers/bus/dpaa/include/fsl_usd.h            |   11 -
 drivers/bus/dpaa/include/netcfg.h             |    6 -
 drivers/bus/dpaa/rte_dpaa_bus.h               |   13 -
 drivers/bus/dpaa/version.map                  |   10 -
 drivers/bus/fslmc/fslmc_bus.c                 |   19 -
 drivers/bus/fslmc/mc/dpbp.c                   |  141 -
 drivers/bus/fslmc/mc/dpci.c                   |  320 --
 drivers/bus/fslmc/mc/dpcon.c                  |  241 --
 drivers/bus/fslmc/mc/dpdmai.c                 |  144 -
 drivers/bus/fslmc/mc/dpio.c                   |  191 -
 drivers/bus/fslmc/mc/fsl_dpbp.h               |   20 -
 drivers/bus/fslmc/mc/fsl_dpci.h               |   49 -
 drivers/bus/fslmc/mc/fsl_dpcon.h              |   37 -
 drivers/bus/fslmc/mc/fsl_dpdmai.h             |   20 -
 drivers/bus/fslmc/mc/fsl_dpio.h               |   26 -
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |    7 -
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |    3 -
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |    2 -
 .../fslmc/qbman/include/fsl_qbman_portal.h    |  463 ---
 drivers/bus/fslmc/qbman/qbman_debug.c         |    5 -
 drivers/bus/fslmc/qbman/qbman_portal.c        |  437 ---
 drivers/bus/fslmc/rte_fslmc.h                 |   10 -
 drivers/bus/fslmc/version.map                 |    6 -
 drivers/bus/ifpga/ifpga_common.c              |   23 -
 drivers/bus/ifpga/ifpga_common.h              |    3 -
 drivers/common/dpaax/dpaa_of.c                |   27 -
 drivers/common/dpaax/dpaa_of.h                |    5 -
 drivers/common/dpaax/dpaax_iova_table.c       |   39 -
 drivers/common/dpaax/dpaax_iova_table.h       |    2 -
 drivers/common/dpaax/version.map              |    1 -
 drivers/common/iavf/iavf_common.c             |  425 ---
 drivers/common/iavf/iavf_prototype.h          |   17 -
 drivers/common/octeontx2/otx2_mbox.c          |   13 -
 drivers/common/octeontx2/otx2_mbox.h          |    1 -
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c          |   19 -
 drivers/crypto/bcmfs/bcmfs_sym_pmd.h          |    3 -
 drivers/crypto/bcmfs/bcmfs_vfio.c             |   24 -
 drivers/crypto/bcmfs/bcmfs_vfio.h             |    4 -
 drivers/crypto/caam_jr/caam_jr_pvt.h          |    1 -
 drivers/crypto/caam_jr/caam_jr_uio.c          |   28 -
 drivers/crypto/ccp/ccp_dev.c                  |   65 -
 drivers/crypto/ccp/ccp_dev.h                  |    8 -
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |  401 --
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |   52 -
 drivers/crypto/virtio/virtio_pci.c            |   13 -
 drivers/crypto/virtio/virtio_pci.h            |    5 -
 drivers/event/dlb/dlb_priv.h                  |    2 -
 drivers/event/dlb/dlb_xstats.c                |    7 -
 drivers/event/dlb2/dlb2_priv.h                |    2 -
 drivers/event/dlb2/dlb2_xstats.c              |    7 -
 drivers/event/opdl/opdl_ring.c                |  210 --
 drivers/event/opdl/opdl_ring.h                |  236 --
 drivers/net/ark/ark_ddm.c                     |   13 -
 drivers/net/ark/ark_ddm.h                     |    1 -
 drivers/net/ark/ark_pktchkr.c                 |   52 -
 drivers/net/ark/ark_pktchkr.h                 |    3 -
 drivers/net/ark/ark_pktdir.c                  |   22 -
 drivers/net/ark/ark_pktdir.h                  |    3 -
 drivers/net/ark/ark_pktgen.c                  |   27 -
 drivers/net/ark/ark_pktgen.h                  |    2 -
 drivers/net/ark/ark_udm.c                     |   15 -
 drivers/net/ark/ark_udm.h                     |    2 -
 drivers/net/atlantic/hw_atl/hw_atl_b0.c       |   14 -
 drivers/net/atlantic/hw_atl/hw_atl_b0.h       |    2 -
 drivers/net/atlantic/hw_atl/hw_atl_llh.c      |  318 --
 drivers/net/atlantic/hw_atl/hw_atl_llh.h      |  153 -
 drivers/net/atlantic/hw_atl/hw_atl_utils.c    |   36 -
 drivers/net/atlantic/hw_atl/hw_atl_utils.h    |    4 -
 drivers/net/bnx2x/ecore_sp.c                  |   17 -
 drivers/net/bnx2x/ecore_sp.h                  |    2 -
 drivers/net/bnx2x/elink.c                     | 1367 -------
 drivers/net/bnx2x/elink.h                     |   57 -
 drivers/net/bnxt/tf_core/bitalloc.c           |  156 -
 drivers/net/bnxt/tf_core/bitalloc.h           |   26 -
 drivers/net/bnxt/tf_core/stack.c              |   25 -
 drivers/net/bnxt/tf_core/stack.h              |   12 -
 drivers/net/bnxt/tf_core/tf_core.c            |  241 --
 drivers/net/bnxt/tf_core/tf_core.h            |   81 -
 drivers/net/bnxt/tf_core/tf_msg.c             |   40 -
 drivers/net/bnxt/tf_core/tf_msg.h             |   31 -
 drivers/net/bnxt/tf_core/tf_session.c         |   33 -
 drivers/net/bnxt/tf_core/tf_session.h         |   16 -
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |   53 -
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      |   14 -
 drivers/net/bnxt/tf_core/tf_tcam.c            |    7 -
 drivers/net/bnxt/tf_core/tf_tcam.h            |   17 -
 drivers/net/bnxt/tf_core/tfp.c                |   27 -
 drivers/net/bnxt/tf_core/tfp.h                |    4 -
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c          |   78 -
 drivers/net/bnxt/tf_ulp/ulp_port_db.c         |   31 -
 drivers/net/bnxt/tf_ulp/ulp_port_db.h         |   14 -
 drivers/net/bnxt/tf_ulp/ulp_utils.c           |   11 -
 drivers/net/bnxt/tf_ulp/ulp_utils.h           |    3 -
 drivers/net/bonding/eth_bond_private.h        |    4 -
 drivers/net/bonding/rte_eth_bond.h            |   38 -
 drivers/net/bonding/rte_eth_bond_api.c        |   39 -
 drivers/net/bonding/rte_eth_bond_pmd.c        |   22 -
 drivers/net/cxgbe/base/common.h               |    5 -
 drivers/net/cxgbe/base/t4_hw.c                |   41 -
 drivers/net/dpaa/fmlib/fm_vsp.c               |   19 -
 drivers/net/dpaa/fmlib/fm_vsp_ext.h           |    3 -
 drivers/net/dpaa2/mc/dpdmux.c                 |  725 ----
 drivers/net/dpaa2/mc/dpni.c                   |  818 +----
 drivers/net/dpaa2/mc/dprtc.c                  |  365 --
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |  108 -
 drivers/net/dpaa2/mc/fsl_dpni.h               |  134 -
 drivers/net/dpaa2/mc/fsl_dprtc.h              |   57 -
 drivers/net/e1000/base/e1000_82542.c          |   97 -
 drivers/net/e1000/base/e1000_82543.c          |   78 -
 drivers/net/e1000/base/e1000_82543.h          |    4 -
 drivers/net/e1000/base/e1000_82571.c          |   35 -
 drivers/net/e1000/base/e1000_82571.h          |    1 -
 drivers/net/e1000/base/e1000_82575.c          |  298 --
 drivers/net/e1000/base/e1000_82575.h          |    8 -
 drivers/net/e1000/base/e1000_api.c            |  530 ---
 drivers/net/e1000/base/e1000_api.h            |   40 -
 drivers/net/e1000/base/e1000_base.c           |   78 -
 drivers/net/e1000/base/e1000_base.h           |    1 -
 drivers/net/e1000/base/e1000_ich8lan.c        |  266 --
 drivers/net/e1000/base/e1000_ich8lan.h        |    3 -
 drivers/net/e1000/base/e1000_mac.c            |   14 -
 drivers/net/e1000/base/e1000_mac.h            |    1 -
 drivers/net/e1000/base/e1000_manage.c         |  192 -
 drivers/net/e1000/base/e1000_manage.h         |    2 -
 drivers/net/e1000/base/e1000_nvm.c            |  129 -
 drivers/net/e1000/base/e1000_nvm.h            |    5 -
 drivers/net/e1000/base/e1000_phy.c            |  201 -
 drivers/net/e1000/base/e1000_phy.h            |    4 -
 drivers/net/e1000/base/e1000_vf.c             |   19 -
 drivers/net/e1000/base/e1000_vf.h             |    1 -
 drivers/net/ena/base/ena_com.c                |  222 --
 drivers/net/ena/base/ena_com.h                |  144 -
 drivers/net/ena/base/ena_eth_com.c            |   11 -
 drivers/net/ena/base/ena_eth_com.h            |    2 -
 drivers/net/fm10k/base/fm10k_api.c            |  104 -
 drivers/net/fm10k/base/fm10k_api.h            |   11 -
 drivers/net/fm10k/base/fm10k_tlv.c            |  183 -
 drivers/net/fm10k/base/fm10k_tlv.h            |    1 -
 drivers/net/i40e/base/i40e_common.c           | 2989 ++-------------
 drivers/net/i40e/base/i40e_dcb.c              |   43 -
 drivers/net/i40e/base/i40e_dcb.h              |    3 -
 drivers/net/i40e/base/i40e_diag.c             |  146 -
 drivers/net/i40e/base/i40e_diag.h             |   30 -
 drivers/net/i40e/base/i40e_lan_hmc.c          |  264 --
 drivers/net/i40e/base/i40e_lan_hmc.h          |    6 -
 drivers/net/i40e/base/i40e_nvm.c              |  988 -----
 drivers/net/i40e/base/i40e_prototype.h        |  202 -
 drivers/net/i40e/base/meson.build             |    1 -
 drivers/net/iavf/iavf.h                       |    2 -
 drivers/net/iavf/iavf_vchnl.c                 |   72 -
 drivers/net/ice/base/ice_acl.c                |  108 -
 drivers/net/ice/base/ice_acl.h                |   13 -
 drivers/net/ice/base/ice_common.c             | 2084 ++---------
 drivers/net/ice/base/ice_common.h             |   70 -
 drivers/net/ice/base/ice_dcb.c                |  161 -
 drivers/net/ice/base/ice_dcb.h                |   11 -
 drivers/net/ice/base/ice_fdir.c               |  262 --
 drivers/net/ice/base/ice_fdir.h               |   16 -
 drivers/net/ice/base/ice_flex_pipe.c          |  103 -
 drivers/net/ice/base/ice_flex_pipe.h          |    4 -
 drivers/net/ice/base/ice_flow.c               |  207 --
 drivers/net/ice/base/ice_flow.h               |   15 -
 drivers/net/ice/base/ice_nvm.c                |  200 -
 drivers/net/ice/base/ice_nvm.h                |    8 -
 drivers/net/ice/base/ice_sched.c              | 1440 +-------
 drivers/net/ice/base/ice_sched.h              |   78 -
 drivers/net/ice/base/ice_switch.c             | 1646 +--------
 drivers/net/ice/base/ice_switch.h             |   62 -
 drivers/net/igc/base/igc_api.c                |  598 ---
 drivers/net/igc/base/igc_api.h                |   41 -
 drivers/net/igc/base/igc_base.c               |   78 -
 drivers/net/igc/base/igc_base.h               |    1 -
 drivers/net/igc/base/igc_hw.h                 |    3 -
 drivers/net/igc/base/igc_i225.c               |  159 -
 drivers/net/igc/base/igc_i225.h               |    4 -
 drivers/net/igc/base/igc_mac.c                |  853 -----
 drivers/net/igc/base/igc_mac.h                |   22 -
 drivers/net/igc/base/igc_manage.c             |  262 --
 drivers/net/igc/base/igc_manage.h             |    4 -
 drivers/net/igc/base/igc_nvm.c                |  679 ----
 drivers/net/igc/base/igc_nvm.h                |   16 -
 drivers/net/igc/base/igc_osdep.c              |   25 -
 drivers/net/igc/base/igc_phy.c                | 3256 +----------------
 drivers/net/igc/base/igc_phy.h                |   49 -
 drivers/net/ionic/ionic.h                     |    2 -
 drivers/net/ionic/ionic_dev.c                 |   39 -
 drivers/net/ionic/ionic_dev.h                 |    4 -
 drivers/net/ionic/ionic_lif.c                 |   11 -
 drivers/net/ionic/ionic_lif.h                 |    1 -
 drivers/net/ionic/ionic_main.c                |   33 -
 drivers/net/ionic/ionic_rx_filter.c           |   14 -
 drivers/net/ionic/ionic_rx_filter.h           |    1 -
 drivers/net/mlx5/mlx5.h                       |    1 -
 drivers/net/mlx5/mlx5_utils.c                 |   21 -
 drivers/net/mlx5/mlx5_utils.h                 |   25 -
 drivers/net/mvneta/mvneta_ethdev.c            |   18 -
 drivers/net/netvsc/hn_rndis.c                 |   31 -
 drivers/net/netvsc/hn_rndis.h                 |    1 -
 drivers/net/netvsc/hn_var.h                   |    3 -
 drivers/net/netvsc/hn_vf.c                    |   25 -
 drivers/net/nfp/nfpcore/nfp_cpp.h             |  213 --
 drivers/net/nfp/nfpcore/nfp_cppcore.c         |  218 --
 drivers/net/nfp/nfpcore/nfp_mip.c             |    6 -
 drivers/net/nfp/nfpcore/nfp_mip.h             |    1 -
 drivers/net/nfp/nfpcore/nfp_mutex.c           |   93 -
 drivers/net/nfp/nfpcore/nfp_nsp.c             |   41 -
 drivers/net/nfp/nfpcore/nfp_nsp.h             |   16 -
 drivers/net/nfp/nfpcore/nfp_nsp_cmds.c        |   79 -
 drivers/net/nfp/nfpcore/nfp_nsp_eth.c         |  206 --
 drivers/net/nfp/nfpcore/nfp_resource.c        |   12 -
 drivers/net/nfp/nfpcore/nfp_resource.h        |    7 -
 drivers/net/nfp/nfpcore/nfp_rtsym.c           |   34 -
 drivers/net/nfp/nfpcore/nfp_rtsym.h           |    4 -
 drivers/net/octeontx/base/octeontx_bgx.c      |   54 -
 drivers/net/octeontx/base/octeontx_bgx.h      |    2 -
 drivers/net/octeontx/base/octeontx_pkivf.c    |   22 -
 drivers/net/octeontx/base/octeontx_pkivf.h    |    1 -
 drivers/net/octeontx2/otx2_ethdev.c           |   26 -
 drivers/net/octeontx2/otx2_ethdev.h           |    3 -
 drivers/net/octeontx2/otx2_ethdev_debug.c     |   55 -
 drivers/net/octeontx2/otx2_flow.h             |    2 -
 drivers/net/octeontx2/otx2_flow_utils.c       |   18 -
 drivers/net/pfe/base/pfe.h                    |   12 -
 drivers/net/pfe/pfe_hal.c                     |  144 -
 drivers/net/pfe/pfe_hif_lib.c                 |   20 -
 drivers/net/pfe/pfe_hif_lib.h                 |    1 -
 drivers/net/qede/base/ecore.h                 |    3 -
 drivers/net/qede/base/ecore_cxt.c             |  229 --
 drivers/net/qede/base/ecore_cxt.h             |   27 -
 drivers/net/qede/base/ecore_dcbx.c            |  266 --
 drivers/net/qede/base/ecore_dcbx_api.h        |   27 -
 drivers/net/qede/base/ecore_dev.c             |  306 --
 drivers/net/qede/base/ecore_dev_api.h         |  127 -
 drivers/net/qede/base/ecore_hw.c              |   16 -
 drivers/net/qede/base/ecore_hw.h              |   10 -
 drivers/net/qede/base/ecore_init_fw_funcs.c   |  616 ----
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  227 --
 drivers/net/qede/base/ecore_int.c             |  193 -
 drivers/net/qede/base/ecore_int.h             |   13 -
 drivers/net/qede/base/ecore_int_api.h         |   60 -
 drivers/net/qede/base/ecore_iov_api.h         |  469 ---
 drivers/net/qede/base/ecore_l2.c              |  103 -
 drivers/net/qede/base/ecore_l2_api.h          |   24 -
 drivers/net/qede/base/ecore_mcp.c             | 1121 +-----
 drivers/net/qede/base/ecore_mcp.h             |   37 -
 drivers/net/qede/base/ecore_mcp_api.h         |  449 ---
 drivers/net/qede/base/ecore_sp_commands.c     |   89 -
 drivers/net/qede/base/ecore_sp_commands.h     |   21 -
 drivers/net/qede/base/ecore_sriov.c           |  767 ----
 drivers/net/qede/base/ecore_vf.c              |   48 -
 drivers/net/qede/base/ecore_vf_api.h          |   40 -
 drivers/net/qede/qede_debug.c                 |  532 ---
 drivers/net/qede/qede_debug.h                 |   97 -
 drivers/net/sfc/sfc_kvargs.c                  |   37 -
 drivers/net/sfc/sfc_kvargs.h                  |    2 -
 drivers/net/softnic/parser.c                  |  218 --
 drivers/net/softnic/parser.h                  |   10 -
 .../net/softnic/rte_eth_softnic_cryptodev.c   |   15 -
 .../net/softnic/rte_eth_softnic_internals.h   |   28 -
 drivers/net/softnic/rte_eth_softnic_thread.c  |  183 -
 drivers/net/txgbe/base/txgbe_eeprom.c         |   72 -
 drivers/net/txgbe/base/txgbe_eeprom.h         |    2 -
 drivers/raw/ifpga/base/opae_eth_group.c       |   25 -
 drivers/raw/ifpga/base/opae_eth_group.h       |    1 -
 drivers/raw/ifpga/base/opae_hw_api.c          |  212 --
 drivers/raw/ifpga/base/opae_hw_api.h          |   36 -
 drivers/raw/ifpga/base/opae_i2c.c             |   12 -
 drivers/raw/ifpga/base/opae_i2c.h             |    4 -
 drivers/raw/ifpga/base/opae_ifpga_hw_api.c    |   99 -
 drivers/raw/ifpga/base/opae_ifpga_hw_api.h    |   15 -
 drivers/regex/mlx5/mlx5_regex.h               |    2 -
 drivers/regex/mlx5/mlx5_regex_fastpath.c      |   25 -
 drivers/regex/mlx5/mlx5_rxp.c                 |   45 -
 .../regex/octeontx2/otx2_regexdev_hw_access.c |   58 -
 .../regex/octeontx2/otx2_regexdev_hw_access.h |    2 -
 drivers/regex/octeontx2/otx2_regexdev_mbox.c  |   28 -
 drivers/regex/octeontx2/otx2_regexdev_mbox.h  |    3 -
 examples/ip_pipeline/cryptodev.c              |    8 -
 examples/ip_pipeline/cryptodev.h              |    3 -
 examples/ip_pipeline/link.c                   |   21 -
 examples/ip_pipeline/link.h                   |    3 -
 examples/ip_pipeline/parser.c                 |  202 -
 examples/ip_pipeline/parser.h                 |    7 -
 examples/pipeline/obj.c                       |   21 -
 examples/pipeline/obj.h                       |    3 -
 lib/librte_eal/linux/eal_memory.c             |    8 -
 lib/librte_vhost/fd_man.c                     |   15 -
 lib/librte_vhost/fd_man.h                     |    2 -
 302 files changed, 833 insertions(+), 38856 deletions(-)
 delete mode 100644 drivers/net/i40e/base/i40e_diag.c
 delete mode 100644 drivers/net/i40e/base/i40e_diag.h

diff --git a/app/test-eventdev/parser.c b/app/test-eventdev/parser.c
index 24f1855e9a..131f7383d9 100644
--- a/app/test-eventdev/parser.c
+++ b/app/test-eventdev/parser.c
@@ -37,44 +37,6 @@ get_hex_val(char c)
 	}
 }
 
-int
-parser_read_arg_bool(const char *p)
-{
-	p = skip_white_spaces(p);
-	int result = -EINVAL;
-
-	if (((p[0] == 'y') && (p[1] == 'e') && (p[2] == 's')) ||
-		((p[0] == 'Y') && (p[1] == 'E') && (p[2] == 'S'))) {
-		p += 3;
-		result = 1;
-	}
-
-	if (((p[0] == 'o') && (p[1] == 'n')) ||
-		((p[0] == 'O') && (p[1] == 'N'))) {
-		p += 2;
-		result = 1;
-	}
-
-	if (((p[0] == 'n') && (p[1] == 'o')) ||
-		((p[0] == 'N') && (p[1] == 'O'))) {
-		p += 2;
-		result = 0;
-	}
-
-	if (((p[0] == 'o') && (p[1] == 'f') && (p[2] == 'f')) ||
-		((p[0] == 'O') && (p[1] == 'F') && (p[2] == 'F'))) {
-		p += 3;
-		result = 0;
-	}
-
-	p = skip_white_spaces(p);
-
-	if (p[0] != '\0')
-		return -EINVAL;
-
-	return result;
-}
-
 int
 parser_read_uint64(uint64_t *value, const char *p)
 {
@@ -115,24 +77,6 @@ parser_read_uint64(uint64_t *value, const char *p)
 	return 0;
 }
 
-int
-parser_read_int32(int32_t *value, const char *p)
-{
-	char *next;
-	int32_t val;
-
-	p = skip_white_spaces(p);
-	if (!isdigit(*p))
-		return -EINVAL;
-
-	val = strtol(p, &next, 10);
-	if (p == next)
-		return -EINVAL;
-
-	*value = val;
-	return 0;
-}
-
 int
 parser_read_uint64_hex(uint64_t *value, const char *p)
 {
@@ -169,22 +113,6 @@ parser_read_uint32(uint32_t *value, const char *p)
 	return 0;
 }
 
-int
-parser_read_uint32_hex(uint32_t *value, const char *p)
-{
-	uint64_t val = 0;
-	int ret = parser_read_uint64_hex(&val, p);
-
-	if (ret < 0)
-		return ret;
-
-	if (val > UINT32_MAX)
-		return -ERANGE;
-
-	*value = val;
-	return 0;
-}
-
 int
 parser_read_uint16(uint16_t *value, const char *p)
 {
@@ -201,22 +129,6 @@ parser_read_uint16(uint16_t *value, const char *p)
 	return 0;
 }
 
-int
-parser_read_uint16_hex(uint16_t *value, const char *p)
-{
-	uint64_t val = 0;
-	int ret = parser_read_uint64_hex(&val, p);
-
-	if (ret < 0)
-		return ret;
-
-	if (val > UINT16_MAX)
-		return -ERANGE;
-
-	*value = val;
-	return 0;
-}
-
 int
 parser_read_uint8(uint8_t *value, const char *p)
 {
diff --git a/app/test-eventdev/parser.h b/app/test-eventdev/parser.h
index 673ff22d78..94856e66e3 100644
--- a/app/test-eventdev/parser.h
+++ b/app/test-eventdev/parser.h
@@ -28,20 +28,14 @@ skip_digits(const char *src)
 	return i;
 }
 
-int parser_read_arg_bool(const char *p);
-
 int parser_read_uint64(uint64_t *value, const char *p);
 int parser_read_uint32(uint32_t *value, const char *p);
 int parser_read_uint16(uint16_t *value, const char *p);
 int parser_read_uint8(uint8_t *value, const char *p);
 
 int parser_read_uint64_hex(uint64_t *value, const char *p);
-int parser_read_uint32_hex(uint32_t *value, const char *p);
-int parser_read_uint16_hex(uint16_t *value, const char *p);
 int parser_read_uint8_hex(uint8_t *value, const char *p);
 
-int parser_read_int32(int32_t *value, const char *p);
-
 int parse_hex_string(char *src, uint8_t *dst, uint32_t *size);
 
 int parse_tokenize_string(char *string, char *tokens[], uint32_t *n_tokens);
diff --git a/app/test/test_table_pipeline.c b/app/test/test_table_pipeline.c
index aabf4375db..4e5926a7c0 100644
--- a/app/test/test_table_pipeline.c
+++ b/app/test/test_table_pipeline.c
@@ -61,46 +61,10 @@ rte_pipeline_port_out_action_handler port_action_stub(struct rte_mbuf **pkts,
 
 #endif
 
-rte_pipeline_table_action_handler_hit
-table_action_0x00(struct rte_pipeline *p, struct rte_mbuf **pkts,
-	uint64_t pkts_mask, struct rte_pipeline_table_entry **entry, void *arg);
-
-rte_pipeline_table_action_handler_hit
-table_action_stub_hit(struct rte_pipeline *p, struct rte_mbuf **pkts,
-	uint64_t pkts_mask, struct rte_pipeline_table_entry **entry, void *arg);
-
 static int
 table_action_stub_miss(struct rte_pipeline *p, struct rte_mbuf **pkts,
 	uint64_t pkts_mask, struct rte_pipeline_table_entry *entry, void *arg);
 
-rte_pipeline_table_action_handler_hit
-table_action_0x00(__rte_unused struct rte_pipeline *p,
-	__rte_unused struct rte_mbuf **pkts,
-	uint64_t pkts_mask,
-	__rte_unused struct rte_pipeline_table_entry **entry,
-	__rte_unused void *arg)
-{
-	printf("Table Action, setting pkts_mask to 0x00\n");
-	pkts_mask = ~0x00;
-	rte_pipeline_ah_packet_drop(p, pkts_mask);
-	return 0;
-}
-
-rte_pipeline_table_action_handler_hit
-table_action_stub_hit(__rte_unused struct rte_pipeline *p,
-	__rte_unused struct rte_mbuf **pkts,
-	uint64_t pkts_mask,
-	__rte_unused struct rte_pipeline_table_entry **entry,
-	__rte_unused void *arg)
-{
-	printf("STUB Table Action Hit - doing nothing\n");
-	printf("STUB Table Action Hit - setting mask to 0x%"PRIx64"\n",
-		override_hit_mask);
-	pkts_mask = (~override_hit_mask) & 0x3;
-	rte_pipeline_ah_packet_drop(p, pkts_mask);
-	return 0;
-}
-
 static int
 table_action_stub_miss(struct rte_pipeline *p,
 	__rte_unused struct rte_mbuf **pkts,
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 4ab49f7853..b69b133a90 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -56,74 +56,6 @@ fman_if_reset_mcast_filter_table(struct fman_if *p)
 		out_be32(hashtable_ctrl, i & ~HASH_CTRL_MCAST_EN);
 }
 
-static
-uint32_t get_mac_hash_code(uint64_t eth_addr)
-{
-	uint64_t	mask1, mask2;
-	uint32_t	xorVal = 0;
-	uint8_t		i, j;
-
-	for (i = 0; i < 6; i++) {
-		mask1 = eth_addr & (uint64_t)0x01;
-		eth_addr >>= 1;
-
-		for (j = 0; j < 7; j++) {
-			mask2 = eth_addr & (uint64_t)0x01;
-			mask1 ^= mask2;
-			eth_addr >>= 1;
-		}
-
-		xorVal |= (mask1 << (5 - i));
-	}
-
-	return xorVal;
-}
-
-int
-fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
-{
-	uint64_t eth_addr;
-	void *hashtable_ctrl;
-	uint32_t hash;
-
-	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
-
-	eth_addr = ETH_ADDR_TO_UINT64(eth);
-
-	if (!(eth_addr & GROUP_ADDRESS))
-		return -1;
-
-	hash = get_mac_hash_code(eth_addr) & HASH_CTRL_ADDR_MASK;
-	hash = hash | HASH_CTRL_MCAST_EN;
-
-	hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
-	out_be32(hashtable_ctrl, hash);
-
-	return 0;
-}
-
-int
-fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
-{
-	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
-	void *mac_reg =
-		&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
-	u32 val = in_be32(mac_reg);
-
-	eth[0] = (val & 0x000000ff) >> 0;
-	eth[1] = (val & 0x0000ff00) >> 8;
-	eth[2] = (val & 0x00ff0000) >> 16;
-	eth[3] = (val & 0xff000000) >> 24;
-
-	mac_reg =  &((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_u;
-	val = in_be32(mac_reg);
-
-	eth[4] = (val & 0x000000ff) >> 0;
-	eth[5] = (val & 0x0000ff00) >> 8;
-
-	return 0;
-}
-
 void
 fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
 {
@@ -180,38 +112,6 @@ fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
 	return 0;
 }
 
-void
-fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable)
-{
-	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
-	u32 value = 0;
-	void *cmdcfg;
-
-	assert(fman_ccsr_map_fd != -1);
-
-	/* Set Rx Ignore Pause Frames */
-	cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
-	if (enable)
-		value = in_be32(cmdcfg) | CMD_CFG_PAUSE_IGNORE;
-	else
-		value = in_be32(cmdcfg) & ~CMD_CFG_PAUSE_IGNORE;
-
-	out_be32(cmdcfg, value);
-}
-
-void
-fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len)
-{
-	struct __fman_if *__if = container_of(p, struct __fman_if, __if);
-	unsigned int *maxfrm;
-
-	assert(fman_ccsr_map_fd != -1);
-
-	/* Set Max frame length */
-	maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
-	out_be32(maxfrm, (MAXFRM_RX_MASK & max_frame_len));
-}
-
 void
 fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats)
 {
@@ -422,23 +322,6 @@ fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta)
 	return 0;
 }
 
-int
-fman_if_get_fdoff(struct fman_if *fm_if)
-{
-	u32 fmbm_rebm;
-	int fdoff;
-
-	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
-
-	assert(fman_ccsr_map_fd != -1);
-
-	fmbm_rebm = in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rebm);
-
-	fdoff = (fmbm_rebm >> FMAN_SP_EXT_BUF_MARG_START_SHIFT) & 0x1ff;
-
-	return fdoff;
-}
-
 void
 fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
 {
@@ -451,28 +334,6 @@ fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
 	out_be32(fmbm_refqid, err_fqid);
 }
 
-int
-fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp)
-{
-	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
-	int val = 0;
-	int iceof_mask = 0x001f0000;
-	int icsz_mask = 0x0000001f;
-	int iciof_mask = 0x00000f00;
-
-	assert(fman_ccsr_map_fd != -1);
-
-	unsigned int *fmbm_ricp =
-		&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
-	val = in_be32(fmbm_ricp);
-
-	icp->iceof = (val & iceof_mask) >> 12;
-	icp->iciof = (val & iciof_mask) >> 4;
-	icp->icsz = (val & icsz_mask) << 4;
-
-	return 0;
-}
-
 int
 fman_if_set_ic_params(struct fman_if *fm_if,
 			  const struct fman_if_ic_params *icp)
@@ -526,19 +387,6 @@ fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm)
 	out_be32(reg_maxfrm, (in_be32(reg_maxfrm) & 0xFFFF0000) | max_frm);
 }
 
-uint16_t
-fman_if_get_maxfrm(struct fman_if *fm_if)
-{
-	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
-	unsigned int *reg_maxfrm;
-
-	assert(fman_ccsr_map_fd != -1);
-
-	reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
-
-	return (in_be32(reg_maxfrm) | 0x0000FFFF);
-}
-
 /* MSB in fmbm_rebm register
  * 0 - If BMI cannot store the frame in a single buffer it may select a buffer
  *     of smaller size and store the frame in scatter gather (S/G) buffers
@@ -580,36 +428,6 @@ fman_if_set_sg(struct fman_if *fm_if, int enable)
 	out_be32(fmbm_rebm, (in_be32(fmbm_rebm) & ~fmbm_mask) | val);
 }
 
-void
-fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia)
-{
-	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
-	unsigned int *fmqm_pndn;
-
-	assert(fman_ccsr_map_fd != -1);
-
-	fmqm_pndn = &((struct fman_port_qmi_regs *)__if->qmi_map)->fmqm_pndn;
-
-	out_be32(fmqm_pndn, nia);
-}
-
-void
-fman_if_discard_rx_errors(struct fman_if *fm_if)
-{
-	struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
-	unsigned int *fmbm_rfsdm, *fmbm_rfsem;
-
-	fmbm_rfsem = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsem;
-	out_be32(fmbm_rfsem, 0);
-
-	/* Configure the discard mask to discard the error packets which have
-	 * DMA errors, Frame size error, Header error etc. The mask 0x010EE3F0
-	 * is to configured discard all the errors which come in the FD[STATUS]
-	 */
-	fmbm_rfsdm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsdm;
-	out_be32(fmbm_rfsdm, 0x010EE3F0);
-}
-
 void
 fman_if_receive_rx_errors(struct fman_if *fm_if,
 	unsigned int err_eq)
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index b7009f2299..1d6460f1d1 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -148,14 +148,3 @@ netcfg_acquire(void)
 
 	return NULL;
 }
-
-void
-netcfg_release(struct netcfg_info *cfg_ptr)
-{
-	rte_free(cfg_ptr);
-	/* Close socket for shared interfaces */
-	if (skfd >= 0) {
-		close(skfd);
-		skfd = -1;
-	}
-}
diff --git a/drivers/bus/dpaa/base/qbman/bman.c b/drivers/bus/dpaa/base/qbman/bman.c
index 8a6290734f..95215bb24e 100644
--- a/drivers/bus/dpaa/base/qbman/bman.c
+++ b/drivers/bus/dpaa/base/qbman/bman.c
@@ -321,41 +321,7 @@ int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
 	return ret;
 }
 
-int bman_query_pools(struct bm_pool_state *state)
-{
-	struct bman_portal *p = get_affine_portal();
-	struct bm_mc_result *mcr;
-
-	bm_mc_start(&p->p);
-	bm_mc_commit(&p->p, BM_MCC_VERB_CMD_QUERY);
-	while (!(mcr = bm_mc_result(&p->p)))
-		cpu_relax();
-	DPAA_ASSERT((mcr->verb & BM_MCR_VERB_CMD_MASK) ==
-		    BM_MCR_VERB_CMD_QUERY);
-	*state = mcr->query;
-	state->as.state.state[0] = be32_to_cpu(state->as.state.state[0]);
-	state->as.state.state[1] = be32_to_cpu(state->as.state.state[1]);
-	state->ds.state.state[0] = be32_to_cpu(state->ds.state.state[0]);
-	state->ds.state.state[1] = be32_to_cpu(state->ds.state.state[1]);
-	return 0;
-}
-
 u32 bman_query_free_buffers(struct bman_pool *pool)
 {
 	return bm_pool_free_buffers(pool->params.bpid);
 }
-
-int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds)
-{
-	u32 bpid;
-
-	bpid = bman_get_params(pool)->bpid;
-
-	return bm_pool_set(bpid, thresholds);
-}
-
-int bman_shutdown_pool(u32 bpid)
-{
-	struct bman_portal *p = get_affine_portal();
-	return bm_shutdown_pool(&p->p, bpid);
-}
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
index 750b756b93..8763ac6215 100644
--- a/drivers/bus/dpaa/base/qbman/bman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -109,11 +109,6 @@ static int fsl_bman_portal_finish(void)
 	return ret;
 }
 
-int bman_thread_fd(void)
-{
-	return bmfd;
-}
-
 int bman_thread_init(void)
 {
 	/* Convert from contiguous/virtual cpu numbering to real cpu when
@@ -127,17 +122,6 @@ int bman_thread_finish(void)
 	return fsl_bman_portal_finish();
 }
 
-void bman_thread_irq(void)
-{
-	qbman_invoke_irq(pcfg.irq);
-	/* Now we need to uninhibit interrupts. This is the only code outside
-	 * the regular portal driver that manipulates any portal register, so
-	 * rather than breaking that encapsulation I am simply hard-coding the
-	 * offset to the inhibit register here.
-	 */
-	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
-}
-
 int bman_init_ccsr(const struct device_node *node)
 {
 	static int ccsr_map_fd;
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 9bc92681cd..9ce8ac8b12 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -204,100 +204,6 @@ struct dpaa_ioctl_raw_portal {
 #define DPAA_IOCTL_FREE_RAW_PORTAL \
 	_IOR(DPAA_IOCTL_MAGIC, 0x0D, struct dpaa_ioctl_raw_portal)
 
-static int process_portal_allocate(struct dpaa_ioctl_raw_portal *portal)
-{
-	int ret = check_fd();
-
-	if (ret)
-		return ret;
-
-	ret = ioctl(fd, DPAA_IOCTL_ALLOC_RAW_PORTAL, portal);
-	if (ret) {
-		perror("ioctl(DPAA_IOCTL_ALLOC_RAW_PORTAL)");
-		return ret;
-	}
-	return 0;
-}
-
-static int process_portal_free(struct dpaa_ioctl_raw_portal *portal)
-{
-	int ret = check_fd();
-
-	if (ret)
-		return ret;
-
-	ret = ioctl(fd, DPAA_IOCTL_FREE_RAW_PORTAL, portal);
-	if (ret) {
-		perror("ioctl(DPAA_IOCTL_FREE_RAW_PORTAL)");
-		return ret;
-	}
-	return 0;
-}
-
-int qman_allocate_raw_portal(struct dpaa_raw_portal *portal)
-{
-	struct dpaa_ioctl_raw_portal input;
-	int ret;
-
-	input.type = dpaa_portal_qman;
-	input.index = portal->index;
-	input.enable_stash = portal->enable_stash;
-	input.cpu = portal->cpu;
-	input.cache = portal->cache;
-	input.window = portal->window;
-	input.sdest = portal->sdest;
-
-	ret =  process_portal_allocate(&input);
-	if (ret)
-		return ret;
-	portal->index = input.index;
-	portal->cinh = input.cinh;
-	portal->cena  = input.cena;
-	return 0;
-}
-
-int qman_free_raw_portal(struct dpaa_raw_portal *portal)
-{
-	struct dpaa_ioctl_raw_portal input;
-
-	input.type = dpaa_portal_qman;
-	input.index = portal->index;
-	input.cinh = portal->cinh;
-	input.cena = portal->cena;
-
-	return process_portal_free(&input);
-}
-
-int bman_allocate_raw_portal(struct dpaa_raw_portal *portal)
-{
-	struct dpaa_ioctl_raw_portal input;
-	int ret;
-
-	input.type = dpaa_portal_bman;
-	input.index = portal->index;
-	input.enable_stash = 0;
-
-	ret =  process_portal_allocate(&input);
-	if (ret)
-		return ret;
-	portal->index = input.index;
-	portal->cinh = input.cinh;
-	portal->cena  = input.cena;
-	return 0;
-}
-
-int bman_free_raw_portal(struct dpaa_raw_portal *portal)
-{
-	struct dpaa_ioctl_raw_portal input;
-
-	input.type = dpaa_portal_bman;
-	input.index = portal->index;
-	input.cinh = portal->cinh;
-	input.cena = portal->cena;
-
-	return process_portal_free(&input);
-}
-
 #define DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT \
 	_IOW(DPAA_IOCTL_MAGIC, 0x0E, struct usdpaa_ioctl_link_status)
 
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 447c091770..a8deecf689 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -199,14 +199,6 @@ static int find_empty_fq_table_entry(u32 *entry, struct qman_fq *fq)
 	return -ENOMEM;
 }
 
-static void clear_fq_table_entry(u32 entry)
-{
-	spin_lock(&fq_hash_table_lock);
-	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
-	qman_fq_lookup_table[entry] = NULL;
-	spin_unlock(&fq_hash_table_lock);
-}
-
 static inline struct qman_fq *get_fq_table_entry(u32 entry)
 {
 	DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
@@ -235,13 +227,6 @@ static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)
 	fqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);
 }
 
-static inline void cpu_to_hw_fd(struct qm_fd *fd)
-{
-	fd->addr = cpu_to_be40(fd->addr);
-	fd->status = cpu_to_be32(fd->status);
-	fd->opaque = cpu_to_be32(fd->opaque);
-}
-
 static inline void hw_fd_to_cpu(struct qm_fd *fd)
 {
 	fd->addr = be40_to_cpu(fd->addr);
@@ -285,15 +270,6 @@ static irqreturn_t portal_isr(__always_unused int irq, void *ptr)
 	return IRQ_HANDLED;
 }
 
-/* This inner version is used privately by qman_create_affine_portal(), as well
- * as by the exported qman_stop_dequeues().
- */
-static inline void qman_stop_dequeues_ex(struct qman_portal *p)
-{
-	if (!(p->dqrr_disable_ref++))
-		qm_dqrr_set_maxfill(&p->p, 0);
-}
-
 static int drain_mr_fqrni(struct qm_portal *p)
 {
 	const struct qm_mr_entry *msg;
@@ -1173,17 +1149,6 @@ int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits)
 	return 0;
 }
 
-u16 qman_affine_channel(int cpu)
-{
-	if (cpu < 0) {
-		struct qman_portal *portal = get_affine_portal();
-
-		cpu = portal->config->cpu;
-	}
-	DPAA_BUG_ON(!CPU_ISSET(cpu, &affine_mask));
-	return affine_channels[cpu];
-}
-
 unsigned int qman_portal_poll_rx(unsigned int poll_limit,
 				 void **bufs,
 				 struct qman_portal *p)
@@ -1247,14 +1212,6 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit,
 	return rx_number;
 }
 
-void qman_clear_irq(void)
-{
-	struct qman_portal *p = get_affine_portal();
-	u32 clear = QM_DQAVAIL_MASK | (p->irq_sources &
-		~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));
-	qm_isr_status_clear(&p->p, clear);
-}
-
 u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
 			void **bufs)
 {
@@ -1370,51 +1327,6 @@ void qman_dqrr_consume(struct qman_fq *fq,
 	qm_dqrr_next(&p->p);
 }
 
-int qman_poll_dqrr(unsigned int limit)
-{
-	struct qman_portal *p = get_affine_portal();
-	int ret;
-
-	ret = __poll_portal_fast(p, limit);
-	return ret;
-}
-
-void qman_poll(void)
-{
-	struct qman_portal *p = get_affine_portal();
-
-	if ((~p->irq_sources) & QM_PIRQ_SLOW) {
-		if (!(p->slowpoll--)) {
-			u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
-			u32 active = __poll_portal_slow(p, is);
-
-			if (active) {
-				qm_isr_status_clear(&p->p, active);
-				p->slowpoll = SLOW_POLL_BUSY;
-			} else
-				p->slowpoll = SLOW_POLL_IDLE;
-		}
-	}
-	if ((~p->irq_sources) & QM_PIRQ_DQRI)
-		__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
-}
-
-void qman_stop_dequeues(void)
-{
-	struct qman_portal *p = get_affine_portal();
-
-	qman_stop_dequeues_ex(p);
-}
-
-void qman_start_dequeues(void)
-{
-	struct qman_portal *p = get_affine_portal();
-
-	DPAA_ASSERT(p->dqrr_disable_ref > 0);
-	if (!(--p->dqrr_disable_ref))
-		qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
-}
-
 void qman_static_dequeue_add(u32 pools, struct qman_portal *qp)
 {
 	struct qman_portal *p = qp ? qp : get_affine_portal();
@@ -1424,28 +1336,6 @@ void qman_static_dequeue_add(u32 pools, struct qman_portal *qp)
 	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
 }
 
-void qman_static_dequeue_del(u32 pools, struct qman_portal *qp)
-{
-	struct qman_portal *p = qp ? qp : get_affine_portal();
-
-	pools &= p->config->pools;
-	p->sdqcr &= ~pools;
-	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
-}
-
-u32 qman_static_dequeue_get(struct qman_portal *qp)
-{
-	struct qman_portal *p = qp ? qp : get_affine_portal();
-	return p->sdqcr;
-}
-
-void qman_dca(const struct qm_dqrr_entry *dq, int park_request)
-{
-	struct qman_portal *p = get_affine_portal();
-
-	qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
-}
-
 void qman_dca_index(u8 index, int park_request)
 {
 	struct qman_portal *p = get_affine_portal();
@@ -1563,42 +1453,11 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
 	return -EIO;
 }
 
-void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
-{
-	/*
-	 * We don't need to lock the FQ as it is a pre-condition that the FQ be
-	 * quiesced. Instead, run some checks.
-	 */
-	switch (fq->state) {
-	case qman_fq_state_parked:
-		DPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);
-		/* Fallthrough */
-	case qman_fq_state_oos:
-		if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
-			qman_release_fqid(fq->fqid);
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-		clear_fq_table_entry(fq->key);
-#endif
-		return;
-	default:
-		break;
-	}
-	DPAA_ASSERT(NULL == "qman_free_fq() on unquiesced FQ!");
-}
-
 u32 qman_fq_fqid(struct qman_fq *fq)
 {
 	return fq->fqid;
 }
 
-void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)
-{
-	if (state)
-		*state = fq->state;
-	if (flags)
-		*flags = fq->flags;
-}
-
 int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
 {
 	struct qm_mc_command *mcc;
@@ -1695,48 +1554,6 @@ int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
 	return 0;
 }
 
-int qman_schedule_fq(struct qman_fq *fq)
-{
-	struct qm_mc_command *mcc;
-	struct qm_mc_result *mcr;
-	struct qman_portal *p;
-
-	int ret = 0;
-	u8 res;
-
-	if (fq->state != qman_fq_state_parked)
-		return -EINVAL;
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
-	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
-		return -EINVAL;
-#endif
-	/* Issue a ALTERFQ_SCHED management command */
-	p = get_affine_portal();
-
-	FQLOCK(fq);
-	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
-		     (fq->state != qman_fq_state_parked))) {
-		ret = -EBUSY;
-		goto out;
-	}
-	mcc = qm_mc_start(&p->p);
-	mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
-	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
-	while (!(mcr = qm_mc_result(&p->p)))
-		cpu_relax();
-	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
-	res = mcr->result;
-	if (res != QM_MCR_RESULT_OK) {
-		ret = -EIO;
-		goto out;
-	}
-	fq->state = qman_fq_state_sched;
-out:
-	FQUNLOCK(fq);
-
-	return ret;
-}
-
 int qman_retire_fq(struct qman_fq *fq, u32 *flags)
 {
 	struct qm_mc_command *mcc;
@@ -1866,98 +1683,6 @@ int qman_oos_fq(struct qman_fq *fq)
 	return ret;
 }
 
-int qman_fq_flow_control(struct qman_fq *fq, int xon)
-{
-	struct qm_mc_command *mcc;
-	struct qm_mc_result *mcr;
-	struct qman_portal *p;
-
-	int ret = 0;
-	u8 res;
-	u8 myverb;
-
-	if ((fq->state == qman_fq_state_oos) ||
-	    (fq->state == qman_fq_state_retired) ||
-		(fq->state == qman_fq_state_parked))
-		return -EINVAL;
-
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
-	if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
-		return -EINVAL;
-#endif
-	/* Issue a ALTER_FQXON or ALTER_FQXOFF management command */
-	p = get_affine_portal();
-	FQLOCK(fq);
-	if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
-		     (fq->state == qman_fq_state_parked) ||
-			(fq->state == qman_fq_state_oos) ||
-			(fq->state == qman_fq_state_retired))) {
-		ret = -EBUSY;
-		goto out;
-	}
-	mcc = qm_mc_start(&p->p);
-	mcc->alterfq.fqid = fq->fqid;
-	mcc->alterfq.count = 0;
-	myverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;
-
-	qm_mc_commit(&p->p, myverb);
-	while (!(mcr = qm_mc_result(&p->p)))
-		cpu_relax();
-	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
-
-	res = mcr->result;
-	if (res != QM_MCR_RESULT_OK) {
-		ret = -EIO;
-		goto out;
-	}
-out:
-	FQUNLOCK(fq);
-	return ret;
-}
-
-int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)
-{
-	struct qm_mc_command *mcc;
-	struct qm_mc_result *mcr;
-	struct qman_portal *p = get_affine_portal();
-
-	u8 res;
-
-	mcc = qm_mc_start(&p->p);
-	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
-	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
-	while (!(mcr = qm_mc_result(&p->p)))
-		cpu_relax();
-	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
-	res = mcr->result;
-	if (res == QM_MCR_RESULT_OK)
-		*fqd = mcr->queryfq.fqd;
-	hw_fqd_to_cpu(fqd);
-	if (res != QM_MCR_RESULT_OK)
-		return -EIO;
-	return 0;
-}
-
-int qman_query_fq_has_pkts(struct qman_fq *fq)
-{
-	struct qm_mc_command *mcc;
-	struct qm_mc_result *mcr;
-	struct qman_portal *p = get_affine_portal();
-
-	int ret = 0;
-	u8 res;
-
-	mcc = qm_mc_start(&p->p);
-	mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
-	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
-	while (!(mcr = qm_mc_result(&p->p)))
-		cpu_relax();
-	res = mcr->result;
-	if (res == QM_MCR_RESULT_OK)
-		ret = !!mcr->queryfq_np.frm_cnt;
-	return ret;
-}
-
 int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
 {
 	struct qm_mc_command *mcc;
@@ -2022,65 +1747,6 @@ int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt)
 	return 0;
 }
 
-int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
-{
-	struct qm_mc_command *mcc;
-	struct qm_mc_result *mcr;
-	struct qman_portal *p = get_affine_portal();
-
-	u8 res, myverb;
-
-	myverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :
-				 QM_MCR_VERB_QUERYWQ;
-	mcc = qm_mc_start(&p->p);
-	mcc->querywq.channel.id = cpu_to_be16(wq->channel.id);
-	qm_mc_commit(&p->p, myverb);
-	while (!(mcr = qm_mc_result(&p->p)))
-		cpu_relax();
-	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
-	res = mcr->result;
-	if (res == QM_MCR_RESULT_OK) {
-		int i, array_len;
-
-		wq->channel.id = be16_to_cpu(mcr->querywq.channel.id);
-		array_len = ARRAY_SIZE(mcr->querywq.wq_len);
-		for (i = 0; i < array_len; i++)
-			wq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);
-	}
-	if (res != QM_MCR_RESULT_OK) {
-		pr_err("QUERYWQ failed: %s\n", mcr_result_str(res));
-		return -EIO;
-	}
-	return 0;
-}
-
-int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
-		       struct qm_mcr_cgrtestwrite *result)
-{
-	struct qm_mc_command *mcc;
-	struct qm_mc_result *mcr;
-	struct qman_portal *p = get_affine_portal();
-
-	u8 res;
-
-	mcc = qm_mc_start(&p->p);
-	mcc->cgrtestwrite.cgid = cgr->cgrid;
-	mcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);
-	mcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;
-	qm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);
-	while (!(mcr = qm_mc_result(&p->p)))
-		cpu_relax();
-	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);
-	res = mcr->result;
-	if (res == QM_MCR_RESULT_OK)
-		*result = mcr->cgrtestwrite;
-	if (res != QM_MCR_RESULT_OK) {
-		pr_err("CGR TEST WRITE failed: %s\n", mcr_result_str(res));
-		return -EIO;
-	}
-	return 0;
-}
-
 int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
 {
 	struct qm_mc_command *mcc;
@@ -2116,32 +1782,6 @@ int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
 	return 0;
 }
 
-int qman_query_congestion(struct qm_mcr_querycongestion *congestion)
-{
-	struct qm_mc_result *mcr;
-	struct qman_portal *p = get_affine_portal();
-	u8 res;
-	unsigned int i;
-
-	qm_mc_start(&p->p);
-	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
-	while (!(mcr = qm_mc_result(&p->p)))
-		cpu_relax();
-	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
-			QM_MCC_VERB_QUERYCONGESTION);
-	res = mcr->result;
-	if (res == QM_MCR_RESULT_OK)
-		*congestion = mcr->querycongestion;
-	if (res != QM_MCR_RESULT_OK) {
-		pr_err("QUERY_CONGESTION failed: %s\n", mcr_result_str(res));
-		return -EIO;
-	}
-	for (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)
-		congestion->state.state[i] =
-			be32_to_cpu(congestion->state.state[i]);
-	return 0;
-}
-
 int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags)
 {
 	struct qman_portal *p = get_affine_portal();
@@ -2179,128 +1819,6 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags)
 	return ret;
 }
 
-int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
-			  u32 vdqcr)
-{
-	struct qman_portal *p;
-	int ret = -EBUSY;
-
-	if ((fq->state != qman_fq_state_parked) &&
-	    (fq->state != qman_fq_state_retired))
-		return -EINVAL;
-	if (vdqcr & QM_VDQCR_FQID_MASK)
-		return -EINVAL;
-	if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
-		return -EBUSY;
-	vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
-
-	p = get_affine_portal();
-
-	if (!p->vdqcr_owned) {
-		FQLOCK(fq);
-		if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
-			goto escape;
-		fq_set(fq, QMAN_FQ_STATE_VDQCR);
-		FQUNLOCK(fq);
-		p->vdqcr_owned = fq;
-		ret = 0;
-	}
-escape:
-	if (ret)
-		return ret;
-
-	/* VDQCR is set */
-	qm_dqrr_vdqcr_set(&p->p, vdqcr);
-	return 0;
-}
-
-static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)
-{
-	if (avail)
-		qm_eqcr_cce_prefetch(&p->p);
-	else
-		qm_eqcr_cce_update(&p->p);
-}
-
-int qman_eqcr_is_empty(void)
-{
-	struct qman_portal *p = get_affine_portal();
-	u8 avail;
-
-	update_eqcr_ci(p, 0);
-	avail = qm_eqcr_get_fill(&p->p);
-	return (avail == 0);
-}
-
-void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)
-{
-	if (affine) {
-		struct qman_portal *p = get_affine_portal();
-
-		p->cb_dc_ern = handler;
-	} else
-		cb_dc_ern = handler;
-}
-
-static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
-					struct qman_fq *fq,
-					const struct qm_fd *fd,
-					u32 flags)
-{
-	struct qm_eqcr_entry *eq;
-	u8 avail;
-
-	if (p->use_eqcr_ci_stashing) {
-		/*
-		 * The stashing case is easy, only update if we need to in
-		 * order to try and liberate ring entries.
-		 */
-		eq = qm_eqcr_start_stash(&p->p);
-	} else {
-		/*
-		 * The non-stashing case is harder, need to prefetch ahead of
-		 * time.
-		 */
-		avail = qm_eqcr_get_avail(&p->p);
-		if (avail < 2)
-			update_eqcr_ci(p, avail);
-		eq = qm_eqcr_start_no_stash(&p->p);
-	}
-
-	if (unlikely(!eq))
-		return NULL;
-
-	if (flags & QMAN_ENQUEUE_FLAG_DCA)
-		eq->dca = QM_EQCR_DCA_ENABLE |
-			((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?
-					QM_EQCR_DCA_PARK : 0) |
-			((flags >> 8) & QM_EQCR_DCA_IDXMASK);
-	eq->fqid = cpu_to_be32(fq->fqid);
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-	eq->tag = cpu_to_be32(fq->key);
-#else
-	eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
-#endif
-	eq->fd = *fd;
-	cpu_to_hw_fd(&eq->fd);
-	return eq;
-}
-
-int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
-{
-	struct qman_portal *p = get_affine_portal();
-	struct qm_eqcr_entry *eq;
-
-	eq = try_p_eq_start(p, fq, fd, flags);
-	if (!eq)
-		return -EBUSY;
-	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
-	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |
-		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
-	/* Factor the below out, it's used from qman_enqueue_orp() too */
-	return 0;
-}
-
 int qman_enqueue_multi(struct qman_fq *fq,
 		       const struct qm_fd *fd, u32 *flags,
 		int frames_to_send)
@@ -2442,37 +1960,6 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
 	return sent;
 }
 
-int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
-		     struct qman_fq *orp, u16 orp_seqnum)
-{
-	struct qman_portal *p  = get_affine_portal();
-	struct qm_eqcr_entry *eq;
-
-	eq = try_p_eq_start(p, fq, fd, flags);
-	if (!eq)
-		return -EBUSY;
-	/* Process ORP-specifics here */
-	if (flags & QMAN_ENQUEUE_FLAG_NLIS)
-		orp_seqnum |= QM_EQCR_SEQNUM_NLIS;
-	else {
-		orp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;
-		if (flags & QMAN_ENQUEUE_FLAG_NESN)
-			orp_seqnum |= QM_EQCR_SEQNUM_NESN;
-		else
-			/* No need to check 4 QMAN_ENQUEUE_FLAG_HOLE */
-			orp_seqnum &= ~QM_EQCR_SEQNUM_NESN;
-	}
-	eq->seqnum = cpu_to_be16(orp_seqnum);
-	eq->orp = cpu_to_be32(orp->fqid);
-	/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
-	qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |
-		((flags & (QMAN_ENQUEUE_FLAG_HOLE | QMAN_ENQUEUE_FLAG_NESN)) ?
-				0 : QM_EQCR_VERB_CMD_ENQUEUE) |
-		(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
-
-	return 0;
-}
-
 int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts)
 {
@@ -2581,52 +2068,6 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
 	return ret;
 }
 
-int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
-			   struct qm_mcc_initcgr *opts)
-{
-	struct qm_mcc_initcgr local_opts;
-	struct qm_mcr_querycgr cgr_state;
-	int ret;
-
-	if ((qman_ip_rev & 0xFF00) < QMAN_REV30) {
-		pr_warn("QMan version doesn't support CSCN => DCP portal\n");
-		return -EINVAL;
-	}
-	/* We have to check that the provided CGRID is within the limits of the
-	 * data-structures, for obvious reasons. However we'll let h/w take
-	 * care of determining whether it's within the limits of what exists on
-	 * the SoC.
-	 */
-	if (cgr->cgrid >= __CGR_NUM)
-		return -EINVAL;
-
-	ret = qman_query_cgr(cgr, &cgr_state);
-	if (ret)
-		return ret;
-
-	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
-	if (opts)
-		local_opts = *opts;
-
-	if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
-		local_opts.cgr.cscn_targ_upd_ctrl =
-				QM_CGR_TARG_UDP_CTRL_WRITE_BIT |
-				QM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;
-	else
-		local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
-					TARG_DCP_MASK(dcp_portal);
-	local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
-
-	/* send init if flags indicate so */
-	if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
-		ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,
-				      &local_opts);
-	else
-		ret = qman_modify_cgr(cgr, 0, &local_opts);
-
-	return ret;
-}
-
 int qman_delete_cgr(struct qman_cgr *cgr)
 {
 	struct qm_mcr_querycgr cgr_state;
@@ -2674,222 +2115,3 @@ int qman_delete_cgr(struct qman_cgr *cgr)
 put_portal:
 	return ret;
 }
-
-int qman_shutdown_fq(u32 fqid)
-{
-	struct qman_portal *p;
-	struct qm_portal *low_p;
-	struct qm_mc_command *mcc;
-	struct qm_mc_result *mcr;
-	u8 state;
-	int orl_empty, fq_empty, drain = 0;
-	u32 result;
-	u32 channel, wq;
-	u16 dest_wq;
-
-	p = get_affine_portal();
-	low_p = &p->p;
-
-	/* Determine the state of the FQID */
-	mcc = qm_mc_start(low_p);
-	mcc->queryfq_np.fqid = cpu_to_be32(fqid);
-	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);
-	while (!(mcr = qm_mc_result(low_p)))
-		cpu_relax();
-	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
-	state = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;
-	if (state == QM_MCR_NP_STATE_OOS)
-		return 0; /* Already OOS, no need to do anymore checks */
-
-	/* Query which channel the FQ is using */
-	mcc = qm_mc_start(low_p);
-	mcc->queryfq.fqid = cpu_to_be32(fqid);
-	qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);
-	while (!(mcr = qm_mc_result(low_p)))
-		cpu_relax();
-	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
-
-	/* Need to store these since the MCR gets reused */
-	dest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);
-	channel = dest_wq & 0x7;
-	wq = dest_wq >> 3;
-
-	switch (state) {
-	case QM_MCR_NP_STATE_TEN_SCHED:
-	case QM_MCR_NP_STATE_TRU_SCHED:
-	case QM_MCR_NP_STATE_ACTIVE:
-	case QM_MCR_NP_STATE_PARKED:
-		orl_empty = 0;
-		mcc = qm_mc_start(low_p);
-		mcc->alterfq.fqid = cpu_to_be32(fqid);
-		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);
-		while (!(mcr = qm_mc_result(low_p)))
-			cpu_relax();
-		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
-			   QM_MCR_VERB_ALTER_RETIRE);
-		result = mcr->result; /* Make a copy as we reuse MCR below */
-
-		if (result == QM_MCR_RESULT_PENDING) {
-			/* Need to wait for the FQRN in the message ring, which
-			 * will only occur once the FQ has been drained.  In
-			 * order for the FQ to drain the portal needs to be set
-			 * to dequeue from the channel the FQ is scheduled on
-			 */
-			const struct qm_mr_entry *msg;
-			const struct qm_dqrr_entry *dqrr = NULL;
-			int found_fqrn = 0;
-			__maybe_unused u16 dequeue_wq = 0;
-
-			/* Flag that we need to drain FQ */
-			drain = 1;
-
-			if (channel >= qm_channel_pool1 &&
-			    channel < (u16)(qm_channel_pool1 + 15)) {
-				/* Pool channel, enable the bit in the portal */
-				dequeue_wq = (channel -
-					      qm_channel_pool1 + 1) << 4 | wq;
-			} else if (channel < qm_channel_pool1) {
-				/* Dedicated channel */
-				dequeue_wq = wq;
-			} else {
-				pr_info("Cannot recover FQ 0x%x,"
-					" it is scheduled on channel 0x%x",
-					fqid, channel);
-				return -EBUSY;
-			}
-			/* Set the sdqcr to drain this channel */
-			if (channel < qm_channel_pool1)
-				qm_dqrr_sdqcr_set(low_p,
-						  QM_SDQCR_TYPE_ACTIVE |
-					  QM_SDQCR_CHANNELS_DEDICATED);
-			else
-				qm_dqrr_sdqcr_set(low_p,
-						  QM_SDQCR_TYPE_ACTIVE |
-						  QM_SDQCR_CHANNELS_POOL_CONV
-						  (channel));
-			while (!found_fqrn) {
-				/* Keep draining DQRR while checking the MR*/
-				qm_dqrr_pvb_update(low_p);
-				dqrr = qm_dqrr_current(low_p);
-				while (dqrr) {
-					qm_dqrr_cdc_consume_1ptr(
-						low_p, dqrr, 0);
-					qm_dqrr_pvb_update(low_p);
-					qm_dqrr_next(low_p);
-					dqrr = qm_dqrr_current(low_p);
-				}
-				/* Process message ring too */
-				qm_mr_pvb_update(low_p);
-				msg = qm_mr_current(low_p);
-				while (msg) {
-					if ((msg->ern.verb &
-					     QM_MR_VERB_TYPE_MASK)
-					    == QM_MR_VERB_FQRN)
-						found_fqrn = 1;
-					qm_mr_next(low_p);
-					qm_mr_cci_consume_to_current(low_p);
-					qm_mr_pvb_update(low_p);
-					msg = qm_mr_current(low_p);
-				}
-				cpu_relax();
-			}
-		}
-		if (result != QM_MCR_RESULT_OK &&
-		    result !=  QM_MCR_RESULT_PENDING) {
-			/* error */
-			pr_err("qman_retire_fq failed on FQ 0x%x,"
-			       " result=0x%x\n", fqid, result);
-			return -1;
-		}
-		if (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {
-			/* ORL had no entries, no need to wait until the
-			 * ERNs come in.
-			 */
-			orl_empty = 1;
-		}
-		/* Retirement succeeded, check to see if FQ needs
-		 * to be drained.
-		 */
-		if (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) {
-			/* FQ is Not Empty, drain using volatile DQ commands */
-			fq_empty = 0;
-			do {
-				const struct qm_dqrr_entry *dqrr = NULL;
-				u32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);
-
-				qm_dqrr_vdqcr_set(low_p, vdqcr);
-
-				/* Wait for a dequeue to occur */
-				while (dqrr == NULL) {
-					qm_dqrr_pvb_update(low_p);
-					dqrr = qm_dqrr_current(low_p);
-					if (!dqrr)
-						cpu_relax();
-				}
-				/* Process the dequeues, making sure to
-				 * empty the ring completely.
-				 */
-				while (dqrr) {
-					if (dqrr->fqid == fqid &&
-					    dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)
-						fq_empty = 1;
-					qm_dqrr_cdc_consume_1ptr(low_p,
-								 dqrr, 0);
-					qm_dqrr_pvb_update(low_p);
-					qm_dqrr_next(low_p);
-					dqrr = qm_dqrr_current(low_p);
-				}
-			} while (fq_empty == 0);
-		}
-		qm_dqrr_sdqcr_set(low_p, 0);
-
-		/* Wait for the ORL to have been completely drained */
-		while (orl_empty == 0) {
-			const struct qm_mr_entry *msg;
-
-			qm_mr_pvb_update(low_p);
-			msg = qm_mr_current(low_p);
-			while (msg) {
-				if ((msg->ern.verb & QM_MR_VERB_TYPE_MASK) ==
-				    QM_MR_VERB_FQRL)
-					orl_empty = 1;
-				qm_mr_next(low_p);
-				qm_mr_cci_consume_to_current(low_p);
-				qm_mr_pvb_update(low_p);
-				msg = qm_mr_current(low_p);
-			}
-			cpu_relax();
-		}
-		mcc = qm_mc_start(low_p);
-		mcc->alterfq.fqid = cpu_to_be32(fqid);
-		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
-		while (!(mcr = qm_mc_result(low_p)))
-			cpu_relax();
-		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
-			   QM_MCR_VERB_ALTER_OOS);
-		if (mcr->result != QM_MCR_RESULT_OK) {
-			pr_err(
-			"OOS after drain Failed on FQID 0x%x, result 0x%x\n",
-			       fqid, mcr->result);
-			return -1;
-		}
-		return 0;
-
-	case QM_MCR_NP_STATE_RETIRED:
-		/* Send OOS Command */
-		mcc = qm_mc_start(low_p);
-		mcc->alterfq.fqid = cpu_to_be32(fqid);
-		qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
-		while (!(mcr = qm_mc_result(low_p)))
-			cpu_relax();
-		DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
-			   QM_MCR_VERB_ALTER_OOS);
-		if (mcr->result) {
-			pr_err("OOS Failed on FQID 0x%x\n", fqid);
-			return -1;
-		}
-		return 0;
-
-	}
-	return -1;
-}
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 8254729e66..25306804a5 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -165,15 +165,6 @@ struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
 void qm_put_unused_portal(struct qm_portal_config *pcfg);
 void qm_set_liodns(struct qm_portal_config *pcfg);
 
-/* This CGR feature is supported by h/w and required by unit-tests and the
- * debugfs hooks, so is implemented in the driver. However it allows an explicit
- * corruption of h/w fields by s/w that are usually incorruptible (because the
- * counters are usually maintained entirely within h/w). As such, we declare
- * this API internally.
- */
-int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
-		       struct qm_mcr_cgrtestwrite *result);
-
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
 /* If the fq object pointer is greater than the size of context_b field,
  * than a lookup table is required.
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 3098e23093..ca1e27aeaf 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -359,11 +359,6 @@ rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
 	return 0;
 }
 
-int rte_dpaa_portal_fq_close(struct qman_fq *fq)
-{
-	return fsl_qman_fq_portal_destroy(fq->qp);
-}
-
 void
 dpaa_portal_finish(void *arg)
 {
@@ -488,21 +483,6 @@ rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
 	driver->dpaa_bus = &rte_dpaa_bus;
 }
 
-/* un-register a dpaa bus based dpaa driver */
-void
-rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
-{
-	struct rte_dpaa_bus *dpaa_bus;
-
-	BUS_INIT_FUNC_TRACE();
-
-	dpaa_bus = driver->dpaa_bus;
-
-	TAILQ_REMOVE(&dpaa_bus->driver_list, driver, next);
-	/* Update Bus references */
-	driver->dpaa_bus = NULL;
-}
-
 static int
 rte_dpaa_device_match(struct rte_dpaa_driver *drv,
 		      struct rte_dpaa_device *dev)
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
index 82da2fcfe0..a06d29eb2d 100644
--- a/drivers/bus/dpaa/include/fsl_bman.h
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -252,8 +252,6 @@ static inline int bman_reserve_bpid(u32 bpid)
 
 void bman_seed_bpid_range(u32 bpid, unsigned int count);
 
-int bman_shutdown_pool(u32 bpid);
-
 /**
  * bman_new_pool - Allocates a Buffer Pool object
  * @params: parameters specifying the buffer pool ID and behaviour
@@ -310,12 +308,6 @@ __rte_internal
 int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
-/**
- * bman_query_pools - Query all buffer pool states
- * @state: storage for the queried availability and depletion states
- */
-int bman_query_pools(struct bm_pool_state *state);
-
 /**
  * bman_query_free_buffers - Query how many free buffers are in buffer pool
  * @pool: the buffer pool object to query
@@ -325,13 +317,6 @@ int bman_query_pools(struct bm_pool_state *state);
 __rte_internal
 u32 bman_query_free_buffers(struct bman_pool *pool);
 
-/**
- * bman_update_pool_thresholds - Change the buffer pool's depletion thresholds
- * @pool: the buffer pool object to which the thresholds will be set
- * @thresholds: the new thresholds
- */
-int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds);
-
 /**
  * bm_pool_set_hw_threshold - Change the buffer pool's thresholds
  * @pool: Pool id
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index a3cf77f0e3..71f5a2f8cf 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -64,12 +64,6 @@ void fman_if_stats_reset(struct fman_if *p);
 __rte_internal
 void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
 
-/* Set ignore pause option for a specific interface */
-void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
-
-/* Set max frame length */
-void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
-
 /* Enable/disable Rx promiscuous mode on specified interface */
 __rte_internal
 void fman_if_promiscuous_enable(struct fman_if *p);
@@ -114,18 +108,11 @@ int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
 __rte_internal
 void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
 
-/* Get IC transfer params */
-int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
-
 /* Set IC transfer params */
 __rte_internal
 int fman_if_set_ic_params(struct fman_if *fm_if,
 			  const struct fman_if_ic_params *icp);
 
-/* Get interface fd->offset value */
-__rte_internal
-int fman_if_get_fdoff(struct fman_if *fm_if);
-
 /* Set interface fd->offset value */
 __rte_internal
 void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
@@ -138,20 +125,10 @@ int fman_if_get_sg_enable(struct fman_if *fm_if);
 __rte_internal
 void fman_if_set_sg(struct fman_if *fm_if, int enable);
 
-/* Get interface Max Frame length (MTU) */
-uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
-
 /* Set interface  Max Frame length (MTU) */
 __rte_internal
 void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
 
-/* Set interface next invoked action for dequeue operation */
-void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
-
-/* discard error packets on rx */
-__rte_internal
-void fman_if_discard_rx_errors(struct fman_if *fm_if);
-
 __rte_internal
 void fman_if_receive_rx_errors(struct fman_if *fm_if,
 	unsigned int err_eq);
@@ -162,11 +139,6 @@ void fman_if_set_mcast_filter_table(struct fman_if *p);
 __rte_internal
 void fman_if_reset_mcast_filter_table(struct fman_if *p);
 
-int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
-
-int fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth);
-
-
 /* Enable/disable Rx on all interfaces */
 static inline void fman_if_enable_all_rx(void)
 {
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 10212f0fd5..b24aa76409 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1379,16 +1379,6 @@ int qman_irqsource_remove(u32 bits);
 __rte_internal
 int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
 
-/**
- * qman_affine_channel - return the channel ID of an portal
- * @cpu: the cpu whose affine portal is the subject of the query
- *
- * If @cpu is -1, the affine portal for the current CPU will be used. It is a
- * bug to call this function for any value of @cpu (other than -1) that is not a
- * member of the cpu mask.
- */
-u16 qman_affine_channel(int cpu);
-
 __rte_internal
 unsigned int qman_portal_poll_rx(unsigned int poll_limit,
 				 void **bufs, struct qman_portal *q);
@@ -1428,55 +1418,6 @@ __rte_internal
 void qman_dqrr_consume(struct qman_fq *fq,
 		       struct qm_dqrr_entry *dq);
 
-/**
- * qman_poll_dqrr - process DQRR (fast-path) entries
- * @limit: the maximum number of DQRR entries to process
- *
- * Use of this function requires that DQRR processing not be interrupt-driven.
- * Ie. the value returned by qman_irqsource_get() should not include
- * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,
- * this function will return -EINVAL, otherwise the return value is >=0 and
- * represents the number of DQRR entries processed.
- */
-__rte_internal
-int qman_poll_dqrr(unsigned int limit);
-
-/**
- * qman_poll
- *
- * Dispatcher logic on a cpu can use this to trigger any maintenance of the
- * affine portal. There are two classes of portal processing in question;
- * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking
- * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR
- * thresholds, congestion state changes, etc). This function does whatever
- * processing is not triggered by interrupts.
- *
- * Note, if DQRR and some slow-path processing are poll-driven (rather than
- * interrupt-driven) then this function uses a heuristic to determine how often
- * to run slow-path processing - as slow-path processing introduces at least a
- * minimum latency each time it is run, whereas fast-path (DQRR) processing is
- * close to zero-cost if there is no work to be done.
- */
-void qman_poll(void);
-
-/**
- * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal
- *
- * Disables DQRR processing of the portal. This is reference-counted, so
- * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
- * truly re-enable dequeuing.
- */
-void qman_stop_dequeues(void);
-
-/**
- * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal
- *
- * Enables DQRR processing of the portal. This is reference-counted, so
- * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
- * truly re-enable dequeuing.
- */
-void qman_start_dequeues(void);
-
 /**
  * qman_static_dequeue_add - Add pool channels to the portal SDQCR
  * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
@@ -1488,39 +1429,6 @@ void qman_start_dequeues(void);
 __rte_internal
 void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
 
-/**
- * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
- * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
- *
- * Removes a set of pool channels from the portal's static dequeue command
- * register (SDQCR). The requested pools are limited to those the portal has
- * dequeue access to.
- */
-void qman_static_dequeue_del(u32 pools, struct qman_portal *qp);
-
-/**
- * qman_static_dequeue_get - return the portal's current SDQCR
- *
- * Returns the portal's current static dequeue command register (SDQCR). The
- * entire register is returned, so if only the currently-enabled pool channels
- * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
- */
-u32 qman_static_dequeue_get(struct qman_portal *qp);
-
-/**
- * qman_dca - Perform a Discrete Consumption Acknowledgment
- * @dq: the DQRR entry to be consumed
- * @park_request: indicates whether the held-active @fq should be parked
- *
- * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
- * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
- * does not take a 'portal' argument but implies the core affine portal from the
- * cpu that is currently executing the function. For reasons of locking, this
- * function must be called from the same CPU as that which processed the DQRR
- * entry in the first place.
- */
-void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
-
 /**
  * qman_dca_index - Perform a Discrete Consumption Acknowledgment
  * @index: the DQRR index to be consumed
@@ -1536,36 +1444,6 @@ void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
 __rte_internal
 void qman_dca_index(u8 index, int park_request);
 
-/**
- * qman_eqcr_is_empty - Determine if portal's EQCR is empty
- *
- * For use in situations where a cpu-affine caller needs to determine when all
- * enqueues for the local portal have been processed by Qman but can't use the
- * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().
- * The function forces tracking of EQCR consumption (which normally doesn't
- * happen until enqueue processing needs to find space to put new enqueue
- * commands), and returns zero if the ring still has unprocessed entries,
- * non-zero if it is empty.
- */
-int qman_eqcr_is_empty(void);
-
-/**
- * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications
- * @handler: callback for processing DCP ERNs
- * @affine: whether this handler is specific to the locally affine portal
- *
- * If a hardware block's interface to Qman (ie. its direct-connect portal, or
- * DCP) is configured not to receive enqueue rejections, then any enqueues
- * through that DCP that are rejected will be sent to a given software portal.
- * If @affine is non-zero, then this handler will only be used for DCP ERNs
- * received on the portal affine to the current CPU. If multiple CPUs share a
- * portal and they all call this function, they will be setting the handler for
- * the same portal! If @affine is zero, then this handler will be global to all
- * portals handled by this instance of the driver. Only those portals that do
- * not have their own affine handler will use the global handler.
- */
-void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
-
 	/* FQ management */
 	/* ------------- */
 /**
@@ -1594,18 +1472,6 @@ void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
 __rte_internal
 int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
 
-/**
- * qman_destroy_fq - Deallocates a FQ
- * @fq: the frame queue object to release
- * @flags: bit-mask of QMAN_FQ_FREE_*** options
- *
- * The memory for this frame queue object ('fq' provided in qman_create_fq()) is
- * not deallocated but the caller regains ownership, to do with as desired. The
- * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag
- * is specified, in which case it may also be in the 'parked' state.
- */
-void qman_destroy_fq(struct qman_fq *fq, u32 flags);
-
 /**
  * qman_fq_fqid - Queries the frame queue ID of a FQ object
  * @fq: the frame queue object to query
@@ -1613,19 +1479,6 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags);
 __rte_internal
 u32 qman_fq_fqid(struct qman_fq *fq);
 
-/**
- * qman_fq_state - Queries the state of a FQ object
- * @fq: the frame queue object to query
- * @state: pointer to state enum to return the FQ scheduling state
- * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask
- *
- * Queries the state of the FQ object, without performing any h/w commands.
- * This captures the state, as seen by the driver, at the time the function
- * executes.
- */
-__rte_internal
-void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
-
 /**
  * qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled"
  * @fq: the frame queue object to modify, must be 'parked' or new.
@@ -1663,15 +1516,6 @@ void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
 __rte_internal
 int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
 
-/**
- * qman_schedule_fq - Schedules a FQ
- * @fq: the frame queue object to schedule, must be 'parked'
- *
- * Schedules the frame queue, which must be Parked, which takes it to
- * Tentatively-Scheduled or Truly-Scheduled depending on its fill-level.
- */
-int qman_schedule_fq(struct qman_fq *fq);
-
 /**
  * qman_retire_fq - Retires a FQ
  * @fq: the frame queue object to retire
@@ -1703,32 +1547,6 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags);
 __rte_internal
 int qman_oos_fq(struct qman_fq *fq);
 
-/**
- * qman_fq_flow_control - Set the XON/XOFF state of a FQ
- * @fq: the frame queue object to be set to XON/XOFF state, must not be 'oos',
- * or 'retired' or 'parked' state
- * @xon: boolean to set fq in XON or XOFF state
- *
- * The frame should be in Tentatively Scheduled state or Truly Schedule sate,
- * otherwise the IFSI interrupt will be asserted.
- */
-int qman_fq_flow_control(struct qman_fq *fq, int xon);
-
-/**
- * qman_query_fq - Queries FQD fields (via h/w query command)
- * @fq: the frame queue object to be queried
- * @fqd: storage for the queried FQD fields
- */
-int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);
-
-/**
- * qman_query_fq_has_pkts - Queries non-programmable FQD fields and returns '1'
- * if packets are in the frame queue. If there are no packets on frame
- * queue '0' is returned.
- * @fq: the frame queue object to be queried
- */
-int qman_query_fq_has_pkts(struct qman_fq *fq);
-
 /**
  * qman_query_fq_np - Queries non-programmable FQD fields
  * @fq: the frame queue object to be queried
@@ -1745,73 +1563,6 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
 __rte_internal
 int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
 
-/**
- * qman_query_wq - Queries work queue lengths
- * @query_dedicated: If non-zero, query length of WQs in the channel dedicated
- *		to this software portal. Otherwise, query length of WQs in a
- *		channel  specified in wq.
- * @wq: storage for the queried WQs lengths. Also specified the channel to
- *	to query if query_dedicated is zero.
- */
-int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
-
-/**
- * qman_volatile_dequeue - Issue a volatile dequeue command
- * @fq: the frame queue object to dequeue from
- * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options
- * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()
- *
- * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.
- * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and
- * the VDQCR is already in use, otherwise returns non-zero for failure. If
- * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once
- * the VDQCR command has finished executing (ie. once the callback for the last
- * DQRR entry resulting from the VDQCR command has been called). If not using
- * the FINISH flag, completion can be determined either by detecting the
- * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits
- * in the "stat" field of the "struct qm_dqrr_entry" passed to the FQ's dequeue
- * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
- * "flags" retrieved from qman_fq_state().
- */
-__rte_internal
-int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
-
-/**
- * qman_enqueue - Enqueue a frame to a frame queue
- * @fq: the frame queue object to enqueue to
- * @fd: a descriptor of the frame to be enqueued
- * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
- *
- * Fills an entry in the EQCR of portal @qm to enqueue the frame described by
- * @fd. The descriptor details are copied from @fd to the EQCR entry, the 'pid'
- * field is ignored. The return value is non-zero on error, such as ring full
- * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR
- * specified), etc. If the ring is full and FLAG_WAIT is specified, this
- * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal
- * interrupt will assert when Qman consumes the EQCR entry (subject to "status
- * disable", "enable", and "inhibit" registers). If FLAG_DCA is set, Qman will
- * perform an implied "discrete consumption acknowledgment" on the dequeue
- * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)
- * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,
- * this implicit DCA can delay the release of a "held active" frame queue
- * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing
- * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is
- * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption
- * acknowledgment should "park request" the "held active" frame queue. Ie.
- * when the portal eventually releases that frame queue, it will be left in the
- * Parked state rather than Tentatively Scheduled or Truly Scheduled. If the
- * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag
- * is requested, and the FQ is a member of a congestion group, then this
- * function returns -EAGAIN if the congestion group is currently congested.
- * Note, this does not eliminate ERNs, as the async interface means we can be
- * sending enqueue commands to an un-congested FQ that becomes congested before
- * the enqueue commands are processed, but it does minimise needless thrashing
- * of an already busy hardware resource by throttling many of the to-be-dropped
- * enqueues "at the source".
- */
-__rte_internal
-int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
-
 __rte_internal
 int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 		       int frames_to_send);
@@ -1846,45 +1597,6 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
 
 typedef int (*qman_cb_precommit) (void *arg);
 
-/**
- * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP
- * @fq: the frame queue object to enqueue to
- * @fd: a descriptor of the frame to be enqueued
- * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
- * @orp: the frame queue object used as an order restoration point.
- * @orp_seqnum: the sequence number of this frame in the order restoration path
- *
- * Similar to qman_enqueue(), but with the addition of an Order Restoration
- * Point (@orp) and corresponding sequence number (@orp_seqnum) for this
- * enqueue operation to employ order restoration. Each frame queue object acts
- * as an Order Definition Point (ODP) by providing each frame dequeued from it
- * with an incrementing sequence number, this value is generally ignored unless
- * that sequence of dequeued frames will need order restoration later. Each
- * frame queue object also encapsulates an Order Restoration Point (ORP), which
- * is a re-assembly context for re-ordering frames relative to their sequence
- * numbers as they are enqueued. The ORP does not have to be within the frame
- * queue that receives the enqueued frame, in fact it is usually the frame
- * queue from which the frames were originally dequeued. For the purposes of
- * order restoration, multiple frames (or "fragments") can be enqueued for a
- * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all
- * enqueues except the final fragment of a given sequence number. Ordering
- * between sequence numbers is guaranteed, even if fragments of different
- * sequence numbers are interlaced with one another. Fragments of the same
- * sequence number will retain the order in which they are enqueued. If no
- * enqueue is to performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given
- * sequence number is to be "skipped" by the ORP logic (eg. if a frame has been
- * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given
- * sequence number should become the ORP's "Next Expected Sequence Number".
- *
- * Side note: a frame queue object can be used purely as an ORP, without
- * carrying any frames at all. Care should be taken not to deallocate a frame
- * queue object that is being actively used as an ORP, as a future allocation
- * of the frame queue object may start using the internal ORP before the
- * previous use has finished.
- */
-int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
-		     struct qman_fq *orp, u16 orp_seqnum);
-
 /**
  * qman_alloc_fqid_range - Allocate a contiguous range of FQIDs
  * @result: is set by the API to the base FQID of the allocated range
@@ -1922,8 +1634,6 @@ static inline void qman_release_fqid(u32 fqid)
 
 void qman_seed_fqid_range(u32 fqid, unsigned int count);
 
-int qman_shutdown_fq(u32 fqid);
-
 /**
  * qman_reserve_fqid_range - Reserve the specified range of frame queue IDs
  * @fqid: the base FQID of the range to deallocate
@@ -2001,17 +1711,6 @@ __rte_internal
 int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
-/**
- * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal
- * @cgr: the 'cgr' object, with fields filled in
- * @flags: QMAN_CGR_FLAG_* values
- * @dcp_portal: the DCP portal to which the cgr object is registered.
- * @opts: optional state of CGR settings
- *
- */
-int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
-			   struct qm_mcc_initcgr *opts);
-
 /**
  * qman_delete_cgr - Deregisters a congestion group object
  * @cgr: the 'cgr' object to deregister
@@ -2048,12 +1747,6 @@ int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
  */
 int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);
 
-/**
- * qman_query_congestion - Queries the state of all congestion groups
- * @congestion: storage for the queried state of all congestion groups
- */
-int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
-
 /**
  * qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs
  * @result: is set by the API to the base CGR ID of the allocated range
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index dcf35e4adb..3a5df9bf7e 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -51,16 +51,9 @@ struct dpaa_raw_portal {
 	uint64_t cena;
 };
 
-int qman_allocate_raw_portal(struct dpaa_raw_portal *portal);
-int qman_free_raw_portal(struct dpaa_raw_portal *portal);
-
-int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
-int bman_free_raw_portal(struct dpaa_raw_portal *portal);
-
 /* Obtain thread-local UIO file-descriptors */
 __rte_internal
 int qman_thread_fd(void);
-int bman_thread_fd(void);
 
 /* Post-process interrupts. NB, the kernel IRQ handler disables the interrupt
  * line before notifying us, and this post-processing re-enables it once
@@ -70,12 +63,8 @@ int bman_thread_fd(void);
 __rte_internal
 void qman_thread_irq(void);
 
-__rte_internal
-void bman_thread_irq(void);
 __rte_internal
 void qman_fq_portal_thread_irq(struct qman_portal *qp);
-__rte_internal
-void qman_clear_irq(void);
 
 /* Global setup */
 int qman_global_init(void);
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
index d7d1befd24..815b3ba087 100644
--- a/drivers/bus/dpaa/include/netcfg.h
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -49,12 +49,6 @@ struct netcfg_interface {
 __rte_internal
 struct netcfg_info *netcfg_acquire(void);
 
-/* cfg_ptr: configuration information pointer.
- * Frees the resources allocated by the configuration layer.
- */
-__rte_internal
-void netcfg_release(struct netcfg_info *cfg_ptr);
-
 #ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
 /* cfg_ptr: configuration information pointer.
  * This function dumps configuration data to stdout.
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 48d5cf4625..40d82412df 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -214,16 +214,6 @@ rte_dpaa_mem_vtop(void *vaddr)
 __rte_internal
 void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
 
-/**
- * Unregister a DPAA driver.
- *
- * @param driver
- *	A pointer to a rte_dpaa_driver structure describing the driver
- *	to be unregistered.
- */
-__rte_internal
-void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
-
 /**
  * Initialize a DPAA portal
  *
@@ -239,9 +229,6 @@ int rte_dpaa_portal_init(void *arg);
 __rte_internal
 int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
 
-__rte_internal
-int rte_dpaa_portal_fq_close(struct qman_fq *fq);
-
 /**
  * Cleanup a DPAA Portal
  */
diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map
index fe4f9ac5aa..98f1e00582 100644
--- a/drivers/bus/dpaa/version.map
+++ b/drivers/bus/dpaa/version.map
@@ -7,7 +7,6 @@ INTERNAL {
 	bman_new_pool;
 	bman_query_free_buffers;
 	bman_release;
-	bman_thread_irq;
 	dpaa_get_ioctl_version_number;
 	dpaa_get_eth_port_cfg;
 	dpaa_get_qm_channel_caam;
@@ -25,11 +24,9 @@ INTERNAL {
 	fman_if_add_mac_addr;
 	fman_if_clear_mac_addr;
 	fman_if_disable_rx;
-	fman_if_discard_rx_errors;
 	fman_if_enable_rx;
 	fman_if_get_fc_quanta;
 	fman_if_get_fc_threshold;
-	fman_if_get_fdoff;
 	fman_if_get_sg_enable;
 	fman_if_loopback_disable;
 	fman_if_loopback_enable;
@@ -52,19 +49,16 @@ INTERNAL {
 	fman_if_receive_rx_errors;
 	fsl_qman_fq_portal_create;
 	netcfg_acquire;
-	netcfg_release;
 	per_lcore_dpaa_io;
 	qman_alloc_cgrid_range;
 	qman_alloc_fqid_range;
 	qman_alloc_pool_range;
-	qman_clear_irq;
 	qman_create_cgr;
 	qman_create_fq;
 	qman_dca_index;
 	qman_delete_cgr;
 	qman_dequeue;
 	qman_dqrr_consume;
-	qman_enqueue;
 	qman_enqueue_multi;
 	qman_enqueue_multi_fq;
 	qman_ern_poll_free;
@@ -79,7 +73,6 @@ INTERNAL {
 	qman_irqsource_remove;
 	qman_modify_cgr;
 	qman_oos_fq;
-	qman_poll_dqrr;
 	qman_portal_dequeue;
 	qman_portal_poll_rx;
 	qman_query_fq_frm_cnt;
@@ -92,10 +85,7 @@ INTERNAL {
 	qman_static_dequeue_add;
 	qman_thread_fd;
 	qman_thread_irq;
-	qman_volatile_dequeue;
 	rte_dpaa_driver_register;
-	rte_dpaa_driver_unregister;
-	rte_dpaa_portal_fq_close;
 	rte_dpaa_portal_fq_init;
 	rte_dpaa_portal_init;
 
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 58435589b2..51749764e7 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -521,25 +521,6 @@ rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 	driver->fslmc_bus = &rte_fslmc_bus;
 }
 
-/*un-register a fslmc bus based dpaa2 driver */
-void
-rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
-{
-	struct rte_fslmc_bus *fslmc_bus;
-
-	fslmc_bus = driver->fslmc_bus;
-
-	/* Cleanup the PA->VA Translation table; From whereever this function
-	 * is called from.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_depopulate();
-
-	TAILQ_REMOVE(&fslmc_bus->driver_list, driver, next);
-	/* Update Bus references */
-	driver->fslmc_bus = NULL;
-}
-
 /*
  * All device has iova as va
  */
diff --git a/drivers/bus/fslmc/mc/dpbp.c b/drivers/bus/fslmc/mc/dpbp.c
index d9103409cf..f3af33b658 100644
--- a/drivers/bus/fslmc/mc/dpbp.c
+++ b/drivers/bus/fslmc/mc/dpbp.c
@@ -77,78 +77,6 @@ int dpbp_close(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpbp_create() - Create the DPBP object.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg:	Configuration structure
- * @obj_id:	Returned object id; use in subsequent API calls
- *
- * Create the DPBP object, allocate required resources and
- * perform required initialization.
- *
- * This function accepts an authentication token of a parent
- * container that this object should be assigned to and returns
- * an object id. This object_id will be used in all subsequent calls to
- * this specific object.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpbp_create(struct fsl_mc_io *mc_io,
-		uint16_t dprc_token,
-		uint32_t cmd_flags,
-		const struct dpbp_cfg *cfg,
-		uint32_t *obj_id)
-{
-	struct mc_command cmd = { 0 };
-	int err;
-
-	(void)(cfg); /* unused */
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPBP_CMDID_CREATE,
-					  cmd_flags, dprc_token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*obj_id = mc_cmd_read_object_id(&cmd);
-
-	return 0;
-}
-
-/**
- * dpbp_destroy() - Destroy the DPBP object and release all its resources.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @obj_id:	ID of DPBP object
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpbp_destroy(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 uint32_t obj_id)
-{
-	struct dpbp_cmd_destroy *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPBP_CMDID_DESTROY,
-					  cmd_flags, dprc_token);
-
-	cmd_params = (struct dpbp_cmd_destroy *)cmd.params;
-	cmd_params->object_id = cpu_to_le32(obj_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpbp_enable() - Enable the DPBP.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -193,40 +121,6 @@ int dpbp_disable(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpbp_is_enabled() - Check if the DPBP is enabled.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPBP object
- * @en:		Returns '1' if object is enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpbp_is_enabled(struct fsl_mc_io *mc_io,
-		    uint32_t cmd_flags,
-		    uint16_t token,
-		    int *en)
-{
-	struct dpbp_rsp_is_enabled *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPBP_CMDID_IS_ENABLED, cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpbp_rsp_is_enabled *)cmd.params;
-	*en = rsp_params->enabled & DPBP_ENABLE;
-
-	return 0;
-}
-
 /**
  * dpbp_reset() - Reset the DPBP, returns the object to initial state.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -284,41 +178,6 @@ int dpbp_get_attributes(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
-/**
- * dpbp_get_api_version - Get Data Path Buffer Pool API version
- * @mc_io:	Pointer to Mc portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver:	Major version of Buffer Pool API
- * @minor_ver:	Minor version of Buffer Pool API
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpbp_get_api_version(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t *major_ver,
-			 uint16_t *minor_ver)
-{
-	struct dpbp_rsp_get_api_version *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPBP_CMDID_GET_API_VERSION,
-					  cmd_flags, 0);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpbp_rsp_get_api_version *)cmd.params;
-	*major_ver = le16_to_cpu(rsp_params->major);
-	*minor_ver = le16_to_cpu(rsp_params->minor);
-
-	return 0;
-}
-
 /**
  * dpbp_get_num_free_bufs() - Get number of free buffers in the buffer pool
  * @mc_io:  Pointer to MC portal's I/O object
diff --git a/drivers/bus/fslmc/mc/dpci.c b/drivers/bus/fslmc/mc/dpci.c
index 7e31327afa..cd558d507c 100644
--- a/drivers/bus/fslmc/mc/dpci.c
+++ b/drivers/bus/fslmc/mc/dpci.c
@@ -53,116 +53,6 @@ int dpci_open(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
-/**
- * dpci_close() - Close the control session of the object
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCI object
- *
- * After this function is called, no further operations are
- * allowed on the object without opening a new control session.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpci_close(struct fsl_mc_io *mc_io,
-	       uint32_t cmd_flags,
-	       uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCI_CMDID_CLOSE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpci_create() - Create the DPCI object.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg:	Configuration structure
- * @obj_id:	Returned object id
- *
- * Create the DPCI object, allocate required resources and perform required
- * initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpci_create(struct fsl_mc_io *mc_io,
-		uint16_t dprc_token,
-		uint32_t cmd_flags,
-		const struct dpci_cfg *cfg,
-		uint32_t *obj_id)
-{
-	struct dpci_cmd_create *cmd_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCI_CMDID_CREATE,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpci_cmd_create *)cmd.params;
-	cmd_params->num_of_priorities = cfg->num_of_priorities;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*obj_id = mc_cmd_read_object_id(&cmd);
-
-	return 0;
-}
-
-/**
- * dpci_destroy() - Destroy the DPCI object and release all its resources.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id:	The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpci_destroy(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 uint32_t object_id)
-{
-	struct dpci_cmd_destroy *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCI_CMDID_DESTROY,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpci_cmd_destroy *)cmd.params;
-	cmd_params->dpci_id = cpu_to_le32(object_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpci_enable() - Enable the DPCI, allow sending and receiving frames.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -186,86 +76,6 @@ int dpci_enable(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpci_disable() - Disable the DPCI, stop sending and receiving frames.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCI object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpci_disable(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCI_CMDID_DISABLE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpci_is_enabled() - Check if the DPCI is enabled.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCI object
- * @en:		Returns '1' if object is enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpci_is_enabled(struct fsl_mc_io *mc_io,
-		    uint32_t cmd_flags,
-		    uint16_t token,
-		    int *en)
-{
-	struct dpci_rsp_is_enabled *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCI_CMDID_IS_ENABLED, cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpci_rsp_is_enabled *)cmd.params;
-	*en = dpci_get_field(rsp_params->en, ENABLE);
-
-	return 0;
-}
-
-/**
- * dpci_reset() - Reset the DPCI, returns the object to initial state.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCI object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpci_reset(struct fsl_mc_io *mc_io,
-	       uint32_t cmd_flags,
-	       uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCI_CMDID_RESET,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpci_get_attributes() - Retrieve DPCI attributes.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -431,133 +241,3 @@ int dpci_get_tx_queue(struct fsl_mc_io *mc_io,
 
 	return 0;
 }
-
-/**
- * dpci_get_api_version() - Get communication interface API version
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver:	Major version of data path communication interface API
- * @minor_ver:	Minor version of data path communication interface API
- *
- * Return:  '0' on Success; Error code otherwise.
- */
-int dpci_get_api_version(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t *major_ver,
-			 uint16_t *minor_ver)
-{
-	struct dpci_rsp_get_api_version *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	cmd.header = mc_encode_cmd_header(DPCI_CMDID_GET_API_VERSION,
-					cmd_flags,
-					0);
-
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	rsp_params = (struct dpci_rsp_get_api_version *)cmd.params;
-	*major_ver = le16_to_cpu(rsp_params->major);
-	*minor_ver = le16_to_cpu(rsp_params->minor);
-
-	return 0;
-}
-
-/**
- * dpci_set_opr() - Set Order Restoration configuration.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCI object
- * @index:	The queue index
- * @options:	Configuration mode options
- *		can be OPR_OPT_CREATE or OPR_OPT_RETIRE
- * @cfg:	Configuration options for the OPR
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpci_set_opr(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token,
-		 uint8_t index,
-		 uint8_t options,
-		 struct opr_cfg *cfg)
-{
-	struct dpci_cmd_set_opr *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCI_CMDID_SET_OPR,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpci_cmd_set_opr *)cmd.params;
-	cmd_params->index = index;
-	cmd_params->options = options;
-	cmd_params->oloe = cfg->oloe;
-	cmd_params->oeane = cfg->oeane;
-	cmd_params->olws = cfg->olws;
-	cmd_params->oa = cfg->oa;
-	cmd_params->oprrws = cfg->oprrws;
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpci_get_opr() - Retrieve Order Restoration config and query.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCI object
- * @index:	The queue index
- * @cfg:	Returned OPR configuration
- * @qry:	Returned OPR query
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpci_get_opr(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token,
-		 uint8_t index,
-		 struct opr_cfg *cfg,
-		 struct opr_qry *qry)
-{
-	struct dpci_rsp_get_opr *rsp_params;
-	struct dpci_cmd_get_opr *cmd_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCI_CMDID_GET_OPR,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpci_cmd_get_opr *)cmd.params;
-	cmd_params->index = index;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpci_rsp_get_opr *)cmd.params;
-	cfg->oloe = rsp_params->oloe;
-	cfg->oeane = rsp_params->oeane;
-	cfg->olws = rsp_params->olws;
-	cfg->oa = rsp_params->oa;
-	cfg->oprrws = rsp_params->oprrws;
-	qry->rip = dpci_get_field(rsp_params->flags, RIP);
-	qry->enable = dpci_get_field(rsp_params->flags, OPR_ENABLE);
-	qry->nesn = le16_to_cpu(rsp_params->nesn);
-	qry->ndsn = le16_to_cpu(rsp_params->ndsn);
-	qry->ea_tseq = le16_to_cpu(rsp_params->ea_tseq);
-	qry->tseq_nlis = dpci_get_field(rsp_params->tseq_nlis, TSEQ_NLIS);
-	qry->ea_hseq = le16_to_cpu(rsp_params->ea_hseq);
-	qry->hseq_nlis = dpci_get_field(rsp_params->hseq_nlis, HSEQ_NLIS);
-	qry->ea_hptr = le16_to_cpu(rsp_params->ea_hptr);
-	qry->ea_tptr = le16_to_cpu(rsp_params->ea_tptr);
-	qry->opr_vid = le16_to_cpu(rsp_params->opr_vid);
-	qry->opr_id = le16_to_cpu(rsp_params->opr_id);
-
-	return 0;
-}
diff --git a/drivers/bus/fslmc/mc/dpcon.c b/drivers/bus/fslmc/mc/dpcon.c
index 2c46638dcb..e9bf364507 100644
--- a/drivers/bus/fslmc/mc/dpcon.c
+++ b/drivers/bus/fslmc/mc/dpcon.c
@@ -53,212 +53,6 @@ int dpcon_open(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
-/**
- * dpcon_close() - Close the control session of the object
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCON object
- *
- * After this function is called, no further operations are
- * allowed on the object without opening a new control session.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpcon_close(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCON_CMDID_CLOSE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpcon_create() - Create the DPCON object.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg:	Configuration structure
- * @obj_id:	Returned object id; use in subsequent API calls
- *
- * Create the DPCON object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * This function accepts an authentication token of a parent
- * container that this object should be assigned to and returns
- * an object id. This object_id will be used in all subsequent calls to
- * this specific object.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpcon_create(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 const struct dpcon_cfg *cfg,
-		 uint32_t *obj_id)
-{
-	struct dpcon_cmd_create *dpcon_cmd;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCON_CMDID_CREATE,
-					  cmd_flags,
-					  dprc_token);
-	dpcon_cmd = (struct dpcon_cmd_create *)cmd.params;
-	dpcon_cmd->num_priorities = cfg->num_priorities;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*obj_id = mc_cmd_read_object_id(&cmd);
-
-	return 0;
-}
-
-/**
- * dpcon_destroy() - Destroy the DPCON object and release all its resources.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @obj_id:	ID of DPCON object
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpcon_destroy(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  uint32_t obj_id)
-{
-	struct dpcon_cmd_destroy *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCON_CMDID_DESTROY,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpcon_cmd_destroy *)cmd.params;
-	cmd_params->object_id = cpu_to_le32(obj_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpcon_enable() - Enable the DPCON
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCON object
- *
- * Return:	'0' on Success; Error code otherwise
- */
-int dpcon_enable(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCON_CMDID_ENABLE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpcon_disable() - Disable the DPCON
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCON object
- *
- * Return:	'0' on Success; Error code otherwise
- */
-int dpcon_disable(struct fsl_mc_io *mc_io,
-		  uint32_t cmd_flags,
-		  uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCON_CMDID_DISABLE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpcon_is_enabled() -	Check if the DPCON is enabled.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCON object
- * @en:		Returns '1' if object is enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpcon_is_enabled(struct fsl_mc_io *mc_io,
-		     uint32_t cmd_flags,
-		     uint16_t token,
-		     int *en)
-{
-	struct dpcon_rsp_is_enabled *dpcon_rsp;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCON_CMDID_IS_ENABLED,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	dpcon_rsp = (struct dpcon_rsp_is_enabled *)cmd.params;
-	*en = dpcon_rsp->enabled & DPCON_ENABLE;
-
-	return 0;
-}
-
-/**
- * dpcon_reset() - Reset the DPCON, returns the object to initial state.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPCON object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpcon_reset(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCON_CMDID_RESET,
-					  cmd_flags, token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpcon_get_attributes() - Retrieve DPCON attributes.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -295,38 +89,3 @@ int dpcon_get_attributes(struct fsl_mc_io *mc_io,
 
 	return 0;
 }
-
-/**
- * dpcon_get_api_version - Get Data Path Concentrator API version
- * @mc_io:	Pointer to MC portal's DPCON object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver:	Major version of DPCON API
- * @minor_ver:	Minor version of DPCON API
- *
- * Return:	'0' on Success; Error code otherwise
- */
-int dpcon_get_api_version(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t *major_ver,
-			  uint16_t *minor_ver)
-{
-	struct dpcon_rsp_get_api_version *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPCON_CMDID_GET_API_VERSION,
-					  cmd_flags, 0);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpcon_rsp_get_api_version *)cmd.params;
-	*major_ver = le16_to_cpu(rsp_params->major);
-	*minor_ver = le16_to_cpu(rsp_params->minor);
-
-	return 0;
-}
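
All of the dpcon_* wrappers removed above (and the ones that remain) share the same three-step exchange with the management complex: encode a command header, send the command through the MC portal, and, for query commands, read the reply fields back out of cmd.params. A minimal sketch of that shape follows; DPXXX_CMDID_EXAMPLE and struct dpxxx_rsp_example are made-up names for illustration, not part of the real fslmc command set.

/* Illustrative sketch only: the generic shape shared by the removed
 * wrappers.  DPXXX_CMDID_EXAMPLE and struct dpxxx_rsp_example are
 * hypothetical names. */
static int dpxxx_query_example(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
			       uint16_t token, uint32_t *value)
{
	struct mc_command cmd = { 0 };
	int err;

	/* prepare command: encode command id, flags and token */
	cmd.header = mc_encode_cmd_header(DPXXX_CMDID_EXAMPLE, cmd_flags, token);

	/* send command to the management complex */
	err = mc_send_command(mc_io, &cmd);
	if (err)
		return err;

	/* retrieve response parameters from cmd.params */
	*value = ((const struct dpxxx_rsp_example *)cmd.params)->value;

	return 0;
}
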
diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c
index dcb9d516a1..30640fd353 100644
--- a/drivers/bus/fslmc/mc/dpdmai.c
+++ b/drivers/bus/fslmc/mc/dpdmai.c
@@ -76,92 +76,6 @@ int dpdmai_close(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpdmai_create() - Create the DPDMAI object
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg:	Configuration structure
- * @obj_id:	Returned object id
- *
- * Create the DPDMAI object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmai_create(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  const struct dpdmai_cfg *cfg,
-		  uint32_t *obj_id)
-{
-	struct dpdmai_cmd_create *cmd_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMAI_CMDID_CREATE,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpdmai_cmd_create *)cmd.params;
-	cmd_params->num_queues = cfg->num_queues;
-	cmd_params->priorities[0] = cfg->priorities[0];
-	cmd_params->priorities[1] = cfg->priorities[1];
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*obj_id = mc_cmd_read_object_id(&cmd);
-
-	return 0;
-}
-
-/**
- * dpdmai_destroy() - Destroy the DPDMAI object and release all its resources.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id:	The object id; it must be a valid id within the container that
- *		created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpdmai_destroy(struct fsl_mc_io *mc_io,
-		   uint16_t dprc_token,
-		   uint32_t cmd_flags,
-		   uint32_t object_id)
-{
-	struct dpdmai_cmd_destroy *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMAI_CMDID_DESTROY,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpdmai_cmd_destroy *)cmd.params;
-	cmd_params->dpdmai_id = cpu_to_le32(object_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpdmai_enable() - Enable the DPDMAI, allow sending and receiving frames.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -208,64 +122,6 @@ int dpdmai_disable(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpdmai_is_enabled() - Check if the DPDMAI is enabled.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMAI object
- * @en:		Returns '1' if object is enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmai_is_enabled(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      int *en)
-{
-	struct dpdmai_rsp_is_enabled *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMAI_CMDID_IS_ENABLED,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpdmai_rsp_is_enabled *)cmd.params;
-	*en = dpdmai_get_field(rsp_params->en, ENABLE);
-
-	return 0;
-}
-
-/**
- * dpdmai_reset() - Reset the DPDMAI, returns the object to initial state.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMAI object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmai_reset(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMAI_CMDID_RESET,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpdmai_get_attributes() - Retrieve DPDMAI attributes.
  * @mc_io:	Pointer to MC portal's I/O object
diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c
index a3382ed142..317924c856 100644
--- a/drivers/bus/fslmc/mc/dpio.c
+++ b/drivers/bus/fslmc/mc/dpio.c
@@ -76,95 +76,6 @@ int dpio_close(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpio_create() - Create the DPIO object.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg:	Configuration structure
- * @obj_id:	Returned object id
- *
- * Create the DPIO object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpio_create(struct fsl_mc_io *mc_io,
-		uint16_t dprc_token,
-		uint32_t cmd_flags,
-		const struct dpio_cfg *cfg,
-		uint32_t *obj_id)
-{
-	struct dpio_cmd_create *cmd_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPIO_CMDID_CREATE,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpio_cmd_create *)cmd.params;
-	cmd_params->num_priorities = cfg->num_priorities;
-	dpio_set_field(cmd_params->channel_mode,
-		       CHANNEL_MODE,
-		       cfg->channel_mode);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*obj_id = mc_cmd_read_object_id(&cmd);
-
-	return 0;
-}
-
-/**
- * dpio_destroy() - Destroy the DPIO object and release all its resources.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id:	The object id; it must be a valid id within the container that
- *		created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return:	'0' on Success; Error code otherwise
- */
-int dpio_destroy(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 uint32_t object_id)
-{
-	struct dpio_cmd_destroy *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPIO_CMDID_DESTROY,
-			cmd_flags,
-			dprc_token);
-
-	/* set object id to destroy */
-	cmd_params = (struct dpio_cmd_destroy *)cmd.params;
-	cmd_params->dpio_id = cpu_to_le32(object_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpio_enable() - Enable the DPIO, allow I/O portal operations.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -211,40 +122,6 @@ int dpio_disable(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpio_is_enabled() - Check if the DPIO is enabled.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPIO object
- * @en:		Returns '1' if object is enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpio_is_enabled(struct fsl_mc_io *mc_io,
-		    uint32_t cmd_flags,
-		    uint16_t token,
-		    int *en)
-{
-	struct dpio_rsp_is_enabled *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPIO_CMDID_IS_ENABLED, cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpio_rsp_is_enabled *)cmd.params;
-	*en = dpio_get_field(rsp_params->en, ENABLE);
-
-	return 0;
-}
-
 /**
  * dpio_reset() - Reset the DPIO, returns the object to initial state.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -341,41 +218,6 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpio_get_stashing_destination() - Get the stashing destination.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPIO object
- * @sdest:	Returns the stashing destination value
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
-				  uint32_t cmd_flags,
-				  uint16_t token,
-				  uint8_t *sdest)
-{
-	struct dpio_stashing_dest *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_STASHING_DEST,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpio_stashing_dest *)cmd.params;
-	*sdest = rsp_params->sdest;
-
-	return 0;
-}
-
 /**
  * dpio_add_static_dequeue_channel() - Add a static dequeue channel.
  * @mc_io:		Pointer to MC portal's I/O object
@@ -444,36 +286,3 @@ int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
 	/* send command to mc*/
 	return mc_send_command(mc_io, &cmd);
 }
-
-/**
- * dpio_get_api_version() - Get Data Path I/O API version
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver:	Major version of data path i/o API
- * @minor_ver:	Minor version of data path i/o API
- *
- * Return:  '0' on Success; Error code otherwise.
- */
-int dpio_get_api_version(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t *major_ver,
-			 uint16_t *minor_ver)
-{
-	struct dpio_rsp_get_api_version *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_API_VERSION,
-					cmd_flags,
-					0);
-
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	rsp_params = (struct dpio_rsp_get_api_version *)cmd.params;
-	*major_ver = le16_to_cpu(rsp_params->major);
-	*minor_ver = le16_to_cpu(rsp_params->minor);
-
-	return 0;
-}
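
dpio_get_api_version() above is dropped because no in-tree caller remains; for reference, a caller would have looked roughly like the sketch below. The helper name and the 6.1 version threshold are purely illustrative.

/* Hypothetical caller of the removed query; the removed function read the
 * major/minor words out of the MC response with le16_to_cpu(). */
static int example_check_dpio_api(struct fsl_mc_io *mc_io)
{
	uint16_t major = 0, minor = 0;
	int err;

	err = dpio_get_api_version(mc_io, 0 /* cmd_flags */, &major, &minor);
	if (err)
		return err;

	/* 6.1 is an arbitrary example threshold */
	if (major < 6 || (major == 6 && minor < 1))
		return -ENOTSUP;

	return 0;
}
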
diff --git a/drivers/bus/fslmc/mc/fsl_dpbp.h b/drivers/bus/fslmc/mc/fsl_dpbp.h
index 8a021f55f1..f50131ba45 100644
--- a/drivers/bus/fslmc/mc/fsl_dpbp.h
+++ b/drivers/bus/fslmc/mc/fsl_dpbp.h
@@ -34,17 +34,6 @@ struct dpbp_cfg {
 	uint32_t options;
 };
 
-int dpbp_create(struct fsl_mc_io *mc_io,
-		uint16_t dprc_token,
-		uint32_t cmd_flags,
-		const struct dpbp_cfg *cfg,
-		uint32_t *obj_id);
-
-int dpbp_destroy(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 uint32_t obj_id);
-
 __rte_internal
 int dpbp_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
@@ -55,11 +44,6 @@ int dpbp_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
 
-int dpbp_is_enabled(struct fsl_mc_io *mc_io,
-		    uint32_t cmd_flags,
-		    uint16_t token,
-		    int *en);
-
 __rte_internal
 int dpbp_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
@@ -90,10 +74,6 @@ int dpbp_get_attributes(struct fsl_mc_io *mc_io,
  * BPSCN write will attempt to allocate into a cache (coherent write)
  */
 #define DPBP_NOTIF_OPT_COHERENT_WRITE	0x00000001
-int dpbp_get_api_version(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t *major_ver,
-			 uint16_t *minor_ver);
 
 __rte_internal
 int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io,
diff --git a/drivers/bus/fslmc/mc/fsl_dpci.h b/drivers/bus/fslmc/mc/fsl_dpci.h
index 81fd3438aa..9fdc3a8ea5 100644
--- a/drivers/bus/fslmc/mc/fsl_dpci.h
+++ b/drivers/bus/fslmc/mc/fsl_dpci.h
@@ -37,10 +37,6 @@ int dpci_open(struct fsl_mc_io *mc_io,
 	      int dpci_id,
 	      uint16_t *token);
 
-int dpci_close(struct fsl_mc_io *mc_io,
-	       uint32_t cmd_flags,
-	       uint16_t token);
-
 /**
  * Enable the Order Restoration support
  */
@@ -66,34 +62,10 @@ struct dpci_cfg {
 	uint8_t num_of_priorities;
 };
 
-int dpci_create(struct fsl_mc_io *mc_io,
-		uint16_t dprc_token,
-		uint32_t cmd_flags,
-		const struct dpci_cfg *cfg,
-		uint32_t *obj_id);
-
-int dpci_destroy(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 uint32_t object_id);
-
 int dpci_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
 
-int dpci_disable(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token);
-
-int dpci_is_enabled(struct fsl_mc_io *mc_io,
-		    uint32_t cmd_flags,
-		    uint16_t token,
-		    int *en);
-
-int dpci_reset(struct fsl_mc_io *mc_io,
-	       uint32_t cmd_flags,
-	       uint16_t token);
-
 /**
  * struct dpci_attr - Structure representing DPCI attributes
  * @id:			DPCI object ID
@@ -224,25 +196,4 @@ int dpci_get_tx_queue(struct fsl_mc_io *mc_io,
 		      uint8_t priority,
 		      struct dpci_tx_queue_attr *attr);
 
-int dpci_get_api_version(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t *major_ver,
-			 uint16_t *minor_ver);
-
-__rte_internal
-int dpci_set_opr(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token,
-		 uint8_t index,
-		 uint8_t options,
-		 struct opr_cfg *cfg);
-
-__rte_internal
-int dpci_get_opr(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token,
-		 uint8_t index,
-		 struct opr_cfg *cfg,
-		 struct opr_qry *qry);
-
 #endif /* __FSL_DPCI_H */
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index 7caa6c68a1..0b3add5d52 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -26,10 +26,6 @@ int dpcon_open(struct fsl_mc_io *mc_io,
 	       int dpcon_id,
 	       uint16_t *token);
 
-int dpcon_close(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token);
-
 /**
  * struct dpcon_cfg - Structure representing DPCON configuration
  * @num_priorities: Number of priorities for the DPCON channel (1-8)
@@ -38,34 +34,6 @@ struct dpcon_cfg {
 	uint8_t num_priorities;
 };
 
-int dpcon_create(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 const struct dpcon_cfg *cfg,
-		 uint32_t *obj_id);
-
-int dpcon_destroy(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  uint32_t obj_id);
-
-int dpcon_enable(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token);
-
-int dpcon_disable(struct fsl_mc_io *mc_io,
-		  uint32_t cmd_flags,
-		  uint16_t token);
-
-int dpcon_is_enabled(struct fsl_mc_io *mc_io,
-		     uint32_t cmd_flags,
-		     uint16_t token,
-		     int *en);
-
-int dpcon_reset(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token);
-
 /**
  * struct dpcon_attr - Structure representing DPCON attributes
  * @id:			DPCON object ID
@@ -84,9 +52,4 @@ int dpcon_get_attributes(struct fsl_mc_io *mc_io,
 			 uint16_t token,
 			 struct dpcon_attr *attr);
 
-int dpcon_get_api_version(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t *major_ver,
-			  uint16_t *minor_ver);
-
 #endif /* __FSL_DPCON_H */
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
index 19328c00a0..eb1d3c1658 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
@@ -47,17 +47,6 @@ struct dpdmai_cfg {
 	uint8_t priorities[DPDMAI_PRIO_NUM];
 };
 
-int dpdmai_create(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  const struct dpdmai_cfg *cfg,
-		  uint32_t *obj_id);
-
-int dpdmai_destroy(struct fsl_mc_io *mc_io,
-		   uint16_t dprc_token,
-		   uint32_t cmd_flags,
-		   uint32_t object_id);
-
 __rte_internal
 int dpdmai_enable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
@@ -68,15 +57,6 @@ int dpdmai_disable(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint16_t token);
 
-int dpdmai_is_enabled(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      int *en);
-
-int dpdmai_reset(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token);
-
 /**
  * struct dpdmai_attr - Structure representing DPDMAI attributes
  * @id: DPDMAI object ID
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index c2db76bdf8..0ddcdb41ec 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -50,17 +50,6 @@ struct dpio_cfg {
 };
 
 
-int dpio_create(struct fsl_mc_io *mc_io,
-		uint16_t dprc_token,
-		uint32_t cmd_flags,
-		const struct dpio_cfg *cfg,
-		uint32_t *obj_id);
-
-int dpio_destroy(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 uint32_t object_id);
-
 __rte_internal
 int dpio_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
@@ -71,11 +60,6 @@ int dpio_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
 
-int dpio_is_enabled(struct fsl_mc_io *mc_io,
-		    uint32_t cmd_flags,
-		    uint16_t token,
-		    int *en);
-
 __rte_internal
 int dpio_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
@@ -87,11 +71,6 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint16_t token,
 				  uint8_t sdest);
 
-int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
-				  uint32_t cmd_flags,
-				  uint16_t token,
-				  uint8_t *sdest);
-
 __rte_internal
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
@@ -135,9 +114,4 @@ int dpio_get_attributes(struct fsl_mc_io *mc_io,
 			uint16_t token,
 			struct dpio_attr *attr);
 
-int dpio_get_api_version(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t *major_ver,
-			 uint16_t *minor_ver);
-
 #endif /* __FSL_DPIO_H */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index d9619848d8..06b3e81f26 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -109,13 +109,6 @@ void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp)
 	}
 }
 
-int dpaa2_dpbp_supported(void)
-{
-	if (TAILQ_EMPTY(&dpbp_dev_list))
-		return -1;
-	return 0;
-}
-
 static struct rte_dpaa2_object rte_dpaa2_dpbp_obj = {
 	.dev_type = DPAA2_BPOOL,
 	.create = dpaa2_create_dpbp_device,
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ac24f01451..b72017bd32 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -454,9 +454,6 @@ struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void);
 __rte_internal
 void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp);
 
-__rte_internal
-int dpaa2_dpbp_supported(void);
-
 __rte_internal
 struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void);
 
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 54096e8774..12beb148fb 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -36,6 +36,4 @@ int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 __rte_internal
 uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
 
-uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
-
 #endif /* !_FSL_QBMAN_DEBUG_H */
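
Only the byte-count accessor goes away here; the frame-count path survives. A short sketch of how the surviving debug API is used (the third parameter of qbman_fq_query_state() is assumed to be a pointer to struct qbman_fq_query_np_rslt, matching qbman_debug.c further down):

/* Sketch: query a frame queue's state and read how many frames it holds. */
static uint32_t example_fq_frames(struct qbman_swp *swp, uint32_t fqid)
{
	struct qbman_fq_query_np_rslt state;

	if (qbman_fq_query_state(swp, fqid, &state))
		return 0;	/* treat a failed query as an empty queue */

	return qbman_fq_state_frame_count(&state);
}
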
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index eb68c9cab5..b24c809fa1 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -50,14 +50,6 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
  */
 int qbman_swp_update(struct qbman_swp *p, int stash_off);
 
-/**
- * qbman_swp_finish() - Create and destroy a functional object representing
- * the given QBMan portal descriptor.
- * @p: the qbman_swp object to be destroyed.
- *
- */
-void qbman_swp_finish(struct qbman_swp *p);
-
 /**
  * qbman_swp_invalidate() - Invalidate the cache enabled area of the QBMan
  * portal. This is required to be called if a portal moved to another core
@@ -67,14 +59,6 @@ void qbman_swp_finish(struct qbman_swp *p);
  */
 void qbman_swp_invalidate(struct qbman_swp *p);
 
-/**
- * qbman_swp_get_desc() - Get the descriptor of the given portal object.
- * @p: the given portal object.
- *
- * Return the descriptor for this portal.
- */
-const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p);
-
 	/**************/
 	/* Interrupts */
 	/**************/
@@ -92,32 +76,6 @@ const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p);
 /* Volatile dequeue command interrupt */
 #define QBMAN_SWP_INTERRUPT_VDCI ((uint32_t)0x00000020)
 
-/**
- * qbman_swp_interrupt_get_vanish() - Get the data in software portal
- * interrupt status disable register.
- * @p: the given software portal object.
- *
- * Return the settings in SWP_ISDR register.
- */
-uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p);
-
-/**
- * qbman_swp_interrupt_set_vanish() - Set the data in software portal
- * interrupt status disable register.
- * @p: the given software portal object.
- * @mask: The value to set in SWP_IDSR register.
- */
-void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask);
-
-/**
- * qbman_swp_interrupt_read_status() - Get the data in software portal
- * interrupt status register.
- * @p: the given software portal object.
- *
- * Return the settings in SWP_ISR register.
- */
-uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
-
 /**
  * qbman_swp_interrupt_clear_status() - Set the data in software portal
  * interrupt status register.
@@ -127,13 +85,6 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
 __rte_internal
 void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask);
 
-/**
- * qbman_swp_dqrr_thrshld_read_status() - Get the data in software portal
- * DQRR interrupt threshold register.
- * @p: the given software portal object.
- */
-uint32_t qbman_swp_dqrr_thrshld_read_status(struct qbman_swp *p);
-
 /**
  * qbman_swp_dqrr_thrshld_write() - Set the data in software portal
  * DQRR interrupt threshold register.
@@ -142,13 +93,6 @@ uint32_t qbman_swp_dqrr_thrshld_read_status(struct qbman_swp *p);
  */
 void qbman_swp_dqrr_thrshld_write(struct qbman_swp *p, uint32_t mask);
 
-/**
- * qbman_swp_intr_timeout_read_status() - Get the data in software portal
- * Interrupt Time-Out period register.
- * @p: the given software portal object.
- */
-uint32_t qbman_swp_intr_timeout_read_status(struct qbman_swp *p);
-
 /**
  * qbman_swp_intr_timeout_write() - Set the data in software portal
  * Interrupt Time-Out period register.
@@ -157,15 +101,6 @@ uint32_t qbman_swp_intr_timeout_read_status(struct qbman_swp *p);
  */
 void qbman_swp_intr_timeout_write(struct qbman_swp *p, uint32_t mask);
 
-/**
- * qbman_swp_interrupt_get_trigger() - Get the data in software portal
- * interrupt enable register.
- * @p: the given software portal object.
- *
- * Return the settings in SWP_IER register.
- */
-uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p);
-
 /**
  * qbman_swp_interrupt_set_trigger() - Set the data in software portal
  * interrupt enable register.
@@ -174,15 +109,6 @@ uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p);
  */
 void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask);
 
-/**
- * qbman_swp_interrupt_get_inhibit() - Get the data in software portal
- * interrupt inhibit register.
- * @p: the given software portal object.
- *
- * Return the settings in SWP_IIR register.
- */
-int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p);
-
 /**
  * qbman_swp_interrupt_set_inhibit() - Set the data in software portal
  * interrupt inhibit register.
@@ -268,21 +194,6 @@ int qbman_swp_dequeue_get_timeout(struct qbman_swp *s, unsigned int *timeout);
 /* Push-mode dequeuing */
 /* ------------------- */
 
-/* The user of a portal can enable and disable push-mode dequeuing of up to 16
- * channels independently. It does not specify this toggling by channel IDs, but
- * rather by specifying the index (from 0 to 15) that has been mapped to the
- * desired channel.
- */
-
-/**
- * qbman_swp_push_get() - Get the push dequeue setup.
- * @s: the software portal object.
- * @channel_idx: the channel index to query.
- * @enabled: returned boolean to show whether the push dequeue is enabled for
- * the given channel.
- */
-void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled);
-
 /**
  * qbman_swp_push_set() - Enable or disable push dequeue.
  * @s: the software portal object.
@@ -363,17 +274,6 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
 __rte_internal
 void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
 				   uint8_t numframes);
-/**
- * qbman_pull_desc_set_token() - Set dequeue token for pull command
- * @d: the dequeue descriptor
- * @token: the token to be set
- *
- * token is the value that shows up in the dequeue response that can be used to
- * detect when the results have been published. The easiest technique is to zero
- * result "storage" before issuing a dequeue, and use any non-zero 'token' value
- */
-void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token);
-
 /* Exactly one of the following descriptor "actions" should be set. (Calling any
  * one of these will replace the effect of any prior call to one of these.)
  * - pull dequeue from the given frame queue (FQ)
@@ -387,30 +287,6 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token);
 __rte_internal
 void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid);
 
-/**
- * qbman_pull_desc_set_wq() - Set wqid from which the dequeue command dequeues.
- * @wqid: composed of channel id and wqid within the channel.
- * @dct: the dequeue command type.
- */
-void qbman_pull_desc_set_wq(struct qbman_pull_desc *d, uint32_t wqid,
-			    enum qbman_pull_type_e dct);
-
-/* qbman_pull_desc_set_channel() - Set channelid from which the dequeue command
- * dequeues.
- * @chid: the channel id to be dequeued.
- * @dct: the dequeue command type.
- */
-void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
-				 enum qbman_pull_type_e dct);
-
-/**
- * qbman_pull_desc_set_rad() - Decide whether to reschedule the fq after dequeue
- *
- * @rad: 1 = Reschedule the FQ after dequeue.
- *	 0 = Allow the FQ to remain active after dequeue.
- */
-void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad);
-
 /**
  * qbman_swp_pull() - Issue the pull dequeue command
  * @s: the software portal object.
@@ -471,17 +347,6 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
 __rte_internal
 uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
 
-/**
- * qbman_get_dqrr_from_idx() - Use index to get the dqrr entry from the
- * given portal
- * @s: the given portal.
- * @idx: the dqrr index.
- *
- * Return dqrr entry object.
- */
-__rte_internal
-struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
-
 /* ------------------------------------------------- */
 /* Polling user-provided storage for dequeue results */
 /* ------------------------------------------------- */
@@ -549,78 +414,6 @@ static inline int qbman_result_is_SCN(const struct qbman_result *dq)
 	return !qbman_result_is_DQ(dq);
 }
 
-/* Recognise different notification types, only required if the user allows for
- * these to occur, and cares about them when they do.
- */
-
-/**
- * qbman_result_is_FQDAN() - Check for FQ Data Availability
- * @dq: the qbman_result object.
- *
- * Return 1 if this is FQDAN.
- */
-int qbman_result_is_FQDAN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_CDAN() - Check for Channel Data Availability
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is CDAN.
- */
-int qbman_result_is_CDAN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_CSCN() - Check for Congestion State Change
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is CSCN.
- */
-int qbman_result_is_CSCN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_BPSCN() - Check for Buffer Pool State Change.
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is BPSCN.
- */
-int qbman_result_is_BPSCN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_CGCU() - Check for Congestion Group Count Update.
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is CGCU.
- */
-int qbman_result_is_CGCU(const struct qbman_result *dq);
-
-/* Frame queue state change notifications; (FQDAN in theory counts too as it
- * leaves a FQ parked, but it is primarily a data availability notification)
- */
-
-/**
- * qbman_result_is_FQRN() - Check for FQ Retirement Notification.
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is FQRN.
- */
-int qbman_result_is_FQRN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_FQRNI() - Check for FQ Retirement Immediate
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is FQRNI.
- */
-int qbman_result_is_FQRNI(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_FQPN() - Check for FQ Park Notification
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is FQPN.
- */
-int qbman_result_is_FQPN(const struct qbman_result *dq);
-
 /* Parsing frame dequeue results (qbman_result_is_DQ() must be TRUE)
  */
 /* FQ empty */
@@ -695,30 +488,6 @@ uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
 __rte_internal
 uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq);
 
-/**
- * qbman_result_DQ_fqid() - Get the fqid in dequeue response
- * @dq: the dequeue result.
- *
- * Return fqid.
- */
-uint32_t qbman_result_DQ_fqid(const struct qbman_result *dq);
-
-/**
- * qbman_result_DQ_byte_count() - Get the byte count in dequeue response
- * @dq: the dequeue result.
- *
- * Return the byte count remaining in the FQ.
- */
-uint32_t qbman_result_DQ_byte_count(const struct qbman_result *dq);
-
-/**
- * qbman_result_DQ_frame_count - Get the frame count in dequeue response
- * @dq: the dequeue result.
- *
- * Return the frame count remaining in the FQ.
- */
-uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq);
-
 /**
  * qbman_result_DQ_fqd_ctx() - Get the frame queue context in dequeue response
  * @dq: the dequeue result.
@@ -780,66 +549,6 @@ uint64_t qbman_result_SCN_ctx(const struct qbman_result *scn);
 /* Get the CGID from the CSCN */
 #define qbman_result_CSCN_cgid(dq) ((uint16_t)qbman_result_SCN_rid(dq))
 
-/**
- * qbman_result_bpscn_bpid() - Get the bpid from BPSCN
- * @scn: the state change notification.
- *
- * Return the buffer pool id.
- */
-uint16_t qbman_result_bpscn_bpid(const struct qbman_result *scn);
-
-/**
- * qbman_result_bpscn_has_free_bufs() - Check whether there are free
- * buffers in the pool from BPSCN.
- * @scn: the state change notification.
- *
- * Return the number of free buffers.
- */
-int qbman_result_bpscn_has_free_bufs(const struct qbman_result *scn);
-
-/**
- * qbman_result_bpscn_is_depleted() - Check BPSCN to see whether the
- * buffer pool is depleted.
- * @scn: the state change notification.
- *
- * Return the status of buffer pool depletion.
- */
-int qbman_result_bpscn_is_depleted(const struct qbman_result *scn);
-
-/**
- * qbman_result_bpscn_is_surplus() - Check BPSCN to see whether the buffer
- * pool is surplus or not.
- * @scn: the state change notification.
- *
- * Return the status of buffer pool surplus.
- */
-int qbman_result_bpscn_is_surplus(const struct qbman_result *scn);
-
-/**
- * qbman_result_bpscn_ctx() - Get the BPSCN CTX from BPSCN message
- * @scn: the state change notification.
- *
- * Return the BPSCN context.
- */
-uint64_t qbman_result_bpscn_ctx(const struct qbman_result *scn);
-
-/* Parsing CGCU */
-/**
- * qbman_result_cgcu_cgid() - Check CGCU resource id, i.e. cgid
- * @scn: the state change notification.
- *
- * Return the CGCU resource id.
- */
-uint16_t qbman_result_cgcu_cgid(const struct qbman_result *scn);
-
-/**
- * qbman_result_cgcu_icnt() - Get the I_CNT from CGCU
- * @scn: the state change notification.
- *
- * Return instantaneous count in the CGCU notification.
- */
-uint64_t qbman_result_cgcu_icnt(const struct qbman_result *scn);
-
 	/************/
 	/* Enqueues */
 	/************/
@@ -916,25 +625,6 @@ __rte_internal
 void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
 			   uint16_t opr_id, uint16_t seqnum, int incomplete);
 
-/**
- * qbman_eq_desc_set_orp_hole() - fill a hole in the order-restoration sequence
- * without any enqueue
- * @d: the enqueue descriptor.
- * @opr_id: the order point record id.
- * @seqnum: the order restoration sequence number.
- */
-void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint16_t opr_id,
-				uint16_t seqnum);
-
-/**
- * qbman_eq_desc_set_orp_nesn() -  advance NESN (Next Expected Sequence Number)
- * without any enqueue
- * @d: the enqueue descriptor.
- * @opr_id: the order point record id.
- * @seqnum: the order restoration sequence number.
- */
-void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
-				uint16_t seqnum);
 /**
  * qbman_eq_desc_set_response() - Set the enqueue response info.
  * @d: the enqueue descriptor
@@ -981,27 +671,6 @@ void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
 __rte_internal
 void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
 
-/**
- * qbman_eq_desc_set_qd() - Set Queuing Destination for the enqueue command.
- * @d: the enqueue descriptor
- * @qdid: the id of the queuing destination to be enqueued.
- * @qd_bin: the queuing destination bin
- * @qd_prio: the queuing destination priority.
- */
-__rte_internal
-void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
-			  uint16_t qd_bin, uint8_t qd_prio);
-
-/**
- * qbman_eq_desc_set_eqdi() - enable/disable EQDI interrupt
- * @d: the enqueue descriptor
- * @enable: boolean to enable/disable EQDI
- *
- * Determines whether or not the portal's EQDI interrupt source should be
- * asserted after the enqueue command is completed.
- */
-void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable);
-
 /**
  * qbman_eq_desc_set_dca() - Set DCA mode in the enqueue command.
  * @d: the enqueue descriptor.
@@ -1060,19 +729,6 @@ uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
 __rte_internal
 uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp);
 
-/**
- * qbman_swp_enqueue() - Issue an enqueue command.
- * @s: the software portal used for enqueue.
- * @d: the enqueue descriptor.
- * @fd: the frame descriptor to be enqueued.
- *
- * Please note that 'fd' should only be NULL if the "action" of the
- * descriptor is "orp_hole" or "orp_nesn".
- *
- * Return 0 for a successful enqueue, -EBUSY if the EQCR is not ready.
- */
-int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
-		      const struct qbman_fd *fd);
 /**
  * qbman_swp_enqueue_multiple() - Enqueue multiple frames with same
 				  eq descriptor
@@ -1171,13 +827,6 @@ void qbman_release_desc_clear(struct qbman_release_desc *d);
 __rte_internal
 void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid);
 
-/**
- * qbman_release_desc_set_rcdi() - Determines whether or not the portal's RCDI
- * interrupt source should be asserted after the release command is completed.
- * @d: the qbman release descriptor.
- */
-void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable);
-
 /**
  * qbman_swp_release() - Issue a buffer release command.
  * @s: the software portal object.
@@ -1217,116 +866,4 @@ __rte_internal
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
 		      unsigned int num_buffers);
 
-	/*****************/
-	/* FQ management */
-	/*****************/
-/**
- * qbman_swp_fq_schedule() - Move the fq to the scheduled state.
- * @s: the software portal object.
- * @fqid: the index of frame queue to be scheduled.
- *
- * There are a couple of different ways that a FQ can end up parked state,
- * This schedules it.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid);
-
-/**
- * qbman_swp_fq_force() - Force the FQ to fully scheduled state.
- * @s: the software portal object.
- * @fqid: the index of frame queue to be forced.
- *
- * Force eligible will force a tentatively-scheduled FQ to be fully-scheduled
- * and thus be available for selection by any channel-dequeuing behaviour (push
- * or pull). If the FQ is subsequently "dequeued" from the channel and is still
- * empty at the time this happens, the resulting dq_entry will have no FD.
- * (qbman_result_DQ_fd() will return NULL.)
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid);
-
-/**
- * These functions change the FQ flow-control stuff between XON/XOFF. (The
- * default is XON.) This setting doesn't affect enqueues to the FQ, just
- * dequeues. XOFF FQs will remain in the tentatively-scheduled state, even when
- * non-empty, meaning they won't be selected for scheduled dequeuing. If a FQ is
- * changed to XOFF after it had already become truly-scheduled to a channel, and
- * a pull dequeue of that channel occurs that selects that FQ for dequeuing,
- * then the resulting dq_entry will have no FD. (qbman_result_DQ_fd() will
- * return NULL.)
- */
-/**
- * qbman_swp_fq_xon() - XON the frame queue.
- * @s: the software portal object.
- * @fqid: the index of frame queue.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid);
-/**
- * qbman_swp_fq_xoff() - XOFF the frame queue.
- * @s: the software portal object.
- * @fqid: the index of frame queue.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid);
-
-	/**********************/
-	/* Channel management */
-	/**********************/
-
-/**
- * If the user has been allocated a channel object that is going to generate
- * CDANs to another channel, then these functions will be necessary.
- * CDAN-enabled channels only generate a single CDAN notification, after which
- * they need to be reenabled before they'll generate another. (The idea is
- * that pull dequeuing will occur in reaction to the CDAN, followed by a
- * reenable step.) Each function generates a distinct command to hardware, so a
- * combination function is provided if the user wishes to modify the "context"
- * (which shows up in each CDAN message) each time they reenable, as a single
- * command to hardware.
- */
-
-/**
- * qbman_swp_CDAN_set_context() - Set CDAN context
- * @s: the software portal object.
- * @channelid: the channel index.
- * @ctx: the context to be set in CDAN.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_CDAN_set_context(struct qbman_swp *s, uint16_t channelid,
-			       uint64_t ctx);
-
-/**
- * qbman_swp_CDAN_enable() - Enable CDAN for the channel.
- * @s: the software portal object.
- * @channelid: the index of the channel to generate CDAN.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_CDAN_enable(struct qbman_swp *s, uint16_t channelid);
-
-/**
- * qbman_swp_CDAN_disable() - disable CDAN for the channel.
- * @s: the software portal object.
- * @channelid: the index of the channel to generate CDAN.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_CDAN_disable(struct qbman_swp *s, uint16_t channelid);
-
-/**
- * qbman_swp_CDAN_set_context_enable() - Set CDAN context and enable CDAN
- * @s: the software portal object.
- * @channelid: the index of the channel to generate CDAN.
- * @ctx: the context set in CDAN.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
-				      uint64_t ctx);
 #endif /* !_FSL_QBMAN_PORTAL_H */
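
With the single-frame qbman_swp_enqueue() gone from this header, a lone frame can still be pushed through the multi-frame call that survives. The sketch below assumes the usual portal API shape: qbman_eq_desc_clear() and qbman_eq_desc_set_no_orp() to initialise the descriptor, and qbman_swp_enqueue_multiple() returning the number of frames actually enqueued.

/* Sketch, under the assumptions stated above. */
static void example_enqueue_one(struct qbman_swp *swp, uint32_t fqid,
				const struct qbman_fd *fd)
{
	struct qbman_eq_desc d;

	qbman_eq_desc_clear(&d);
	qbman_eq_desc_set_no_orp(&d, 0);	/* no order restoration, no response */
	qbman_eq_desc_set_fq(&d, fqid);

	/* retry while the enqueue ring is full (return value assumed to be
	 * the number of frames actually enqueued) */
	while (qbman_swp_enqueue_multiple(swp, &d, fd, NULL, 1) != 1)
		;
}
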
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 34374ae4b6..2c6a7dcd16 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -59,8 +59,3 @@ uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r)
 {
 	return (r->frm_cnt & 0x00FFFFFF);
 }
-
-uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r)
-{
-	return r->byte_cnt;
-}
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 77c9d508c4..b8bcfb7189 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -82,10 +82,6 @@ qbman_swp_enqueue_ring_mode_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
-qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
-		const struct qbman_eq_desc *d,
-		const struct qbman_fd *fd);
-static int
 qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
@@ -377,80 +373,30 @@ int qbman_swp_update(struct qbman_swp *p, int stash_off)
 	return 0;
 }
 
-void qbman_swp_finish(struct qbman_swp *p)
-{
-#ifdef QBMAN_CHECKING
-	QBMAN_BUG_ON(p->mc.check != swp_mc_can_start);
-#endif
-	qbman_swp_sys_finish(&p->sys);
-	portal_idx_map[p->desc.idx] = NULL;
-	free(p);
-}
-
-const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p)
-{
-	return &p->desc;
-}
-
 /**************/
 /* Interrupts */
 /**************/
 
-uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p)
-{
-	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISDR);
-}
-
-void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask)
-{
-	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISDR, mask);
-}
-
-uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p)
-{
-	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISR);
-}
-
 void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask)
 {
 	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISR, mask);
 }
 
-uint32_t qbman_swp_dqrr_thrshld_read_status(struct qbman_swp *p)
-{
-	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_DQRR_ITR);
-}
-
 void qbman_swp_dqrr_thrshld_write(struct qbman_swp *p, uint32_t mask)
 {
 	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_DQRR_ITR, mask);
 }
 
-uint32_t qbman_swp_intr_timeout_read_status(struct qbman_swp *p)
-{
-	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ITPR);
-}
-
 void qbman_swp_intr_timeout_write(struct qbman_swp *p, uint32_t mask)
 {
 	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ITPR, mask);
 }
 
-uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p)
-{
-	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IER);
-}
-
 void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask)
 {
 	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IER, mask);
 }
 
-int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p)
-{
-	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IIR);
-}
-
 void qbman_swp_interrupt_set_inhibit(struct qbman_swp *p, int inhibit)
 {
 	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IIR,
@@ -643,28 +589,6 @@ void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
 		d->eq.seqnum &= ~(1 << QB_ENQUEUE_CMD_NLIS_SHIFT);
 }
 
-void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint16_t opr_id,
-				uint16_t seqnum)
-{
-	d->eq.verb |= 1 << QB_ENQUEUE_CMD_ORP_ENABLE_SHIFT;
-	d->eq.verb &= ~QB_ENQUEUE_CMD_EC_OPTION_MASK;
-	d->eq.orpid = opr_id;
-	d->eq.seqnum = seqnum;
-	d->eq.seqnum &= ~(1 << QB_ENQUEUE_CMD_NLIS_SHIFT);
-	d->eq.seqnum &= ~(1 << QB_ENQUEUE_CMD_IS_NESN_SHIFT);
-}
-
-void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
-				uint16_t seqnum)
-{
-	d->eq.verb |= 1 << QB_ENQUEUE_CMD_ORP_ENABLE_SHIFT;
-	d->eq.verb &= ~QB_ENQUEUE_CMD_EC_OPTION_MASK;
-	d->eq.orpid = opr_id;
-	d->eq.seqnum = seqnum;
-	d->eq.seqnum &= ~(1 << QB_ENQUEUE_CMD_NLIS_SHIFT);
-	d->eq.seqnum |= 1 << QB_ENQUEUE_CMD_IS_NESN_SHIFT;
-}
-
 void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
 				dma_addr_t storage_phys,
 				int stash)
@@ -684,23 +608,6 @@ void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid)
 	d->eq.tgtid = fqid;
 }
 
-void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
-			  uint16_t qd_bin, uint8_t qd_prio)
-{
-	d->eq.verb |= 1 << QB_ENQUEUE_CMD_TARGET_TYPE_SHIFT;
-	d->eq.tgtid = qdid;
-	d->eq.qdbin = qd_bin;
-	d->eq.qpri = qd_prio;
-}
-
-void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable)
-{
-	if (enable)
-		d->eq.verb |= 1 << QB_ENQUEUE_CMD_IRQ_ON_DISPATCH_SHIFT;
-	else
-		d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_IRQ_ON_DISPATCH_SHIFT);
-}
-
 void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
 			   uint8_t dqrr_idx, int park)
 {
@@ -789,13 +696,6 @@ static int qbman_swp_enqueue_array_mode_mem_back(struct qbman_swp *s,
 	return 0;
 }
 
-static inline int qbman_swp_enqueue_array_mode(struct qbman_swp *s,
-					       const struct qbman_eq_desc *d,
-					       const struct qbman_fd *fd)
-{
-	return qbman_swp_enqueue_array_mode_ptr(s, d, fd);
-}
-
 static int qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
 					      const struct qbman_eq_desc *d,
 					      const struct qbman_fd *fd)
@@ -873,44 +773,6 @@ static int qbman_swp_enqueue_ring_mode_cinh_read_direct(
 	return 0;
 }
 
-static int qbman_swp_enqueue_ring_mode_cinh_direct(
-		struct qbman_swp *s,
-		const struct qbman_eq_desc *d,
-		const struct qbman_fd *fd)
-{
-	uint32_t *p;
-	const uint32_t *cl = qb_cl(d);
-	uint32_t eqcr_ci, full_mask, half_mask;
-
-	half_mask = (s->eqcr.pi_ci_mask>>1);
-	full_mask = s->eqcr.pi_ci_mask;
-	if (!s->eqcr.available) {
-		eqcr_ci = s->eqcr.ci;
-		s->eqcr.ci = qbman_cinh_read(&s->sys,
-				QBMAN_CINH_SWP_EQCR_CI) & full_mask;
-		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
-				eqcr_ci, s->eqcr.ci);
-		if (!s->eqcr.available)
-			return -EBUSY;
-	}
-
-	p = qbman_cinh_write_start_wo_shadow(&s->sys,
-			QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
-	memcpy_byte_by_byte(&p[1], &cl[1], 28);
-	memcpy_byte_by_byte(&p[8], fd, sizeof(*fd));
-	lwsync();
-
-	/* Set the verb byte, have to substitute in the valid-bit */
-	p[0] = cl[0] | s->eqcr.pi_vb;
-	s->eqcr.pi++;
-	s->eqcr.pi &= full_mask;
-	s->eqcr.available--;
-	if (!(s->eqcr.pi & half_mask))
-		s->eqcr.pi_vb ^= QB_VALID_BIT;
-
-	return 0;
-}
-
 static int qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
 						const struct qbman_eq_desc *d,
 						const struct qbman_fd *fd)
@@ -949,25 +811,6 @@ static int qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
 	return 0;
 }
 
-static int qbman_swp_enqueue_ring_mode(struct qbman_swp *s,
-				       const struct qbman_eq_desc *d,
-				       const struct qbman_fd *fd)
-{
-	if (!s->stash_off)
-		return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
-	else
-		return qbman_swp_enqueue_ring_mode_cinh_direct(s, d, fd);
-}
-
-int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
-		      const struct qbman_fd *fd)
-{
-	if (s->sys.eqcr_mode == qman_eqcr_vb_array)
-		return qbman_swp_enqueue_array_mode(s, d, fd);
-	else    /* Use ring mode by default */
-		return qbman_swp_enqueue_ring_mode(s, d, fd);
-}
-
 static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 					     const struct qbman_eq_desc *d,
 					     const struct qbman_fd *fd,
@@ -1769,14 +1612,6 @@ int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
 /* Static (push) dequeue */
 /*************************/
 
-void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled)
-{
-	uint16_t src = (s->sdq >> QB_SDQCR_SRC_SHIFT) & QB_SDQCR_SRC_MASK;
-
-	QBMAN_BUG_ON(channel_idx > 15);
-	*enabled = src | (1 << channel_idx);
-}
-
 void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable)
 {
 	uint16_t dqsrc;
@@ -1845,11 +1680,6 @@ void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
 	d->pull.numf = numframes - 1;
 }
 
-void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token)
-{
-	d->pull.tok = token;
-}
-
 void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid)
 {
 	d->pull.verb |= 1 << QB_VDQCR_VERB_DCT_SHIFT;
@@ -1857,34 +1687,6 @@ void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid)
 	d->pull.dq_src = fqid;
 }
 
-void qbman_pull_desc_set_wq(struct qbman_pull_desc *d, uint32_t wqid,
-			    enum qbman_pull_type_e dct)
-{
-	d->pull.verb |= dct << QB_VDQCR_VERB_DCT_SHIFT;
-	d->pull.verb |= qb_pull_dt_workqueue << QB_VDQCR_VERB_DT_SHIFT;
-	d->pull.dq_src = wqid;
-}
-
-void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
-				 enum qbman_pull_type_e dct)
-{
-	d->pull.verb |= dct << QB_VDQCR_VERB_DCT_SHIFT;
-	d->pull.verb |= qb_pull_dt_channel << QB_VDQCR_VERB_DT_SHIFT;
-	d->pull.dq_src = chid;
-}
-
-void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad)
-{
-	if (d->pull.verb & (1 << QB_VDQCR_VERB_RLS_SHIFT)) {
-		if (rad)
-			d->pull.verb |= 1 << QB_VDQCR_VERB_RAD_SHIFT;
-		else
-			d->pull.verb &= ~(1 << QB_VDQCR_VERB_RAD_SHIFT);
-	} else {
-		printf("The RAD feature is not valid when RLS = 0\n");
-	}
-}
-
 static int qbman_swp_pull_direct(struct qbman_swp *s,
 				 struct qbman_pull_desc *d)
 {
@@ -2303,47 +2105,6 @@ int qbman_result_is_DQ(const struct qbman_result *dq)
 	return __qbman_result_is_x(dq, QBMAN_RESULT_DQ);
 }
 
-int qbman_result_is_FQDAN(const struct qbman_result *dq)
-{
-	return __qbman_result_is_x(dq, QBMAN_RESULT_FQDAN);
-}
-
-int qbman_result_is_CDAN(const struct qbman_result *dq)
-{
-	return __qbman_result_is_x(dq, QBMAN_RESULT_CDAN);
-}
-
-int qbman_result_is_CSCN(const struct qbman_result *dq)
-{
-	return __qbman_result_is_x(dq, QBMAN_RESULT_CSCN_MEM) ||
-		__qbman_result_is_x(dq, QBMAN_RESULT_CSCN_WQ);
-}
-
-int qbman_result_is_BPSCN(const struct qbman_result *dq)
-{
-	return __qbman_result_is_x(dq, QBMAN_RESULT_BPSCN);
-}
-
-int qbman_result_is_CGCU(const struct qbman_result *dq)
-{
-	return __qbman_result_is_x(dq, QBMAN_RESULT_CGCU);
-}
-
-int qbman_result_is_FQRN(const struct qbman_result *dq)
-{
-	return __qbman_result_is_x(dq, QBMAN_RESULT_FQRN);
-}
-
-int qbman_result_is_FQRNI(const struct qbman_result *dq)
-{
-	return __qbman_result_is_x(dq, QBMAN_RESULT_FQRNI);
-}
-
-int qbman_result_is_FQPN(const struct qbman_result *dq)
-{
-	return __qbman_result_is_x(dq, QBMAN_RESULT_FQPN);
-}
-
 /*********************************/
 /* Parsing frame dequeue results */
 /*********************************/
@@ -2365,21 +2126,6 @@ uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq)
 	return dq->dq.oprid;
 }
 
-uint32_t qbman_result_DQ_fqid(const struct qbman_result *dq)
-{
-	return dq->dq.fqid;
-}
-
-uint32_t qbman_result_DQ_byte_count(const struct qbman_result *dq)
-{
-	return dq->dq.fq_byte_cnt;
-}
-
-uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq)
-{
-	return dq->dq.fq_frm_cnt;
-}
-
 uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq)
 {
 	return dq->dq.fqd_ctx;
@@ -2408,47 +2154,6 @@ uint64_t qbman_result_SCN_ctx(const struct qbman_result *scn)
 	return scn->scn.ctx;
 }
 
-/*****************/
-/* Parsing BPSCN */
-/*****************/
-uint16_t qbman_result_bpscn_bpid(const struct qbman_result *scn)
-{
-	return (uint16_t)qbman_result_SCN_rid(scn) & 0x3FFF;
-}
-
-int qbman_result_bpscn_has_free_bufs(const struct qbman_result *scn)
-{
-	return !(int)(qbman_result_SCN_state(scn) & 0x1);
-}
-
-int qbman_result_bpscn_is_depleted(const struct qbman_result *scn)
-{
-	return (int)(qbman_result_SCN_state(scn) & 0x2);
-}
-
-int qbman_result_bpscn_is_surplus(const struct qbman_result *scn)
-{
-	return (int)(qbman_result_SCN_state(scn) & 0x4);
-}
-
-uint64_t qbman_result_bpscn_ctx(const struct qbman_result *scn)
-{
-	return qbman_result_SCN_ctx(scn);
-}
-
-/*****************/
-/* Parsing CGCU  */
-/*****************/
-uint16_t qbman_result_cgcu_cgid(const struct qbman_result *scn)
-{
-	return (uint16_t)qbman_result_SCN_rid(scn) & 0xFFFF;
-}
-
-uint64_t qbman_result_cgcu_icnt(const struct qbman_result *scn)
-{
-	return qbman_result_SCN_ctx(scn);
-}
-
 /********************/
 /* Parsing EQ RESP  */
 /********************/
@@ -2492,14 +2197,6 @@ void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid)
 	d->br.bpid = bpid;
 }
 
-void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable)
-{
-	if (enable)
-		d->br.verb |= 1 << QB_BR_RCDI_SHIFT;
-	else
-		d->br.verb &= ~(1 << QB_BR_RCDI_SHIFT);
-}
-
 #define RAR_IDX(rar)     ((rar) & 0x7)
 #define RAR_VB(rar)      ((rar) & 0x80)
 #define RAR_SUCCESS(rar) ((rar) & 0x100)
@@ -2751,60 +2448,6 @@ struct qbman_alt_fq_state_rslt {
 
 #define ALT_FQ_FQID_MASK 0x00FFFFFF
 
-static int qbman_swp_alt_fq_state(struct qbman_swp *s, uint32_t fqid,
-				  uint8_t alt_fq_verb)
-{
-	struct qbman_alt_fq_state_desc *p;
-	struct qbman_alt_fq_state_rslt *r;
-
-	/* Start the management command */
-	p = qbman_swp_mc_start(s);
-	if (!p)
-		return -EBUSY;
-
-	p->fqid = fqid & ALT_FQ_FQID_MASK;
-
-	/* Complete the management command */
-	r = qbman_swp_mc_complete(s, p, alt_fq_verb);
-	if (!r) {
-		pr_err("qbman: mgmt cmd failed, no response (verb=0x%x)\n",
-		       alt_fq_verb);
-		return -EIO;
-	}
-
-	/* Decode the outcome */
-	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != alt_fq_verb);
-
-	/* Determine success or failure */
-	if (r->rslt != QBMAN_MC_RSLT_OK) {
-		pr_err("ALT FQID %d failed: verb = 0x%08x, code = 0x%02x\n",
-		       fqid, alt_fq_verb, r->rslt);
-		return -EIO;
-	}
-
-	return 0;
-}
-
-int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid)
-{
-	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_SCHEDULE);
-}
-
-int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid)
-{
-	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_FORCE);
-}
-
-int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid)
-{
-	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XON);
-}
-
-int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid)
-{
-	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XOFF);
-}
-
 /**********************/
 /* Channel management */
 /**********************/
@@ -2834,87 +2477,7 @@ struct qbman_cdan_ctrl_rslt {
 #define CODE_CDAN_WE_EN    0x1
 #define CODE_CDAN_WE_CTX   0x4
 
-static int qbman_swp_CDAN_set(struct qbman_swp *s, uint16_t channelid,
-			      uint8_t we_mask, uint8_t cdan_en,
-			      uint64_t ctx)
-{
-	struct qbman_cdan_ctrl_desc *p;
-	struct qbman_cdan_ctrl_rslt *r;
-
-	/* Start the management command */
-	p = qbman_swp_mc_start(s);
-	if (!p)
-		return -EBUSY;
-
-	/* Encode the caller-provided attributes */
-	p->ch = channelid;
-	p->we = we_mask;
-	if (cdan_en)
-		p->ctrl = 1;
-	else
-		p->ctrl = 0;
-	p->cdan_ctx = ctx;
-
-	/* Complete the management command */
-	r = qbman_swp_mc_complete(s, p, QBMAN_WQCHAN_CONFIGURE);
-	if (!r) {
-		pr_err("qbman: wqchan config failed, no response\n");
-		return -EIO;
-	}
-
-	/* Decode the outcome */
-	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK)
-		     != QBMAN_WQCHAN_CONFIGURE);
-
-	/* Determine success or failure */
-	if (r->rslt != QBMAN_MC_RSLT_OK) {
-		pr_err("CDAN cQID %d failed: code = 0x%02x\n",
-		       channelid, r->rslt);
-		return -EIO;
-	}
-
-	return 0;
-}
-
-int qbman_swp_CDAN_set_context(struct qbman_swp *s, uint16_t channelid,
-			       uint64_t ctx)
-{
-	return qbman_swp_CDAN_set(s, channelid,
-				  CODE_CDAN_WE_CTX,
-				  0, ctx);
-}
-
-int qbman_swp_CDAN_enable(struct qbman_swp *s, uint16_t channelid)
-{
-	return qbman_swp_CDAN_set(s, channelid,
-				  CODE_CDAN_WE_EN,
-				  1, 0);
-}
-
-int qbman_swp_CDAN_disable(struct qbman_swp *s, uint16_t channelid)
-{
-	return qbman_swp_CDAN_set(s, channelid,
-				  CODE_CDAN_WE_EN,
-				  0, 0);
-}
-
-int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
-				      uint64_t ctx)
-{
-	return qbman_swp_CDAN_set(s, channelid,
-				  CODE_CDAN_WE_EN | CODE_CDAN_WE_CTX,
-				  1, ctx);
-}
-
 uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr)
 {
 	return QBMAN_IDX_FROM_DQRR(dqrr);
 }
-
-struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx)
-{
-	struct qbman_result *dq;
-
-	dq = qbman_cena_read(&s->sys, QBMAN_CENA_SWP_DQRR(idx));
-	return dq;
-}
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 37d45dffe5..f6ded1717e 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -170,16 +170,6 @@ struct rte_fslmc_bus {
 __rte_internal
 void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
 
-/**
- * Unregister a DPAA2 driver.
- *
- * @param driver
- *   A pointer to a rte_dpaa2_driver structure describing the driver
- *   to be unregistered.
- */
-__rte_internal
-void rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver);
-
 /** Helper for DPAA2 device registration from driver (eth, crypto) instance */
 #define RTE_PMD_REGISTER_DPAA2(nm, dpaa2_drv) \
 RTE_INIT(dpaa2initfn_ ##nm) \
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index f44c1a7988..a95c0faa00 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -11,7 +11,6 @@ INTERNAL {
 	dpaa2_affine_qbman_swp;
 	dpaa2_alloc_dpbp_dev;
 	dpaa2_alloc_dq_storage;
-	dpaa2_dpbp_supported;
 	dpaa2_dqrr_size;
 	dpaa2_eqcr_size;
 	dpaa2_free_dpbp_dev;
@@ -28,8 +27,6 @@ INTERNAL {
 	dpbp_get_num_free_bufs;
 	dpbp_open;
 	dpbp_reset;
-	dpci_get_opr;
-	dpci_set_opr;
 	dpci_set_rx_queue;
 	dpcon_get_attributes;
 	dpcon_open;
@@ -61,12 +58,10 @@ INTERNAL {
 	qbman_eq_desc_set_fq;
 	qbman_eq_desc_set_no_orp;
 	qbman_eq_desc_set_orp;
-	qbman_eq_desc_set_qd;
 	qbman_eq_desc_set_response;
 	qbman_eq_desc_set_token;
 	qbman_fq_query_state;
 	qbman_fq_state_frame_count;
-	qbman_get_dqrr_from_idx;
 	qbman_get_dqrr_idx;
 	qbman_pull_desc_clear;
 	qbman_pull_desc_set_fq;
@@ -103,7 +98,6 @@ INTERNAL {
 	rte_dpaa2_intr_disable;
 	rte_dpaa2_intr_enable;
 	rte_fslmc_driver_register;
-	rte_fslmc_driver_unregister;
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
diff --git a/drivers/bus/ifpga/ifpga_common.c b/drivers/bus/ifpga/ifpga_common.c
index 78e2eaee4e..7281b169d0 100644
--- a/drivers/bus/ifpga/ifpga_common.c
+++ b/drivers/bus/ifpga/ifpga_common.c
@@ -52,29 +52,6 @@ int rte_ifpga_get_integer32_arg(const char *key __rte_unused,
 
 	return 0;
 }
-int ifpga_get_integer64_arg(const char *key __rte_unused,
-	const char *value, void *extra_args)
-{
-	if (!value || !extra_args)
-		return -EINVAL;
-
-	*(uint64_t *)extra_args = strtoull(value, NULL, 0);
-
-	return 0;
-}
-int ifpga_get_unsigned_long(const char *str, int base)
-{
-	unsigned long num;
-	char *end = NULL;
-
-	errno = 0;
-
-	num = strtoul(str, &end, base);
-	if ((str[0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0))
-		return -1;
-
-	return num;
-}
 
 int ifpga_afu_id_cmp(const struct rte_afu_id *afu_id0,
 	const struct rte_afu_id *afu_id1)
diff --git a/drivers/bus/ifpga/ifpga_common.h b/drivers/bus/ifpga/ifpga_common.h
index f9254b9d5d..44381eb78d 100644
--- a/drivers/bus/ifpga/ifpga_common.h
+++ b/drivers/bus/ifpga/ifpga_common.h
@@ -9,9 +9,6 @@ int rte_ifpga_get_string_arg(const char *key __rte_unused,
 	const char *value, void *extra_args);
 int rte_ifpga_get_integer32_arg(const char *key __rte_unused,
 	const char *value, void *extra_args);
-int ifpga_get_integer64_arg(const char *key __rte_unused,
-	const char *value, void *extra_args);
-int ifpga_get_unsigned_long(const char *str, int base);
 int ifpga_afu_id_cmp(const struct rte_afu_id *afu_id0,
 	const struct rte_afu_id *afu_id1);
 
diff --git a/drivers/common/dpaax/dpaa_of.c b/drivers/common/dpaax/dpaa_of.c
index bb2c8fc66b..ad96eb0b3d 100644
--- a/drivers/common/dpaax/dpaa_of.c
+++ b/drivers/common/dpaax/dpaa_of.c
@@ -242,33 +242,6 @@ of_init_path(const char *dt_path)
 	return 0;
 }
 
-static void
-destroy_dir(struct dt_dir *d)
-{
-	struct dt_file *f, *tmpf;
-	struct dt_dir *dd, *tmpd;
-
-	list_for_each_entry_safe(f, tmpf, &d->files, node.list) {
-		list_del(&f->node.list);
-		free(f);
-	}
-	list_for_each_entry_safe(dd, tmpd, &d->subdirs, node.list) {
-		destroy_dir(dd);
-		list_del(&dd->node.list);
-		free(dd);
-	}
-}
-
-void
-of_finish(void)
-{
-	DPAAX_HWWARN(!alive, "Double-finish of device-tree driver!");
-
-	destroy_dir(&root_dir);
-	INIT_LIST_HEAD(&linear);
-	alive = 0;
-}
-
 static const struct dt_dir *
 next_linear(const struct dt_dir *f)
 {
diff --git a/drivers/common/dpaax/dpaa_of.h b/drivers/common/dpaax/dpaa_of.h
index aed6bf98b0..0ba3794e9b 100644
--- a/drivers/common/dpaax/dpaa_of.h
+++ b/drivers/common/dpaax/dpaa_of.h
@@ -161,11 +161,6 @@ bool of_device_is_compatible(const struct device_node *dev_node,
 __rte_internal
 int of_init_path(const char *dt_path);
 
-/* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
- * full reload is desired without a process exit.
- */
-void of_finish(void);
-
 /* Use of this wrapper is recommended. */
 static inline int of_init(void)
 {
diff --git a/drivers/common/dpaax/dpaax_iova_table.c b/drivers/common/dpaax/dpaax_iova_table.c
index 91bee65e7b..357e62c164 100644
--- a/drivers/common/dpaax/dpaax_iova_table.c
+++ b/drivers/common/dpaax/dpaax_iova_table.c
@@ -346,45 +346,6 @@ dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length)
 	return 0;
 }
 
-/* dpaax_iova_table_dump
- * Dump the table, with its entries, on screen. Only works in Debug Mode
- * Not for weak hearted - the tables can get quite large
- */
-void
-dpaax_iova_table_dump(void)
-{
-	unsigned int i, j;
-	struct dpaax_iovat_element *entry;
-
-	/* In case DEBUG is not enabled, some 'if' conditions might misbehave
-	 * as they have nothing else in them  except a DPAAX_DEBUG() which if
-	 * tuned out would leave 'if' naked.
-	 */
-	if (rte_log_get_global_level() < RTE_LOG_DEBUG) {
-		DPAAX_ERR("Set log level to Debug for PA->Table dump!");
-		return;
-	}
-
-	DPAAX_DEBUG(" === Start of PA->VA Translation Table ===");
-	if (dpaax_iova_table_p == NULL)
-		DPAAX_DEBUG("\tNULL");
-
-	entry = dpaax_iova_table_p->entries;
-	for (i = 0; i < dpaax_iova_table_p->count; i++) {
-		DPAAX_DEBUG("\t(%16i),(%16"PRIu64"),(%16zu),(%16p)",
-			    i, entry[i].start, entry[i].len, entry[i].pages);
-		DPAAX_DEBUG("\t\t          (PA),          (VA)");
-		for (j = 0; j < (entry->len/DPAAX_MEM_SPLIT); j++) {
-			if (entry[i].pages[j] == 0)
-				continue;
-			DPAAX_DEBUG("\t\t(%16"PRIx64"),(%16"PRIx64")",
-				    (entry[i].start + (j * sizeof(uint64_t))),
-				    entry[i].pages[j]);
-		}
-	}
-	DPAAX_DEBUG(" === End of PA->VA Translation Table ===");
-}
-
 static void
 dpaax_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
 		  void *arg __rte_unused)
diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
index 230fba8ba0..8c3ce45f6a 100644
--- a/drivers/common/dpaax/dpaax_iova_table.h
+++ b/drivers/common/dpaax/dpaax_iova_table.h
@@ -67,8 +67,6 @@ __rte_internal
 void dpaax_iova_table_depopulate(void);
 __rte_internal
 int dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length);
-__rte_internal
-void dpaax_iova_table_dump(void);
 
 static inline void *dpaax_iova_table_get_va(phys_addr_t paddr) __rte_hot;
 
diff --git a/drivers/common/dpaax/version.map b/drivers/common/dpaax/version.map
index ee1ca6801c..7390954793 100644
--- a/drivers/common/dpaax/version.map
+++ b/drivers/common/dpaax/version.map
@@ -2,7 +2,6 @@ INTERNAL {
 	global:
 
 	dpaax_iova_table_depopulate;
-	dpaax_iova_table_dump;
 	dpaax_iova_table_p;
 	dpaax_iova_table_populate;
 	dpaax_iova_table_update;
diff --git a/drivers/common/iavf/iavf_common.c b/drivers/common/iavf/iavf_common.c
index c951b7d787..025c9e9ece 100644
--- a/drivers/common/iavf/iavf_common.c
+++ b/drivers/common/iavf/iavf_common.c
@@ -43,214 +43,6 @@ enum iavf_status iavf_set_mac_type(struct iavf_hw *hw)
 	return status;
 }
 
-/**
- * iavf_aq_str - convert AQ err code to a string
- * @hw: pointer to the HW structure
- * @aq_err: the AQ error code to convert
- **/
-const char *iavf_aq_str(struct iavf_hw *hw, enum iavf_admin_queue_err aq_err)
-{
-	switch (aq_err) {
-	case IAVF_AQ_RC_OK:
-		return "OK";
-	case IAVF_AQ_RC_EPERM:
-		return "IAVF_AQ_RC_EPERM";
-	case IAVF_AQ_RC_ENOENT:
-		return "IAVF_AQ_RC_ENOENT";
-	case IAVF_AQ_RC_ESRCH:
-		return "IAVF_AQ_RC_ESRCH";
-	case IAVF_AQ_RC_EINTR:
-		return "IAVF_AQ_RC_EINTR";
-	case IAVF_AQ_RC_EIO:
-		return "IAVF_AQ_RC_EIO";
-	case IAVF_AQ_RC_ENXIO:
-		return "IAVF_AQ_RC_ENXIO";
-	case IAVF_AQ_RC_E2BIG:
-		return "IAVF_AQ_RC_E2BIG";
-	case IAVF_AQ_RC_EAGAIN:
-		return "IAVF_AQ_RC_EAGAIN";
-	case IAVF_AQ_RC_ENOMEM:
-		return "IAVF_AQ_RC_ENOMEM";
-	case IAVF_AQ_RC_EACCES:
-		return "IAVF_AQ_RC_EACCES";
-	case IAVF_AQ_RC_EFAULT:
-		return "IAVF_AQ_RC_EFAULT";
-	case IAVF_AQ_RC_EBUSY:
-		return "IAVF_AQ_RC_EBUSY";
-	case IAVF_AQ_RC_EEXIST:
-		return "IAVF_AQ_RC_EEXIST";
-	case IAVF_AQ_RC_EINVAL:
-		return "IAVF_AQ_RC_EINVAL";
-	case IAVF_AQ_RC_ENOTTY:
-		return "IAVF_AQ_RC_ENOTTY";
-	case IAVF_AQ_RC_ENOSPC:
-		return "IAVF_AQ_RC_ENOSPC";
-	case IAVF_AQ_RC_ENOSYS:
-		return "IAVF_AQ_RC_ENOSYS";
-	case IAVF_AQ_RC_ERANGE:
-		return "IAVF_AQ_RC_ERANGE";
-	case IAVF_AQ_RC_EFLUSHED:
-		return "IAVF_AQ_RC_EFLUSHED";
-	case IAVF_AQ_RC_BAD_ADDR:
-		return "IAVF_AQ_RC_BAD_ADDR";
-	case IAVF_AQ_RC_EMODE:
-		return "IAVF_AQ_RC_EMODE";
-	case IAVF_AQ_RC_EFBIG:
-		return "IAVF_AQ_RC_EFBIG";
-	}
-
-	snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
-	return hw->err_str;
-}
-
-/**
- * iavf_stat_str - convert status err code to a string
- * @hw: pointer to the HW structure
- * @stat_err: the status error code to convert
- **/
-const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status stat_err)
-{
-	switch (stat_err) {
-	case IAVF_SUCCESS:
-		return "OK";
-	case IAVF_ERR_NVM:
-		return "IAVF_ERR_NVM";
-	case IAVF_ERR_NVM_CHECKSUM:
-		return "IAVF_ERR_NVM_CHECKSUM";
-	case IAVF_ERR_PHY:
-		return "IAVF_ERR_PHY";
-	case IAVF_ERR_CONFIG:
-		return "IAVF_ERR_CONFIG";
-	case IAVF_ERR_PARAM:
-		return "IAVF_ERR_PARAM";
-	case IAVF_ERR_MAC_TYPE:
-		return "IAVF_ERR_MAC_TYPE";
-	case IAVF_ERR_UNKNOWN_PHY:
-		return "IAVF_ERR_UNKNOWN_PHY";
-	case IAVF_ERR_LINK_SETUP:
-		return "IAVF_ERR_LINK_SETUP";
-	case IAVF_ERR_ADAPTER_STOPPED:
-		return "IAVF_ERR_ADAPTER_STOPPED";
-	case IAVF_ERR_INVALID_MAC_ADDR:
-		return "IAVF_ERR_INVALID_MAC_ADDR";
-	case IAVF_ERR_DEVICE_NOT_SUPPORTED:
-		return "IAVF_ERR_DEVICE_NOT_SUPPORTED";
-	case IAVF_ERR_MASTER_REQUESTS_PENDING:
-		return "IAVF_ERR_MASTER_REQUESTS_PENDING";
-	case IAVF_ERR_INVALID_LINK_SETTINGS:
-		return "IAVF_ERR_INVALID_LINK_SETTINGS";
-	case IAVF_ERR_AUTONEG_NOT_COMPLETE:
-		return "IAVF_ERR_AUTONEG_NOT_COMPLETE";
-	case IAVF_ERR_RESET_FAILED:
-		return "IAVF_ERR_RESET_FAILED";
-	case IAVF_ERR_SWFW_SYNC:
-		return "IAVF_ERR_SWFW_SYNC";
-	case IAVF_ERR_NO_AVAILABLE_VSI:
-		return "IAVF_ERR_NO_AVAILABLE_VSI";
-	case IAVF_ERR_NO_MEMORY:
-		return "IAVF_ERR_NO_MEMORY";
-	case IAVF_ERR_BAD_PTR:
-		return "IAVF_ERR_BAD_PTR";
-	case IAVF_ERR_RING_FULL:
-		return "IAVF_ERR_RING_FULL";
-	case IAVF_ERR_INVALID_PD_ID:
-		return "IAVF_ERR_INVALID_PD_ID";
-	case IAVF_ERR_INVALID_QP_ID:
-		return "IAVF_ERR_INVALID_QP_ID";
-	case IAVF_ERR_INVALID_CQ_ID:
-		return "IAVF_ERR_INVALID_CQ_ID";
-	case IAVF_ERR_INVALID_CEQ_ID:
-		return "IAVF_ERR_INVALID_CEQ_ID";
-	case IAVF_ERR_INVALID_AEQ_ID:
-		return "IAVF_ERR_INVALID_AEQ_ID";
-	case IAVF_ERR_INVALID_SIZE:
-		return "IAVF_ERR_INVALID_SIZE";
-	case IAVF_ERR_INVALID_ARP_INDEX:
-		return "IAVF_ERR_INVALID_ARP_INDEX";
-	case IAVF_ERR_INVALID_FPM_FUNC_ID:
-		return "IAVF_ERR_INVALID_FPM_FUNC_ID";
-	case IAVF_ERR_QP_INVALID_MSG_SIZE:
-		return "IAVF_ERR_QP_INVALID_MSG_SIZE";
-	case IAVF_ERR_QP_TOOMANY_WRS_POSTED:
-		return "IAVF_ERR_QP_TOOMANY_WRS_POSTED";
-	case IAVF_ERR_INVALID_FRAG_COUNT:
-		return "IAVF_ERR_INVALID_FRAG_COUNT";
-	case IAVF_ERR_QUEUE_EMPTY:
-		return "IAVF_ERR_QUEUE_EMPTY";
-	case IAVF_ERR_INVALID_ALIGNMENT:
-		return "IAVF_ERR_INVALID_ALIGNMENT";
-	case IAVF_ERR_FLUSHED_QUEUE:
-		return "IAVF_ERR_FLUSHED_QUEUE";
-	case IAVF_ERR_INVALID_PUSH_PAGE_INDEX:
-		return "IAVF_ERR_INVALID_PUSH_PAGE_INDEX";
-	case IAVF_ERR_INVALID_IMM_DATA_SIZE:
-		return "IAVF_ERR_INVALID_IMM_DATA_SIZE";
-	case IAVF_ERR_TIMEOUT:
-		return "IAVF_ERR_TIMEOUT";
-	case IAVF_ERR_OPCODE_MISMATCH:
-		return "IAVF_ERR_OPCODE_MISMATCH";
-	case IAVF_ERR_CQP_COMPL_ERROR:
-		return "IAVF_ERR_CQP_COMPL_ERROR";
-	case IAVF_ERR_INVALID_VF_ID:
-		return "IAVF_ERR_INVALID_VF_ID";
-	case IAVF_ERR_INVALID_HMCFN_ID:
-		return "IAVF_ERR_INVALID_HMCFN_ID";
-	case IAVF_ERR_BACKING_PAGE_ERROR:
-		return "IAVF_ERR_BACKING_PAGE_ERROR";
-	case IAVF_ERR_NO_PBLCHUNKS_AVAILABLE:
-		return "IAVF_ERR_NO_PBLCHUNKS_AVAILABLE";
-	case IAVF_ERR_INVALID_PBLE_INDEX:
-		return "IAVF_ERR_INVALID_PBLE_INDEX";
-	case IAVF_ERR_INVALID_SD_INDEX:
-		return "IAVF_ERR_INVALID_SD_INDEX";
-	case IAVF_ERR_INVALID_PAGE_DESC_INDEX:
-		return "IAVF_ERR_INVALID_PAGE_DESC_INDEX";
-	case IAVF_ERR_INVALID_SD_TYPE:
-		return "IAVF_ERR_INVALID_SD_TYPE";
-	case IAVF_ERR_MEMCPY_FAILED:
-		return "IAVF_ERR_MEMCPY_FAILED";
-	case IAVF_ERR_INVALID_HMC_OBJ_INDEX:
-		return "IAVF_ERR_INVALID_HMC_OBJ_INDEX";
-	case IAVF_ERR_INVALID_HMC_OBJ_COUNT:
-		return "IAVF_ERR_INVALID_HMC_OBJ_COUNT";
-	case IAVF_ERR_INVALID_SRQ_ARM_LIMIT:
-		return "IAVF_ERR_INVALID_SRQ_ARM_LIMIT";
-	case IAVF_ERR_SRQ_ENABLED:
-		return "IAVF_ERR_SRQ_ENABLED";
-	case IAVF_ERR_ADMIN_QUEUE_ERROR:
-		return "IAVF_ERR_ADMIN_QUEUE_ERROR";
-	case IAVF_ERR_ADMIN_QUEUE_TIMEOUT:
-		return "IAVF_ERR_ADMIN_QUEUE_TIMEOUT";
-	case IAVF_ERR_BUF_TOO_SHORT:
-		return "IAVF_ERR_BUF_TOO_SHORT";
-	case IAVF_ERR_ADMIN_QUEUE_FULL:
-		return "IAVF_ERR_ADMIN_QUEUE_FULL";
-	case IAVF_ERR_ADMIN_QUEUE_NO_WORK:
-		return "IAVF_ERR_ADMIN_QUEUE_NO_WORK";
-	case IAVF_ERR_BAD_IWARP_CQE:
-		return "IAVF_ERR_BAD_IWARP_CQE";
-	case IAVF_ERR_NVM_BLANK_MODE:
-		return "IAVF_ERR_NVM_BLANK_MODE";
-	case IAVF_ERR_NOT_IMPLEMENTED:
-		return "IAVF_ERR_NOT_IMPLEMENTED";
-	case IAVF_ERR_PE_DOORBELL_NOT_ENABLED:
-		return "IAVF_ERR_PE_DOORBELL_NOT_ENABLED";
-	case IAVF_ERR_DIAG_TEST_FAILED:
-		return "IAVF_ERR_DIAG_TEST_FAILED";
-	case IAVF_ERR_NOT_READY:
-		return "IAVF_ERR_NOT_READY";
-	case IAVF_NOT_SUPPORTED:
-		return "IAVF_NOT_SUPPORTED";
-	case IAVF_ERR_FIRMWARE_API_VERSION:
-		return "IAVF_ERR_FIRMWARE_API_VERSION";
-	case IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
-		return "IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
-	}
-
-	snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
-	return hw->err_str;
-}
-
 /**
  * iavf_debug_aq
  * @hw: debug mask related to admin queue
@@ -362,164 +154,6 @@ enum iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw,
 	return status;
 }
 
-/**
- * iavf_aq_get_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- * @set: set true to set the table, false to get the table
- *
- * Internal function to get or set RSS look up table
- **/
-STATIC enum iavf_status iavf_aq_get_set_rss_lut(struct iavf_hw *hw,
-						u16 vsi_id, bool pf_lut,
-						u8 *lut, u16 lut_size,
-						bool set)
-{
-	enum iavf_status status;
-	struct iavf_aq_desc desc;
-	struct iavf_aqc_get_set_rss_lut *cmd_resp =
-		   (struct iavf_aqc_get_set_rss_lut *)&desc.params.raw;
-
-	if (set)
-		iavf_fill_default_direct_cmd_desc(&desc,
-						  iavf_aqc_opc_set_rss_lut);
-	else
-		iavf_fill_default_direct_cmd_desc(&desc,
-						  iavf_aqc_opc_get_rss_lut);
-
-	/* Indirect command */
-	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF);
-	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD);
-
-	cmd_resp->vsi_id =
-			CPU_TO_LE16((u16)((vsi_id <<
-					  IAVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
-					  IAVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
-	cmd_resp->vsi_id |= CPU_TO_LE16((u16)IAVF_AQC_SET_RSS_LUT_VSI_VALID);
-
-	if (pf_lut)
-		cmd_resp->flags |= CPU_TO_LE16((u16)
-					((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
-					IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
-					IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
-	else
-		cmd_resp->flags |= CPU_TO_LE16((u16)
-					((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
-					IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
-					IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
-
-	status = iavf_asq_send_command(hw, &desc, lut, lut_size, NULL);
-
-	return status;
-}
-
-/**
- * iavf_aq_get_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * get the RSS lookup table, PF or VSI type
- **/
-enum iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 vsi_id,
-				     bool pf_lut, u8 *lut, u16 lut_size)
-{
-	return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
-				       false);
-}
-
-/**
- * iavf_aq_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * set the RSS lookup table, PF or VSI type
- **/
-enum iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 vsi_id,
-				     bool pf_lut, u8 *lut, u16 lut_size)
-{
-	return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
-}
-
-/**
- * iavf_aq_get_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- * @set: set true to set the key, false to get the key
- *
- * get the RSS key per VSI
- **/
-STATIC enum iavf_status iavf_aq_get_set_rss_key(struct iavf_hw *hw,
-				      u16 vsi_id,
-				      struct iavf_aqc_get_set_rss_key_data *key,
-				      bool set)
-{
-	enum iavf_status status;
-	struct iavf_aq_desc desc;
-	struct iavf_aqc_get_set_rss_key *cmd_resp =
-			(struct iavf_aqc_get_set_rss_key *)&desc.params.raw;
-	u16 key_size = sizeof(struct iavf_aqc_get_set_rss_key_data);
-
-	if (set)
-		iavf_fill_default_direct_cmd_desc(&desc,
-						  iavf_aqc_opc_set_rss_key);
-	else
-		iavf_fill_default_direct_cmd_desc(&desc,
-						  iavf_aqc_opc_get_rss_key);
-
-	/* Indirect command */
-	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF);
-	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD);
-
-	cmd_resp->vsi_id =
-			CPU_TO_LE16((u16)((vsi_id <<
-					  IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
-					  IAVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
-	cmd_resp->vsi_id |= CPU_TO_LE16((u16)IAVF_AQC_SET_RSS_KEY_VSI_VALID);
-
-	status = iavf_asq_send_command(hw, &desc, key, key_size, NULL);
-
-	return status;
-}
-
-/**
- * iavf_aq_get_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- **/
-enum iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw,
-				     u16 vsi_id,
-				     struct iavf_aqc_get_set_rss_key_data *key)
-{
-	return iavf_aq_get_set_rss_key(hw, vsi_id, key, false);
-}
-
-/**
- * iavf_aq_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- * set the RSS key per VSI
- **/
-enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw,
-				     u16 vsi_id,
-				     struct iavf_aqc_get_set_rss_key_data *key)
-{
-	return iavf_aq_get_set_rss_key(hw, vsi_id, key, true);
-}
-
 /* The iavf_ptype_lookup table is used to convert from the 8-bit ptype in the
  * hardware to a bit-field that can be used by SW to more easily determine the
  * packet type.
@@ -885,30 +519,6 @@ struct iavf_rx_ptype_decoded iavf_ptype_lookup[] = {
 	IAVF_PTT_UNUSED_ENTRY(255)
 };
 
-/**
- * iavf_validate_mac_addr - Validate unicast MAC address
- * @mac_addr: pointer to MAC address
- *
- * Tests a MAC address to ensure it is a valid Individual Address
- **/
-enum iavf_status iavf_validate_mac_addr(u8 *mac_addr)
-{
-	enum iavf_status status = IAVF_SUCCESS;
-
-	DEBUGFUNC("iavf_validate_mac_addr");
-
-	/* Broadcast addresses ARE multicast addresses
-	 * Make sure it is not a multicast address
-	 * Reject the zero address
-	 */
-	if (IAVF_IS_MULTICAST(mac_addr) ||
-	    (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
-	      mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
-		status = IAVF_ERR_INVALID_MAC_ADDR;
-
-	return status;
-}
-
 /**
  * iavf_aq_send_msg_to_pf
  * @hw: pointer to the hardware structure
@@ -989,38 +599,3 @@ void iavf_vf_parse_hw_config(struct iavf_hw *hw,
 		vsi_res++;
 	}
 }
-
-/**
- * iavf_vf_reset
- * @hw: pointer to the hardware structure
- *
- * Send a VF_RESET message to the PF. Does not wait for response from PF
- * as none will be forthcoming. Immediately after calling this function,
- * the admin queue should be shut down and (optionally) reinitialized.
- **/
-enum iavf_status iavf_vf_reset(struct iavf_hw *hw)
-{
-	return iavf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
-				      IAVF_SUCCESS, NULL, 0, NULL);
-}
-
-/**
-* iavf_aq_clear_all_wol_filters
-* @hw: pointer to the hw struct
-* @cmd_details: pointer to command details structure or NULL
-*
-* Get information for the reason of a Wake Up event
-**/
-enum iavf_status iavf_aq_clear_all_wol_filters(struct iavf_hw *hw,
-			struct iavf_asq_cmd_details *cmd_details)
-{
-	struct iavf_aq_desc desc;
-	enum iavf_status status;
-
-	iavf_fill_default_direct_cmd_desc(&desc,
-					  iavf_aqc_opc_clear_all_wol_filters);
-
-	status = iavf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
diff --git a/drivers/common/iavf/iavf_prototype.h b/drivers/common/iavf/iavf_prototype.h
index f34e77db0f..5d5deacfe2 100644
--- a/drivers/common/iavf/iavf_prototype.h
+++ b/drivers/common/iavf/iavf_prototype.h
@@ -30,7 +30,6 @@ enum iavf_status iavf_shutdown_arq(struct iavf_hw *hw);
 u16 iavf_clean_asq(struct iavf_hw *hw);
 void iavf_free_adminq_asq(struct iavf_hw *hw);
 void iavf_free_adminq_arq(struct iavf_hw *hw);
-enum iavf_status iavf_validate_mac_addr(u8 *mac_addr);
 void iavf_adminq_init_ring_data(struct iavf_hw *hw);
 __rte_internal
 enum iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
@@ -51,19 +50,6 @@ void iavf_idle_aq(struct iavf_hw *hw);
 bool iavf_check_asq_alive(struct iavf_hw *hw);
 enum iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading);
 
-enum iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 seid,
-				     bool pf_lut, u8 *lut, u16 lut_size);
-enum iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 seid,
-				     bool pf_lut, u8 *lut, u16 lut_size);
-enum iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw,
-				     u16 seid,
-				     struct iavf_aqc_get_set_rss_key_data *key);
-enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw,
-				     u16 seid,
-				     struct iavf_aqc_get_set_rss_key_data *key);
-const char *iavf_aq_str(struct iavf_hw *hw, enum iavf_admin_queue_err aq_err);
-const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status stat_err);
-
 __rte_internal
 enum iavf_status iavf_set_mac_type(struct iavf_hw *hw);
 
@@ -83,7 +69,6 @@ void iavf_destroy_spinlock(struct iavf_spinlock *sp);
 __rte_internal
 void iavf_vf_parse_hw_config(struct iavf_hw *hw,
 			     struct virtchnl_vf_resource *msg);
-enum iavf_status iavf_vf_reset(struct iavf_hw *hw);
 __rte_internal
 enum iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
 				enum virtchnl_ops v_opcode,
@@ -95,6 +80,4 @@ enum iavf_status iavf_aq_debug_dump(struct iavf_hw *hw, u8 cluster_id,
 				    void *buff, u16 *ret_buff_size,
 				    u8 *ret_next_table, u32 *ret_next_index,
 				    struct iavf_asq_cmd_details *cmd_details);
-enum iavf_status iavf_aq_clear_all_wol_filters(struct iavf_hw *hw,
-			struct iavf_asq_cmd_details *cmd_details);
 #endif /* _IAVF_PROTOTYPE_H_ */
diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c
index 6df1e8ea63..e65fe602f7 100644
--- a/drivers/common/octeontx2/otx2_mbox.c
+++ b/drivers/common/octeontx2/otx2_mbox.c
@@ -381,19 +381,6 @@ otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid)
 	return otx2_mbox_wait_for_rsp_tmo(mbox, devid, MBOX_RSP_TIMEOUT);
 }
 
-int
-otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid)
-{
-	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
-	int avail;
-
-	rte_spinlock_lock(&mdev->mbox_lock);
-	avail = mbox->tx_size - mdev->msg_size - msgs_offset();
-	rte_spinlock_unlock(&mdev->mbox_lock);
-
-	return avail;
-}
-
 int
 otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pcifunc)
 {
diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h
index f6d884c198..7d9c018597 100644
--- a/drivers/common/octeontx2/otx2_mbox.h
+++ b/drivers/common/octeontx2/otx2_mbox.h
@@ -1785,7 +1785,6 @@ int otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg);
 __rte_internal
 int otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
 			  uint32_t tmo);
-int otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid);
 __rte_internal
 struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
 					    int size, int size_rsp);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index aa7fad6d70..d23e58ff6d 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -399,25 +399,6 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
 	return 0;
 }
 
-int
-bcmfs_sym_dev_destroy(struct bcmfs_device *fsdev)
-{
-	struct rte_cryptodev *cryptodev;
-
-	if (fsdev == NULL)
-		return -ENODEV;
-	if (fsdev->sym_dev == NULL)
-		return 0;
-
-	/* free crypto device */
-	cryptodev = rte_cryptodev_pmd_get_dev(fsdev->sym_dev->sym_dev_id);
-	rte_cryptodev_pmd_destroy(cryptodev);
-	fsdev->sym_rte_dev.name = NULL;
-	fsdev->sym_dev = NULL;
-
-	return 0;
-}
-
 static struct cryptodev_driver bcmfs_crypto_drv;
 RTE_PMD_REGISTER_CRYPTO_DRIVER(bcmfs_crypto_drv,
 			       cryptodev_bcmfs_sym_driver,
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
index 65d7046090..d9ddd024ff 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
@@ -32,7 +32,4 @@ struct bcmfs_sym_dev_private {
 int
 bcmfs_sym_dev_create(struct bcmfs_device *fdev);
 
-int
-bcmfs_sym_dev_destroy(struct bcmfs_device *fdev);
-
 #endif /* _BCMFS_SYM_PMD_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c b/drivers/crypto/bcmfs/bcmfs_vfio.c
index dc2def580f..81994d9d56 100644
--- a/drivers/crypto/bcmfs/bcmfs_vfio.c
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
@@ -74,34 +74,10 @@ bcmfs_attach_vfio(struct bcmfs_device *dev)
 
 	return 0;
 }
-
-void
-bcmfs_release_vfio(struct bcmfs_device *dev)
-{
-	int ret;
-
-	if (dev == NULL)
-		return;
-
-	/* unmap the addr */
-	munmap(dev->mmap_addr, dev->mmap_size);
-	/* release the device */
-	ret = rte_vfio_release_device(dev->dirname, dev->name,
-				      dev->vfio_dev_fd);
-	if (ret < 0) {
-		BCMFS_LOG(ERR, "cannot release device");
-		return;
-	}
-}
 #else
 int
 bcmfs_attach_vfio(struct bcmfs_device *dev __rte_unused)
 {
 	return -1;
 }
-
-void
-bcmfs_release_vfio(struct bcmfs_device *dev __rte_unused)
-{
-}
 #endif
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.h b/drivers/crypto/bcmfs/bcmfs_vfio.h
index d0fdf6483f..4177bc1fee 100644
--- a/drivers/crypto/bcmfs/bcmfs_vfio.h
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.h
@@ -10,8 +10,4 @@
 int
 bcmfs_attach_vfio(struct bcmfs_device *dev);
 
-/* Release the bcmfs device from vfio */
-void
-bcmfs_release_vfio(struct bcmfs_device *dev);
-
 #endif /* _BCMFS_VFIO_H_ */
diff --git a/drivers/crypto/caam_jr/caam_jr_pvt.h b/drivers/crypto/caam_jr/caam_jr_pvt.h
index 552d6b9b1b..60cf1fa45b 100644
--- a/drivers/crypto/caam_jr/caam_jr_pvt.h
+++ b/drivers/crypto/caam_jr/caam_jr_pvt.h
@@ -222,7 +222,6 @@ struct uio_job_ring {
 	int uio_minor_number;
 };
 
-int sec_cleanup(void);
 int sec_configure(void);
 void sec_uio_job_rings_init(void);
 struct uio_job_ring *config_job_ring(void);
diff --git a/drivers/crypto/caam_jr/caam_jr_uio.c b/drivers/crypto/caam_jr/caam_jr_uio.c
index e4ee102344..60c551e4f2 100644
--- a/drivers/crypto/caam_jr/caam_jr_uio.c
+++ b/drivers/crypto/caam_jr/caam_jr_uio.c
@@ -471,34 +471,6 @@ sec_configure(void)
 	return config_jr_no;
 }
 
-int
-sec_cleanup(void)
-{
-	int i;
-	struct uio_job_ring *job_ring;
-
-	for (i = 0; i < g_uio_jr_num; i++) {
-		job_ring = &g_uio_job_ring[i];
-		/* munmap SEC's register memory */
-		if (job_ring->register_base_addr) {
-			munmap(job_ring->register_base_addr,
-				job_ring->map_size);
-			job_ring->register_base_addr = NULL;
-		}
-		/* I need to close the fd after shutdown UIO commands need to be
-		 * sent using the fd
-		 */
-		if (job_ring->uio_fd != -1) {
-			CAAM_JR_INFO(
-			"Closed device file for job ring %d , fd = %d",
-			job_ring->jr_id, job_ring->uio_fd);
-			close(job_ring->uio_fd);
-			job_ring->uio_fd = -1;
-		}
-	}
-	return 0;
-}
-
 void
 sec_uio_job_rings_init(void)
 {
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index 664ddc1747..fc34b6a639 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -62,26 +62,6 @@ ccp_allot_queue(struct rte_cryptodev *cdev, int slot_req)
 	return NULL;
 }
 
-int
-ccp_read_hwrng(uint32_t *value)
-{
-	struct ccp_device *dev;
-
-	TAILQ_FOREACH(dev, &ccp_list, next) {
-		void *vaddr = (void *)(dev->pci.mem_resource[2].addr);
-
-		while (dev->hwrng_retries++ < CCP_MAX_TRNG_RETRIES) {
-			*value = CCP_READ_REG(vaddr, TRNG_OUT_REG);
-			if (*value) {
-				dev->hwrng_retries = 0;
-				return 0;
-			}
-		}
-		dev->hwrng_retries = 0;
-	}
-	return -1;
-}
-
 static const struct rte_memzone *
 ccp_queue_dma_zone_reserve(const char *queue_name,
 			   uint32_t queue_size,
@@ -180,28 +160,6 @@ ccp_bitmap_set(unsigned long *map, unsigned int start, int len)
 	}
 }
 
-static void
-ccp_bitmap_clear(unsigned long *map, unsigned int start, int len)
-{
-	unsigned long *p = map + WORD_OFFSET(start);
-	const unsigned int size = start + len;
-	int bits_to_clear = BITS_PER_WORD - (start % BITS_PER_WORD);
-	unsigned long mask_to_clear = CCP_BITMAP_FIRST_WORD_MASK(start);
-
-	while (len - bits_to_clear >= 0) {
-		*p &= ~mask_to_clear;
-		len -= bits_to_clear;
-		bits_to_clear = BITS_PER_WORD;
-		mask_to_clear = ~0UL;
-		p++;
-	}
-	if (len) {
-		mask_to_clear &= CCP_BITMAP_LAST_WORD_MASK(size);
-		*p &= ~mask_to_clear;
-	}
-}
-
-
 static unsigned long
 _ccp_find_next_bit(const unsigned long *addr,
 		   unsigned long nbits,
@@ -312,29 +270,6 @@ ccp_lsb_alloc(struct ccp_queue *cmd_q, unsigned int count)
 	return 0;
 }
 
-static void __rte_unused
-ccp_lsb_free(struct ccp_queue *cmd_q,
-	     unsigned int start,
-	     unsigned int count)
-{
-	int lsbno = start / LSB_SIZE;
-
-	if (!start)
-		return;
-
-	if (cmd_q->lsb == lsbno) {
-		/* An entry from the private LSB */
-		ccp_bitmap_clear(cmd_q->lsbmap, start % LSB_SIZE, count);
-	} else {
-		/* From the shared LSBs */
-		struct ccp_device *ccp = cmd_q->dev;
-
-		rte_spinlock_lock(&ccp->lsb_lock);
-		ccp_bitmap_clear(ccp->lsbmap, start, count);
-		rte_spinlock_unlock(&ccp->lsb_lock);
-	}
-}
-
 static int
 ccp_find_lsb_regions(struct ccp_queue *cmd_q, uint64_t status)
 {
diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h
index 37e04218ce..8bfce5d9fb 100644
--- a/drivers/crypto/ccp/ccp_dev.h
+++ b/drivers/crypto/ccp/ccp_dev.h
@@ -484,12 +484,4 @@ int ccp_probe_devices(const struct rte_pci_id *ccp_id);
  */
 struct ccp_queue *ccp_allot_queue(struct rte_cryptodev *dev, int slot_req);
 
-/**
- * read hwrng value
- *
- * @param trng_value data pointer to write RNG value
- * @return 0 on success otherwise -1
- */
-int ccp_read_hwrng(uint32_t *trng_value);
-
 #endif /* _CCP_DEV_H_ */
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
index 87e0defdc6..52bfd72f50 100644
--- a/drivers/crypto/dpaa2_sec/mc/dpseci.c
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -80,96 +80,6 @@ int dpseci_close(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpseci_create() - Create the DPSECI object
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg:	Configuration structure
- * @obj_id:	Returned object id
- *
- * Create the DPSECI object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpseci_create(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  const struct dpseci_cfg *cfg,
-		  uint32_t *obj_id)
-{
-	struct dpseci_cmd_create *cmd_params;
-	struct mc_command cmd = { 0 };
-	int err, i;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CREATE,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpseci_cmd_create *)cmd.params;
-	for (i = 0; i < 8; i++)
-		cmd_params->priorities[i] = cfg->priorities[i];
-	for (i = 0; i < 8; i++)
-		cmd_params->priorities2[i] = cfg->priorities[8 + i];
-	cmd_params->num_tx_queues = cfg->num_tx_queues;
-	cmd_params->num_rx_queues = cfg->num_rx_queues;
-	cmd_params->options = cpu_to_le32(cfg->options);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*obj_id = mc_cmd_read_object_id(&cmd);
-
-	return 0;
-}
-
-/**
- * dpseci_destroy() - Destroy the DPSECI object and release all its resources.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id:	The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpseci_destroy(struct fsl_mc_io *mc_io,
-		   uint16_t dprc_token,
-		   uint32_t cmd_flags,
-		   uint32_t object_id)
-{
-	struct dpseci_cmd_destroy *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DESTROY,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpseci_cmd_destroy *)cmd.params;
-	cmd_params->dpseci_id = cpu_to_le32(object_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpseci_enable() - Enable the DPSECI, allow sending and receiving frames.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -216,41 +126,6 @@ int dpseci_disable(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpseci_is_enabled() - Check if the DPSECI is enabled.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPSECI object
- * @en:		Returns '1' if object is enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpseci_is_enabled(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      int *en)
-{
-	struct dpseci_rsp_is_enabled *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_IS_ENABLED,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpseci_rsp_is_enabled *)cmd.params;
-	*en = dpseci_get_field(rsp_params->en, ENABLE);
-
-	return 0;
-}
-
 /**
  * dpseci_reset() - Reset the DPSECI, returns the object to initial state.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -446,59 +321,6 @@ int dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
-/**
- * dpseci_get_sec_attr() - Retrieve SEC accelerator attributes.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPSECI object
- * @attr:	Returned SEC attributes
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
-			uint32_t cmd_flags,
-			uint16_t token,
-			struct dpseci_sec_attr *attr)
-{
-	struct dpseci_rsp_get_sec_attr *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_ATTR,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpseci_rsp_get_sec_attr *)cmd.params;
-	attr->ip_id = le16_to_cpu(rsp_params->ip_id);
-	attr->major_rev = rsp_params->major_rev;
-	attr->minor_rev = rsp_params->minor_rev;
-	attr->era = rsp_params->era;
-	attr->deco_num = rsp_params->deco_num;
-	attr->zuc_auth_acc_num = rsp_params->zuc_auth_acc_num;
-	attr->zuc_enc_acc_num = rsp_params->zuc_enc_acc_num;
-	attr->snow_f8_acc_num = rsp_params->snow_f8_acc_num;
-	attr->snow_f9_acc_num = rsp_params->snow_f9_acc_num;
-	attr->crc_acc_num = rsp_params->crc_acc_num;
-	attr->pk_acc_num = rsp_params->pk_acc_num;
-	attr->kasumi_acc_num = rsp_params->kasumi_acc_num;
-	attr->rng_acc_num = rsp_params->rng_acc_num;
-	attr->md_acc_num = rsp_params->md_acc_num;
-	attr->arc4_acc_num = rsp_params->arc4_acc_num;
-	attr->des_acc_num = rsp_params->des_acc_num;
-	attr->aes_acc_num = rsp_params->aes_acc_num;
-	attr->ccha_acc_num = rsp_params->ccha_acc_num;
-	attr->ptha_acc_num = rsp_params->ptha_acc_num;
-
-	return 0;
-}
-
 /**
  * dpseci_get_sec_counters() - Retrieve SEC accelerator counters.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -540,226 +362,3 @@ int dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
 
 	return 0;
 }
-
-/**
- * dpseci_get_api_version() - Get Data Path SEC Interface API version
- * @mc_io:  Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver:	Major version of data path sec API
- * @minor_ver:	Minor version of data path sec API
- *
- * Return:  '0' on Success; Error code otherwise.
- */
-int dpseci_get_api_version(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t *major_ver,
-			   uint16_t *minor_ver)
-{
-	struct dpseci_rsp_get_api_version *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_API_VERSION,
-					cmd_flags,
-					0);
-
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	rsp_params = (struct dpseci_rsp_get_api_version *)cmd.params;
-	*major_ver = le16_to_cpu(rsp_params->major);
-	*minor_ver = le16_to_cpu(rsp_params->minor);
-
-	return 0;
-}
-
-/**
- * dpseci_set_opr() - Set Order Restoration configuration.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPSECI object
- * @index:	The queue index
- * @options:	Configuration mode options
- *			can be OPR_OPT_CREATE or OPR_OPT_RETIRE
- * @cfg:	Configuration options for the OPR
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpseci_set_opr(struct fsl_mc_io *mc_io,
-		   uint32_t cmd_flags,
-		   uint16_t token,
-		   uint8_t index,
-		   uint8_t options,
-		   struct opr_cfg *cfg)
-{
-	struct dpseci_cmd_set_opr *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_OPR,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpseci_cmd_set_opr *)cmd.params;
-	cmd_params->index = index;
-	cmd_params->options = options;
-	cmd_params->oloe = cfg->oloe;
-	cmd_params->oeane = cfg->oeane;
-	cmd_params->olws = cfg->olws;
-	cmd_params->oa = cfg->oa;
-	cmd_params->oprrws = cfg->oprrws;
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpseci_get_opr() - Retrieve Order Restoration config and query.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPSECI object
- * @index:	The queue index
- * @cfg:	Returned OPR configuration
- * @qry:	Returned OPR query
- *
- * Return:     '0' on Success; Error code otherwise.
- */
-int dpseci_get_opr(struct fsl_mc_io *mc_io,
-		   uint32_t cmd_flags,
-		   uint16_t token,
-		   uint8_t index,
-		   struct opr_cfg *cfg,
-		   struct opr_qry *qry)
-{
-	struct dpseci_rsp_get_opr *rsp_params;
-	struct dpseci_cmd_get_opr *cmd_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_OPR,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpseci_cmd_get_opr *)cmd.params;
-	cmd_params->index = index;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpseci_rsp_get_opr *)cmd.params;
-	cfg->oloe = rsp_params->oloe;
-	cfg->oeane = rsp_params->oeane;
-	cfg->olws = rsp_params->olws;
-	cfg->oa = rsp_params->oa;
-	cfg->oprrws = rsp_params->oprrws;
-	qry->rip = dpseci_get_field(rsp_params->flags, RIP);
-	qry->enable = dpseci_get_field(rsp_params->flags, OPR_ENABLE);
-	qry->nesn = le16_to_cpu(rsp_params->nesn);
-	qry->ndsn = le16_to_cpu(rsp_params->ndsn);
-	qry->ea_tseq = le16_to_cpu(rsp_params->ea_tseq);
-	qry->tseq_nlis = dpseci_get_field(rsp_params->tseq_nlis, TSEQ_NLIS);
-	qry->ea_hseq = le16_to_cpu(rsp_params->ea_hseq);
-	qry->hseq_nlis = dpseci_get_field(rsp_params->hseq_nlis, HSEQ_NLIS);
-	qry->ea_hptr = le16_to_cpu(rsp_params->ea_hptr);
-	qry->ea_tptr = le16_to_cpu(rsp_params->ea_tptr);
-	qry->opr_vid = le16_to_cpu(rsp_params->opr_vid);
-	qry->opr_id = le16_to_cpu(rsp_params->opr_id);
-
-	return 0;
-}
-
-/**
- * dpseci_set_congestion_notification() - Set congestion group
- *	notification configuration
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPSECI object
- * @cfg:	congestion notification configuration
- *
- * Return:	'0' on success, error code otherwise
- */
-int dpseci_set_congestion_notification(
-			struct fsl_mc_io *mc_io,
-			uint32_t cmd_flags,
-			uint16_t token,
-			const struct dpseci_congestion_notification_cfg *cfg)
-{
-	struct dpseci_cmd_set_congestion_notification *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(
-			DPSECI_CMDID_SET_CONGESTION_NOTIFICATION,
-			cmd_flags,
-			token);
-
-	cmd_params =
-		(struct dpseci_cmd_set_congestion_notification *)cmd.params;
-	cmd_params->dest_id = cfg->dest_cfg.dest_id;
-	cmd_params->dest_priority = cfg->dest_cfg.priority;
-	cmd_params->message_ctx = cfg->message_ctx;
-	cmd_params->message_iova = cfg->message_iova;
-	cmd_params->notification_mode = cfg->notification_mode;
-	cmd_params->threshold_entry = cfg->threshold_entry;
-	cmd_params->threshold_exit = cfg->threshold_exit;
-	dpseci_set_field(cmd_params->type_units,
-			 DEST_TYPE,
-			 cfg->dest_cfg.dest_type);
-	dpseci_set_field(cmd_params->type_units,
-			 CG_UNITS,
-			 cfg->units);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpseci_get_congestion_notification() - Get congestion group
- *	notification configuration
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPSECI object
- * @cfg:	congestion notification configuration
- *
- * Return:	'0' on success, error code otherwise
- */
-int dpseci_get_congestion_notification(
-				struct fsl_mc_io *mc_io,
-				uint32_t cmd_flags,
-				uint16_t token,
-				struct dpseci_congestion_notification_cfg *cfg)
-{
-	struct dpseci_cmd_set_congestion_notification *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(
-			DPSECI_CMDID_GET_CONGESTION_NOTIFICATION,
-			cmd_flags,
-			token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	rsp_params =
-		(struct dpseci_cmd_set_congestion_notification *)cmd.params;
-
-	cfg->dest_cfg.dest_id = le32_to_cpu(rsp_params->dest_id);
-	cfg->dest_cfg.priority = rsp_params->dest_priority;
-	cfg->notification_mode = le16_to_cpu(rsp_params->notification_mode);
-	cfg->message_ctx = le64_to_cpu(rsp_params->message_ctx);
-	cfg->message_iova = le64_to_cpu(rsp_params->message_iova);
-	cfg->threshold_entry = le32_to_cpu(rsp_params->threshold_entry);
-	cfg->threshold_exit = le32_to_cpu(rsp_params->threshold_exit);
-	cfg->units = dpseci_get_field(rsp_params->type_units, CG_UNITS);
-	cfg->dest_cfg.dest_type = dpseci_get_field(rsp_params->type_units,
-						DEST_TYPE);
-
-	return 0;
-}
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
index 279e8f4d4a..fbbfd40815 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -61,17 +61,6 @@ struct dpseci_cfg {
 	uint8_t priorities[DPSECI_MAX_QUEUE_NUM];
 };
 
-int dpseci_create(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  const struct dpseci_cfg *cfg,
-		  uint32_t *obj_id);
-
-int dpseci_destroy(struct fsl_mc_io *mc_io,
-		   uint16_t dprc_token,
-		   uint32_t cmd_flags,
-		   uint32_t object_id);
-
 int dpseci_enable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token);
@@ -80,11 +69,6 @@ int dpseci_disable(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint16_t token);
 
-int dpseci_is_enabled(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      int *en);
-
 int dpseci_reset(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -287,11 +271,6 @@ struct dpseci_sec_attr {
 	uint8_t ptha_acc_num;
 };
 
-int dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
-			uint32_t cmd_flags,
-			uint16_t token,
-			struct dpseci_sec_attr *attr);
-
 /**
  * struct dpseci_sec_counters - Structure representing global SEC counters and
  *				not per dpseci counters
@@ -318,25 +297,6 @@ int dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
 			    uint16_t token,
 			    struct dpseci_sec_counters *counters);
 
-int dpseci_get_api_version(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t *major_ver,
-			   uint16_t *minor_ver);
-
-int dpseci_set_opr(struct fsl_mc_io *mc_io,
-		   uint32_t cmd_flags,
-		   uint16_t token,
-		   uint8_t index,
-		   uint8_t options,
-		   struct opr_cfg *cfg);
-
-int dpseci_get_opr(struct fsl_mc_io *mc_io,
-		   uint32_t cmd_flags,
-		   uint16_t token,
-		   uint8_t index,
-		   struct opr_cfg *cfg,
-		   struct opr_qry *qry);
-
 /**
  * enum dpseci_congestion_unit - DPSECI congestion units
  * @DPSECI_CONGESTION_UNIT_BYTES: bytes units
@@ -405,16 +365,4 @@ struct dpseci_congestion_notification_cfg {
 	uint16_t notification_mode;
 };
 
-int dpseci_set_congestion_notification(
-			struct fsl_mc_io *mc_io,
-			uint32_t cmd_flags,
-			uint16_t token,
-			const struct dpseci_congestion_notification_cfg *cfg);
-
-int dpseci_get_congestion_notification(
-			struct fsl_mc_io *mc_io,
-			uint32_t cmd_flags,
-			uint16_t token,
-			struct dpseci_congestion_notification_cfg *cfg);
-
 #endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/virtio/virtio_pci.c b/drivers/crypto/virtio/virtio_pci.c
index ae069794a6..40bd748094 100644
--- a/drivers/crypto/virtio/virtio_pci.c
+++ b/drivers/crypto/virtio/virtio_pci.c
@@ -246,13 +246,6 @@ vtpci_read_cryptodev_config(struct virtio_crypto_hw *hw, size_t offset,
 	VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length);
 }
 
-void
-vtpci_write_cryptodev_config(struct virtio_crypto_hw *hw, size_t offset,
-		const void *src, int length)
-{
-	VTPCI_OPS(hw)->write_dev_cfg(hw, offset, src, length);
-}
-
 uint64_t
 vtpci_cryptodev_negotiate_features(struct virtio_crypto_hw *hw,
 		uint64_t host_features)
@@ -298,12 +291,6 @@ vtpci_cryptodev_get_status(struct virtio_crypto_hw *hw)
 	return VTPCI_OPS(hw)->get_status(hw);
 }
 
-uint8_t
-vtpci_cryptodev_isr(struct virtio_crypto_hw *hw)
-{
-	return VTPCI_OPS(hw)->get_isr(hw);
-}
-
 static void *
 get_cfg_addr(struct rte_pci_device *dev, struct virtio_pci_cap *cap)
 {
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index d9a214dfd0..3092b56952 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -242,12 +242,7 @@ void vtpci_cryptodev_set_status(struct virtio_crypto_hw *hw, uint8_t status);
 uint64_t vtpci_cryptodev_negotiate_features(struct virtio_crypto_hw *hw,
 	uint64_t host_features);
 
-void vtpci_write_cryptodev_config(struct virtio_crypto_hw *hw, size_t offset,
-	const void *src, int length);
-
 void vtpci_read_cryptodev_config(struct virtio_crypto_hw *hw, size_t offset,
 	void *dst, int length);
 
-uint8_t vtpci_cryptodev_isr(struct virtio_crypto_hw *hw);
-
 #endif /* _VIRTIO_PCI_H_ */
diff --git a/drivers/event/dlb/dlb_priv.h b/drivers/event/dlb/dlb_priv.h
index 58ff4287df..deaf467090 100644
--- a/drivers/event/dlb/dlb_priv.h
+++ b/drivers/event/dlb/dlb_priv.h
@@ -470,8 +470,6 @@ void dlb_eventdev_dump(struct rte_eventdev *dev, FILE *f);
 
 int dlb_xstats_init(struct dlb_eventdev *dlb);
 
-void dlb_xstats_uninit(struct dlb_eventdev *dlb);
-
 int dlb_eventdev_xstats_get(const struct rte_eventdev *dev,
 			    enum rte_event_dev_xstats_mode mode,
 			    uint8_t queue_port_id, const unsigned int ids[],
diff --git a/drivers/event/dlb/dlb_xstats.c b/drivers/event/dlb/dlb_xstats.c
index 5f4c590307..6678a8b322 100644
--- a/drivers/event/dlb/dlb_xstats.c
+++ b/drivers/event/dlb/dlb_xstats.c
@@ -578,13 +578,6 @@ dlb_xstats_init(struct dlb_eventdev *dlb)
 	return 0;
 }
 
-void
-dlb_xstats_uninit(struct dlb_eventdev *dlb)
-{
-	rte_free(dlb->xstats);
-	dlb->xstats_count = 0;
-}
-
 int
 dlb_eventdev_xstats_get_names(const struct rte_eventdev *dev,
 		enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index b73cf3ff14..56bd4ebe1b 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -536,8 +536,6 @@ void dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f);
 
 int dlb2_xstats_init(struct dlb2_eventdev *dlb2);
 
-void dlb2_xstats_uninit(struct dlb2_eventdev *dlb2);
-
 int dlb2_eventdev_xstats_get(const struct rte_eventdev *dev,
 		enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
 		const unsigned int ids[], uint64_t values[], unsigned int n);
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index 8c3c3cda94..574fca89e8 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -634,13 +634,6 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
 	return 0;
 }
 
-void
-dlb2_xstats_uninit(struct dlb2_eventdev *dlb2)
-{
-	rte_free(dlb2->xstats);
-	dlb2->xstats_count = 0;
-}
-
 int
 dlb2_eventdev_xstats_get_names(const struct rte_eventdev *dev,
 		enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
diff --git a/drivers/event/opdl/opdl_ring.c b/drivers/event/opdl/opdl_ring.c
index 69392b56bb..3ddfcaf67c 100644
--- a/drivers/event/opdl/opdl_ring.c
+++ b/drivers/event/opdl/opdl_ring.c
@@ -586,52 +586,6 @@ opdl_stage_claim_multithread(struct opdl_stage *s, void *entries,
 	return i;
 }
 
-/* Claim and copy slot pointers, optimised for single-thread operation */
-static __rte_always_inline uint32_t
-opdl_stage_claim_copy_singlethread(struct opdl_stage *s, void *entries,
-		uint32_t num_entries, uint32_t *seq, bool block)
-{
-	num_entries = num_to_process(s, num_entries, block);
-	if (num_entries == 0)
-		return 0;
-	copy_entries_out(s->t, s->head, entries, num_entries);
-	if (seq != NULL)
-		*seq = s->head;
-	s->head += num_entries;
-	return num_entries;
-}
-
-/* Thread-safe version of function to claim and copy pointers to slots */
-static __rte_always_inline uint32_t
-opdl_stage_claim_copy_multithread(struct opdl_stage *s, void *entries,
-		uint32_t num_entries, uint32_t *seq, bool block)
-{
-	uint32_t old_head;
-
-	move_head_atomically(s, &num_entries, &old_head, block, true);
-	if (num_entries == 0)
-		return 0;
-	copy_entries_out(s->t, old_head, entries, num_entries);
-	if (seq != NULL)
-		*seq = old_head;
-	return num_entries;
-}
-
-static __rte_always_inline void
-opdl_stage_disclaim_singlethread_n(struct opdl_stage *s,
-		uint32_t num_entries)
-{
-	uint32_t old_tail = s->shared.tail;
-
-	if (unlikely(num_entries > (s->head - old_tail))) {
-		PMD_DRV_LOG(WARNING, "Attempt to disclaim (%u) more than claimed (%u)",
-				num_entries, s->head - old_tail);
-		num_entries = s->head - old_tail;
-	}
-	__atomic_store_n(&s->shared.tail, num_entries + old_tail,
-			__ATOMIC_RELEASE);
-}
-
 uint32_t
 opdl_ring_input(struct opdl_ring *t, const void *entries, uint32_t num_entries,
 		bool block)
@@ -644,26 +598,6 @@ opdl_ring_input(struct opdl_ring *t, const void *entries, uint32_t num_entries,
 				block);
 }
 
-uint32_t
-opdl_ring_copy_from_burst(struct opdl_ring *t, struct opdl_stage *s,
-		const void *entries, uint32_t num_entries, bool block)
-{
-	uint32_t head = s->head;
-
-	num_entries = num_to_process(s, num_entries, block);
-
-	if (num_entries == 0)
-		return 0;
-
-	copy_entries_in(t, head, entries, num_entries);
-
-	s->head += num_entries;
-	__atomic_store_n(&s->shared.tail, s->head, __ATOMIC_RELEASE);
-
-	return num_entries;
-
-}
-
 uint32_t
 opdl_ring_copy_to_burst(struct opdl_ring *t, struct opdl_stage *s,
 		void *entries, uint32_t num_entries, bool block)
@@ -682,25 +616,6 @@ opdl_ring_copy_to_burst(struct opdl_ring *t, struct opdl_stage *s,
 	return num_entries;
 }
 
-uint32_t
-opdl_stage_find_num_available(struct opdl_stage *s, uint32_t num_entries)
-{
-	/* return (num_to_process(s, num_entries, false)); */
-
-	if (available(s) >= num_entries)
-		return num_entries;
-
-	update_available_seq(s);
-
-	uint32_t avail = available(s);
-
-	if (avail == 0) {
-		rte_pause();
-		return 0;
-	}
-	return (avail <= num_entries) ? avail : num_entries;
-}
-
 uint32_t
 opdl_stage_claim(struct opdl_stage *s, void *entries,
 		uint32_t num_entries, uint32_t *seq, bool block, bool atomic)
@@ -713,41 +628,6 @@ opdl_stage_claim(struct opdl_stage *s, void *entries,
 				seq, block);
 }
 
-uint32_t
-opdl_stage_claim_copy(struct opdl_stage *s, void *entries,
-		uint32_t num_entries, uint32_t *seq, bool block)
-{
-	if (s->threadsafe == false)
-		return opdl_stage_claim_copy_singlethread(s, entries,
-				num_entries, seq, block);
-	else
-		return opdl_stage_claim_copy_multithread(s, entries,
-				num_entries, seq, block);
-}
-
-void
-opdl_stage_disclaim_n(struct opdl_stage *s, uint32_t num_entries,
-		bool block)
-{
-
-	if (s->threadsafe == false) {
-		opdl_stage_disclaim_singlethread_n(s, s->num_claimed);
-	} else {
-		struct claim_manager *disclaims =
-			&s->pending_disclaims[rte_lcore_id()];
-
-		if (unlikely(num_entries > s->num_slots)) {
-			PMD_DRV_LOG(WARNING, "Attempt to disclaim (%u) more than claimed (%u)",
-					num_entries, disclaims->num_claimed);
-			num_entries = disclaims->num_claimed;
-		}
-
-		num_entries = RTE_MIN(num_entries + disclaims->num_to_disclaim,
-				disclaims->num_claimed);
-		opdl_stage_disclaim_multithread_n(s, num_entries, block);
-	}
-}
-
 int
 opdl_stage_disclaim(struct opdl_stage *s, uint32_t num_entries, bool block)
 {
@@ -769,12 +649,6 @@ opdl_stage_disclaim(struct opdl_stage *s, uint32_t num_entries, bool block)
 	return num_entries;
 }
 
-uint32_t
-opdl_ring_available(struct opdl_ring *t)
-{
-	return opdl_stage_available(&t->stages[0]);
-}
-
 uint32_t
 opdl_stage_available(struct opdl_stage *s)
 {
@@ -782,14 +656,6 @@ opdl_stage_available(struct opdl_stage *s)
 	return available(s);
 }
 
-void
-opdl_ring_flush(struct opdl_ring *t)
-{
-	struct opdl_stage *s = input_stage(t);
-
-	wait_for_available(s, s->num_slots);
-}
-
 /******************** Non performance sensitive functions ********************/
 
 /* Initial setup of a new stage's context */
@@ -962,12 +828,6 @@ opdl_ring_create(const char *name, uint32_t num_slots, uint32_t slot_size,
 	return NULL;
 }
 
-void *
-opdl_ring_get_slot(const struct opdl_ring *t, uint32_t index)
-{
-	return get_slot(t, index);
-}
-
 bool
 opdl_ring_cas_slot(struct opdl_stage *s, const struct rte_event *ev,
 		uint32_t index, bool atomic)
@@ -1046,24 +906,6 @@ opdl_ring_cas_slot(struct opdl_stage *s, const struct rte_event *ev,
 	return ev_updated;
 }
 
-int
-opdl_ring_get_socket(const struct opdl_ring *t)
-{
-	return t->socket;
-}
-
-uint32_t
-opdl_ring_get_num_slots(const struct opdl_ring *t)
-{
-	return t->num_slots;
-}
-
-const char *
-opdl_ring_get_name(const struct opdl_ring *t)
-{
-	return t->name;
-}
-
 /* Check dependency list is valid for a given opdl_ring */
 static int
 check_deps(struct opdl_ring *t, struct opdl_stage *deps[],
@@ -1146,36 +988,6 @@ opdl_stage_deps_add(struct opdl_ring *t, struct opdl_stage *s,
 	return ret;
 }
 
-struct opdl_stage *
-opdl_ring_get_input_stage(const struct opdl_ring *t)
-{
-	return input_stage(t);
-}
-
-int
-opdl_stage_set_deps(struct opdl_stage *s, struct opdl_stage *deps[],
-		uint32_t num_deps)
-{
-	unsigned int i;
-	int ret;
-
-	if ((num_deps == 0) || (!deps)) {
-		PMD_DRV_LOG(ERR, "cannot set NULL dependencies");
-		return -EINVAL;
-	}
-
-	ret = check_deps(s->t, deps, num_deps);
-	if (ret < 0)
-		return ret;
-
-	/* Update deps */
-	for (i = 0; i < num_deps; i++)
-		s->deps[i] = &deps[i]->shared;
-	s->num_deps = num_deps;
-
-	return 0;
-}
-
 struct opdl_ring *
 opdl_stage_get_opdl_ring(const struct opdl_stage *s)
 {
@@ -1245,25 +1057,3 @@ opdl_ring_free(struct opdl_ring *t)
 	if (rte_memzone_free(mz) != 0)
 		PMD_DRV_LOG(ERR, "Cannot free memzone for %s", t->name);
 }
-
-/* search a opdl_ring from its name */
-struct opdl_ring *
-opdl_ring_lookup(const char *name)
-{
-	const struct rte_memzone *mz;
-	char mz_name[RTE_MEMZONE_NAMESIZE];
-
-	snprintf(mz_name, sizeof(mz_name), "%s%s", LIB_NAME, name);
-
-	mz = rte_memzone_lookup(mz_name);
-	if (mz == NULL)
-		return NULL;
-
-	return mz->addr;
-}
-
-void
-opdl_ring_set_stage_threadsafe(struct opdl_stage *s, bool threadsafe)
-{
-	s->threadsafe = threadsafe;
-}
diff --git a/drivers/event/opdl/opdl_ring.h b/drivers/event/opdl/opdl_ring.h
index 14ababe0bb..c9e2ab6b1b 100644
--- a/drivers/event/opdl/opdl_ring.h
+++ b/drivers/event/opdl/opdl_ring.h
@@ -83,57 +83,6 @@ struct opdl_ring *
 opdl_ring_create(const char *name, uint32_t num_slots, uint32_t slot_size,
 		uint32_t max_num_stages, int socket);
 
-/**
- * Get pointer to individual slot in a opdl_ring.
- *
- * @param t
- *   The opdl_ring.
- * @param index
- *   Index of slot. If greater than the number of slots it will be masked to be
- *   within correct range.
- *
- * @return
- *   A pointer to that slot.
- */
-void *
-opdl_ring_get_slot(const struct opdl_ring *t, uint32_t index);
-
-/**
- * Get NUMA socket used by a opdl_ring.
- *
- * @param t
- *   The opdl_ring.
- *
- * @return
- *   NUMA socket.
- */
-int
-opdl_ring_get_socket(const struct opdl_ring *t);
-
-/**
- * Get number of slots in a opdl_ring.
- *
- * @param t
- *   The opdl_ring.
- *
- * @return
- *   Number of slots.
- */
-uint32_t
-opdl_ring_get_num_slots(const struct opdl_ring *t);
-
-/**
- * Get name of a opdl_ring.
- *
- * @param t
- *   The opdl_ring.
- *
- * @return
- *   Name string.
- */
-const char *
-opdl_ring_get_name(const struct opdl_ring *t);
-
 /**
  * Adds a new processing stage to a specified opdl_ring instance. Adding a stage
  * while there are entries in the opdl_ring being processed will cause undefined
@@ -160,38 +109,6 @@ opdl_ring_get_name(const struct opdl_ring *t);
 struct opdl_stage *
 opdl_stage_add(struct opdl_ring *t, bool threadsafe, bool is_input);
 
-/**
- * Returns the input stage of a opdl_ring to be used by other API functions.
- *
- * @param t
- *   The opdl_ring.
- *
- * @return
- *   A pointer to the input stage.
- */
-struct opdl_stage *
-opdl_ring_get_input_stage(const struct opdl_ring *t);
-
-/**
- * Sets the dependencies for a stage (clears all the previous deps!). Changing
- * dependencies while there are entries in the opdl_ring being processed will
- * cause undefined behaviour.
- *
- * @param s
- *   The stage to set the dependencies for.
- * @param deps
- *   An array of pointers to other stages that this stage will depends on. The
- *   other stages must be part of the same opdl_ring!
- * @param num_deps
- *   The size of the deps array. This must be > 0.
- *
- * @return
- *   0 on success, a negative value on error.
- */
-int
-opdl_stage_set_deps(struct opdl_stage *s, struct opdl_stage *deps[],
-		uint32_t num_deps);
-
 /**
  * Returns the opdl_ring that a stage belongs to.
  *
@@ -228,32 +145,6 @@ uint32_t
 opdl_ring_input(struct opdl_ring *t, const void *entries, uint32_t num_entries,
 		bool block);
 
-/**
- * Inputs a new batch of entries into a opdl stage. This function is only
- * threadsafe (with the same opdl parameter) if the threadsafe parameter of
- * opdl_create() was true. For performance reasons, this function does not
- * check input parameters.
- *
- * @param t
- *   The opdl ring to input entries in to.
- * @param s
- *   The stage to copy entries to.
- * @param entries
- *   An array of entries that will be copied in to the opdl ring.
- * @param num_entries
- *   The size of the entries array.
- * @param block
- *   If this is true, the function blocks until enough slots are available to
- *   input all the requested entries. If false, then the function inputs as
- *   many entries as currently possible.
- *
- * @return
- *   The number of entries successfully input.
- */
-uint32_t
-opdl_ring_copy_from_burst(struct opdl_ring *t, struct opdl_stage *s,
-			const void *entries, uint32_t num_entries, bool block);
-
 /**
  * Copy a batch of entries from the opdl ring. This function is only
  * threadsafe (with the same opdl parameter) if the threadsafe parameter of
@@ -368,41 +259,6 @@ opdl_stage_claim_check(struct opdl_stage *s, void **entries,
 		uint32_t num_entries, uint32_t *seq, bool block,
 		opdl_ring_check_entries_t *check, void *arg);
 
-/**
- * Before processing a batch of entries, a stage must first claim them to get
- * access. This function is threadsafe using same opdl_stage parameter if
- * the stage was created with threadsafe set to true, otherwise it is only
- * threadsafe with a different opdl_stage per thread.
- *
- * The difference between this function and opdl_stage_claim() is that this
- * function copies the entries from the opdl_ring. Note that any changes made to
- * the copied entries will not be reflected back in to the entries in the
- * opdl_ring, so this function probably only makes sense if the entries are
- * pointers to other data. For performance reasons, this function does not check
- * input parameters.
- *
- * @param s
- *   The opdl_ring stage to read entries in.
- * @param entries
- *   An array of entries that will be filled in by this function.
- * @param num_entries
- *   The number of entries to attempt to claim for processing (and the size of
- *   the entries array).
- * @param seq
- *   If not NULL, this is set to the value of the internal stage sequence number
- *   associated with the first entry returned.
- * @param block
- *   If this is true, the function blocks until num_entries slots are available
- *   to process. If false, then the function claims as many entries as
- *   currently possible.
- *
- * @return
- *   The number of entries copied in to the entries array.
- */
-uint32_t
-opdl_stage_claim_copy(struct opdl_stage *s, void *entries,
-		uint32_t num_entries, uint32_t *seq, bool block);
-
 /**
  * This function must be called when a stage has finished its processing of
  * entries, to make them available to any dependent stages. All entries that are
@@ -433,48 +289,6 @@ int
 opdl_stage_disclaim(struct opdl_stage *s, uint32_t num_entries,
 		bool block);
 
-/**
- * This function can be called when a stage has finished its processing of
- * entries, to make them available to any dependent stages. The difference
- * between this function and opdl_stage_disclaim() is that here only a
- * portion of entries are disclaimed, not all of them. For performance reasons,
- * this function does not check input parameters.
- *
- * @param s
- *   The opdl_ring stage in which to disclaim entries.
- *
- * @param num_entries
- *   The number of entries to disclaim.
- *
- * @param block
- *   Entries are always made available to a stage in the same order that they
- *   were input in the stage. If a stage is multithread safe, this may mean that
- *   full disclaiming of a batch of entries can not be considered complete until
- *   all earlier threads in the stage have disclaimed. If this parameter is true
- *   then the function blocks until the specified number of entries has been
- *   disclaimed (or there are no more entries to disclaim). Otherwise it
- *   disclaims as many claims as currently possible and an attempt to disclaim
- *   them is made the next time a claim or disclaim function for this stage on
- *   this thread is called.
- *
- *   In a single threaded stage, this parameter has no effect.
- */
-void
-opdl_stage_disclaim_n(struct opdl_stage *s, uint32_t num_entries,
-		bool block);
-
-/**
- * Check how many entries can be input.
- *
- * @param t
- *   The opdl_ring instance to check.
- *
- * @return
- *   The number of new entries currently allowed to be input.
- */
-uint32_t
-opdl_ring_available(struct opdl_ring *t);
-
 /**
  * Check how many entries can be processed in a stage.
  *
@@ -487,23 +301,6 @@ opdl_ring_available(struct opdl_ring *t);
 uint32_t
 opdl_stage_available(struct opdl_stage *s);
 
-/**
- * Check how many entries are available to be processed.
- *
- * NOTE : DOES NOT CHANGE ANY STATE WITHIN THE STAGE
- *
- * @param s
- *   The stage to check.
- *
- * @param num_entries
- *   The number of entries to check for availability.
- *
- * @return
- *   The number of entries currently available to be processed in this stage.
- */
-uint32_t
-opdl_stage_find_num_available(struct opdl_stage *s, uint32_t num_entries);
-
 /**
  * Create empty stage instance and return the pointer.
  *
@@ -543,15 +340,6 @@ opdl_stage_set_queue_id(struct opdl_stage *s,
 void
 opdl_ring_dump(const struct opdl_ring *t, FILE *f);
 
-/**
- * Blocks until all entries in a opdl_ring have been processed by all stages.
- *
- * @param t
- *   The opdl_ring instance to flush.
- */
-void
-opdl_ring_flush(struct opdl_ring *t);
-
 /**
  * Deallocates all resources used by a opdl_ring instance
  *
@@ -561,30 +349,6 @@ opdl_ring_flush(struct opdl_ring *t);
 void
 opdl_ring_free(struct opdl_ring *t);
 
-/**
- * Search for a opdl_ring by its name
- *
- * @param name
- *   The name of the opdl_ring.
- * @return
- *   The pointer to the opdl_ring matching the name, or NULL if not found.
- *
- */
-struct opdl_ring *
-opdl_ring_lookup(const char *name);
-
-/**
- * Set a opdl_stage to threadsafe variable.
- *
- * @param s
- *   The opdl_stage.
- * @param threadsafe
- *   Threadsafe value.
- */
-void
-opdl_ring_set_stage_threadsafe(struct opdl_stage *s, bool threadsafe);
-
-
 /**
  * Compare the event descriptor with original version in the ring.
  * if key field event descriptor is changed by application, then
diff --git a/drivers/net/ark/ark_ddm.c b/drivers/net/ark/ark_ddm.c
index 91d1179d88..2a6aa93ffe 100644
--- a/drivers/net/ark/ark_ddm.c
+++ b/drivers/net/ark/ark_ddm.c
@@ -92,19 +92,6 @@ ark_ddm_dump(struct ark_ddm_t *ddm, const char *msg)
 		     );
 }
 
-void
-ark_ddm_dump_stats(struct ark_ddm_t *ddm, const char *msg)
-{
-	struct ark_ddm_stats_t *stats = &ddm->stats;
-
-	ARK_PMD_LOG(INFO, "DDM Stats: %s"
-		      ARK_SU64 ARK_SU64 ARK_SU64
-		      "\n", msg,
-		      "Bytes:", stats->tx_byte_count,
-		      "Packets:", stats->tx_pkt_count,
-		      "MBufs", stats->tx_mbuf_count);
-}
-
 int
 ark_ddm_is_stopped(struct ark_ddm_t *ddm)
 {
diff --git a/drivers/net/ark/ark_ddm.h b/drivers/net/ark/ark_ddm.h
index 5456b4b5cc..5b722b6ede 100644
--- a/drivers/net/ark/ark_ddm.h
+++ b/drivers/net/ark/ark_ddm.h
@@ -141,7 +141,6 @@ void ark_ddm_reset(struct ark_ddm_t *ddm);
 void ark_ddm_stats_reset(struct ark_ddm_t *ddm);
 void ark_ddm_setup(struct ark_ddm_t *ddm, rte_iova_t cons_addr,
 		   uint32_t interval);
-void ark_ddm_dump_stats(struct ark_ddm_t *ddm, const char *msg);
 void ark_ddm_dump(struct ark_ddm_t *ddm, const char *msg);
 int ark_ddm_is_stopped(struct ark_ddm_t *ddm);
 uint64_t ark_ddm_queue_byte_count(struct ark_ddm_t *ddm);
diff --git a/drivers/net/ark/ark_pktchkr.c b/drivers/net/ark/ark_pktchkr.c
index b8fb69497d..5a7e686f0e 100644
--- a/drivers/net/ark/ark_pktchkr.c
+++ b/drivers/net/ark/ark_pktchkr.c
@@ -15,7 +15,6 @@
 #include "ark_logs.h"
 
 static int set_arg(char *arg, char *val);
-static int ark_pktchkr_is_gen_forever(ark_pkt_chkr_t handle);
 
 #define ARK_MAX_STR_LEN 64
 union OPTV {
@@ -136,15 +135,6 @@ ark_pktchkr_stop(ark_pkt_chkr_t handle)
 	ARK_PMD_LOG(DEBUG, "Pktchk %d stopped.\n", inst->ordinal);
 }
 
-int
-ark_pktchkr_is_running(ark_pkt_chkr_t handle)
-{
-	struct ark_pkt_chkr_inst *inst = (struct ark_pkt_chkr_inst *)handle;
-	uint32_t r = inst->sregs->pkt_start_stop;
-
-	return ((r & 1) == 1);
-}
-
 static void
 ark_pktchkr_set_pkt_ctrl(ark_pkt_chkr_t handle,
 			 uint32_t gen_forever,
@@ -173,48 +163,6 @@ ark_pktchkr_set_pkt_ctrl(ark_pkt_chkr_t handle,
 	inst->cregs->pkt_ctrl = r;
 }
 
-static
-int
-ark_pktchkr_is_gen_forever(ark_pkt_chkr_t handle)
-{
-	struct ark_pkt_chkr_inst *inst = (struct ark_pkt_chkr_inst *)handle;
-	uint32_t r = inst->cregs->pkt_ctrl;
-
-	return (((r >> 24) & 1) == 1);
-}
-
-int
-ark_pktchkr_wait_done(ark_pkt_chkr_t handle)
-{
-	struct ark_pkt_chkr_inst *inst = (struct ark_pkt_chkr_inst *)handle;
-
-	if (ark_pktchkr_is_gen_forever(handle)) {
-		ARK_PMD_LOG(NOTICE, "Pktchk wait_done will not terminate"
-			      " because gen_forever=1\n");
-		return -1;
-	}
-	int wait_cycle = 10;
-
-	while (!ark_pktchkr_stopped(handle) && (wait_cycle > 0)) {
-		usleep(1000);
-		wait_cycle--;
-		ARK_PMD_LOG(DEBUG, "Waiting for packet checker %d's"
-			      " internal pktgen to finish sending...\n",
-			      inst->ordinal);
-		ARK_PMD_LOG(DEBUG, "Pktchk %d's pktgen done.\n",
-			      inst->ordinal);
-	}
-	return 0;
-}
-
-int
-ark_pktchkr_get_pkts_sent(ark_pkt_chkr_t handle)
-{
-	struct ark_pkt_chkr_inst *inst = (struct ark_pkt_chkr_inst *)handle;
-
-	return inst->cregs->pkts_sent;
-}
-
 void
 ark_pktchkr_set_payload_byte(ark_pkt_chkr_t handle, uint32_t b)
 {
diff --git a/drivers/net/ark/ark_pktchkr.h b/drivers/net/ark/ark_pktchkr.h
index b362281776..2b0ba17d90 100644
--- a/drivers/net/ark/ark_pktchkr.h
+++ b/drivers/net/ark/ark_pktchkr.h
@@ -69,8 +69,6 @@ void ark_pktchkr_uninit(ark_pkt_chkr_t handle);
 void ark_pktchkr_run(ark_pkt_chkr_t handle);
 int ark_pktchkr_stopped(ark_pkt_chkr_t handle);
 void ark_pktchkr_stop(ark_pkt_chkr_t handle);
-int ark_pktchkr_is_running(ark_pkt_chkr_t handle);
-int ark_pktchkr_get_pkts_sent(ark_pkt_chkr_t handle);
 void ark_pktchkr_set_payload_byte(ark_pkt_chkr_t handle, uint32_t b);
 void ark_pktchkr_set_pkt_size_min(ark_pkt_chkr_t handle, uint32_t x);
 void ark_pktchkr_set_pkt_size_max(ark_pkt_chkr_t handle, uint32_t x);
@@ -83,6 +81,5 @@ void ark_pktchkr_set_hdr_dW(ark_pkt_chkr_t handle, uint32_t *hdr);
 void ark_pktchkr_parse(char *args);
 void ark_pktchkr_setup(ark_pkt_chkr_t handle);
 void ark_pktchkr_dump_stats(ark_pkt_chkr_t handle);
-int ark_pktchkr_wait_done(ark_pkt_chkr_t handle);
 
 #endif
diff --git a/drivers/net/ark/ark_pktdir.c b/drivers/net/ark/ark_pktdir.c
index 25e1218310..00bf165bff 100644
--- a/drivers/net/ark/ark_pktdir.c
+++ b/drivers/net/ark/ark_pktdir.c
@@ -26,31 +26,9 @@ ark_pktdir_init(void *base)
 	return inst;
 }
 
-void
-ark_pktdir_uninit(ark_pkt_dir_t handle)
-{
-	struct ark_pkt_dir_inst *inst = (struct ark_pkt_dir_inst *)handle;
-
-	rte_free(inst);
-}
-
 void
 ark_pktdir_setup(ark_pkt_dir_t handle, uint32_t v)
 {
 	struct ark_pkt_dir_inst *inst = (struct ark_pkt_dir_inst *)handle;
 	inst->regs->ctrl = v;
 }
-
-uint32_t
-ark_pktdir_status(ark_pkt_dir_t handle)
-{
-	struct ark_pkt_dir_inst *inst = (struct ark_pkt_dir_inst *)handle;
-	return inst->regs->ctrl;
-}
-
-uint32_t
-ark_pktdir_stall_cnt(ark_pkt_dir_t handle)
-{
-	struct ark_pkt_dir_inst *inst = (struct ark_pkt_dir_inst *)handle;
-	return inst->regs->stall_cnt;
-}
diff --git a/drivers/net/ark/ark_pktdir.h b/drivers/net/ark/ark_pktdir.h
index 4afd128f95..e7f2026a00 100644
--- a/drivers/net/ark/ark_pktdir.h
+++ b/drivers/net/ark/ark_pktdir.h
@@ -33,9 +33,6 @@ struct ark_pkt_dir_inst {
 };
 
 ark_pkt_dir_t ark_pktdir_init(void *base);
-void ark_pktdir_uninit(ark_pkt_dir_t handle);
 void ark_pktdir_setup(ark_pkt_dir_t handle, uint32_t v);
-uint32_t ark_pktdir_stall_cnt(ark_pkt_dir_t handle);
-uint32_t ark_pktdir_status(ark_pkt_dir_t handle);
 
 #endif
diff --git a/drivers/net/ark/ark_pktgen.c b/drivers/net/ark/ark_pktgen.c
index 4a02662a46..9769c46b47 100644
--- a/drivers/net/ark/ark_pktgen.c
+++ b/drivers/net/ark/ark_pktgen.c
@@ -186,33 +186,6 @@ ark_pktgen_is_gen_forever(ark_pkt_gen_t handle)
 	return (((r >> 24) & 1) == 1);
 }
 
-void
-ark_pktgen_wait_done(ark_pkt_gen_t handle)
-{
-	struct ark_pkt_gen_inst *inst = (struct ark_pkt_gen_inst *)handle;
-	int wait_cycle = 10;
-
-	if (ark_pktgen_is_gen_forever(handle))
-		ARK_PMD_LOG(NOTICE, "Pktgen wait_done will not terminate"
-			    " because gen_forever=1\n");
-
-	while (!ark_pktgen_tx_done(handle) && (wait_cycle > 0)) {
-		usleep(1000);
-		wait_cycle--;
-		ARK_PMD_LOG(DEBUG,
-			      "Waiting for pktgen %d to finish sending...\n",
-			      inst->ordinal);
-	}
-	ARK_PMD_LOG(DEBUG, "Pktgen %d done.\n", inst->ordinal);
-}
-
-uint32_t
-ark_pktgen_get_pkts_sent(ark_pkt_gen_t handle)
-{
-	struct ark_pkt_gen_inst *inst = (struct ark_pkt_gen_inst *)handle;
-	return inst->regs->pkts_sent;
-}
-
 void
 ark_pktgen_set_payload_byte(ark_pkt_gen_t handle, uint32_t b)
 {
diff --git a/drivers/net/ark/ark_pktgen.h b/drivers/net/ark/ark_pktgen.h
index c61dfee6db..cc78577d3d 100644
--- a/drivers/net/ark/ark_pktgen.h
+++ b/drivers/net/ark/ark_pktgen.h
@@ -60,8 +60,6 @@ uint32_t ark_pktgen_is_gen_forever(ark_pkt_gen_t handle);
 uint32_t ark_pktgen_is_running(ark_pkt_gen_t handle);
 uint32_t ark_pktgen_tx_done(ark_pkt_gen_t handle);
 void ark_pktgen_reset(ark_pkt_gen_t handle);
-void ark_pktgen_wait_done(ark_pkt_gen_t handle);
-uint32_t ark_pktgen_get_pkts_sent(ark_pkt_gen_t handle);
 void ark_pktgen_set_payload_byte(ark_pkt_gen_t handle, uint32_t b);
 void ark_pktgen_set_pkt_spacing(ark_pkt_gen_t handle, uint32_t x);
 void ark_pktgen_set_pkt_size_min(ark_pkt_gen_t handle, uint32_t x);
diff --git a/drivers/net/ark/ark_udm.c b/drivers/net/ark/ark_udm.c
index a740d36d43..2132f4e972 100644
--- a/drivers/net/ark/ark_udm.c
+++ b/drivers/net/ark/ark_udm.c
@@ -135,21 +135,6 @@ ark_udm_dump_stats(struct ark_udm_t *udm, const char *msg)
 		      "MBuf Count", udm->stats.rx_mbuf_count);
 }
 
-void
-ark_udm_dump_queue_stats(struct ark_udm_t *udm, const char *msg, uint16_t qid)
-{
-	ARK_PMD_LOG(INFO, "UDM Queue %3u Stats: %s"
-		      ARK_SU64 ARK_SU64
-		      ARK_SU64 ARK_SU64
-		      ARK_SU64 "\n",
-		      qid, msg,
-		      "Pkts Received", udm->qstats.q_packet_count,
-		      "Pkts Finalized", udm->qstats.q_ff_packet_count,
-		      "Pkts Dropped", udm->qstats.q_pkt_drop,
-		      "Bytes Count", udm->qstats.q_byte_count,
-		      "MBuf Count", udm->qstats.q_mbuf_count);
-}
-
 void
 ark_udm_dump(struct ark_udm_t *udm, const char *msg)
 {
diff --git a/drivers/net/ark/ark_udm.h b/drivers/net/ark/ark_udm.h
index 5846c825b8..7f0d3c2a5e 100644
--- a/drivers/net/ark/ark_udm.h
+++ b/drivers/net/ark/ark_udm.h
@@ -145,8 +145,6 @@ void ark_udm_configure(struct ark_udm_t *udm,
 void ark_udm_write_addr(struct ark_udm_t *udm, rte_iova_t addr);
 void ark_udm_stats_reset(struct ark_udm_t *udm);
 void ark_udm_dump_stats(struct ark_udm_t *udm, const char *msg);
-void ark_udm_dump_queue_stats(struct ark_udm_t *udm, const char *msg,
-			      uint16_t qid);
 void ark_udm_dump(struct ark_udm_t *udm, const char *msg);
 void ark_udm_dump_perf(struct ark_udm_t *udm, const char *msg);
 void ark_udm_dump_setup(struct ark_udm_t *udm, uint16_t q_id);
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/atlantic/hw_atl/hw_atl_b0.c
index 7d0e724019..415099e04a 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_b0.c
+++ b/drivers/net/atlantic/hw_atl/hw_atl_b0.c
@@ -480,20 +480,6 @@ int hw_atl_b0_hw_ring_tx_init(struct aq_hw_s *self, uint64_t base_addr,
 	return aq_hw_err_from_flags(self);
 }
 
-int hw_atl_b0_hw_irq_enable(struct aq_hw_s *self, u64 mask)
-{
-	hw_atl_itr_irq_msk_setlsw_set(self, LODWORD(mask));
-	return aq_hw_err_from_flags(self);
-}
-
-int hw_atl_b0_hw_irq_disable(struct aq_hw_s *self, u64 mask)
-{
-	hw_atl_itr_irq_msk_clearlsw_set(self, LODWORD(mask));
-	hw_atl_itr_irq_status_clearlsw_set(self, LODWORD(mask));
-
-	return aq_hw_err_from_flags(self);
-}
-
 int hw_atl_b0_hw_irq_read(struct aq_hw_s *self, u64 *mask)
 {
 	*mask = hw_atl_itr_irq_statuslsw_get(self);
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_b0.h b/drivers/net/atlantic/hw_atl/hw_atl_b0.h
index d1ba2aceb3..4a155d2bc7 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_b0.h
+++ b/drivers/net/atlantic/hw_atl/hw_atl_b0.h
@@ -35,8 +35,6 @@ int hw_atl_b0_hw_rss_hash_set(struct aq_hw_s *self,
 int hw_atl_b0_hw_rss_set(struct aq_hw_s *self,
 				struct aq_rss_parameters *rss_params);
 
-int hw_atl_b0_hw_irq_enable(struct aq_hw_s *self, u64 mask);
-int hw_atl_b0_hw_irq_disable(struct aq_hw_s *self, u64 mask);
 int hw_atl_b0_hw_irq_read(struct aq_hw_s *self, u64 *mask);
 
 #endif /* HW_ATL_B0_H */
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_llh.c b/drivers/net/atlantic/hw_atl/hw_atl_llh.c
index 2dc5be2ff1..b29419bce3 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_llh.c
+++ b/drivers/net/atlantic/hw_atl/hw_atl_llh.c
@@ -22,28 +22,6 @@ u32 hw_atl_reg_glb_cpu_sem_get(struct aq_hw_s *aq_hw, u32 semaphore)
 	return aq_hw_read_reg(aq_hw, HW_ATL_GLB_CPU_SEM_ADR(semaphore));
 }
 
-void hw_atl_glb_glb_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 glb_reg_res_dis)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_GLB_REG_RES_DIS_ADR,
-			    HW_ATL_GLB_REG_RES_DIS_MSK,
-			    HW_ATL_GLB_REG_RES_DIS_SHIFT,
-			    glb_reg_res_dis);
-}
-
-void hw_atl_glb_soft_res_set(struct aq_hw_s *aq_hw, u32 soft_res)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_GLB_SOFT_RES_ADR,
-			    HW_ATL_GLB_SOFT_RES_MSK,
-			    HW_ATL_GLB_SOFT_RES_SHIFT, soft_res);
-}
-
-u32 hw_atl_glb_soft_res_get(struct aq_hw_s *aq_hw)
-{
-	return aq_hw_read_reg_bit(aq_hw, HW_ATL_GLB_SOFT_RES_ADR,
-				  HW_ATL_GLB_SOFT_RES_MSK,
-				  HW_ATL_GLB_SOFT_RES_SHIFT);
-}
-
 u32 hw_atl_reg_glb_mif_id_get(struct aq_hw_s *aq_hw)
 {
 	return aq_hw_read_reg(aq_hw, HW_ATL_GLB_MIF_ID_ADR);
@@ -275,13 +253,6 @@ void hw_atl_itr_irq_msk_setlsw_set(struct aq_hw_s *aq_hw, u32 irq_msk_setlsw)
 	aq_hw_write_reg(aq_hw, HW_ATL_ITR_IMSRLSW_ADR, irq_msk_setlsw);
 }
 
-void hw_atl_itr_irq_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 irq_reg_res_dis)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_ITR_REG_RES_DSBL_ADR,
-			    HW_ATL_ITR_REG_RES_DSBL_MSK,
-			    HW_ATL_ITR_REG_RES_DSBL_SHIFT, irq_reg_res_dis);
-}
-
 void hw_atl_itr_irq_status_clearlsw_set(struct aq_hw_s *aq_hw,
 					u32 irq_status_clearlsw)
 {
@@ -293,18 +264,6 @@ u32 hw_atl_itr_irq_statuslsw_get(struct aq_hw_s *aq_hw)
 	return aq_hw_read_reg(aq_hw, HW_ATL_ITR_ISRLSW_ADR);
 }
 
-u32 hw_atl_itr_res_irq_get(struct aq_hw_s *aq_hw)
-{
-	return aq_hw_read_reg_bit(aq_hw, HW_ATL_ITR_RES_ADR, HW_ATL_ITR_RES_MSK,
-				  HW_ATL_ITR_RES_SHIFT);
-}
-
-void hw_atl_itr_res_irq_set(struct aq_hw_s *aq_hw, u32 res_irq)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_ITR_RES_ADR, HW_ATL_ITR_RES_MSK,
-			    HW_ATL_ITR_RES_SHIFT, res_irq);
-}
-
 /* rdm */
 void hw_atl_rdm_cpu_id_set(struct aq_hw_s *aq_hw, u32 cpuid, u32 dca)
 {
@@ -374,13 +333,6 @@ void hw_atl_rdm_rx_desc_head_splitting_set(struct aq_hw_s *aq_hw,
 			    rx_desc_head_splitting);
 }
 
-u32 hw_atl_rdm_rx_desc_head_ptr_get(struct aq_hw_s *aq_hw, u32 descriptor)
-{
-	return aq_hw_read_reg_bit(aq_hw, HW_ATL_RDM_DESCDHD_ADR(descriptor),
-				  HW_ATL_RDM_DESCDHD_MSK,
-				  HW_ATL_RDM_DESCDHD_SHIFT);
-}
-
 void hw_atl_rdm_rx_desc_len_set(struct aq_hw_s *aq_hw, u32 rx_desc_len,
 				u32 descriptor)
 {
@@ -389,15 +341,6 @@ void hw_atl_rdm_rx_desc_len_set(struct aq_hw_s *aq_hw, u32 rx_desc_len,
 			    rx_desc_len);
 }
 
-void hw_atl_rdm_rx_desc_res_set(struct aq_hw_s *aq_hw, u32 rx_desc_res,
-				u32 descriptor)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RDM_DESCDRESET_ADR(descriptor),
-			    HW_ATL_RDM_DESCDRESET_MSK,
-			    HW_ATL_RDM_DESCDRESET_SHIFT,
-			    rx_desc_res);
-}
-
 void hw_atl_rdm_rx_desc_wr_wb_irq_en_set(struct aq_hw_s *aq_hw,
 					 u32 rx_desc_wr_wb_irq_en)
 {
@@ -425,15 +368,6 @@ void hw_atl_rdm_rx_pld_dca_en_set(struct aq_hw_s *aq_hw, u32 rx_pld_dca_en,
 			    rx_pld_dca_en);
 }
 
-void hw_atl_rdm_rdm_intr_moder_en_set(struct aq_hw_s *aq_hw,
-				      u32 rdm_intr_moder_en)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RDM_INT_RIM_EN_ADR,
-			    HW_ATL_RDM_INT_RIM_EN_MSK,
-			    HW_ATL_RDM_INT_RIM_EN_SHIFT,
-			    rdm_intr_moder_en);
-}
-
 /* reg */
 void hw_atl_reg_gen_irq_map_set(struct aq_hw_s *aq_hw, u32 gen_intr_map,
 				u32 regidx)
@@ -441,21 +375,11 @@ void hw_atl_reg_gen_irq_map_set(struct aq_hw_s *aq_hw, u32 gen_intr_map,
 	aq_hw_write_reg(aq_hw, HW_ATL_GEN_INTR_MAP_ADR(regidx), gen_intr_map);
 }
 
-u32 hw_atl_reg_gen_irq_status_get(struct aq_hw_s *aq_hw)
-{
-	return aq_hw_read_reg(aq_hw, HW_ATL_GEN_INTR_STAT_ADR);
-}
-
 void hw_atl_reg_irq_glb_ctl_set(struct aq_hw_s *aq_hw, u32 intr_glb_ctl)
 {
 	aq_hw_write_reg(aq_hw, HW_ATL_INTR_GLB_CTL_ADR, intr_glb_ctl);
 }
 
-void hw_atl_reg_irq_thr_set(struct aq_hw_s *aq_hw, u32 intr_thr, u32 throttle)
-{
-	aq_hw_write_reg(aq_hw, HW_ATL_INTR_THR_ADR(throttle), intr_thr);
-}
-
 void hw_atl_reg_rx_dma_desc_base_addresslswset(struct aq_hw_s *aq_hw,
 					       u32 rx_dma_desc_base_addrlsw,
 					       u32 descriptor)
@@ -472,11 +396,6 @@ void hw_atl_reg_rx_dma_desc_base_addressmswset(struct aq_hw_s *aq_hw,
 			rx_dma_desc_base_addrmsw);
 }
 
-u32 hw_atl_reg_rx_dma_desc_status_get(struct aq_hw_s *aq_hw, u32 descriptor)
-{
-	return aq_hw_read_reg(aq_hw, HW_ATL_RX_DMA_DESC_STAT_ADR(descriptor));
-}
-
 void hw_atl_reg_rx_dma_desc_tail_ptr_set(struct aq_hw_s *aq_hw,
 					 u32 rx_dma_desc_tail_ptr,
 					 u32 descriptor)
@@ -506,26 +425,6 @@ void hw_atl_reg_rx_flr_rss_control1set(struct aq_hw_s *aq_hw,
 			rx_flr_rss_control1);
 }
 
-void hw_atl_reg_rx_flr_control2_set(struct aq_hw_s *aq_hw,
-				    u32 rx_filter_control2)
-{
-	aq_hw_write_reg(aq_hw, HW_ATL_RX_FLR_CONTROL2_ADR, rx_filter_control2);
-}
-
-void hw_atl_reg_rx_intr_moder_ctrl_set(struct aq_hw_s *aq_hw,
-				       u32 rx_intr_moderation_ctl,
-				       u32 queue)
-{
-	aq_hw_write_reg(aq_hw, HW_ATL_RX_INTR_MODERATION_CTL_ADR(queue),
-			rx_intr_moderation_ctl);
-}
-
-void hw_atl_reg_tx_dma_debug_ctl_set(struct aq_hw_s *aq_hw,
-				     u32 tx_dma_debug_ctl)
-{
-	aq_hw_write_reg(aq_hw, HW_ATL_TX_DMA_DEBUG_CTL_ADR, tx_dma_debug_ctl);
-}
-
 void hw_atl_reg_tx_dma_desc_base_addresslswset(struct aq_hw_s *aq_hw,
 					       u32 tx_dma_desc_base_addrlsw,
 					       u32 descriptor)
@@ -552,22 +451,7 @@ void hw_atl_reg_tx_dma_desc_tail_ptr_set(struct aq_hw_s *aq_hw,
 			tx_dma_desc_tail_ptr);
 }
 
-void hw_atl_reg_tx_intr_moder_ctrl_set(struct aq_hw_s *aq_hw,
-				       u32 tx_intr_moderation_ctl,
-				       u32 queue)
-{
-	aq_hw_write_reg(aq_hw, HW_ATL_TX_INTR_MODERATION_CTL_ADR(queue),
-			tx_intr_moderation_ctl);
-}
-
 /* RPB: rx packet buffer */
-void hw_atl_rpb_dma_sys_lbk_set(struct aq_hw_s *aq_hw, u32 dma_sys_lbk)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPB_DMA_SYS_LBK_ADR,
-			    HW_ATL_RPB_DMA_SYS_LBK_MSK,
-			    HW_ATL_RPB_DMA_SYS_LBK_SHIFT, dma_sys_lbk);
-}
-
 void hw_atl_rpb_rpf_rx_traf_class_mode_set(struct aq_hw_s *aq_hw,
 					   u32 rx_traf_class_mode)
 {
@@ -577,13 +461,6 @@ void hw_atl_rpb_rpf_rx_traf_class_mode_set(struct aq_hw_s *aq_hw,
 			    rx_traf_class_mode);
 }
 
-u32 hw_atl_rpb_rpf_rx_traf_class_mode_get(struct aq_hw_s *aq_hw)
-{
-	return aq_hw_read_reg_bit(aq_hw, HW_ATL_RPB_RPF_RX_TC_MODE_ADR,
-			HW_ATL_RPB_RPF_RX_TC_MODE_MSK,
-			HW_ATL_RPB_RPF_RX_TC_MODE_SHIFT);
-}
-
 void hw_atl_rpb_rx_buff_en_set(struct aq_hw_s *aq_hw, u32 rx_buff_en)
 {
 	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPB_RX_BUF_EN_ADR,
@@ -664,15 +541,6 @@ void hw_atl_rpfl2broadcast_flr_act_set(struct aq_hw_s *aq_hw,
 			    HW_ATL_RPFL2BC_ACT_SHIFT, l2broadcast_flr_act);
 }
 
-void hw_atl_rpfl2multicast_flr_en_set(struct aq_hw_s *aq_hw,
-				      u32 l2multicast_flr_en,
-				      u32 filter)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPFL2MC_ENF_ADR(filter),
-			    HW_ATL_RPFL2MC_ENF_MSK,
-			    HW_ATL_RPFL2MC_ENF_SHIFT, l2multicast_flr_en);
-}
-
 void hw_atl_rpfl2promiscuous_mode_en_set(struct aq_hw_s *aq_hw,
 					 u32 l2promiscuous_mode_en)
 {
@@ -813,15 +681,6 @@ void hw_atl_rpf_rss_redir_wr_en_set(struct aq_hw_s *aq_hw, u32 rss_redir_wr_en)
 			    HW_ATL_RPF_RSS_REDIR_WR_ENI_SHIFT, rss_redir_wr_en);
 }
 
-void hw_atl_rpf_tpo_to_rpf_sys_lbk_set(struct aq_hw_s *aq_hw,
-				       u32 tpo_to_rpf_sys_lbk)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_TPO_RPF_SYS_LBK_ADR,
-			    HW_ATL_RPF_TPO_RPF_SYS_LBK_MSK,
-			    HW_ATL_RPF_TPO_RPF_SYS_LBK_SHIFT,
-			    tpo_to_rpf_sys_lbk);
-}
-
 void hw_atl_rpf_vlan_inner_etht_set(struct aq_hw_s *aq_hw, u32 vlan_inner_etht)
 {
 	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_VL_INNER_TPID_ADR,
@@ -847,24 +706,6 @@ void hw_atl_rpf_vlan_prom_mode_en_set(struct aq_hw_s *aq_hw,
 			    vlan_prom_mode_en);
 }
 
-void hw_atl_rpf_vlan_accept_untagged_packets_set(struct aq_hw_s *aq_hw,
-						 u32 vlan_acc_untagged_packets)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_VL_ACCEPT_UNTAGGED_MODE_ADR,
-			    HW_ATL_RPF_VL_ACCEPT_UNTAGGED_MODE_MSK,
-			    HW_ATL_RPF_VL_ACCEPT_UNTAGGED_MODE_SHIFT,
-			    vlan_acc_untagged_packets);
-}
-
-void hw_atl_rpf_vlan_untagged_act_set(struct aq_hw_s *aq_hw,
-				      u32 vlan_untagged_act)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_VL_UNTAGGED_ACT_ADR,
-			    HW_ATL_RPF_VL_UNTAGGED_ACT_MSK,
-			    HW_ATL_RPF_VL_UNTAGGED_ACT_SHIFT,
-			    vlan_untagged_act);
-}
-
 void hw_atl_rpf_vlan_flr_en_set(struct aq_hw_s *aq_hw, u32 vlan_flr_en,
 				u32 filter)
 {
@@ -892,73 +733,6 @@ void hw_atl_rpf_vlan_id_flr_set(struct aq_hw_s *aq_hw, u32 vlan_id_flr,
 			    vlan_id_flr);
 }
 
-void hw_atl_rpf_etht_flr_en_set(struct aq_hw_s *aq_hw, u32 etht_flr_en,
-				u32 filter)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_ENF_ADR(filter),
-			    HW_ATL_RPF_ET_ENF_MSK,
-			    HW_ATL_RPF_ET_ENF_SHIFT, etht_flr_en);
-}
-
-void hw_atl_rpf_etht_user_priority_en_set(struct aq_hw_s *aq_hw,
-					  u32 etht_user_priority_en, u32 filter)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_UPFEN_ADR(filter),
-			    HW_ATL_RPF_ET_UPFEN_MSK, HW_ATL_RPF_ET_UPFEN_SHIFT,
-			    etht_user_priority_en);
-}
-
-void hw_atl_rpf_etht_rx_queue_en_set(struct aq_hw_s *aq_hw,
-				     u32 etht_rx_queue_en,
-				     u32 filter)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_RXQFEN_ADR(filter),
-			    HW_ATL_RPF_ET_RXQFEN_MSK,
-			    HW_ATL_RPF_ET_RXQFEN_SHIFT,
-			    etht_rx_queue_en);
-}
-
-void hw_atl_rpf_etht_user_priority_set(struct aq_hw_s *aq_hw,
-				       u32 etht_user_priority,
-				       u32 filter)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_UPF_ADR(filter),
-			    HW_ATL_RPF_ET_UPF_MSK,
-			    HW_ATL_RPF_ET_UPF_SHIFT, etht_user_priority);
-}
-
-void hw_atl_rpf_etht_rx_queue_set(struct aq_hw_s *aq_hw, u32 etht_rx_queue,
-				  u32 filter)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_RXQF_ADR(filter),
-			    HW_ATL_RPF_ET_RXQF_MSK,
-			    HW_ATL_RPF_ET_RXQF_SHIFT, etht_rx_queue);
-}
-
-void hw_atl_rpf_etht_mgt_queue_set(struct aq_hw_s *aq_hw, u32 etht_mgt_queue,
-				   u32 filter)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_MNG_RXQF_ADR(filter),
-			    HW_ATL_RPF_ET_MNG_RXQF_MSK,
-			    HW_ATL_RPF_ET_MNG_RXQF_SHIFT,
-			    etht_mgt_queue);
-}
-
-void hw_atl_rpf_etht_flr_act_set(struct aq_hw_s *aq_hw, u32 etht_flr_act,
-				 u32 filter)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_ACTF_ADR(filter),
-			    HW_ATL_RPF_ET_ACTF_MSK,
-			    HW_ATL_RPF_ET_ACTF_SHIFT, etht_flr_act);
-}
-
-void hw_atl_rpf_etht_flr_set(struct aq_hw_s *aq_hw, u32 etht_flr, u32 filter)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_VALF_ADR(filter),
-			    HW_ATL_RPF_ET_VALF_MSK,
-			    HW_ATL_RPF_ET_VALF_SHIFT, etht_flr);
-}
-
 /* RPO: rx packet offload */
 void hw_atl_rpo_ipv4header_crc_offload_en_set(struct aq_hw_s *aq_hw,
 					      u32 ipv4header_crc_offload_en)
@@ -1156,13 +930,6 @@ void hw_atl_tdm_tx_desc_en_set(struct aq_hw_s *aq_hw, u32 tx_desc_en,
 			    tx_desc_en);
 }
 
-u32 hw_atl_tdm_tx_desc_head_ptr_get(struct aq_hw_s *aq_hw, u32 descriptor)
-{
-	return aq_hw_read_reg_bit(aq_hw, HW_ATL_TDM_DESCDHD_ADR(descriptor),
-				  HW_ATL_TDM_DESCDHD_MSK,
-				  HW_ATL_TDM_DESCDHD_SHIFT);
-}
-
 void hw_atl_tdm_tx_desc_len_set(struct aq_hw_s *aq_hw, u32 tx_desc_len,
 				u32 descriptor)
 {
@@ -1191,15 +958,6 @@ void hw_atl_tdm_tx_desc_wr_wb_threshold_set(struct aq_hw_s *aq_hw,
 			    tx_desc_wr_wb_threshold);
 }
 
-void hw_atl_tdm_tdm_intr_moder_en_set(struct aq_hw_s *aq_hw,
-				      u32 tdm_irq_moderation_en)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_TDM_INT_MOD_EN_ADR,
-			    HW_ATL_TDM_INT_MOD_EN_MSK,
-			    HW_ATL_TDM_INT_MOD_EN_SHIFT,
-			    tdm_irq_moderation_en);
-}
-
 /* thm */
 void hw_atl_thm_lso_tcp_flag_of_first_pkt_set(struct aq_hw_s *aq_hw,
 					      u32 lso_tcp_flag_of_first_pkt)
@@ -1236,13 +994,6 @@ void hw_atl_tpb_tx_buff_en_set(struct aq_hw_s *aq_hw, u32 tx_buff_en)
 			    HW_ATL_TPB_TX_BUF_EN_SHIFT, tx_buff_en);
 }
 
-u32 hw_atl_rpb_tps_tx_tc_mode_get(struct aq_hw_s *aq_hw)
-{
-	return aq_hw_read_reg_bit(aq_hw, HW_ATL_TPB_TX_TC_MODE_ADDR,
-			HW_ATL_TPB_TX_TC_MODE_MSK,
-			HW_ATL_TPB_TX_TC_MODE_SHIFT);
-}
-
 void hw_atl_rpb_tps_tx_tc_mode_set(struct aq_hw_s *aq_hw,
 				   u32 tx_traf_class_mode)
 {
@@ -1272,15 +1023,6 @@ void hw_atl_tpb_tx_buff_lo_threshold_per_tc_set(struct aq_hw_s *aq_hw,
 			    tx_buff_lo_threshold_per_tc);
 }
 
-void hw_atl_tpb_tx_dma_sys_lbk_en_set(struct aq_hw_s *aq_hw,
-				      u32 tx_dma_sys_lbk_en)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_TPB_DMA_SYS_LBK_ADR,
-			    HW_ATL_TPB_DMA_SYS_LBK_MSK,
-			    HW_ATL_TPB_DMA_SYS_LBK_SHIFT,
-			    tx_dma_sys_lbk_en);
-}
-
 void hw_atl_tpb_tx_pkt_buff_size_per_tc_set(struct aq_hw_s *aq_hw,
 					    u32 tx_pkt_buff_size_per_tc,
 					    u32 buffer)
@@ -1319,15 +1061,6 @@ void hw_atl_tpo_tcp_udp_crc_offload_en_set(struct aq_hw_s *aq_hw,
 			    tcp_udp_crc_offload_en);
 }
 
-void hw_atl_tpo_tx_pkt_sys_lbk_en_set(struct aq_hw_s *aq_hw,
-				      u32 tx_pkt_sys_lbk_en)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_TPO_PKT_SYS_LBK_ADR,
-			    HW_ATL_TPO_PKT_SYS_LBK_MSK,
-			    HW_ATL_TPO_PKT_SYS_LBK_SHIFT,
-			    tx_pkt_sys_lbk_en);
-}
-
 /* TPS: tx packet scheduler */
 void hw_atl_tps_tx_pkt_shed_data_arb_mode_set(struct aq_hw_s *aq_hw,
 					      u32 tx_pkt_shed_data_arb_mode)
@@ -1422,58 +1155,7 @@ void hw_atl_tx_tx_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 tx_reg_res_dis)
 			    HW_ATL_TX_REG_RES_DSBL_SHIFT, tx_reg_res_dis);
 }
 
-/* msm */
-u32 hw_atl_msm_reg_access_status_get(struct aq_hw_s *aq_hw)
-{
-	return aq_hw_read_reg_bit(aq_hw, HW_ATL_MSM_REG_ACCESS_BUSY_ADR,
-				  HW_ATL_MSM_REG_ACCESS_BUSY_MSK,
-				  HW_ATL_MSM_REG_ACCESS_BUSY_SHIFT);
-}
-
-void hw_atl_msm_reg_addr_for_indirect_addr_set(struct aq_hw_s *aq_hw,
-					       u32 reg_addr_for_indirect_addr)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_MSM_REG_ADDR_ADR,
-			    HW_ATL_MSM_REG_ADDR_MSK,
-			    HW_ATL_MSM_REG_ADDR_SHIFT,
-			    reg_addr_for_indirect_addr);
-}
-
-void hw_atl_msm_reg_rd_strobe_set(struct aq_hw_s *aq_hw, u32 reg_rd_strobe)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_MSM_REG_RD_STROBE_ADR,
-			    HW_ATL_MSM_REG_RD_STROBE_MSK,
-			    HW_ATL_MSM_REG_RD_STROBE_SHIFT,
-			    reg_rd_strobe);
-}
-
-u32 hw_atl_msm_reg_rd_data_get(struct aq_hw_s *aq_hw)
-{
-	return aq_hw_read_reg(aq_hw, HW_ATL_MSM_REG_RD_DATA_ADR);
-}
-
-void hw_atl_msm_reg_wr_data_set(struct aq_hw_s *aq_hw, u32 reg_wr_data)
-{
-	aq_hw_write_reg(aq_hw, HW_ATL_MSM_REG_WR_DATA_ADR, reg_wr_data);
-}
-
-void hw_atl_msm_reg_wr_strobe_set(struct aq_hw_s *aq_hw, u32 reg_wr_strobe)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_MSM_REG_WR_STROBE_ADR,
-			    HW_ATL_MSM_REG_WR_STROBE_MSK,
-			    HW_ATL_MSM_REG_WR_STROBE_SHIFT,
-			    reg_wr_strobe);
-}
-
 /* pci */
-void hw_atl_pci_pci_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 pci_reg_res_dis)
-{
-	aq_hw_write_reg_bit(aq_hw, HW_ATL_PCI_REG_RES_DSBL_ADR,
-			    HW_ATL_PCI_REG_RES_DSBL_MSK,
-			    HW_ATL_PCI_REG_RES_DSBL_SHIFT,
-			    pci_reg_res_dis);
-}
-
 void hw_atl_reg_glb_cpu_scratch_scp_set(struct aq_hw_s *aq_hw,
 					u32 glb_cpu_scratch_scp,
 					u32 scratch_scp)
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_llh.h b/drivers/net/atlantic/hw_atl/hw_atl_llh.h
index e30083cea5..493fd88934 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_llh.h
+++ b/drivers/net/atlantic/hw_atl/hw_atl_llh.h
@@ -21,15 +21,6 @@ void hw_atl_reg_glb_cpu_sem_set(struct aq_hw_s *aq_hw,	u32 glb_cpu_sem,
 /* get global microprocessor semaphore */
 u32 hw_atl_reg_glb_cpu_sem_get(struct aq_hw_s *aq_hw, u32 semaphore);
 
-/* set global register reset disable */
-void hw_atl_glb_glb_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 glb_reg_res_dis);
-
-/* set soft reset */
-void hw_atl_glb_soft_res_set(struct aq_hw_s *aq_hw, u32 soft_res);
-
-/* get soft reset */
-u32 hw_atl_glb_soft_res_get(struct aq_hw_s *aq_hw);
-
 /* stats */
 
 u32 hw_atl_rpb_rx_dma_drop_pkt_cnt_get(struct aq_hw_s *aq_hw);
@@ -130,9 +121,6 @@ void hw_atl_itr_irq_msk_clearlsw_set(struct aq_hw_s *aq_hw,
 /* set interrupt mask set lsw */
 void hw_atl_itr_irq_msk_setlsw_set(struct aq_hw_s *aq_hw, u32 irq_msk_setlsw);
 
-/* set interrupt register reset disable */
-void hw_atl_itr_irq_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 irq_reg_res_dis);
-
 /* set interrupt status clear lsw */
 void hw_atl_itr_irq_status_clearlsw_set(struct aq_hw_s *aq_hw,
 					u32 irq_status_clearlsw);
@@ -140,12 +128,6 @@ void hw_atl_itr_irq_status_clearlsw_set(struct aq_hw_s *aq_hw,
 /* get interrupt status lsw */
 u32 hw_atl_itr_irq_statuslsw_get(struct aq_hw_s *aq_hw);
 
-/* get reset interrupt */
-u32 hw_atl_itr_res_irq_get(struct aq_hw_s *aq_hw);
-
-/* set reset interrupt */
-void hw_atl_itr_res_irq_set(struct aq_hw_s *aq_hw, u32 res_irq);
-
 /* rdm */
 
 /* set cpu id */
@@ -175,9 +157,6 @@ void hw_atl_rdm_rx_desc_head_splitting_set(struct aq_hw_s *aq_hw,
 					   u32 rx_desc_head_splitting,
 				    u32 descriptor);
 
-/* get rx descriptor head pointer */
-u32 hw_atl_rdm_rx_desc_head_ptr_get(struct aq_hw_s *aq_hw, u32 descriptor);
-
 /* set rx descriptor length */
 void hw_atl_rdm_rx_desc_len_set(struct aq_hw_s *aq_hw, u32 rx_desc_len,
 				u32 descriptor);
@@ -199,29 +178,15 @@ void hw_atl_rdm_rx_desc_head_buff_size_set(struct aq_hw_s *aq_hw,
 					   u32 rx_desc_head_buff_size,
 					   u32 descriptor);
 
-/* set rx descriptor reset */
-void hw_atl_rdm_rx_desc_res_set(struct aq_hw_s *aq_hw, u32 rx_desc_res,
-				u32 descriptor);
-
-/* Set RDM Interrupt Moderation Enable */
-void hw_atl_rdm_rdm_intr_moder_en_set(struct aq_hw_s *aq_hw,
-				      u32 rdm_intr_moder_en);
-
 /* reg */
 
 /* set general interrupt mapping register */
 void hw_atl_reg_gen_irq_map_set(struct aq_hw_s *aq_hw, u32 gen_intr_map,
 				u32 regidx);
 
-/* get general interrupt status register */
-u32 hw_atl_reg_gen_irq_status_get(struct aq_hw_s *aq_hw);
-
 /* set interrupt global control register */
 void hw_atl_reg_irq_glb_ctl_set(struct aq_hw_s *aq_hw, u32 intr_glb_ctl);
 
-/* set interrupt throttle register */
-void hw_atl_reg_irq_thr_set(struct aq_hw_s *aq_hw, u32 intr_thr, u32 throttle);
-
 /* set rx dma descriptor base address lsw */
 void hw_atl_reg_rx_dma_desc_base_addresslswset(struct aq_hw_s *aq_hw,
 					       u32 rx_dma_desc_base_addrlsw,
@@ -232,9 +197,6 @@ void hw_atl_reg_rx_dma_desc_base_addressmswset(struct aq_hw_s *aq_hw,
 					       u32 rx_dma_desc_base_addrmsw,
 					u32 descriptor);
 
-/* get rx dma descriptor status register */
-u32 hw_atl_reg_rx_dma_desc_status_get(struct aq_hw_s *aq_hw, u32 descriptor);
-
 /* set rx dma descriptor tail pointer register */
 void hw_atl_reg_rx_dma_desc_tail_ptr_set(struct aq_hw_s *aq_hw,
 					 u32 rx_dma_desc_tail_ptr,
@@ -252,18 +214,6 @@ void hw_atl_reg_rx_flr_mcst_flr_set(struct aq_hw_s *aq_hw, u32 rx_flr_mcst_flr,
 void hw_atl_reg_rx_flr_rss_control1set(struct aq_hw_s *aq_hw,
 				       u32 rx_flr_rss_control1);
 
-/* Set RX Filter Control Register 2 */
-void hw_atl_reg_rx_flr_control2_set(struct aq_hw_s *aq_hw, u32 rx_flr_control2);
-
-/* Set RX Interrupt Moderation Control Register */
-void hw_atl_reg_rx_intr_moder_ctrl_set(struct aq_hw_s *aq_hw,
-				       u32 rx_intr_moderation_ctl,
-				u32 queue);
-
-/* set tx dma debug control */
-void hw_atl_reg_tx_dma_debug_ctl_set(struct aq_hw_s *aq_hw,
-				     u32 tx_dma_debug_ctl);
-
 /* set tx dma descriptor base address lsw */
 void hw_atl_reg_tx_dma_desc_base_addresslswset(struct aq_hw_s *aq_hw,
 					       u32 tx_dma_desc_base_addrlsw,
@@ -279,11 +229,6 @@ void hw_atl_reg_tx_dma_desc_tail_ptr_set(struct aq_hw_s *aq_hw,
 					 u32 tx_dma_desc_tail_ptr,
 					 u32 descriptor);
 
-/* Set TX Interrupt Moderation Control Register */
-void hw_atl_reg_tx_intr_moder_ctrl_set(struct aq_hw_s *aq_hw,
-				       u32 tx_intr_moderation_ctl,
-				       u32 queue);
-
 /* set global microprocessor scratch pad */
 void hw_atl_reg_glb_cpu_scratch_scp_set(struct aq_hw_s *aq_hw,
 					u32 glb_cpu_scratch_scp,
@@ -291,16 +236,10 @@ void hw_atl_reg_glb_cpu_scratch_scp_set(struct aq_hw_s *aq_hw,
 
 /* rpb */
 
-/* set dma system loopback */
-void hw_atl_rpb_dma_sys_lbk_set(struct aq_hw_s *aq_hw, u32 dma_sys_lbk);
-
 /* set rx traffic class mode */
 void hw_atl_rpb_rpf_rx_traf_class_mode_set(struct aq_hw_s *aq_hw,
 					   u32 rx_traf_class_mode);
 
-/* get rx traffic class mode */
-u32 hw_atl_rpb_rpf_rx_traf_class_mode_get(struct aq_hw_s *aq_hw);
-
 /* set rx buffer enable */
 void hw_atl_rpb_rx_buff_en_set(struct aq_hw_s *aq_hw, u32 rx_buff_en);
 
@@ -341,11 +280,6 @@ void hw_atl_rpfl2broadcast_en_set(struct aq_hw_s *aq_hw, u32 l2broadcast_en);
 void hw_atl_rpfl2broadcast_flr_act_set(struct aq_hw_s *aq_hw,
 				       u32 l2broadcast_flr_act);
 
-/* set l2 multicast filter enable */
-void hw_atl_rpfl2multicast_flr_en_set(struct aq_hw_s *aq_hw,
-				      u32 l2multicast_flr_en,
-				      u32 filter);
-
 /* set l2 promiscuous mode enable */
 void hw_atl_rpfl2promiscuous_mode_en_set(struct aq_hw_s *aq_hw,
 					 u32 l2promiscuous_mode_en);
@@ -403,10 +337,6 @@ u32 hw_atl_rpf_rss_redir_wr_en_get(struct aq_hw_s *aq_hw);
 /* set rss redirection write enable */
 void hw_atl_rpf_rss_redir_wr_en_set(struct aq_hw_s *aq_hw, u32 rss_redir_wr_en);
 
-/* set tpo to rpf system loopback */
-void hw_atl_rpf_tpo_to_rpf_sys_lbk_set(struct aq_hw_s *aq_hw,
-				       u32 tpo_to_rpf_sys_lbk);
-
 /* set vlan inner ethertype */
 void hw_atl_rpf_vlan_inner_etht_set(struct aq_hw_s *aq_hw, u32 vlan_inner_etht);
 
@@ -417,14 +347,6 @@ void hw_atl_rpf_vlan_outer_etht_set(struct aq_hw_s *aq_hw, u32 vlan_outer_etht);
 void hw_atl_rpf_vlan_prom_mode_en_set(struct aq_hw_s *aq_hw,
 				      u32 vlan_prom_mode_en);
 
-/* Set VLAN untagged action */
-void hw_atl_rpf_vlan_untagged_act_set(struct aq_hw_s *aq_hw,
-				      u32 vlan_untagged_act);
-
-/* Set VLAN accept untagged packets */
-void hw_atl_rpf_vlan_accept_untagged_packets_set(struct aq_hw_s *aq_hw,
-						 u32 vlan_acc_untagged_packets);
-
 /* Set VLAN filter enable */
 void hw_atl_rpf_vlan_flr_en_set(struct aq_hw_s *aq_hw, u32 vlan_flr_en,
 				u32 filter);
@@ -437,40 +359,6 @@ void hw_atl_rpf_vlan_flr_act_set(struct aq_hw_s *aq_hw, u32 vlan_filter_act,
 void hw_atl_rpf_vlan_id_flr_set(struct aq_hw_s *aq_hw, u32 vlan_id_flr,
 				u32 filter);
 
-/* set ethertype filter enable */
-void hw_atl_rpf_etht_flr_en_set(struct aq_hw_s *aq_hw, u32 etht_flr_en,
-				u32 filter);
-
-/* set  ethertype user-priority enable */
-void hw_atl_rpf_etht_user_priority_en_set(struct aq_hw_s *aq_hw,
-					  u32 etht_user_priority_en,
-					  u32 filter);
-
-/* set  ethertype rx queue enable */
-void hw_atl_rpf_etht_rx_queue_en_set(struct aq_hw_s *aq_hw,
-				     u32 etht_rx_queue_en,
-				     u32 filter);
-
-/* set ethertype rx queue */
-void hw_atl_rpf_etht_rx_queue_set(struct aq_hw_s *aq_hw, u32 etht_rx_queue,
-				  u32 filter);
-
-/* set ethertype user-priority */
-void hw_atl_rpf_etht_user_priority_set(struct aq_hw_s *aq_hw,
-				       u32 etht_user_priority,
-				       u32 filter);
-
-/* set ethertype management queue */
-void hw_atl_rpf_etht_mgt_queue_set(struct aq_hw_s *aq_hw, u32 etht_mgt_queue,
-				   u32 filter);
-
-/* set ethertype filter action */
-void hw_atl_rpf_etht_flr_act_set(struct aq_hw_s *aq_hw, u32 etht_flr_act,
-				 u32 filter);
-
-/* set ethertype filter */
-void hw_atl_rpf_etht_flr_set(struct aq_hw_s *aq_hw, u32 etht_flr, u32 filter);
-
 /* rpo */
 
 /* set ipv4 header checksum offload enable */
@@ -552,9 +440,6 @@ void hw_atl_tdm_tx_dca_mode_set(struct aq_hw_s *aq_hw, u32 tx_dca_mode);
 void hw_atl_tdm_tx_desc_dca_en_set(struct aq_hw_s *aq_hw, u32 tx_desc_dca_en,
 				   u32 dca);
 
-/* get tx descriptor head pointer */
-u32 hw_atl_tdm_tx_desc_head_ptr_get(struct aq_hw_s *aq_hw, u32 descriptor);
-
 /* set tx descriptor length */
 void hw_atl_tdm_tx_desc_len_set(struct aq_hw_s *aq_hw, u32 tx_desc_len,
 				u32 descriptor);
@@ -568,9 +453,6 @@ void hw_atl_tdm_tx_desc_wr_wb_threshold_set(struct aq_hw_s *aq_hw,
 					    u32 tx_desc_wr_wb_threshold,
 				     u32 descriptor);
 
-/* Set TDM Interrupt Moderation Enable */
-void hw_atl_tdm_tdm_intr_moder_en_set(struct aq_hw_s *aq_hw,
-				      u32 tdm_irq_moderation_en);
 /* thm */
 
 /* set lso tcp flag of first packet */
@@ -591,9 +473,6 @@ void hw_atl_thm_lso_tcp_flag_of_middle_pkt_set(struct aq_hw_s *aq_hw,
 void hw_atl_rpb_tps_tx_tc_mode_set(struct aq_hw_s *aq_hw,
 				   u32 tx_traf_class_mode);
 
-/* get TX Traffic Class Mode */
-u32 hw_atl_rpb_tps_tx_tc_mode_get(struct aq_hw_s *aq_hw);
-
 /* set tx buffer enable */
 void hw_atl_tpb_tx_buff_en_set(struct aq_hw_s *aq_hw, u32 tx_buff_en);
 
@@ -607,10 +486,6 @@ void hw_atl_tpb_tx_buff_lo_threshold_per_tc_set(struct aq_hw_s *aq_hw,
 						u32 tx_buff_lo_threshold_per_tc,
 					 u32 buffer);
 
-/* set tx dma system loopback enable */
-void hw_atl_tpb_tx_dma_sys_lbk_en_set(struct aq_hw_s *aq_hw,
-				      u32 tx_dma_sys_lbk_en);
-
 /* set tx packet buffer size (per tc) */
 void hw_atl_tpb_tx_pkt_buff_size_per_tc_set(struct aq_hw_s *aq_hw,
 					    u32 tx_pkt_buff_size_per_tc,
@@ -630,10 +505,6 @@ void hw_atl_tpo_ipv4header_crc_offload_en_set(struct aq_hw_s *aq_hw,
 void hw_atl_tpo_tcp_udp_crc_offload_en_set(struct aq_hw_s *aq_hw,
 					   u32 tcp_udp_crc_offload_en);
 
-/* set tx pkt system loopback enable */
-void hw_atl_tpo_tx_pkt_sys_lbk_en_set(struct aq_hw_s *aq_hw,
-				      u32 tx_pkt_sys_lbk_en);
-
 /* tps */
 
 /* set tx packet scheduler data arbitration mode */
@@ -681,32 +552,8 @@ void hw_atl_tps_tx_pkt_shed_tc_data_weight_set(struct aq_hw_s *aq_hw,
 /* set tx register reset disable */
 void hw_atl_tx_tx_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 tx_reg_res_dis);
 
-/* msm */
-
-/* get register access status */
-u32 hw_atl_msm_reg_access_status_get(struct aq_hw_s *aq_hw);
-
-/* set  register address for indirect address */
-void hw_atl_msm_reg_addr_for_indirect_addr_set(struct aq_hw_s *aq_hw,
-					       u32 reg_addr_for_indirect_addr);
-
-/* set register read strobe */
-void hw_atl_msm_reg_rd_strobe_set(struct aq_hw_s *aq_hw, u32 reg_rd_strobe);
-
-/* get  register read data */
-u32 hw_atl_msm_reg_rd_data_get(struct aq_hw_s *aq_hw);
-
-/* set  register write data */
-void hw_atl_msm_reg_wr_data_set(struct aq_hw_s *aq_hw, u32 reg_wr_data);
-
-/* set register write strobe */
-void hw_atl_msm_reg_wr_strobe_set(struct aq_hw_s *aq_hw, u32 reg_wr_strobe);
-
 /* pci */
 
-/* set pci register reset disable */
-void hw_atl_pci_pci_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 pci_reg_res_dis);
-
 /* set uP Force Interrupt */
 void hw_atl_mcp_up_force_intr_set(struct aq_hw_s *aq_hw, u32 up_force_intr);
 
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_utils.c b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
index 84d11ab3a5..c94f5112f1 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_utils.c
+++ b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
@@ -682,37 +682,6 @@ static int hw_atl_utils_get_mac_permanent(struct aq_hw_s *self,
 	return err;
 }
 
-unsigned int hw_atl_utils_mbps_2_speed_index(unsigned int mbps)
-{
-	unsigned int ret = 0U;
-
-	switch (mbps) {
-	case 100U:
-		ret = 5U;
-		break;
-
-	case 1000U:
-		ret = 4U;
-		break;
-
-	case 2500U:
-		ret = 3U;
-		break;
-
-	case 5000U:
-		ret = 1U;
-		break;
-
-	case 10000U:
-		ret = 0U;
-		break;
-
-	default:
-		break;
-	}
-	return ret;
-}
-
 void hw_atl_utils_hw_chip_features_init(struct aq_hw_s *self, u32 *p)
 {
 	u32 chip_features = 0U;
@@ -795,11 +764,6 @@ int hw_atl_utils_update_stats(struct aq_hw_s *self)
 	return 0;
 }
 
-struct aq_stats_s *hw_atl_utils_get_hw_stats(struct aq_hw_s *self)
-{
-	return &self->curr_stats;
-}
-
 static const u32 hw_atl_utils_hw_mac_regs[] = {
 	0x00005580U, 0x00005590U, 0x000055B0U, 0x000055B4U,
 	0x000055C0U, 0x00005B00U, 0x00005B04U, 0x00005B08U,
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_utils.h b/drivers/net/atlantic/hw_atl/hw_atl_utils.h
index d8fab010cf..f5e2b472a9 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_utils.h
+++ b/drivers/net/atlantic/hw_atl/hw_atl_utils.h
@@ -617,8 +617,6 @@ void hw_atl_utils_mpi_set(struct aq_hw_s *self,
 
 int hw_atl_utils_mpi_get_link_status(struct aq_hw_s *self);
 
-unsigned int hw_atl_utils_mbps_2_speed_index(unsigned int mbps);
-
 unsigned int hw_atl_utils_hw_get_reg_length(void);
 
 int hw_atl_utils_hw_get_regs(struct aq_hw_s *self,
@@ -633,8 +631,6 @@ int hw_atl_utils_get_fw_version(struct aq_hw_s *self, u32 *fw_version);
 
 int hw_atl_utils_update_stats(struct aq_hw_s *self);
 
-struct aq_stats_s *hw_atl_utils_get_hw_stats(struct aq_hw_s *self);
-
 int hw_atl_utils_fw_downld_dwords(struct aq_hw_s *self, u32 a,
 				  u32 *p, u32 cnt);
 
diff --git a/drivers/net/bnx2x/ecore_sp.c b/drivers/net/bnx2x/ecore_sp.c
index 61f99c6408..7ade8f42d3 100644
--- a/drivers/net/bnx2x/ecore_sp.c
+++ b/drivers/net/bnx2x/ecore_sp.c
@@ -456,23 +456,6 @@ static void __ecore_vlan_mac_h_write_unlock(struct bnx2x_softc *sc,
 	}
 }
 
-/**
- * ecore_vlan_mac_h_write_unlock - unlock the vlan mac head list writer lock
- *
- * @sc:			device handle
- * @o:			vlan_mac object
- *
- * @details Notice if a pending execution exists, it would perform it -
- *          possibly releasing and reclaiming the execution queue lock.
- */
-void ecore_vlan_mac_h_write_unlock(struct bnx2x_softc *sc,
-				   struct ecore_vlan_mac_obj *o)
-{
-	ECORE_SPIN_LOCK_BH(&o->exe_queue.lock);
-	__ecore_vlan_mac_h_write_unlock(sc, o);
-	ECORE_SPIN_UNLOCK_BH(&o->exe_queue.lock);
-}
-
 /**
  * __ecore_vlan_mac_h_read_lock - lock the vlan mac head list reader lock
  *
diff --git a/drivers/net/bnx2x/ecore_sp.h b/drivers/net/bnx2x/ecore_sp.h
index d58072dac0..bfb55e8d01 100644
--- a/drivers/net/bnx2x/ecore_sp.h
+++ b/drivers/net/bnx2x/ecore_sp.h
@@ -1871,8 +1871,6 @@ void ecore_vlan_mac_h_read_unlock(struct bnx2x_softc *sc,
 				  struct ecore_vlan_mac_obj *o);
 int ecore_vlan_mac_h_write_lock(struct bnx2x_softc *sc,
 				struct ecore_vlan_mac_obj *o);
-void ecore_vlan_mac_h_write_unlock(struct bnx2x_softc *sc,
-					  struct ecore_vlan_mac_obj *o);
 int ecore_config_vlan_mac(struct bnx2x_softc *sc,
 			   struct ecore_vlan_mac_ramrod_params *p);
 
diff --git a/drivers/net/bnx2x/elink.c b/drivers/net/bnx2x/elink.c
index b65126d718..67ebdaaa44 100644
--- a/drivers/net/bnx2x/elink.c
+++ b/drivers/net/bnx2x/elink.c
@@ -1154,931 +1154,6 @@ static uint32_t elink_get_cfg_pin(struct bnx2x_softc *sc, uint32_t pin_cfg,
 	return ELINK_STATUS_OK;
 }
 
-/******************************************************************/
-/*				ETS section			  */
-/******************************************************************/
-static void elink_ets_e2e3a0_disabled(struct elink_params *params)
-{
-	/* ETS disabled configuration*/
-	struct bnx2x_softc *sc = params->sc;
-
-	ELINK_DEBUG_P0(sc, "ETS E2E3 disabled configuration");
-
-	/* mapping between entry  priority to client number (0,1,2 -debug and
-	 * management clients, 3 - COS0 client, 4 - COS client)(HIGHEST)
-	 * 3bits client num.
-	 *   PRI4    |    PRI3    |    PRI2    |    PRI1    |    PRI0
-	 * cos1-100     cos0-011     dbg1-010     dbg0-001     MCP-000
-	 */
-
-	REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT, 0x4688);
-	/* Bitmap of 5bits length. Each bit specifies whether the entry behaves
-	 * as strict.  Bits 0,1,2 - debug and management entries, 3 -
-	 * COS0 entry, 4 - COS1 entry.
-	 * COS1 | COS0 | DEBUG1 | DEBUG0 | MGMT
-	 * bit4   bit3	  bit2   bit1	  bit0
-	 * MCP and debug are strict
-	 */
-
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, 0x7);
-	/* defines which entries (clients) are subjected to WFQ arbitration */
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_SUBJECT2WFQ, 0);
-	/* For strict priority entries defines the number of consecutive
-	 * slots for the highest priority.
-	 */
-	REG_WR(sc, NIG_REG_P0_TX_ARB_NUM_STRICT_ARB_SLOTS, 0x100);
-	/* mapping between the CREDIT_WEIGHT registers and actual client
-	 * numbers
-	 */
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_CREDIT_MAP, 0);
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_0, 0);
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_1, 0);
-
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_0, 0);
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_1, 0);
-	REG_WR(sc, PBF_REG_HIGH_PRIORITY_COS_NUM, 0);
-	/* ETS mode disable */
-	REG_WR(sc, PBF_REG_ETS_ENABLED, 0);
-	/* If ETS mode is enabled (there is no strict priority) defines a WFQ
-	 * weight for COS0/COS1.
-	 */
-	REG_WR(sc, PBF_REG_COS0_WEIGHT, 0x2710);
-	REG_WR(sc, PBF_REG_COS1_WEIGHT, 0x2710);
-	/* Upper bound that COS0_WEIGHT can reach in the WFQ arbiter */
-	REG_WR(sc, PBF_REG_COS0_UPPER_BOUND, 0x989680);
-	REG_WR(sc, PBF_REG_COS1_UPPER_BOUND, 0x989680);
-	/* Defines the number of consecutive slots for the strict priority */
-	REG_WR(sc, PBF_REG_NUM_STRICT_ARB_SLOTS, 0);
-}
-/******************************************************************************
- * Description:
- *	Getting min_w_val will be set according to line speed .
- *.
- ******************************************************************************/
-static uint32_t elink_ets_get_min_w_val_nig(const struct elink_vars *vars)
-{
-	uint32_t min_w_val = 0;
-	/* Calculate min_w_val.*/
-	if (vars->link_up) {
-		if (vars->line_speed == ELINK_SPEED_20000)
-			min_w_val = ELINK_ETS_E3B0_NIG_MIN_W_VAL_20GBPS;
-		else
-			min_w_val = ELINK_ETS_E3B0_NIG_MIN_W_VAL_UP_TO_10GBPS;
-	} else {
-		min_w_val = ELINK_ETS_E3B0_NIG_MIN_W_VAL_20GBPS;
-	}
-	/* If the link isn't up (static configuration for example ) The
-	 * link will be according to 20GBPS.
-	 */
-	return min_w_val;
-}
-/******************************************************************************
- * Description:
- *	Getting credit upper bound form min_w_val.
- *.
- ******************************************************************************/
-static uint32_t elink_ets_get_credit_upper_bound(const uint32_t min_w_val)
-{
-	const uint32_t credit_upper_bound = (uint32_t)
-						ELINK_MAXVAL((150 * min_w_val),
-							ELINK_MAX_PACKET_SIZE);
-	return credit_upper_bound;
-}
-/******************************************************************************
- * Description:
- *	Set credit upper bound for NIG.
- *.
- ******************************************************************************/
-static void elink_ets_e3b0_set_credit_upper_bound_nig(
-	const struct elink_params *params,
-	const uint32_t min_w_val)
-{
-	struct bnx2x_softc *sc = params->sc;
-	const uint8_t port = params->port;
-	const uint32_t credit_upper_bound =
-	    elink_ets_get_credit_upper_bound(min_w_val);
-
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_0 :
-		NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_0, credit_upper_bound);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_1 :
-		   NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_1, credit_upper_bound);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_2 :
-		   NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_2, credit_upper_bound);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_3 :
-		   NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_3, credit_upper_bound);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_4 :
-		   NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_4, credit_upper_bound);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_5 :
-		   NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_5, credit_upper_bound);
-
-	if (!port) {
-		REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_6,
-			credit_upper_bound);
-		REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_7,
-			credit_upper_bound);
-		REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_8,
-			credit_upper_bound);
-	}
-}
-/******************************************************************************
- * Description:
- *	Will return the NIG ETS registers to init values.Except
- *	credit_upper_bound.
- *	That isn't used in this configuration (No WFQ is enabled) and will be
- *	configured according to spec
- *.
- ******************************************************************************/
-static void elink_ets_e3b0_nig_disabled(const struct elink_params *params,
-					const struct elink_vars *vars)
-{
-	struct bnx2x_softc *sc = params->sc;
-	const uint8_t port = params->port;
-	const uint32_t min_w_val = elink_ets_get_min_w_val_nig(vars);
-	/* Mapping between entry  priority to client number (0,1,2 -debug and
-	 * management clients, 3 - COS0 client, 4 - COS1, ... 8 -
-	 * COS5)(HIGHEST) 4bits client num.TODO_ETS - Should be done by
-	 * reset value or init tool
-	 */
-	if (port) {
-		REG_WR(sc, NIG_REG_P1_TX_ARB_PRIORITY_CLIENT2_LSB, 0x543210);
-		REG_WR(sc, NIG_REG_P1_TX_ARB_PRIORITY_CLIENT2_MSB, 0x0);
-	} else {
-		REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT2_LSB, 0x76543210);
-		REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT2_MSB, 0x8);
-	}
-	/* For strict priority entries defines the number of consecutive
-	 * slots for the highest priority.
-	 */
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_NUM_STRICT_ARB_SLOTS :
-		   NIG_REG_P1_TX_ARB_NUM_STRICT_ARB_SLOTS, 0x100);
-	/* Mapping between the CREDIT_WEIGHT registers and actual client
-	 * numbers
-	 */
-	if (port) {
-		/*Port 1 has 6 COS*/
-		REG_WR(sc, NIG_REG_P1_TX_ARB_CLIENT_CREDIT_MAP2_LSB, 0x210543);
-		REG_WR(sc, NIG_REG_P1_TX_ARB_CLIENT_CREDIT_MAP2_MSB, 0x0);
-	} else {
-		/*Port 0 has 9 COS*/
-		REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_CREDIT_MAP2_LSB,
-		       0x43210876);
-		REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_CREDIT_MAP2_MSB, 0x5);
-	}
-
-	/* Bitmap of 5bits length. Each bit specifies whether the entry behaves
-	 * as strict.  Bits 0,1,2 - debug and management entries, 3 -
-	 * COS0 entry, 4 - COS1 entry.
-	 * COS1 | COS0 | DEBUG1 | DEBUG0 | MGMT
-	 * bit4   bit3	  bit2   bit1	  bit0
-	 * MCP and debug are strict
-	 */
-	if (port)
-		REG_WR(sc, NIG_REG_P1_TX_ARB_CLIENT_IS_STRICT, 0x3f);
-	else
-		REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, 0x1ff);
-	/* defines which entries (clients) are subjected to WFQ arbitration */
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CLIENT_IS_SUBJECT2WFQ :
-		   NIG_REG_P0_TX_ARB_CLIENT_IS_SUBJECT2WFQ, 0);
-
-	/* Please notice the register address are note continuous and a
-	 * for here is note appropriate.In 2 port mode port0 only COS0-5
-	 * can be used. DEBUG1,DEBUG1,MGMT are never used for WFQ* In 4
-	 * port mode port1 only COS0-2 can be used. DEBUG1,DEBUG1,MGMT
-	 * are never used for WFQ
-	 */
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_0 :
-		   NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_0, 0x0);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_1 :
-		   NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_1, 0x0);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_2 :
-		   NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_2, 0x0);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_3 :
-		   NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_3, 0x0);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_4 :
-		   NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_4, 0x0);
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_5 :
-		   NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_5, 0x0);
-	if (!port) {
-		REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_6, 0x0);
-		REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_7, 0x0);
-		REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_8, 0x0);
-	}
-
-	elink_ets_e3b0_set_credit_upper_bound_nig(params, min_w_val);
-}
-/******************************************************************************
- * Description:
- *	Set credit upper bound for PBF.
- *.
- ******************************************************************************/
-static void elink_ets_e3b0_set_credit_upper_bound_pbf(
-	const struct elink_params *params,
-	const uint32_t min_w_val)
-{
-	struct bnx2x_softc *sc = params->sc;
-	const uint32_t credit_upper_bound =
-	    elink_ets_get_credit_upper_bound(min_w_val);
-	const uint8_t port = params->port;
-	uint32_t base_upper_bound = 0;
-	uint8_t max_cos = 0;
-	uint8_t i = 0;
-	/* In 2 port mode port0 has COS0-5 that can be used for WFQ.In 4
-	 * port mode port1 has COS0-2 that can be used for WFQ.
-	 */
-	if (!port) {
-		base_upper_bound = PBF_REG_COS0_UPPER_BOUND_P0;
-		max_cos = ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
-	} else {
-		base_upper_bound = PBF_REG_COS0_UPPER_BOUND_P1;
-		max_cos = ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1;
-	}
-
-	for (i = 0; i < max_cos; i++)
-		REG_WR(sc, base_upper_bound + (i << 2), credit_upper_bound);
-}
-
-/******************************************************************************
- * Description:
- *	Will return the PBF ETS registers to init values.Except
- *	credit_upper_bound.
- *	That isn't used in this configuration (No WFQ is enabled) and will be
- *	configured according to spec
- *.
- ******************************************************************************/
-static void elink_ets_e3b0_pbf_disabled(const struct elink_params *params)
-{
-	struct bnx2x_softc *sc = params->sc;
-	const uint8_t port = params->port;
-	const uint32_t min_w_val_pbf = ELINK_ETS_E3B0_PBF_MIN_W_VAL;
-	uint8_t i = 0;
-	uint32_t base_weight = 0;
-	uint8_t max_cos = 0;
-
-	/* Mapping between entry  priority to client number 0 - COS0
-	 * client, 2 - COS1, ... 5 - COS5)(HIGHEST) 4bits client num.
-	 * TODO_ETS - Should be done by reset value or init tool
-	 */
-	if (port)
-		/*  0x688 (|011|0 10|00 1|000) */
-		REG_WR(sc, PBF_REG_ETS_ARB_PRIORITY_CLIENT_P1, 0x688);
-	else
-		/*  (10 1|100 |011|0 10|00 1|000) */
-		REG_WR(sc, PBF_REG_ETS_ARB_PRIORITY_CLIENT_P0, 0x2C688);
-
-	/* TODO_ETS - Should be done by reset value or init tool */
-	if (port)
-		/* 0x688 (|011|0 10|00 1|000)*/
-		REG_WR(sc, PBF_REG_ETS_ARB_CLIENT_CREDIT_MAP_P1, 0x688);
-	else
-	/* 0x2C688 (10 1|100 |011|0 10|00 1|000) */
-	REG_WR(sc, PBF_REG_ETS_ARB_CLIENT_CREDIT_MAP_P0, 0x2C688);
-
-	REG_WR(sc, (port) ? PBF_REG_ETS_ARB_NUM_STRICT_ARB_SLOTS_P1 :
-		   PBF_REG_ETS_ARB_NUM_STRICT_ARB_SLOTS_P0, 0x100);
-
-
-	REG_WR(sc, (port) ? PBF_REG_ETS_ARB_CLIENT_IS_STRICT_P1 :
-		   PBF_REG_ETS_ARB_CLIENT_IS_STRICT_P0, 0);
-
-	REG_WR(sc, (port) ? PBF_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ_P1 :
-		   PBF_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ_P0, 0);
-	/* In 2 port mode port0 has COS0-5 that can be used for WFQ.
-	 * In 4 port mode port1 has COS0-2 that can be used for WFQ.
-	 */
-	if (!port) {
-		base_weight = PBF_REG_COS0_WEIGHT_P0;
-		max_cos = ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
-	} else {
-		base_weight = PBF_REG_COS0_WEIGHT_P1;
-		max_cos = ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1;
-	}
-
-	for (i = 0; i < max_cos; i++)
-		REG_WR(sc, base_weight + (0x4 * i), 0);
-
-	elink_ets_e3b0_set_credit_upper_bound_pbf(params, min_w_val_pbf);
-}
-/******************************************************************************
- * Description:
- *	E3B0 disable will return basicly the values to init values.
- *.
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_disabled(const struct elink_params *params,
-				   const struct elink_vars *vars)
-{
-	struct bnx2x_softc *sc = params->sc;
-
-	if (!CHIP_IS_E3B0(sc)) {
-		ELINK_DEBUG_P0(sc,
-		   "elink_ets_e3b0_disabled the chip isn't E3B0");
-		return ELINK_STATUS_ERROR;
-	}
-
-	elink_ets_e3b0_nig_disabled(params, vars);
-
-	elink_ets_e3b0_pbf_disabled(params);
-
-	return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- *	Disable will return basicly the values to init values.
- *
- ******************************************************************************/
-elink_status_t elink_ets_disabled(struct elink_params *params,
-		      struct elink_vars *vars)
-{
-	struct bnx2x_softc *sc = params->sc;
-	elink_status_t elink_status = ELINK_STATUS_OK;
-
-	if ((CHIP_IS_E2(sc)) || (CHIP_IS_E3A0(sc))) {
-		elink_ets_e2e3a0_disabled(params);
-	} else if (CHIP_IS_E3B0(sc)) {
-		elink_status = elink_ets_e3b0_disabled(params, vars);
-	} else {
-		ELINK_DEBUG_P0(sc, "elink_ets_disabled - chip not supported");
-		return ELINK_STATUS_ERROR;
-	}
-
-	return elink_status;
-}
-
-/******************************************************************************
- * Description
- *	Set the COS mappimg to SP and BW until this point all the COS are not
- *	set as SP or BW.
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_cli_map(const struct elink_params *params,
-		  __rte_unused const struct elink_ets_params *ets_params,
-		  const uint8_t cos_sp_bitmap,
-		  const uint8_t cos_bw_bitmap)
-{
-	struct bnx2x_softc *sc = params->sc;
-	const uint8_t port = params->port;
-	const uint8_t nig_cli_sp_bitmap = 0x7 | (cos_sp_bitmap << 3);
-	const uint8_t pbf_cli_sp_bitmap = cos_sp_bitmap;
-	const uint8_t nig_cli_subject2wfq_bitmap = cos_bw_bitmap << 3;
-	const uint8_t pbf_cli_subject2wfq_bitmap = cos_bw_bitmap;
-
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CLIENT_IS_STRICT :
-	       NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, nig_cli_sp_bitmap);
-
-	REG_WR(sc, (port) ? PBF_REG_ETS_ARB_CLIENT_IS_STRICT_P1 :
-	       PBF_REG_ETS_ARB_CLIENT_IS_STRICT_P0, pbf_cli_sp_bitmap);
-
-	REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CLIENT_IS_SUBJECT2WFQ :
-	       NIG_REG_P0_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
-	       nig_cli_subject2wfq_bitmap);
-
-	REG_WR(sc, (port) ? PBF_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ_P1 :
-	       PBF_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ_P0,
-	       pbf_cli_subject2wfq_bitmap);
-
-	return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- *	This function is needed because NIG ARB_CREDIT_WEIGHT_X are
- *	not continues and ARB_CREDIT_WEIGHT_0 + offset is suitable.
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_set_cos_bw(struct bnx2x_softc *sc,
-				     const uint8_t cos_entry,
-				     const uint32_t min_w_val_nig,
-				     const uint32_t min_w_val_pbf,
-				     const uint16_t total_bw,
-				     const uint8_t bw,
-				     const uint8_t port)
-{
-	uint32_t nig_reg_address_crd_weight = 0;
-	uint32_t pbf_reg_address_crd_weight = 0;
-	/* Calculate and set BW for this COS - use 1 instead of 0 for BW */
-	const uint32_t cos_bw_nig = ((bw ? bw : 1) * min_w_val_nig) / total_bw;
-	const uint32_t cos_bw_pbf = ((bw ? bw : 1) * min_w_val_pbf) / total_bw;
-
-	switch (cos_entry) {
-	case 0:
-	    nig_reg_address_crd_weight =
-		 (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_0 :
-		     NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_0;
-	     pbf_reg_address_crd_weight = (port) ?
-		 PBF_REG_COS0_WEIGHT_P1 : PBF_REG_COS0_WEIGHT_P0;
-		break;
-	case 1:
-	     nig_reg_address_crd_weight = (port) ?
-		 NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_1 :
-		 NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_1;
-	     pbf_reg_address_crd_weight = (port) ?
-		 PBF_REG_COS1_WEIGHT_P1 : PBF_REG_COS1_WEIGHT_P0;
-		break;
-	case 2:
-	     nig_reg_address_crd_weight = (port) ?
-		 NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_2 :
-		 NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_2;
-
-		 pbf_reg_address_crd_weight = (port) ?
-		     PBF_REG_COS2_WEIGHT_P1 : PBF_REG_COS2_WEIGHT_P0;
-		break;
-	case 3:
-		if (port)
-			return ELINK_STATUS_ERROR;
-		nig_reg_address_crd_weight =
-			NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_3;
-		pbf_reg_address_crd_weight =
-			PBF_REG_COS3_WEIGHT_P0;
-		break;
-	case 4:
-		if (port)
-		return ELINK_STATUS_ERROR;
-	     nig_reg_address_crd_weight =
-		 NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_4;
-	     pbf_reg_address_crd_weight = PBF_REG_COS4_WEIGHT_P0;
-		break;
-	case 5:
-		if (port)
-		return ELINK_STATUS_ERROR;
-	     nig_reg_address_crd_weight =
-		 NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_5;
-	     pbf_reg_address_crd_weight = PBF_REG_COS5_WEIGHT_P0;
-		break;
-	}
-
-	REG_WR(sc, nig_reg_address_crd_weight, cos_bw_nig);
-
-	REG_WR(sc, pbf_reg_address_crd_weight, cos_bw_pbf);
-
-	return ELINK_STATUS_OK;
-}
-/******************************************************************************
- * Description:
- *	Calculate the total BW.A value of 0 isn't legal.
- *
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_get_total_bw(
-	const struct elink_params *params,
-	struct elink_ets_params *ets_params,
-	uint16_t *total_bw)
-{
-	struct bnx2x_softc *sc = params->sc;
-	uint8_t cos_idx = 0;
-	uint8_t is_bw_cos_exist = 0;
-
-	*total_bw = 0;
-	/* Calculate total BW requested */
-	for (cos_idx = 0; cos_idx < ets_params->num_of_cos; cos_idx++) {
-		if (ets_params->cos[cos_idx].state == elink_cos_state_bw) {
-			is_bw_cos_exist = 1;
-			if (!ets_params->cos[cos_idx].params.bw_params.bw) {
-				ELINK_DEBUG_P0(sc, "elink_ets_E3B0_config BW"
-						   " was set to 0");
-				/* This is to prevent a state when ramrods
-				 * can't be sent
-				 */
-				ets_params->cos[cos_idx].params.bw_params.bw
-					 = 1;
-			}
-			*total_bw +=
-				ets_params->cos[cos_idx].params.bw_params.bw;
-		}
-	}
-
-	/* Check total BW is valid */
-	if ((is_bw_cos_exist == 1) && (*total_bw != 100)) {
-		if (*total_bw == 0) {
-			ELINK_DEBUG_P0(sc,
-			   "elink_ets_E3B0_config total BW shouldn't be 0");
-			return ELINK_STATUS_ERROR;
-		}
-		ELINK_DEBUG_P0(sc,
-		   "elink_ets_E3B0_config total BW should be 100");
-		/* We can handle a case whre the BW isn't 100 this can happen
-		 * if the TC are joined.
-		 */
-	}
-	return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- *	Invalidate all the sp_pri_to_cos.
- *
- ******************************************************************************/
-static void elink_ets_e3b0_sp_pri_to_cos_init(uint8_t *sp_pri_to_cos)
-{
-	uint8_t pri = 0;
-	for (pri = 0; pri < ELINK_DCBX_MAX_NUM_COS; pri++)
-		sp_pri_to_cos[pri] = DCBX_INVALID_COS;
-}
-/******************************************************************************
- * Description:
- *	Calculate and set the SP (ARB_PRIORITY_CLIENT) NIG and PBF registers
- *	according to sp_pri_to_cos.
- *
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_sp_pri_to_cos_set(
-					    const struct elink_params *params,
-					    uint8_t *sp_pri_to_cos,
-					    const uint8_t pri,
-					    const uint8_t cos_entry)
-{
-	struct bnx2x_softc *sc = params->sc;
-	const uint8_t port = params->port;
-	const uint8_t max_num_of_cos = (port) ?
-		ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1 :
-		ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
-
-	if (pri >= max_num_of_cos) {
-		ELINK_DEBUG_P0(sc, "elink_ets_e3b0_sp_pri_to_cos_set invalid "
-		   "parameter Illegal strict priority");
-		return ELINK_STATUS_ERROR;
-	}
-
-	if (sp_pri_to_cos[pri] != DCBX_INVALID_COS) {
-		ELINK_DEBUG_P0(sc, "elink_ets_e3b0_sp_pri_to_cos_set invalid "
-				   "parameter There can't be two COS's with "
-				   "the same strict pri");
-		return ELINK_STATUS_ERROR;
-	}
-
-	sp_pri_to_cos[pri] = cos_entry;
-	return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- *	Returns the correct value according to COS and priority in
- *	the sp_pri_cli register.
- *
- ******************************************************************************/
-static uint64_t elink_e3b0_sp_get_pri_cli_reg(const uint8_t cos,
-					 const uint8_t cos_offset,
-					 const uint8_t pri_set,
-					 const uint8_t pri_offset,
-					 const uint8_t entry_size)
-{
-	uint64_t pri_cli_nig = 0;
-	pri_cli_nig = ((uint64_t)(cos + cos_offset)) << (entry_size *
-						    (pri_set + pri_offset));
-
-	return pri_cli_nig;
-}
-/******************************************************************************
- * Description:
- *	Returns the correct value according to COS and priority in the
- *	sp_pri_cli register for NIG.
- *
- ******************************************************************************/
-static uint64_t elink_e3b0_sp_get_pri_cli_reg_nig(const uint8_t cos,
-						  const uint8_t pri_set)
-{
-	/* MCP Dbg0 and dbg1 are always with higher strict pri*/
-	const uint8_t nig_cos_offset = 3;
-	const uint8_t nig_pri_offset = 3;
-
-	return elink_e3b0_sp_get_pri_cli_reg(cos, nig_cos_offset, pri_set,
-		nig_pri_offset, 4);
-}
-
-/******************************************************************************
- * Description:
- *	Returns the correct value according to COS and priority in the
- *	sp_pri_cli register for PBF.
- *
- ******************************************************************************/
-static uint64_t elink_e3b0_sp_get_pri_cli_reg_pbf(const uint8_t cos,
-						  const uint8_t pri_set)
-{
-	const uint8_t pbf_cos_offset = 0;
-	const uint8_t pbf_pri_offset = 0;
-
-	return elink_e3b0_sp_get_pri_cli_reg(cos, pbf_cos_offset, pri_set,
-		pbf_pri_offset, 3);
-}
-
-/******************************************************************************
- * Description:
- *	Calculate and set the SP (ARB_PRIORITY_CLIENT) NIG and PBF registers
- *	according to sp_pri_to_cos.(which COS has higher priority)
- *
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_sp_set_pri_cli_reg(
-					     const struct elink_params *params,
-					     uint8_t *sp_pri_to_cos)
-{
-	struct bnx2x_softc *sc = params->sc;
-	uint8_t i = 0;
-	const uint8_t port = params->port;
-	/* MCP Dbg0 and dbg1 are always with higher strict pri*/
-	uint64_t pri_cli_nig = 0x210;
-	uint32_t pri_cli_pbf = 0x0;
-	uint8_t pri_set = 0;
-	uint8_t pri_bitmask = 0;
-	const uint8_t max_num_of_cos = (port) ?
-		ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1 :
-		ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
-
-	uint8_t cos_bit_to_set = (1 << max_num_of_cos) - 1;
-
-	/* Set all the strict priority first */
-	for (i = 0; i < max_num_of_cos; i++) {
-		if (sp_pri_to_cos[i] != DCBX_INVALID_COS) {
-			if (sp_pri_to_cos[i] >= ELINK_DCBX_MAX_NUM_COS) {
-				ELINK_DEBUG_P0(sc,
-					   "elink_ets_e3b0_sp_set_pri_cli_reg "
-					   "invalid cos entry");
-				return ELINK_STATUS_ERROR;
-			}
-
-			pri_cli_nig |= elink_e3b0_sp_get_pri_cli_reg_nig(
-			    sp_pri_to_cos[i], pri_set);
-
-			pri_cli_pbf |= elink_e3b0_sp_get_pri_cli_reg_pbf(
-			    sp_pri_to_cos[i], pri_set);
-			pri_bitmask = 1 << sp_pri_to_cos[i];
-			/* COS is used remove it from bitmap.*/
-			if (!(pri_bitmask & cos_bit_to_set)) {
-				ELINK_DEBUG_P0(sc,
-					"elink_ets_e3b0_sp_set_pri_cli_reg "
-					"invalid There can't be two COS's with"
-					" the same strict pri");
-				return ELINK_STATUS_ERROR;
-			}
-			cos_bit_to_set &= ~pri_bitmask;
-			pri_set++;
-		}
-	}
-
-	/* Set all the Non strict priority i= COS*/
-	for (i = 0; i < max_num_of_cos; i++) {
-		pri_bitmask = 1 << i;
-		/* Check if COS was already used for SP */
-		if (pri_bitmask & cos_bit_to_set) {
-			/* COS wasn't used for SP */
-			pri_cli_nig |= elink_e3b0_sp_get_pri_cli_reg_nig(
-			    i, pri_set);
-
-			pri_cli_pbf |= elink_e3b0_sp_get_pri_cli_reg_pbf(
-			    i, pri_set);
-			/* COS is used remove it from bitmap.*/
-			cos_bit_to_set &= ~pri_bitmask;
-			pri_set++;
-		}
-	}
-
-	if (pri_set != max_num_of_cos) {
-		ELINK_DEBUG_P0(sc, "elink_ets_e3b0_sp_set_pri_cli_reg not all "
-				   "entries were set");
-		return ELINK_STATUS_ERROR;
-	}
-
-	if (port) {
-		/* Only 6 usable clients*/
-		REG_WR(sc, NIG_REG_P1_TX_ARB_PRIORITY_CLIENT2_LSB,
-		       (uint32_t)pri_cli_nig);
-
-		REG_WR(sc, PBF_REG_ETS_ARB_PRIORITY_CLIENT_P1, pri_cli_pbf);
-	} else {
-		/* Only 9 usable clients*/
-		const uint32_t pri_cli_nig_lsb = (uint32_t)(pri_cli_nig);
-		const uint32_t pri_cli_nig_msb = (uint32_t)
-						((pri_cli_nig >> 32) & 0xF);
-
-		REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT2_LSB,
-		       pri_cli_nig_lsb);
-		REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT2_MSB,
-		       pri_cli_nig_msb);
-
-		REG_WR(sc, PBF_REG_ETS_ARB_PRIORITY_CLIENT_P0, pri_cli_pbf);
-	}
-	return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- *	Configure the COS to ETS according to BW and SP settings.
- ******************************************************************************/
-elink_status_t elink_ets_e3b0_config(const struct elink_params *params,
-			 const struct elink_vars *vars,
-			 struct elink_ets_params *ets_params)
-{
-	struct bnx2x_softc *sc = params->sc;
-	elink_status_t elink_status = ELINK_STATUS_OK;
-	const uint8_t port = params->port;
-	uint16_t total_bw = 0;
-	const uint32_t min_w_val_nig = elink_ets_get_min_w_val_nig(vars);
-	const uint32_t min_w_val_pbf = ELINK_ETS_E3B0_PBF_MIN_W_VAL;
-	uint8_t cos_bw_bitmap = 0;
-	uint8_t cos_sp_bitmap = 0;
-	uint8_t sp_pri_to_cos[ELINK_DCBX_MAX_NUM_COS] = {0};
-	const uint8_t max_num_of_cos = (port) ?
-		ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1 :
-		ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
-	uint8_t cos_entry = 0;
-
-	if (!CHIP_IS_E3B0(sc)) {
-		ELINK_DEBUG_P0(sc,
-		   "elink_ets_e3b0_disabled the chip isn't E3B0");
-		return ELINK_STATUS_ERROR;
-	}
-
-	if (ets_params->num_of_cos > max_num_of_cos) {
-		ELINK_DEBUG_P0(sc, "elink_ets_E3B0_config the number of COS "
-				   "isn't supported");
-		return ELINK_STATUS_ERROR;
-	}
-
-	/* Prepare sp strict priority parameters*/
-	elink_ets_e3b0_sp_pri_to_cos_init(sp_pri_to_cos);
-
-	/* Prepare BW parameters*/
-	elink_status = elink_ets_e3b0_get_total_bw(params, ets_params,
-						   &total_bw);
-	if (elink_status != ELINK_STATUS_OK) {
-		ELINK_DEBUG_P0(sc,
-		   "elink_ets_E3B0_config get_total_bw failed");
-		return ELINK_STATUS_ERROR;
-	}
-
-	/* Upper bound is set according to current link speed (min_w_val
-	 * should be the same for upper bound and COS credit val).
-	 */
-	elink_ets_e3b0_set_credit_upper_bound_nig(params, min_w_val_nig);
-	elink_ets_e3b0_set_credit_upper_bound_pbf(params, min_w_val_pbf);
-
-
-	for (cos_entry = 0; cos_entry < ets_params->num_of_cos; cos_entry++) {
-		if (elink_cos_state_bw == ets_params->cos[cos_entry].state) {
-			cos_bw_bitmap |= (1 << cos_entry);
-			/* The function also sets the BW in HW(not the mappin
-			 * yet)
-			 */
-			elink_status = elink_ets_e3b0_set_cos_bw(
-				sc, cos_entry, min_w_val_nig, min_w_val_pbf,
-				total_bw,
-				ets_params->cos[cos_entry].params.bw_params.bw,
-				 port);
-		} else if (elink_cos_state_strict ==
-			ets_params->cos[cos_entry].state){
-			cos_sp_bitmap |= (1 << cos_entry);
-
-			elink_status = elink_ets_e3b0_sp_pri_to_cos_set(
-				params,
-				sp_pri_to_cos,
-				ets_params->cos[cos_entry].params.sp_params.pri,
-				cos_entry);
-
-		} else {
-			ELINK_DEBUG_P0(sc,
-			   "elink_ets_e3b0_config cos state not valid");
-			return ELINK_STATUS_ERROR;
-		}
-		if (elink_status != ELINK_STATUS_OK) {
-			ELINK_DEBUG_P0(sc,
-			   "elink_ets_e3b0_config set cos bw failed");
-			return elink_status;
-		}
-	}
-
-	/* Set SP register (which COS has higher priority) */
-	elink_status = elink_ets_e3b0_sp_set_pri_cli_reg(params,
-							 sp_pri_to_cos);
-
-	if (elink_status != ELINK_STATUS_OK) {
-		ELINK_DEBUG_P0(sc,
-		   "elink_ets_E3B0_config set_pri_cli_reg failed");
-		return elink_status;
-	}
-
-	/* Set client mapping of BW and strict */
-	elink_status = elink_ets_e3b0_cli_map(params, ets_params,
-					      cos_sp_bitmap,
-					      cos_bw_bitmap);
-
-	if (elink_status != ELINK_STATUS_OK) {
-		ELINK_DEBUG_P0(sc, "elink_ets_E3B0_config SP failed");
-		return elink_status;
-	}
-	return ELINK_STATUS_OK;
-}
-static void elink_ets_bw_limit_common(const struct elink_params *params)
-{
-	/* ETS disabled configuration */
-	struct bnx2x_softc *sc = params->sc;
-	ELINK_DEBUG_P0(sc, "ETS enabled BW limit configuration");
-	/* Defines which entries (clients) are subjected to WFQ arbitration
-	 * COS0 0x8
-	 * COS1 0x10
-	 */
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_SUBJECT2WFQ, 0x18);
-	/* Mapping between the ARB_CREDIT_WEIGHT registers and actual
-	 * client numbers (WEIGHT_0 does not actually have to represent
-	 * client 0)
-	 *    PRI4    |    PRI3    |    PRI2    |    PRI1    |    PRI0
-	 *  cos1-001     cos0-000     dbg1-100     dbg0-011     MCP-010
-	 */
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_CREDIT_MAP, 0x111A);
-
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_0,
-	       ELINK_ETS_BW_LIMIT_CREDIT_UPPER_BOUND);
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_1,
-	       ELINK_ETS_BW_LIMIT_CREDIT_UPPER_BOUND);
-
-	/* ETS mode enabled*/
-	REG_WR(sc, PBF_REG_ETS_ENABLED, 1);
-
-	/* Defines the number of consecutive slots for the strict priority */
-	REG_WR(sc, PBF_REG_NUM_STRICT_ARB_SLOTS, 0);
-	/* Bitmap of 5bits length. Each bit specifies whether the entry behaves
-	 * as strict.  Bits 0,1,2 - debug and management entries, 3 - COS0
-	 * entry, 4 - COS1 entry.
-	 * COS1 | COS0 | DEBUG21 | DEBUG0 | MGMT
-	 * bit4   bit3	  bit2     bit1	   bit0
-	 * MCP and debug are strict
-	 */
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, 0x7);
-
-	/* Upper bound that COS0_WEIGHT can reach in the WFQ arbiter.*/
-	REG_WR(sc, PBF_REG_COS0_UPPER_BOUND,
-	       ELINK_ETS_BW_LIMIT_CREDIT_UPPER_BOUND);
-	REG_WR(sc, PBF_REG_COS1_UPPER_BOUND,
-	       ELINK_ETS_BW_LIMIT_CREDIT_UPPER_BOUND);
-}
-
-void elink_ets_bw_limit(const struct elink_params *params,
-			const uint32_t cos0_bw,
-			const uint32_t cos1_bw)
-{
-	/* ETS disabled configuration*/
-	struct bnx2x_softc *sc = params->sc;
-	const uint32_t total_bw = cos0_bw + cos1_bw;
-	uint32_t cos0_credit_weight = 0;
-	uint32_t cos1_credit_weight = 0;
-
-	ELINK_DEBUG_P0(sc, "ETS enabled BW limit configuration");
-
-	if ((!total_bw) ||
-	    (!cos0_bw) ||
-	    (!cos1_bw)) {
-		ELINK_DEBUG_P0(sc, "Total BW can't be zero");
-		return;
-	}
-
-	cos0_credit_weight = (cos0_bw * ELINK_ETS_BW_LIMIT_CREDIT_WEIGHT) /
-		total_bw;
-	cos1_credit_weight = (cos1_bw * ELINK_ETS_BW_LIMIT_CREDIT_WEIGHT) /
-		total_bw;
-
-	elink_ets_bw_limit_common(params);
-
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_0, cos0_credit_weight);
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_1, cos1_credit_weight);
-
-	REG_WR(sc, PBF_REG_COS0_WEIGHT, cos0_credit_weight);
-	REG_WR(sc, PBF_REG_COS1_WEIGHT, cos1_credit_weight);
-}
-
-elink_status_t elink_ets_strict(const struct elink_params *params,
-				const uint8_t strict_cos)
-{
-	/* ETS disabled configuration*/
-	struct bnx2x_softc *sc = params->sc;
-	uint32_t val	= 0;
-
-	ELINK_DEBUG_P0(sc, "ETS enabled strict configuration");
-	/* Bitmap of 5bits length. Each bit specifies whether the entry behaves
-	 * as strict.  Bits 0,1,2 - debug and management entries,
-	 * 3 - COS0 entry, 4 - COS1 entry.
-	 *  COS1 | COS0 | DEBUG21 | DEBUG0 | MGMT
-	 *  bit4   bit3	  bit2      bit1     bit0
-	 * MCP and debug are strict
-	 */
-	REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, 0x1F);
-	/* For strict priority entries defines the number of consecutive slots
-	 * for the highest priority.
-	 */
-	REG_WR(sc, NIG_REG_P0_TX_ARB_NUM_STRICT_ARB_SLOTS, 0x100);
-	/* ETS mode disable */
-	REG_WR(sc, PBF_REG_ETS_ENABLED, 0);
-	/* Defines the number of consecutive slots for the strict priority */
-	REG_WR(sc, PBF_REG_NUM_STRICT_ARB_SLOTS, 0x100);
-
-	/* Defines the number of consecutive slots for the strict priority */
-	REG_WR(sc, PBF_REG_HIGH_PRIORITY_COS_NUM, strict_cos);
-
-	/* Mapping between entry  priority to client number (0,1,2 -debug and
-	 * management clients, 3 - COS0 client, 4 - COS client)(HIGHEST)
-	 * 3bits client num.
-	 *   PRI4    |    PRI3    |    PRI2    |    PRI1    |    PRI0
-	 * dbg0-010     dbg1-001     cos1-100     cos0-011     MCP-000
-	 * dbg0-010     dbg1-001     cos0-011     cos1-100     MCP-000
-	 */
-	val = (!strict_cos) ? 0x2318 : 0x22E0;
-	REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT, val);
-
-	return ELINK_STATUS_OK;
-}
-
 /******************************************************************/
 /*			PFC section				  */
 /******************************************************************/
@@ -2143,56 +1218,6 @@ static void elink_update_pfc_xmac(struct elink_params *params,
 	DELAY(30);
 }
 
-static void elink_emac_get_pfc_stat(struct elink_params *params,
-				    uint32_t pfc_frames_sent[2],
-				    uint32_t pfc_frames_received[2])
-{
-	/* Read pfc statistic */
-	struct bnx2x_softc *sc = params->sc;
-	uint32_t emac_base = params->port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
-	uint32_t val_xon = 0;
-	uint32_t val_xoff = 0;
-
-	ELINK_DEBUG_P0(sc, "pfc statistic read from EMAC");
-
-	/* PFC received frames */
-	val_xoff = REG_RD(sc, emac_base +
-				EMAC_REG_RX_PFC_STATS_XOFF_RCVD);
-	val_xoff &= EMAC_REG_RX_PFC_STATS_XOFF_RCVD_COUNT;
-	val_xon = REG_RD(sc, emac_base + EMAC_REG_RX_PFC_STATS_XON_RCVD);
-	val_xon &= EMAC_REG_RX_PFC_STATS_XON_RCVD_COUNT;
-
-	pfc_frames_received[0] = val_xon + val_xoff;
-
-	/* PFC received sent */
-	val_xoff = REG_RD(sc, emac_base +
-				EMAC_REG_RX_PFC_STATS_XOFF_SENT);
-	val_xoff &= EMAC_REG_RX_PFC_STATS_XOFF_SENT_COUNT;
-	val_xon = REG_RD(sc, emac_base + EMAC_REG_RX_PFC_STATS_XON_SENT);
-	val_xon &= EMAC_REG_RX_PFC_STATS_XON_SENT_COUNT;
-
-	pfc_frames_sent[0] = val_xon + val_xoff;
-}
-
-/* Read pfc statistic*/
-void elink_pfc_statistic(struct elink_params *params, struct elink_vars *vars,
-			 uint32_t pfc_frames_sent[2],
-			 uint32_t pfc_frames_received[2])
-{
-	/* Read pfc statistic */
-	struct bnx2x_softc *sc = params->sc;
-
-	ELINK_DEBUG_P0(sc, "pfc statistic");
-
-	if (!vars->link_up)
-		return;
-
-	if (vars->mac_type == ELINK_MAC_TYPE_EMAC) {
-		ELINK_DEBUG_P0(sc, "About to read PFC stats from EMAC");
-		elink_emac_get_pfc_stat(params, pfc_frames_sent,
-					pfc_frames_received);
-	}
-}
 /******************************************************************/
 /*			MAC/PBF section				  */
 /******************************************************************/
@@ -2877,54 +1902,6 @@ static void elink_update_pfc_bmac2(struct elink_params *params,
 	REG_WR_DMAE(sc, bmac_addr + BIGMAC2_REGISTER_BMAC_CONTROL, wb_data, 2);
 }
 
-/******************************************************************************
- * Description:
- *  This function is needed because NIG ARB_CREDIT_WEIGHT_X are
- *  not continues and ARB_CREDIT_WEIGHT_0 + offset is suitable.
- ******************************************************************************/
-static elink_status_t elink_pfc_nig_rx_priority_mask(struct bnx2x_softc *sc,
-					   uint8_t cos_entry,
-					   uint32_t priority_mask, uint8_t port)
-{
-	uint32_t nig_reg_rx_priority_mask_add = 0;
-
-	switch (cos_entry) {
-	case 0:
-	     nig_reg_rx_priority_mask_add = (port) ?
-		 NIG_REG_P1_RX_COS0_PRIORITY_MASK :
-		 NIG_REG_P0_RX_COS0_PRIORITY_MASK;
-		break;
-	case 1:
-	    nig_reg_rx_priority_mask_add = (port) ?
-		NIG_REG_P1_RX_COS1_PRIORITY_MASK :
-		NIG_REG_P0_RX_COS1_PRIORITY_MASK;
-		break;
-	case 2:
-	    nig_reg_rx_priority_mask_add = (port) ?
-		NIG_REG_P1_RX_COS2_PRIORITY_MASK :
-		NIG_REG_P0_RX_COS2_PRIORITY_MASK;
-		break;
-	case 3:
-		if (port)
-		return ELINK_STATUS_ERROR;
-	    nig_reg_rx_priority_mask_add = NIG_REG_P0_RX_COS3_PRIORITY_MASK;
-		break;
-	case 4:
-		if (port)
-		return ELINK_STATUS_ERROR;
-	    nig_reg_rx_priority_mask_add = NIG_REG_P0_RX_COS4_PRIORITY_MASK;
-		break;
-	case 5:
-		if (port)
-		return ELINK_STATUS_ERROR;
-	    nig_reg_rx_priority_mask_add = NIG_REG_P0_RX_COS5_PRIORITY_MASK;
-		break;
-	}
-
-	REG_WR(sc, nig_reg_rx_priority_mask_add, priority_mask);
-
-	return ELINK_STATUS_OK;
-}
 static void elink_update_mng(struct elink_params *params, uint32_t link_status)
 {
 	struct bnx2x_softc *sc = params->sc;
@@ -2934,157 +1911,6 @@ static void elink_update_mng(struct elink_params *params, uint32_t link_status)
 			port_mb[params->port].link_status), link_status);
 }
 
-static void elink_update_pfc_nig(struct elink_params *params,
-		__rte_unused struct elink_vars *vars,
-		struct elink_nig_brb_pfc_port_params *nig_params)
-{
-	uint32_t xcm_mask = 0, ppp_enable = 0, pause_enable = 0;
-	uint32_t llfc_out_en = 0;
-	uint32_t llfc_enable = 0, xcm_out_en = 0, hwpfc_enable = 0;
-	uint32_t pkt_priority_to_cos = 0;
-	struct bnx2x_softc *sc = params->sc;
-	uint8_t port = params->port;
-
-	int set_pfc = params->feature_config_flags &
-		ELINK_FEATURE_CONFIG_PFC_ENABLED;
-	ELINK_DEBUG_P0(sc, "updating pfc nig parameters");
-
-	/* When NIG_LLH0_XCM_MASK_REG_LLHX_XCM_MASK_BCN bit is set
-	 * MAC control frames (that are not pause packets)
-	 * will be forwarded to the XCM.
-	 */
-	xcm_mask = REG_RD(sc, port ? NIG_REG_LLH1_XCM_MASK :
-			  NIG_REG_LLH0_XCM_MASK);
-	/* NIG params will override non PFC params, since it's possible to
-	 * do transition from PFC to SAFC
-	 */
-	if (set_pfc) {
-		pause_enable = 0;
-		llfc_out_en = 0;
-		llfc_enable = 0;
-		if (CHIP_IS_E3(sc))
-			ppp_enable = 0;
-		else
-			ppp_enable = 1;
-		xcm_mask &= ~(port ? NIG_LLH1_XCM_MASK_REG_LLH1_XCM_MASK_BCN :
-				     NIG_LLH0_XCM_MASK_REG_LLH0_XCM_MASK_BCN);
-		xcm_out_en = 0;
-		hwpfc_enable = 1;
-	} else  {
-		if (nig_params) {
-			llfc_out_en = nig_params->llfc_out_en;
-			llfc_enable = nig_params->llfc_enable;
-			pause_enable = nig_params->pause_enable;
-		} else  /* Default non PFC mode - PAUSE */
-			pause_enable = 1;
-
-		xcm_mask |= (port ? NIG_LLH1_XCM_MASK_REG_LLH1_XCM_MASK_BCN :
-			NIG_LLH0_XCM_MASK_REG_LLH0_XCM_MASK_BCN);
-		xcm_out_en = 1;
-	}
-
-	if (CHIP_IS_E3(sc))
-		REG_WR(sc, port ? NIG_REG_BRB1_PAUSE_IN_EN :
-		       NIG_REG_BRB0_PAUSE_IN_EN, pause_enable);
-	REG_WR(sc, port ? NIG_REG_LLFC_OUT_EN_1 :
-	       NIG_REG_LLFC_OUT_EN_0, llfc_out_en);
-	REG_WR(sc, port ? NIG_REG_LLFC_ENABLE_1 :
-	       NIG_REG_LLFC_ENABLE_0, llfc_enable);
-	REG_WR(sc, port ? NIG_REG_PAUSE_ENABLE_1 :
-	       NIG_REG_PAUSE_ENABLE_0, pause_enable);
-
-	REG_WR(sc, port ? NIG_REG_PPP_ENABLE_1 :
-	       NIG_REG_PPP_ENABLE_0, ppp_enable);
-
-	REG_WR(sc, port ? NIG_REG_LLH1_XCM_MASK :
-	       NIG_REG_LLH0_XCM_MASK, xcm_mask);
-
-	REG_WR(sc, port ? NIG_REG_LLFC_EGRESS_SRC_ENABLE_1 :
-	       NIG_REG_LLFC_EGRESS_SRC_ENABLE_0, 0x7);
-
-	/* Output enable for RX_XCM # IF */
-	REG_WR(sc, port ? NIG_REG_XCM1_OUT_EN :
-	       NIG_REG_XCM0_OUT_EN, xcm_out_en);
-
-	/* HW PFC TX enable */
-	REG_WR(sc, port ? NIG_REG_P1_HWPFC_ENABLE :
-	       NIG_REG_P0_HWPFC_ENABLE, hwpfc_enable);
-
-	if (nig_params) {
-		uint8_t i = 0;
-		pkt_priority_to_cos = nig_params->pkt_priority_to_cos;
-
-		for (i = 0; i < nig_params->num_of_rx_cos_priority_mask; i++)
-			elink_pfc_nig_rx_priority_mask(sc, i,
-		nig_params->rx_cos_priority_mask[i], port);
-
-		REG_WR(sc, port ? NIG_REG_LLFC_HIGH_PRIORITY_CLASSES_1 :
-		       NIG_REG_LLFC_HIGH_PRIORITY_CLASSES_0,
-		       nig_params->llfc_high_priority_classes);
-
-		REG_WR(sc, port ? NIG_REG_LLFC_LOW_PRIORITY_CLASSES_1 :
-		       NIG_REG_LLFC_LOW_PRIORITY_CLASSES_0,
-		       nig_params->llfc_low_priority_classes);
-	}
-	REG_WR(sc, port ? NIG_REG_P1_PKT_PRIORITY_TO_COS :
-	       NIG_REG_P0_PKT_PRIORITY_TO_COS,
-	       pkt_priority_to_cos);
-}
-
-elink_status_t elink_update_pfc(struct elink_params *params,
-		      struct elink_vars *vars,
-		      struct elink_nig_brb_pfc_port_params *pfc_params)
-{
-	/* The PFC and pause are orthogonal to one another, meaning when
-	 * PFC is enabled, the pause are disabled, and when PFC is
-	 * disabled, pause are set according to the pause result.
-	 */
-	uint32_t val;
-	struct bnx2x_softc *sc = params->sc;
-	uint8_t bmac_loopback = (params->loopback_mode == ELINK_LOOPBACK_BMAC);
-
-	if (params->feature_config_flags & ELINK_FEATURE_CONFIG_PFC_ENABLED)
-		vars->link_status |= LINK_STATUS_PFC_ENABLED;
-	else
-		vars->link_status &= ~LINK_STATUS_PFC_ENABLED;
-
-	elink_update_mng(params, vars->link_status);
-
-	/* Update NIG params */
-	elink_update_pfc_nig(params, vars, pfc_params);
-
-	if (!vars->link_up)
-		return ELINK_STATUS_OK;
-
-	ELINK_DEBUG_P0(sc, "About to update PFC in BMAC");
-
-	if (CHIP_IS_E3(sc)) {
-		if (vars->mac_type == ELINK_MAC_TYPE_XMAC)
-			elink_update_pfc_xmac(params, vars, 0);
-	} else {
-		val = REG_RD(sc, MISC_REG_RESET_REG_2);
-		if ((val &
-		     (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << params->port))
-		    == 0) {
-			ELINK_DEBUG_P0(sc, "About to update PFC in EMAC");
-			elink_emac_enable(params, vars, 0);
-			return ELINK_STATUS_OK;
-		}
-		if (CHIP_IS_E2(sc))
-			elink_update_pfc_bmac2(params, vars, bmac_loopback);
-		else
-			elink_update_pfc_bmac1(params, vars);
-
-		val = 0;
-		if ((params->feature_config_flags &
-		     ELINK_FEATURE_CONFIG_PFC_ENABLED) ||
-		    (vars->flow_ctrl & ELINK_FLOW_CTRL_TX))
-			val = 1;
-		REG_WR(sc, NIG_REG_BMAC0_PAUSE_OUT_EN + params->port * 4, val);
-	}
-	return ELINK_STATUS_OK;
-}
-
 static elink_status_t elink_bmac1_enable(struct elink_params *params,
 			      struct elink_vars *vars,
 			      uint8_t is_lb)
@@ -4030,40 +2856,6 @@ static void elink_cl45_read_and_write(struct bnx2x_softc *sc,
 	elink_cl45_write(sc, phy, devad, reg, val & and_val);
 }
 
-elink_status_t elink_phy_read(struct elink_params *params, uint8_t phy_addr,
-		   uint8_t devad, uint16_t reg, uint16_t *ret_val)
-{
-	uint8_t phy_index;
-	/* Probe for the phy according to the given phy_addr, and execute
-	 * the read request on it
-	 */
-	for (phy_index = 0; phy_index < params->num_phys; phy_index++) {
-		if (params->phy[phy_index].addr == phy_addr) {
-			return elink_cl45_read(params->sc,
-					       &params->phy[phy_index], devad,
-					       reg, ret_val);
-		}
-	}
-	return ELINK_STATUS_ERROR;
-}
-
-elink_status_t elink_phy_write(struct elink_params *params, uint8_t phy_addr,
-		    uint8_t devad, uint16_t reg, uint16_t val)
-{
-	uint8_t phy_index;
-	/* Probe for the phy according to the given phy_addr, and execute
-	 * the write request on it
-	 */
-	for (phy_index = 0; phy_index < params->num_phys; phy_index++) {
-		if (params->phy[phy_index].addr == phy_addr) {
-			return elink_cl45_write(params->sc,
-						&params->phy[phy_index], devad,
-						reg, val);
-		}
-	}
-	return ELINK_STATUS_ERROR;
-}
-
 static uint8_t elink_get_warpcore_lane(__rte_unused struct elink_phy *phy,
 				  struct elink_params *params)
 {
@@ -7108,47 +5900,6 @@ static elink_status_t elink_null_format_ver(__rte_unused uint32_t spirom_ver,
 	return ELINK_STATUS_OK;
 }
 
-elink_status_t elink_get_ext_phy_fw_version(struct elink_params *params,
-				 uint8_t *version,
-				 uint16_t len)
-{
-	struct bnx2x_softc *sc;
-	uint32_t spirom_ver = 0;
-	elink_status_t status = ELINK_STATUS_OK;
-	uint8_t *ver_p = version;
-	uint16_t remain_len = len;
-	if (version == NULL || params == NULL)
-		return ELINK_STATUS_ERROR;
-	sc = params->sc;
-
-	/* Extract first external phy*/
-	version[0] = '\0';
-	spirom_ver = REG_RD(sc, params->phy[ELINK_EXT_PHY1].ver_addr);
-
-	if (params->phy[ELINK_EXT_PHY1].format_fw_ver) {
-		status |= params->phy[ELINK_EXT_PHY1].format_fw_ver(spirom_ver,
-							      ver_p,
-							      &remain_len);
-		ver_p += (len - remain_len);
-	}
-	if ((params->num_phys == ELINK_MAX_PHYS) &&
-	    (params->phy[ELINK_EXT_PHY2].ver_addr != 0)) {
-		spirom_ver = REG_RD(sc, params->phy[ELINK_EXT_PHY2].ver_addr);
-		if (params->phy[ELINK_EXT_PHY2].format_fw_ver) {
-			*ver_p = '/';
-			ver_p++;
-			remain_len--;
-			status |= params->phy[ELINK_EXT_PHY2].format_fw_ver(
-				spirom_ver,
-				ver_p,
-				&remain_len);
-			ver_p = version + (len - remain_len);
-		}
-	}
-	*ver_p = '\0';
-	return status;
-}
-
 static void elink_set_xgxs_loopback(struct elink_phy *phy,
 				    struct elink_params *params)
 {
@@ -7360,99 +6111,6 @@ elink_status_t elink_set_led(struct elink_params *params,
 
 }
 
-/* This function comes to reflect the actual link state read DIRECTLY from the
- * HW
- */
-elink_status_t elink_test_link(struct elink_params *params,
-			       __rte_unused struct elink_vars *vars,
-		    uint8_t is_serdes)
-{
-	struct bnx2x_softc *sc = params->sc;
-	uint16_t gp_status = 0, phy_index = 0;
-	uint8_t ext_phy_link_up = 0, serdes_phy_type;
-	struct elink_vars temp_vars;
-	struct elink_phy *int_phy = &params->phy[ELINK_INT_PHY];
-#ifdef ELINK_INCLUDE_FPGA
-	if (CHIP_REV_IS_FPGA(sc))
-		return ELINK_STATUS_OK;
-#endif
-#ifdef ELINK_INCLUDE_EMUL
-	if (CHIP_REV_IS_EMUL(sc))
-		return ELINK_STATUS_OK;
-#endif
-
-	if (CHIP_IS_E3(sc)) {
-		uint16_t link_up;
-		if (params->req_line_speed[ELINK_LINK_CONFIG_IDX(ELINK_INT_PHY)]
-		    > ELINK_SPEED_10000) {
-			/* Check 20G link */
-			elink_cl45_read(sc, int_phy, MDIO_WC_DEVAD,
-					1, &link_up);
-			elink_cl45_read(sc, int_phy, MDIO_WC_DEVAD,
-					1, &link_up);
-			link_up &= (1 << 2);
-		} else {
-			/* Check 10G link and below*/
-			uint8_t lane = elink_get_warpcore_lane(int_phy, params);
-			elink_cl45_read(sc, int_phy, MDIO_WC_DEVAD,
-					MDIO_WC_REG_GP2_STATUS_GP_2_1,
-					&gp_status);
-			gp_status = ((gp_status >> 8) & 0xf) |
-				((gp_status >> 12) & 0xf);
-			link_up = gp_status & (1 << lane);
-		}
-		if (!link_up)
-			return ELINK_STATUS_NO_LINK;
-	} else {
-		CL22_RD_OVER_CL45(sc, int_phy,
-			  MDIO_REG_BANK_GP_STATUS,
-			  MDIO_GP_STATUS_TOP_AN_STATUS1,
-			  &gp_status);
-	/* Link is up only if both local phy and external phy are up */
-	if (!(gp_status & MDIO_GP_STATUS_TOP_AN_STATUS1_LINK_STATUS))
-		return ELINK_STATUS_NO_LINK;
-	}
-	/* In XGXS loopback mode, do not check external PHY */
-	if (params->loopback_mode == ELINK_LOOPBACK_XGXS)
-		return ELINK_STATUS_OK;
-
-	switch (params->num_phys) {
-	case 1:
-		/* No external PHY */
-		return ELINK_STATUS_OK;
-	case 2:
-		ext_phy_link_up = params->phy[ELINK_EXT_PHY1].read_status(
-			&params->phy[ELINK_EXT_PHY1],
-			params, &temp_vars);
-		break;
-	case 3: /* Dual Media */
-		for (phy_index = ELINK_EXT_PHY1; phy_index < params->num_phys;
-		      phy_index++) {
-			serdes_phy_type = ((params->phy[phy_index].media_type ==
-					    ELINK_ETH_PHY_SFPP_10G_FIBER) ||
-					   (params->phy[phy_index].media_type ==
-					    ELINK_ETH_PHY_SFP_1G_FIBER) ||
-					   (params->phy[phy_index].media_type ==
-					    ELINK_ETH_PHY_XFP_FIBER) ||
-					   (params->phy[phy_index].media_type ==
-					    ELINK_ETH_PHY_DA_TWINAX));
-
-			if (is_serdes != serdes_phy_type)
-				continue;
-			if (params->phy[phy_index].read_status) {
-				ext_phy_link_up |=
-					params->phy[phy_index].read_status(
-						&params->phy[phy_index],
-						params, &temp_vars);
-			}
-		}
-		break;
-	}
-	if (ext_phy_link_up)
-		return ELINK_STATUS_OK;
-	return ELINK_STATUS_NO_LINK;
-}
-
 static elink_status_t elink_link_initialize(struct elink_params *params,
 				 struct elink_vars *vars)
 {
@@ -12443,31 +11101,6 @@ static elink_status_t elink_7101_format_ver(uint32_t spirom_ver, uint8_t *str,
 	return ELINK_STATUS_OK;
 }
 
-void elink_sfx7101_sp_sw_reset(struct bnx2x_softc *sc, struct elink_phy *phy)
-{
-	uint16_t val, cnt;
-
-	elink_cl45_read(sc, phy,
-			MDIO_PMA_DEVAD,
-			MDIO_PMA_REG_7101_RESET, &val);
-
-	for (cnt = 0; cnt < 10; cnt++) {
-		DELAY(1000 * 50);
-		/* Writes a self-clearing reset */
-		elink_cl45_write(sc, phy,
-				 MDIO_PMA_DEVAD,
-				 MDIO_PMA_REG_7101_RESET,
-				 (val | (1 << 15)));
-		/* Wait for clear */
-		elink_cl45_read(sc, phy,
-				MDIO_PMA_DEVAD,
-				MDIO_PMA_REG_7101_RESET, &val);
-
-		if ((val & (1 << 15)) == 0)
-			break;
-	}
-}
-
 static void elink_7101_hw_reset(__rte_unused struct elink_phy *phy,
 				struct elink_params *params) {
 	/* Low power mode is controlled by GPIO 2 */
diff --git a/drivers/net/bnx2x/elink.h b/drivers/net/bnx2x/elink.h
index dd70ac6c66..f5cdf7440b 100644
--- a/drivers/net/bnx2x/elink.h
+++ b/drivers/net/bnx2x/elink.h
@@ -515,26 +515,10 @@ elink_status_t elink_lfa_reset(struct elink_params *params, struct elink_vars *v
 /* elink_link_update should be called upon link interrupt */
 elink_status_t elink_link_update(struct elink_params *params, struct elink_vars *vars);
 
-/* use the following phy functions to read/write from external_phy
- * In order to use it to read/write internal phy registers, use
- * ELINK_DEFAULT_PHY_DEV_ADDR as devad, and (_bank + (_addr & 0xf)) as
- * the register
- */
-elink_status_t elink_phy_read(struct elink_params *params, uint8_t phy_addr,
-		   uint8_t devad, uint16_t reg, uint16_t *ret_val);
-
-elink_status_t elink_phy_write(struct elink_params *params, uint8_t phy_addr,
-		    uint8_t devad, uint16_t reg, uint16_t val);
-
 /* Reads the link_status from the shmem,
    and update the link vars accordingly */
 void elink_link_status_update(struct elink_params *input,
 			    struct elink_vars *output);
-/* returns string representing the fw_version of the external phy */
-elink_status_t elink_get_ext_phy_fw_version(struct elink_params *params,
-				 uint8_t *version,
-				 uint16_t len);
-
 /* Set/Unset the led
    Basically, the CLC takes care of the led for the link, but in case one needs
    to set/unset the led unnaturally, set the "mode" to ELINK_LED_MODE_OPER to
@@ -551,14 +535,6 @@ elink_status_t elink_set_led(struct elink_params *params,
  */
 void elink_handle_module_detect_int(struct elink_params *params);
 
-/* Get the actual link status. In case it returns ELINK_STATUS_OK, link is up,
- * otherwise link is down
- */
-elink_status_t elink_test_link(struct elink_params *params,
-		    struct elink_vars *vars,
-		    uint8_t is_serdes);
-
-
 /* One-time initialization for external phy after power up */
 elink_status_t elink_common_init_phy(struct bnx2x_softc *sc, uint32_t shmem_base_path[],
 			  uint32_t shmem2_base_path[], uint32_t chip_id,
@@ -567,9 +543,6 @@ elink_status_t elink_common_init_phy(struct bnx2x_softc *sc, uint32_t shmem_base
 /* Reset the external PHY using GPIO */
 void elink_ext_phy_hw_reset(struct bnx2x_softc *sc, uint8_t port);
 
-/* Reset the external of SFX7101 */
-void elink_sfx7101_sp_sw_reset(struct bnx2x_softc *sc, struct elink_phy *phy);
-
 /* Read "byte_cnt" bytes from address "addr" from the SFP+ EEPROM */
 elink_status_t elink_read_sfp_module_eeprom(struct elink_phy *phy,
 				 struct elink_params *params, uint8_t dev_addr,
@@ -650,36 +623,6 @@ struct elink_ets_params {
 	struct elink_ets_cos_params cos[ELINK_DCBX_MAX_NUM_COS];
 };
 
-/* Used to update the PFC attributes in EMAC, BMAC, NIG and BRB
- * when link is already up
- */
-elink_status_t elink_update_pfc(struct elink_params *params,
-		      struct elink_vars *vars,
-		      struct elink_nig_brb_pfc_port_params *pfc_params);
-
-
-/* Used to configure the ETS to disable */
-elink_status_t elink_ets_disabled(struct elink_params *params,
-		       struct elink_vars *vars);
-
-/* Used to configure the ETS to BW limited */
-void elink_ets_bw_limit(const struct elink_params *params,
-			const uint32_t cos0_bw,
-			const uint32_t cos1_bw);
-
-/* Used to configure the ETS to strict */
-elink_status_t elink_ets_strict(const struct elink_params *params,
-				const uint8_t strict_cos);
-
-
-/*  Configure the COS to ETS according to BW and SP settings.*/
-elink_status_t elink_ets_e3b0_config(const struct elink_params *params,
-			 const struct elink_vars *vars,
-			 struct elink_ets_params *ets_params);
-/* Read pfc statistic*/
-void elink_pfc_statistic(struct elink_params *params, struct elink_vars *vars,
-			 uint32_t pfc_frames_sent[2],
-			 uint32_t pfc_frames_received[2]);
 void elink_init_mod_abs_int(struct bnx2x_softc *sc, struct elink_vars *vars,
 			    uint32_t chip_id, uint32_t shmem_base, uint32_t shmem2_base,
 			    uint8_t port);
diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
index 918cabf19c..cdb13607d5 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.c
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -227,62 +227,6 @@ ba_alloc_reverse(struct bitalloc *pool)
 	return ba_alloc_reverse_helper(pool, 0, 1, 32, 0, &clear);
 }
 
-static int
-ba_alloc_index_helper(struct bitalloc *pool,
-		      int              offset,
-		      int              words,
-		      unsigned int     size,
-		      int             *index,
-		      int             *clear)
-{
-	bitalloc_word_t *storage = &pool->storage[offset];
-	int       loc;
-	int       r;
-
-	if (pool->size > size)
-		r = ba_alloc_index_helper(pool,
-					  offset + words + 1,
-					  storage[words],
-					  size * 32,
-					  index,
-					  clear);
-	else
-		r = 1; /* Check if already allocated */
-
-	loc = (*index % 32);
-	*index = *index / 32;
-
-	if (r == 1) {
-		r = (storage[*index] & (1 << loc)) ? 0 : -1;
-		if (r == 0) {
-			*clear = 1;
-			pool->free_count--;
-		}
-	}
-
-	if (*clear) {
-		storage[*index] &= ~(1 << loc);
-		*clear = (storage[*index] == 0);
-	}
-
-	return r;
-}
-
-int
-ba_alloc_index(struct bitalloc *pool, int index)
-{
-	int clear = 0;
-	int index_copy = index;
-
-	if (index < 0 || index >= (int)pool->size)
-		return -1;
-
-	if (ba_alloc_index_helper(pool, 0, 1, 32, &index_copy, &clear) >= 0)
-		return index;
-	else
-		return -1;
-}
-
 static int
 ba_inuse_helper(struct bitalloc *pool,
 		int              offset,
@@ -365,107 +309,7 @@ ba_free(struct bitalloc *pool, int index)
 	return ba_free_helper(pool, 0, 1, 32, &index);
 }
 
-int
-ba_inuse_free(struct bitalloc *pool, int index)
-{
-	if (index < 0 || index >= (int)pool->size)
-		return -1;
-
-	return ba_free_helper(pool, 0, 1, 32, &index) + 1;
-}
-
-int
-ba_free_count(struct bitalloc *pool)
-{
-	return (int)pool->free_count;
-}
-
 int ba_inuse_count(struct bitalloc *pool)
 {
 	return (int)(pool->size) - (int)(pool->free_count);
 }
-
-static int
-ba_find_next_helper(struct bitalloc *pool,
-		    int              offset,
-		    int              words,
-		    unsigned int     size,
-		    int             *index,
-		    int              free)
-{
-	bitalloc_word_t *storage = &pool->storage[offset];
-	int       loc, r, bottom = 0;
-
-	if (pool->size > size)
-		r = ba_find_next_helper(pool,
-					offset + words + 1,
-					storage[words],
-					size * 32,
-					index,
-					free);
-	else
-		bottom = 1; /* Bottom of tree */
-
-	loc = (*index % 32);
-	*index = *index / 32;
-
-	if (bottom) {
-		int bit_index = *index * 32;
-
-		loc = ba_ffs(~storage[*index] & ((bitalloc_word_t)-1 << loc));
-		if (loc > 0) {
-			loc--;
-			r = (bit_index + loc);
-			if (r >= (int)pool->size)
-				r = -1;
-		} else {
-			/* Loop over array at bottom of tree */
-			r = -1;
-			bit_index += 32;
-			*index = *index + 1;
-			while ((int)pool->size > bit_index) {
-				loc = ba_ffs(~storage[*index]);
-
-				if (loc > 0) {
-					loc--;
-					r = (bit_index + loc);
-					if (r >= (int)pool->size)
-						r = -1;
-					break;
-				}
-				bit_index += 32;
-				*index = *index + 1;
-			}
-		}
-	}
-
-	if (r >= 0 && (free)) {
-		if (bottom)
-			pool->free_count++;
-		storage[*index] |= (1 << loc);
-	}
-
-	return r;
-}
-
-int
-ba_find_next_inuse(struct bitalloc *pool, int index)
-{
-	if (index < 0 ||
-	    index >= (int)pool->size ||
-	    pool->free_count == pool->size)
-		return -1;
-
-	return ba_find_next_helper(pool, 0, 1, 32, &index, 0);
-}
-
-int
-ba_find_next_inuse_free(struct bitalloc *pool, int index)
-{
-	if (index < 0 ||
-	    index >= (int)pool->size ||
-	    pool->free_count == pool->size)
-		return -1;
-
-	return ba_find_next_helper(pool, 0, 1, 32, &index, 1);
-}
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
index 2825bb37e5..9ac6eadd81 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.h
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -70,7 +70,6 @@ int ba_init(struct bitalloc *pool, int size);
  * Returns -1 on failure, or index of allocated entry
  */
 int ba_alloc(struct bitalloc *pool);
-int ba_alloc_index(struct bitalloc *pool, int index);
 
 /**
  * Returns -1 on failure, or index of allocated entry
@@ -85,37 +84,12 @@ int ba_alloc_reverse(struct bitalloc *pool);
  */
 int ba_inuse(struct bitalloc *pool, int index);
 
-/**
- * Variant of ba_inuse that frees the index if it is allocated, same
- * return codes as ba_inuse
- */
-int ba_inuse_free(struct bitalloc *pool, int index);
-
-/**
- * Find next index that is in use, start checking at index 'idx'
- *
- * Returns next index that is in use on success, or
- * -1 if no in use index is found
- */
-int ba_find_next_inuse(struct bitalloc *pool, int idx);
-
-/**
- * Variant of ba_find_next_inuse that also frees the next in use index,
- * same return codes as ba_find_next_inuse
- */
-int ba_find_next_inuse_free(struct bitalloc *pool, int idx);
-
 /**
  * Multiple freeing of the same index has no negative side effects,
  * but will return -1.  returns -1 on failure, 0 on success.
  */
 int ba_free(struct bitalloc *pool, int index);
 
-/**
- * Returns the pool's free count
- */
-int ba_free_count(struct bitalloc *pool);
-
 /**
  * Returns the pool's in use count
  */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
index 954806377e..bda415e82e 100644
--- a/drivers/net/bnxt/tf_core/stack.c
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -88,28 +88,3 @@ stack_pop(struct stack *st, uint32_t *x)
 
 	return 0;
 }
-
-/* Dump the stack
- */
-void stack_dump(struct stack *st)
-{
-	int i, j;
-
-	printf("top=%d\n", st->top);
-	printf("max=%d\n", st->max);
-
-	if (st->top == -1) {
-		printf("stack is empty\n");
-		return;
-	}
-
-	for (i = 0; i < st->max + 7 / 8; i++) {
-		printf("item[%d] 0x%08x", i, st->items[i]);
-
-		for (j = 0; j < 7; j++) {
-			if (i++ < st->max - 1)
-				printf(" 0x%08x", st->items[i]);
-		}
-		printf("\n");
-	}
-}
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
index 6732e03132..7e2f5dfec6 100644
--- a/drivers/net/bnxt/tf_core/stack.h
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -102,16 +102,4 @@ int stack_push(struct stack *st, uint32_t x);
  */
 int stack_pop(struct stack *st, uint32_t *x);
 
-/** Dump stack information
- *
- * Warning: Don't use for large stacks due to prints
- *
- * [in] st
- *   pointer to the stack
- *
- * return
- *    none
- */
-void stack_dump(struct stack *st);
-
 #endif /* _STACK_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 0f49a00256..a4276d1bcc 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -90,69 +90,6 @@ tf_open_session(struct tf *tfp,
 	return 0;
 }
 
-int
-tf_attach_session(struct tf *tfp,
-		  struct tf_attach_session_parms *parms)
-{
-	int rc;
-	unsigned int domain, bus, slot, device;
-	struct tf_session_attach_session_parms aparms;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Verify control channel */
-	rc = sscanf(parms->ctrl_chan_name,
-		    "%x:%x:%x.%d",
-		    &domain,
-		    &bus,
-		    &slot,
-		    &device);
-	if (rc != 4) {
-		TFP_DRV_LOG(ERR,
-			    "Failed to scan device ctrl_chan_name\n");
-		return -EINVAL;
-	}
-
-	/* Verify 'attach' channel */
-	rc = sscanf(parms->attach_chan_name,
-		    "%x:%x:%x.%d",
-		    &domain,
-		    &bus,
-		    &slot,
-		    &device);
-	if (rc != 4) {
-		TFP_DRV_LOG(ERR,
-			    "Failed to scan device attach_chan_name\n");
-		return -EINVAL;
-	}
-
-	/* Prepare return value of session_id, using ctrl_chan_name
-	 * device values as it becomes the session id.
-	 */
-	parms->session_id.internal.domain = domain;
-	parms->session_id.internal.bus = bus;
-	parms->session_id.internal.device = device;
-	aparms.attach_cfg = parms;
-	rc = tf_session_attach_session(tfp,
-				       &aparms);
-	/* Logging handled by dev_bind */
-	if (rc)
-		return rc;
-
-	TFP_DRV_LOG(INFO,
-		    "Attached to session, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    parms->session_id.internal.domain,
-		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
-
-	return rc;
-}
-
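
The channel-name checks in the removed tf_attach_session() are a plain BDF-style sscanf() parse; a self-contained sketch of the same validation (hypothetical parse_chan_name() helper, %u used for the unsigned device field) is:

#include <stdio.h>

/* Accept names of the form "domain:bus:slot.device", e.g. "0000:5e:00.0". */
static int parse_chan_name(const char *name, unsigned int *domain,
			   unsigned int *bus, unsigned int *slot,
			   unsigned int *device)
{
	/* All four conversions must succeed for the name to be valid. */
	if (sscanf(name, "%x:%x:%x.%u", domain, bus, slot, device) != 4)
		return -1;
	return 0;
}

int main(void)
{
	unsigned int d, b, s, f;

	if (parse_chan_name("0000:5e:00.0", &d, &b, &s, &f) == 0)
		printf("domain=%x bus=%x slot=%x device=%u\n", d, b, s, f);
	return 0;
}
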
 int
 tf_close_session(struct tf *tfp)
 {
@@ -792,14 +729,6 @@ tf_set_tcam_entry(struct tf *tfp,
 	return 0;
 }
 
-int
-tf_get_tcam_entry(struct tf *tfp __rte_unused,
-		  struct tf_get_tcam_entry_parms *parms __rte_unused)
-{
-	TF_CHECK_PARMS2(tfp, parms);
-	return -EOPNOTSUPP;
-}
-
 int
 tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
@@ -1228,80 +1157,6 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
-int
-tf_bulk_get_tbl_entry(struct tf *tfp,
-		 struct tf_bulk_get_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_session *tfs;
-	struct tf_dev_info *dev;
-	struct tf_tbl_get_bulk_parms bparms;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Can't do static initialization due to UT enum check */
-	memset(&bparms, 0, sizeof(struct tf_tbl_get_bulk_parms));
-
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Retrieve the device information */
-	rc = tf_session_get_device(tfs, &dev);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup device, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-
-		return rc;
-	}
-
-	/* Internal table type processing */
-
-	if (dev->ops->tf_dev_get_bulk_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
-	bparms.dir = parms->dir;
-	bparms.type = parms->type;
-	bparms.starting_idx = parms->starting_idx;
-	bparms.num_entries = parms->num_entries;
-	bparms.entry_sz_in_bytes = parms->entry_sz_in_bytes;
-	bparms.physical_mem_addr = parms->physical_mem_addr;
-	rc = dev->ops->tf_dev_get_bulk_tbl(tfp, &bparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table get bulk failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	return rc;
-}
-
 int
 tf_alloc_tbl_scope(struct tf *tfp,
 		   struct tf_alloc_tbl_scope_parms *parms)
@@ -1340,44 +1195,6 @@ tf_alloc_tbl_scope(struct tf *tfp,
 
 	return rc;
 }
-int
-tf_map_tbl_scope(struct tf *tfp,
-		   struct tf_map_tbl_scope_parms *parms)
-{
-	struct tf_session *tfs;
-	struct tf_dev_info *dev;
-	int rc;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Failed to lookup session, rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Retrieve the device information */
-	rc = tf_session_get_device(tfs, &dev);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Failed to lookup device, rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	if (dev->ops->tf_dev_map_tbl_scope != NULL) {
-		rc = dev->ops->tf_dev_map_tbl_scope(tfp, parms);
-	} else {
-		TFP_DRV_LOG(ERR,
-			    "Map table scope not supported by device\n");
-		return -EINVAL;
-	}
-
-	return rc;
-}
 
 int
 tf_free_tbl_scope(struct tf *tfp,
@@ -1475,61 +1292,3 @@ tf_set_if_tbl_entry(struct tf *tfp,
 
 	return 0;
 }
-
-int
-tf_get_if_tbl_entry(struct tf *tfp,
-		    struct tf_get_if_tbl_entry_parms *parms)
-{
-	int rc;
-	struct tf_session *tfs;
-	struct tf_dev_info *dev;
-	struct tf_if_tbl_get_parms gparms = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Retrieve the device information */
-	rc = tf_session_get_device(tfs, &dev);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup device, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	if (dev->ops->tf_dev_get_if_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	gparms.dir = parms->dir;
-	gparms.type = parms->type;
-	gparms.idx = parms->idx;
-	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
-	gparms.data = parms->data;
-
-	rc = dev->ops->tf_dev_get_if_tbl(tfp, &gparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: If_tbl get failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index fa8ab52af1..2d556be752 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -657,27 +657,6 @@ struct tf_attach_session_parms {
 	union tf_session_id session_id;
 };
 
-/**
- * Experimental
- *
- * Allows a 2nd application instance to attach to an existing
- * session. Used when a session is to be shared between two processes.
- *
- * Attach will increment a ref count to manage the shared session data.
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to attach parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_attach_session(struct tf *tfp,
-		      struct tf_attach_session_parms *parms);
-
 /**
  * Closes an existing session client or the session itself. The
  * session client is default closed and if the session reference count
@@ -961,25 +940,6 @@ struct tf_map_tbl_scope_parms {
 int tf_alloc_tbl_scope(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms);
 
-/**
- * map a table scope (legacy device only Wh+/SR)
- *
- * Map a table scope to one or more partition interfaces (parifs).
- * The parif can be remapped in the L2 context lookup for legacy devices.  This
- * API allows a number of parifs to be mapped to the same table scope.  On
- * legacy devices a table scope identifies one of 16 sets of EEM table base
- * addresses and is associated with a PF communication channel.  The associated
- * PF must be configured for the table scope to operate.
- *
- * An L2 context TCAM lookup returns a remapped parif value used to
- * index into the set of 16 parif_to_pf registers which are used to map to one
- * of the 16 table scopes.  This API allows the user to map the parifs in the
- * mask to the previously allocated table scope (EEM table).
-
- * Returns success or failure code.
- */
-int tf_map_tbl_scope(struct tf *tfp,
-		      struct tf_map_tbl_scope_parms *parms);
 /**
  * free a table scope
  *
@@ -1256,18 +1216,6 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/**
- * get TCAM entry
- *
- * Program a TCAM table entry for a TruFlow session.
- *
- * If the entry has not been allocated, an error will be returned.
- *
- * Returns success or failure code.
- */
-int tf_get_tcam_entry(struct tf *tfp,
-		      struct tf_get_tcam_entry_parms *parms);
-
 /**
  * tf_free_tcam_entry parameter definition
  */
@@ -1638,22 +1586,6 @@ struct tf_bulk_get_tbl_entry_parms {
 	uint64_t physical_mem_addr;
 };
 
-/**
- * Bulk get index table entry
- *
- * Used to retrieve a set of index table entries.
- *
- * Entries within the range may not have been allocated using
- * tf_alloc_tbl_entry() at the time of access. But the range must
- * be within the bounds determined from tf_open_session() for the
- * given table type.  Currently, this is only used for collecting statistics.
- *
- * Returns success or failure code. Failure will be returned if the
- * provided data buffer is too small for the data type requested.
- */
-int tf_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms);
-
 /**
  * @page exact_match Exact Match Table
  *
@@ -2066,17 +1998,4 @@ struct tf_get_if_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/**
- * get interface table entry
- *
- * Used to retrieve an interface table entry.
- *
- * Reads the interface table entry value
- *
- * Returns success or failure code. Failure will be returned if the
- * provided data buffer is too small for the data type requested.
- */
-int tf_get_if_tbl_entry(struct tf *tfp,
-			struct tf_get_if_tbl_entry_parms *parms);
-
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 5615eedbbe..e4fe5fe055 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -148,14 +148,6 @@ tf_msg_session_open(struct tf *tfp,
 	return rc;
 }
 
-int
-tf_msg_session_attach(struct tf *tfp __rte_unused,
-		      char *ctrl_chan_name __rte_unused,
-		      uint8_t tf_fw_session_id __rte_unused)
-{
-	return -1;
-}
-
 int
 tf_msg_session_client_register(struct tf *tfp,
 			       char *ctrl_channel_name,
@@ -266,38 +258,6 @@ tf_msg_session_close(struct tf *tfp)
 	return rc;
 }
 
-int
-tf_msg_session_qcfg(struct tf *tfp)
-{
-	int rc;
-	struct hwrm_tf_session_qcfg_input req = { 0 };
-	struct hwrm_tf_session_qcfg_output resp = { 0 };
-	struct tfp_send_msg_parms parms = { 0 };
-	uint8_t fw_session_id;
-
-	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Unable to lookup FW id, rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Populate the request */
-	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
-
-	parms.tf_type = HWRM_TF_SESSION_QCFG,
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &parms);
-	return rc;
-}
-
 int
 tf_msg_session_resc_qcaps(struct tf *tfp,
 			  enum tf_dir dir,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 72bf850487..4483017ada 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -38,26 +38,6 @@ int tf_msg_session_open(struct tf *tfp,
 			uint8_t *fw_session_id,
 			uint8_t *fw_session_client_id);
 
-/**
- * Sends session close request to Firmware
- *
- * [in] session
- *   Pointer to session handle
- *
- * [in] ctrl_chan_name
- *   PCI name of the control channel
- *
- * [in] fw_session_id
- *   Pointer to the fw_session_id that is assigned to the session at
- *   time of session open
- *
- * Returns:
- *   0 on Success else internal Truflow error
- */
-int tf_msg_session_attach(struct tf *tfp,
-			  char *ctrl_channel_name,
-			  uint8_t tf_fw_session_id);
-
 /**
  * Sends session client register request to Firmware
  *
@@ -105,17 +85,6 @@ int tf_msg_session_client_unregister(struct tf *tfp,
  */
 int tf_msg_session_close(struct tf *tfp);
 
-/**
- * Sends session query config request to TF Firmware
- *
- * [in] session
- *   Pointer to session handle
- *
- * Returns:
- *   0 on Success else internal Truflow error
- */
-int tf_msg_session_qcfg(struct tf *tfp);
-
 /**
  * Sends session HW resource query capability request to TF Firmware
  *
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index c95c4bdbd3..912b2837f9 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -749,36 +749,3 @@ tf_session_get_fw_session_id(struct tf *tfp,
 
 	return 0;
 }
-
-int
-tf_session_get_session_id(struct tf *tfp,
-			  union tf_session_id *session_id)
-{
-	int rc;
-	struct tf_session *tfs = NULL;
-
-	if (tfp->session == NULL) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR,
-			    "Session not created, rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	if (session_id == NULL) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR,
-			    "Invalid Argument(s), rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Using internal version as session client may not exist yet */
-	rc = tf_session_get_session_internal(tfp, &tfs);
-	if (rc)
-		return rc;
-
-	*session_id = tfs->session_id;
-
-	return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 6a5c894033..37d4703cc1 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -394,20 +394,4 @@ int tf_session_get_device(struct tf_session *tfs,
 int tf_session_get_fw_session_id(struct tf *tfp,
 				 uint8_t *fw_session_id);
 
-/**
- * Looks up the Session id the requested TF handle.
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [out] session_id
- *   Pointer to the session_id
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_session_get_session_id(struct tf *tfp,
-			      union tf_session_id *session_id);
-
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
index a4207eb3ab..2caf4f8747 100644
--- a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
@@ -637,59 +637,6 @@ tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms)
 	return 0;
 }
 
-int
-tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms)
-{
-	uint16_t idx;
-	struct tf_shadow_tbl_ctxt *ctxt;
-	struct tf_tbl_set_parms *sparms;
-	struct tf_shadow_tbl_db *shadow_db;
-	struct tf_shadow_tbl_shadow_result_entry *sr_entry;
-
-	if (!parms || !parms->sparms) {
-		TFP_DRV_LOG(ERR, "Null parms\n");
-		return -EINVAL;
-	}
-
-	sparms = parms->sparms;
-	if (!sparms->data || !sparms->data_sz_in_bytes) {
-		TFP_DRV_LOG(ERR, "%s:%s No result to set.\n",
-			    tf_dir_2_str(sparms->dir),
-			    tf_tbl_type_2_str(sparms->type));
-		return -EINVAL;
-	}
-
-	shadow_db = (struct tf_shadow_tbl_db *)parms->shadow_db;
-	ctxt = tf_shadow_tbl_ctxt_get(shadow_db, sparms->type);
-	if (!ctxt) {
-		/* We aren't tracking this table, so return success */
-		TFP_DRV_LOG(DEBUG, "%s Unable to get tbl mgr context\n",
-			    tf_tbl_type_2_str(sparms->type));
-		return 0;
-	}
-
-	idx = TF_SHADOW_IDX_TO_SHIDX(ctxt, sparms->idx);
-	if (idx >= tf_shadow_tbl_sh_num_entries_get(ctxt)) {
-		TFP_DRV_LOG(ERR, "%s:%s Invalid idx(0x%x)\n",
-			    tf_dir_2_str(sparms->dir),
-			    tf_tbl_type_2_str(sparms->type),
-			    sparms->idx);
-		return -EINVAL;
-	}
-
-	/* Write the result table, the key/hash has been written already */
-	sr_entry = &ctxt->shadow_ctxt.sh_res_tbl[idx];
-
-	/*
-	 * If the handle is not valid, the bind was never called.  We aren't
-	 * tracking this entry.
-	 */
-	if (!TF_SHADOW_HB_HANDLE_IS_VALID(sr_entry->hb_handle))
-		return 0;
-
-	return 0;
-}
-
 int
 tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
index 96a34309b2..bbd8cfd3a9 100644
--- a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
@@ -225,20 +225,6 @@ int tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms);
  */
 int tf_shadow_tbl_bind_index(struct tf_shadow_tbl_bind_index_parms *parms);
 
-/**
- * Inserts an element into the Shadow table DB. Will fail if the
- * elements ref_count is different from 0. Ref_count after insert will
- * be incremented.
- *
- * [in] parms
- *   Pointer to insert parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms);
-
 /**
  * Removes an element from the Shadow table DB. Will fail if the
  * elements ref_count is 0. Ref_count after removal will be
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 7679d09eea..e3fec46926 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -683,10 +683,3 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 
 	return 0;
 }
-
-int
-tf_tcam_get(struct tf *tfp __rte_unused,
-	    struct tf_tcam_get_parms *parms __rte_unused)
-{
-	return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 280f138dd3..9614cf52c7 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -355,21 +355,4 @@ int tf_tcam_alloc_search(struct tf *tfp,
 int tf_tcam_set(struct tf *tfp,
 		struct tf_tcam_set_parms *parms);
 
-/**
- * Retrieves the requested element by sending a firmware request to get
- * the element.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tcam_get(struct tf *tfp,
-		struct tf_tcam_get_parms *parms);
-
 #endif /* _TF_TCAM_H */
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 0f6d63cc00..49ca034241 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -135,33 +135,6 @@ tfp_memcpy(void *dest, void *src, size_t n)
 	rte_memcpy(dest, src, n);
 }
 
-/**
- * Used to initialize portable spin lock
- */
-void
-tfp_spinlock_init(struct tfp_spinlock_parms *parms)
-{
-	rte_spinlock_init(&parms->slock);
-}
-
-/**
- * Used to lock portable spin lock
- */
-void
-tfp_spinlock_lock(struct tfp_spinlock_parms *parms)
-{
-	rte_spinlock_lock(&parms->slock);
-}
-
-/**
- * Used to unlock portable spin lock
- */
-void
-tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
-{
-	rte_spinlock_unlock(&parms->slock);
-}
-
 int
 tfp_get_fid(struct tf *tfp, uint16_t *fw_fid)
 {
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 551b9c569f..fc2409371a 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -202,10 +202,6 @@ int tfp_calloc(struct tfp_calloc_parms *parms);
 void tfp_memcpy(void *dest, void *src, size_t n);
 void tfp_free(void *addr);
 
-void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
-void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
-void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
-
 /**
  * Lookup of the FID in the platform specific structure.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
index 45025516f4..4a6105a05e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -214,74 +214,6 @@ void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
 	rte_eal_alarm_cancel(ulp_fc_mgr_alarm_cb, (void *)ctxt);
 }
 
-/*
- * DMA-in the raw counter data from the HW and accumulate in the
- * local accumulator table using the TF-Core API
- *
- * tfp [in] The TF-Core context
- *
- * fc_info [in] The ULP Flow counter info ptr
- *
- * dir [in] The direction of the flow
- *
- * num_counters [in] The number of counters
- *
- */
-__rte_unused static int32_t
-ulp_bulk_get_flow_stats(struct tf *tfp,
-			struct bnxt_ulp_fc_info *fc_info,
-			enum tf_dir dir,
-			struct bnxt_ulp_device_params *dparms)
-/* MARK AS UNUSED FOR NOW TO AVOID COMPILATION ERRORS TILL API is RESOLVED */
-{
-	int rc = 0;
-	struct tf_tbl_get_bulk_parms parms = { 0 };
-	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD: Template? */
-	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
-	uint64_t *stats = NULL;
-	uint16_t i = 0;
-
-	parms.dir = dir;
-	parms.type = stype;
-	parms.starting_idx = fc_info->shadow_hw_tbl[dir].start_idx;
-	parms.num_entries = dparms->flow_count_db_entries / 2; /* direction */
-	/*
-	 * TODO:
-	 * Size of an entry needs to obtained from template
-	 */
-	parms.entry_sz_in_bytes = sizeof(uint64_t);
-	stats = (uint64_t *)fc_info->shadow_hw_tbl[dir].mem_va;
-	parms.physical_mem_addr = (uintptr_t)fc_info->shadow_hw_tbl[dir].mem_pa;
-
-	if (!stats) {
-		PMD_DRV_LOG(ERR,
-			    "BULK: Memory not initialized id:0x%x dir:%d\n",
-			    parms.starting_idx, dir);
-		return -EINVAL;
-	}
-
-	rc = tf_tbl_bulk_get(tfp, &parms);
-	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "BULK: Get failed for id:0x%x rc:%d\n",
-			    parms.starting_idx, rc);
-		return rc;
-	}
-
-	for (i = 0; i < parms.num_entries; i++) {
-		/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
-		sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][i];
-		if (!sw_acc_tbl_entry->valid)
-			continue;
-		sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats[i],
-							      dparms);
-		sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats[i],
-								dparms);
-	}
-
-	return rc;
-}
-
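
The accumulation step in the removed bulk-stats helper amounts to unpacking packet and byte counts from each DMA'd 64-bit stats word and adding them to the software counters, skipping slots with no active flow. The packing below (16-bit packets in the high bits, 48-bit bytes in the low bits) is purely hypothetical and stands in for the template-driven FLOW_CNTR_* macros:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical packing: high 16 bits = packets, low 48 bits = bytes. */
#define CNTR_PKTS(v)   ((uint64_t)(v) >> 48)
#define CNTR_BYTES(v)  ((v) & 0xFFFFFFFFFFFFULL)

struct sw_counter {
	int valid;
	uint64_t pkt_count;
	uint64_t byte_count;
};

/* Fold a DMA'd array of packed per-flow stats into software accumulators,
 * skipping slots that have no active flow. */
static void accumulate_stats(struct sw_counter *sw, const uint64_t *hw,
			     unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		if (!sw[i].valid)
			continue;
		sw[i].pkt_count += CNTR_PKTS(hw[i]);
		sw[i].byte_count += CNTR_BYTES(hw[i]);
	}
}

int main(void)
{
	struct sw_counter sw[2] = { { 1, 0, 0 }, { 0, 0, 0 } };
	uint64_t hw[2] = { ((uint64_t)3 << 48) | 1500, 0 };

	accumulate_stats(sw, hw, 2);
	printf("pkts=%llu bytes=%llu\n",
	       (unsigned long long)sw[0].pkt_count,
	       (unsigned long long)sw[0].byte_count);	/* 3 and 1500 */
	return 0;
}
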
 static int ulp_get_single_flow_stat(struct bnxt_ulp_context *ctxt,
 				    struct tf *tfp,
 				    struct bnxt_ulp_fc_info *fc_info,
@@ -387,16 +319,6 @@ ulp_fc_mgr_alarm_cb(void *arg)
 		ulp_fc_mgr_thread_cancel(ctxt);
 		return;
 	}
-	/*
-	 * Commented for now till GET_BULK is resolved, just get the first flow
-	 * stat for now
-	 for (i = 0; i < TF_DIR_MAX; i++) {
-		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
-					     dparms->flow_count_db_entries);
-		if (rc)
-			break;
-	}
-	*/
 
 	/* reset the parent accumulation counters before accumulation if any */
 	ulp_flow_db_parent_flow_count_reset(ctxt);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 4b4eaeb126..2d1dbb7e6e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -226,37 +226,6 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
 	return 0;
 }
 
-/*
- * Api to get the function id for a given ulp ifindex.
- *
- * ulp_ctxt [in] Ptr to ulp context
- * ifindex [in] ulp ifindex
- * func_id [out] the function id of the given ifindex.
- *
- * Returns 0 on success or negative number on failure.
- */
-int32_t
-ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex,
-			    uint32_t fid_type,
-			    uint16_t *func_id)
-{
-	struct bnxt_ulp_port_db *port_db;
-
-	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
-	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
-		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
-		return -EINVAL;
-	}
-
-	if (fid_type == BNXT_ULP_DRV_FUNC_FID)
-		*func_id =  port_db->ulp_intf_list[ifindex].drv_func_id;
-	else
-		*func_id =  port_db->ulp_intf_list[ifindex].vf_func_id;
-
-	return 0;
-}
-
 /*
  * Api to get the svif for a given ulp ifindex.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 7b85987a0c..bd7032004f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -122,20 +122,6 @@ int32_t
 ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
 				  uint32_t port_id, uint32_t *ifindex);
 
-/*
- * Api to get the function id for a given ulp ifindex.
- *
- * ulp_ctxt [in] Ptr to ulp context
- * ifindex [in] ulp ifindex
- * func_id [out] the function id of the given ifindex.
- *
- * Returns 0 on success or negative number on failure.
- */
-int32_t
-ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex, uint32_t fid_type,
-			    uint16_t *func_id);
-
 /*
  * Api to get the svif for a given ulp ifindex.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
index a13a3bbf65..b5a4f85fcf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_utils.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -803,17 +803,6 @@ int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size)
 	return buf[0] == 0 && !memcmp(buf, buf + 1, size - 1);
 }
 
-/* Function to check if bitmap is zero. Return 1 on success */
-uint32_t ulp_bitmap_is_zero(uint8_t *bitmap, int32_t size)
-{
-	while (size-- > 0) {
-		if (*bitmap != 0)
-			return 0;
-		bitmap++;
-	}
-	return 1;
-}
-
 /* Function to check if bitmap is ones. Return 1 on success */
 uint32_t ulp_bitmap_is_ones(uint8_t *bitmap, int32_t size)
 {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.h b/drivers/net/bnxt/tf_ulp/ulp_utils.h
index 749ac06d87..a45a2705da 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_utils.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.h
@@ -384,9 +384,6 @@ ulp_encap_buffer_copy(uint8_t *dst,
  */
 int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size);
 
-/* Function to check if bitmap is zero. Return 1 on success */
-uint32_t ulp_bitmap_is_zero(uint8_t *bitmap, int32_t size);
-
 /* Function to check if bitmap is ones. Return 1 on success */
 uint32_t ulp_bitmap_is_ones(uint8_t *bitmap, int32_t size);
 
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index 8f198bd50e..e5645a10ab 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -224,10 +224,6 @@ int
 mac_address_set(struct rte_eth_dev *eth_dev,
 		struct rte_ether_addr *new_mac_addr);
 
-int
-mac_address_get(struct rte_eth_dev *eth_dev,
-		struct rte_ether_addr *dst_mac_addr);
-
 int
 mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
 
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5f..23a4393f23 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -278,19 +278,6 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
 int
 rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
 
-/**
- * Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
- *
- * @param bonded_port_id	Port ID of bonded device.
- *
- * @return
- *	Monitoring interval on success, negative value otherwise.
- */
-int
-rte_eth_bond_link_monitoring_get(uint16_t bonded_port_id);
-
-
 /**
  * Set the period in milliseconds for delaying the disabling of a bonded link
  * when the link down status has been detected
@@ -305,18 +292,6 @@ int
 rte_eth_bond_link_down_prop_delay_set(uint16_t bonded_port_id,
 				       uint32_t delay_ms);
 
-/**
- * Get the period in milliseconds set for delaying the disabling of a bonded
- * link when the link down status has been detected
- *
- * @param bonded_port_id	Port ID of bonded device.
- *
- * @return
- *  Delay period on success, negative value otherwise.
- */
-int
-rte_eth_bond_link_down_prop_delay_get(uint16_t bonded_port_id);
-
 /**
  * Set the period in milliseconds for delaying the enabling of a bonded link
  * when the link up status has been detected
@@ -331,19 +306,6 @@ int
 rte_eth_bond_link_up_prop_delay_set(uint16_t bonded_port_id,
 				    uint32_t delay_ms);
 
-/**
- * Get the period in milliseconds set for delaying the enabling of a bonded
- * link when the link up status has been detected
- *
- * @param bonded_port_id	Port ID of bonded device.
- *
- * @return
- *  Delay period on success, negative value otherwise.
- */
-int
-rte_eth_bond_link_up_prop_delay_get(uint16_t bonded_port_id);
-
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 55c8e3167c..1c09d2e4ba 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -981,19 +981,6 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms)
 	return 0;
 }
 
-int
-rte_eth_bond_link_monitoring_get(uint16_t bonded_port_id)
-{
-	struct bond_dev_private *internals;
-
-	if (valid_bonded_port_id(bonded_port_id) != 0)
-		return -1;
-
-	internals = rte_eth_devices[bonded_port_id].data->dev_private;
-
-	return internals->link_status_polling_interval_ms;
-}
-
 int
 rte_eth_bond_link_down_prop_delay_set(uint16_t bonded_port_id,
 				       uint32_t delay_ms)
@@ -1010,19 +997,6 @@ rte_eth_bond_link_down_prop_delay_set(uint16_t bonded_port_id,
 	return 0;
 }
 
-int
-rte_eth_bond_link_down_prop_delay_get(uint16_t bonded_port_id)
-{
-	struct bond_dev_private *internals;
-
-	if (valid_bonded_port_id(bonded_port_id) != 0)
-		return -1;
-
-	internals = rte_eth_devices[bonded_port_id].data->dev_private;
-
-	return internals->link_down_delay_ms;
-}
-
 int
 rte_eth_bond_link_up_prop_delay_set(uint16_t bonded_port_id, uint32_t delay_ms)
 
@@ -1037,16 +1011,3 @@ rte_eth_bond_link_up_prop_delay_set(uint16_t bonded_port_id, uint32_t delay_ms)
 
 	return 0;
 }
-
-int
-rte_eth_bond_link_up_prop_delay_get(uint16_t bonded_port_id)
-{
-	struct bond_dev_private *internals;
-
-	if (valid_bonded_port_id(bonded_port_id) != 0)
-		return -1;
-
-	internals = rte_eth_devices[bonded_port_id].data->dev_private;
-
-	return internals->link_up_delay_ms;
-}
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 057b1ada54..d9a0154de1 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1396,28 +1396,6 @@ link_properties_valid(struct rte_eth_dev *ethdev,
 	return 0;
 }
 
-int
-mac_address_get(struct rte_eth_dev *eth_dev,
-		struct rte_ether_addr *dst_mac_addr)
-{
-	struct rte_ether_addr *mac_addr;
-
-	if (eth_dev == NULL) {
-		RTE_BOND_LOG(ERR, "NULL pointer eth_dev specified");
-		return -1;
-	}
-
-	if (dst_mac_addr == NULL) {
-		RTE_BOND_LOG(ERR, "NULL pointer MAC specified");
-		return -1;
-	}
-
-	mac_addr = eth_dev->data->mac_addrs;
-
-	rte_ether_addr_copy(mac_addr, dst_mac_addr);
-	return 0;
-}
-
 int
 mac_address_set(struct rte_eth_dev *eth_dev,
 		struct rte_ether_addr *new_mac_addr)
diff --git a/drivers/net/cxgbe/base/common.h b/drivers/net/cxgbe/base/common.h
index 8fe8e2a36b..6e360bc42d 100644
--- a/drivers/net/cxgbe/base/common.h
+++ b/drivers/net/cxgbe/base/common.h
@@ -363,8 +363,6 @@ int t4vf_get_vfres(struct adapter *adap);
 int t4_fixup_host_params_compat(struct adapter *adap, unsigned int page_size,
 				unsigned int cache_line_size,
 				enum chip_type chip_compat);
-int t4_fixup_host_params(struct adapter *adap, unsigned int page_size,
-			 unsigned int cache_line_size);
 int t4_fw_initialize(struct adapter *adap, unsigned int mbox);
 int t4_query_params(struct adapter *adap, unsigned int mbox, unsigned int pf,
 		    unsigned int vf, unsigned int nparams, const u32 *params,
@@ -485,9 +483,6 @@ static inline int t4vf_wr_mbox_ns(struct adapter *adapter, const void *cmd,
 void t4_read_indirect(struct adapter *adap, unsigned int addr_reg,
 		      unsigned int data_reg, u32 *vals, unsigned int nregs,
 		      unsigned int start_idx);
-void t4_write_indirect(struct adapter *adap, unsigned int addr_reg,
-		       unsigned int data_reg, const u32 *vals,
-		       unsigned int nregs, unsigned int start_idx);
 
 int t4_get_vpd_params(struct adapter *adapter, struct vpd_params *p);
 int t4_get_pfres(struct adapter *adapter);
diff --git a/drivers/net/cxgbe/base/t4_hw.c b/drivers/net/cxgbe/base/t4_hw.c
index 9217956b42..d5b916ccf5 100644
--- a/drivers/net/cxgbe/base/t4_hw.c
+++ b/drivers/net/cxgbe/base/t4_hw.c
@@ -189,28 +189,6 @@ void t4_read_indirect(struct adapter *adap, unsigned int addr_reg,
 	}
 }
 
-/**
- * t4_write_indirect - write indirectly addressed registers
- * @adap: the adapter
- * @addr_reg: register holding the indirect addresses
- * @data_reg: register holding the value for the indirect registers
- * @vals: values to write
- * @nregs: how many indirect registers to write
- * @start_idx: address of first indirect register to write
- *
- * Writes a sequential block of registers that are accessed indirectly
- * through an address/data register pair.
- */
-void t4_write_indirect(struct adapter *adap, unsigned int addr_reg,
-		       unsigned int data_reg, const u32 *vals,
-		       unsigned int nregs, unsigned int start_idx)
-{
-	while (nregs--) {
-		t4_write_reg(adap, addr_reg, start_idx++);
-		t4_write_reg(adap, data_reg, *vals++);
-	}
-}
-
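
The removed helper is the usual address/data register-pair idiom: write the target index to the address register, then the value to the data register, once per entry. A standalone sketch with a simulated register file (offsets and names hypothetical, not cxgbe code):

#include <stdio.h>
#include <stdint.h>

#define ADDR_REG	0	/* hypothetical register indices */
#define DATA_REG	1
#define NUM_INDIRECT	8

static uint32_t regs[2];		/* simulated address/data pair */
static uint32_t indirect[NUM_INDIRECT];	/* simulated indirect registers */

static void reg_write(uint32_t reg, uint32_t val)
{
	regs[reg] = val;
	if (reg == DATA_REG)	/* device latches data at regs[ADDR_REG] */
		indirect[regs[ADDR_REG]] = val;
}

/* Write vals[0..nregs-1] to consecutive indirect registers from start_idx. */
static void write_indirect(const uint32_t *vals, unsigned int nregs,
			   uint32_t start_idx)
{
	while (nregs--) {
		reg_write(ADDR_REG, start_idx++);	/* select register */
		reg_write(DATA_REG, *vals++);		/* write its value */
	}
}

int main(void)
{
	uint32_t vals[3] = { 0x11, 0x22, 0x33 };

	write_indirect(vals, 3, 4);
	printf("indirect[4..6] = 0x%x 0x%x 0x%x\n",
	       indirect[4], indirect[5], indirect[6]);
	return 0;
}
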
 /**
  * t4_report_fw_error - report firmware error
  * @adap: the adapter
@@ -3860,25 +3838,6 @@ int t4_fixup_host_params_compat(struct adapter *adap,
 	return 0;
 }
 
-/**
- * t4_fixup_host_params - fix up host-dependent parameters (T4 compatible)
- * @adap: the adapter
- * @page_size: the host's Base Page Size
- * @cache_line_size: the host's Cache Line Size
- *
- * Various registers in T4 contain values which are dependent on the
- * host's Base Page and Cache Line Sizes.  This function will fix all of
- * those registers with the appropriate values as passed in ...
- *
- * This routine makes changes which are compatible with T4 chips.
- */
-int t4_fixup_host_params(struct adapter *adap, unsigned int page_size,
-			 unsigned int cache_line_size)
-{
-	return t4_fixup_host_params_compat(adap, page_size, cache_line_size,
-					   T4_LAST_REV);
-}
-
 /**
  * t4_fw_initialize - ask FW to initialize the device
  * @adap: the adapter
diff --git a/drivers/net/dpaa/fmlib/fm_vsp.c b/drivers/net/dpaa/fmlib/fm_vsp.c
index 78efd93f22..0e261e3d1a 100644
--- a/drivers/net/dpaa/fmlib/fm_vsp.c
+++ b/drivers/net/dpaa/fmlib/fm_vsp.c
@@ -19,25 +19,6 @@
 #include "fm_vsp_ext.h"
 #include <dpaa_ethdev.h>
 
-uint32_t
-fm_port_vsp_alloc(t_handle h_fm_port,
-		  t_fm_port_vspalloc_params *p_params)
-{
-	t_device *p_dev = (t_device *)h_fm_port;
-	ioc_fm_port_vsp_alloc_params_t params;
-
-	_fml_dbg("Calling...\n");
-	memset(&params, 0, sizeof(ioc_fm_port_vsp_alloc_params_t));
-	memcpy(&params.params, p_params, sizeof(t_fm_port_vspalloc_params));
-
-	if (ioctl(p_dev->fd, FM_PORT_IOC_VSP_ALLOC, &params))
-		RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
-
-	_fml_dbg("Called.\n");
-
-	return E_OK;
-}
-
 t_handle
 fm_vsp_config(t_fm_vsp_params *p_fm_vsp_params)
 {
diff --git a/drivers/net/dpaa/fmlib/fm_vsp_ext.h b/drivers/net/dpaa/fmlib/fm_vsp_ext.h
index b51c46162d..97590ea4c0 100644
--- a/drivers/net/dpaa/fmlib/fm_vsp_ext.h
+++ b/drivers/net/dpaa/fmlib/fm_vsp_ext.h
@@ -99,9 +99,6 @@ typedef struct ioc_fm_buffer_prefix_content_params_t {
 	ioc_fm_buffer_prefix_content_t fm_buffer_prefix_content;
 } ioc_fm_buffer_prefix_content_params_t;
 
-uint32_t fm_port_vsp_alloc(t_handle h_fm_port,
-			  t_fm_port_vspalloc_params *p_params);
-
 t_handle fm_vsp_config(t_fm_vsp_params *p_fm_vsp_params);
 
 uint32_t fm_vsp_init(t_handle h_fm_vsp);
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 63f1ec7d30..dce9c55a9a 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -57,227 +57,6 @@ int dpdmux_open(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
-/**
- * dpdmux_close() - Close the control session of the object
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:		Token of DPDMUX object
- *
- * After this function is called, no further operations are
- * allowed on the object without opening a new control session.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_close(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_CLOSE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_create() - Create the DPDMUX object
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg:	Configuration structure
- * @obj_id: returned object id
- *
- * Create the DPDMUX object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_create(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  const struct dpdmux_cfg	*cfg,
-		  uint32_t *obj_id)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_create *cmd_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_CREATE,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpdmux_cmd_create *)cmd.params;
-	cmd_params->method = cfg->method;
-	cmd_params->manip = cfg->manip;
-	cmd_params->num_ifs = cpu_to_le16(cfg->num_ifs);
-	cmd_params->adv_max_dmat_entries =
-			cpu_to_le16(cfg->adv.max_dmat_entries);
-	cmd_params->adv_max_mc_groups = cpu_to_le16(cfg->adv.max_mc_groups);
-	cmd_params->adv_max_vlan_ids = cpu_to_le16(cfg->adv.max_vlan_ids);
-	cmd_params->options = cpu_to_le64(cfg->adv.options);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*obj_id = mc_cmd_read_object_id(&cmd);
-
-	return 0;
-}
-
-/**
- * dpdmux_destroy() - Destroy the DPDMUX object and release all its resources.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id:	The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpdmux_destroy(struct fsl_mc_io *mc_io,
-		   uint16_t dprc_token,
-		   uint32_t cmd_flags,
-		   uint32_t object_id)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_destroy *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_DESTROY,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpdmux_cmd_destroy *)cmd.params;
-	cmd_params->dpdmux_id = cpu_to_le32(object_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_enable() - Enable DPDMUX functionality
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_enable(struct fsl_mc_io *mc_io,
-		  uint32_t cmd_flags,
-		  uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_ENABLE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_disable() - Disable DPDMUX functionality
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_disable(struct fsl_mc_io *mc_io,
-		   uint32_t cmd_flags,
-		   uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_DISABLE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_is_enabled() - Check if the DPDMUX is enabled.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- * @en:		Returns '1' if object is enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_is_enabled(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      int *en)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_rsp_is_enabled *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IS_ENABLED,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpdmux_rsp_is_enabled *)cmd.params;
-	*en = dpdmux_get_field(rsp_params->en, ENABLE);
-
-	return 0;
-}
-
-/**
- * dpdmux_reset() - Reset the DPDMUX, returns the object to initial state.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_reset(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_RESET,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpdmux_get_attributes() - Retrieve DPDMUX attributes
  * @mc_io:	Pointer to MC portal's I/O object
@@ -318,407 +97,6 @@ int dpdmux_get_attributes(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
-/**
- * dpdmux_if_enable() - Enable Interface
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- * @if_id:	Interface Identifier
- *
- * Return:	Completion status. '0' on Success; Error code otherwise.
- */
-int dpdmux_if_enable(struct fsl_mc_io *mc_io,
-		     uint32_t cmd_flags,
-		     uint16_t token,
-		     uint16_t if_id)
-{
-	struct dpdmux_cmd_if *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_ENABLE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_if *)cmd.params;
-	cmd_params->if_id = cpu_to_le16(if_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_disable() - Disable Interface
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- * @if_id:	Interface Identifier
- *
- * Return:	Completion status. '0' on Success; Error code otherwise.
- */
-int dpdmux_if_disable(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      uint16_t if_id)
-{
-	struct dpdmux_cmd_if *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_DISABLE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_if *)cmd.params;
-	cmd_params->if_id = cpu_to_le16(if_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_set_max_frame_length() - Set the maximum frame length in DPDMUX
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:		Token of DPDMUX object
- * @max_frame_length:	The required maximum frame length
- *
- * Update the maximum frame length on all DMUX interfaces.
- * In case of VEPA, the maximum frame length on all dmux interfaces
- * will be updated with the minimum value of the mfls of the connected
- * dpnis and the actual value of dmux mfl.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
-				uint32_t cmd_flags,
-				uint16_t token,
-				uint16_t max_frame_length)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_set_max_frame_length *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SET_MAX_FRAME_LENGTH,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_set_max_frame_length *)cmd.params;
-	cmd_params->max_frame_length = cpu_to_le16(max_frame_length);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_ul_reset_counters() - Function resets the uplink counter
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_ul_reset_counters(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_UL_RESET_COUNTERS,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_set_accepted_frames() - Set the accepted frame types
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- * @if_id:	Interface ID (0 for uplink, or 1-num_ifs);
- * @cfg:	Frame types configuration
- *
- * if 'DPDMUX_ADMIT_ONLY_VLAN_TAGGED' is set - untagged frames or
- * priority-tagged frames are discarded.
- * if 'DPDMUX_ADMIT_ONLY_UNTAGGED' is set - untagged frames or
- * priority-tagged frames are accepted.
- * if 'DPDMUX_ADMIT_ALL' is set (default mode) - all VLAN tagged,
- * untagged and priority-tagged frame are accepted;
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_if_set_accepted_frames(struct fsl_mc_io *mc_io,
-				  uint32_t cmd_flags,
-				  uint16_t token,
-				  uint16_t if_id,
-				  const struct dpdmux_accepted_frames *cfg)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_if_set_accepted_frames *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_SET_ACCEPTED_FRAMES,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_if_set_accepted_frames *)cmd.params;
-	cmd_params->if_id = cpu_to_le16(if_id);
-	dpdmux_set_field(cmd_params->frames_options,
-			 ACCEPTED_FRAMES_TYPE,
-			 cfg->type);
-	dpdmux_set_field(cmd_params->frames_options,
-			 UNACCEPTED_FRAMES_ACTION,
-			 cfg->unaccept_act);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_get_attributes() - Obtain DPDMUX interface attributes
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- * @if_id:	Interface ID (0 for uplink, or 1-num_ifs);
- * @attr:	Interface attributes
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_if_get_attributes(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token,
-			     uint16_t if_id,
-			     struct dpdmux_if_attr *attr)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_if *cmd_params;
-	struct dpdmux_rsp_if_get_attr *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_ATTR,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_if *)cmd.params;
-	cmd_params->if_id = cpu_to_le16(if_id);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpdmux_rsp_if_get_attr *)cmd.params;
-	attr->rate = le32_to_cpu(rsp_params->rate);
-	attr->enabled = dpdmux_get_field(rsp_params->enabled, ENABLE);
-	attr->is_default = dpdmux_get_field(rsp_params->enabled, IS_DEFAULT);
-	attr->accept_frame_type = dpdmux_get_field(
-				  rsp_params->accepted_frames_type,
-				  ACCEPTED_FRAMES_TYPE);
-
-	return 0;
-}
-
-/**
- * dpdmux_if_remove_l2_rule() - Remove L2 rule from DPDMUX table
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- * @if_id:	Destination interface ID
- * @rule:	L2 rule
- *
- * Function removes an L2 rule from the DPDMUX table
- * or adds an interface to an existing multicast address
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_if_remove_l2_rule(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token,
-			     uint16_t if_id,
-			     const struct dpdmux_l2_rule *rule)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_if_l2_rule *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_REMOVE_L2_RULE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_if_l2_rule *)cmd.params;
-	cmd_params->if_id = cpu_to_le16(if_id);
-	cmd_params->vlan_id = cpu_to_le16(rule->vlan_id);
-	cmd_params->mac_addr5 = rule->mac_addr[5];
-	cmd_params->mac_addr4 = rule->mac_addr[4];
-	cmd_params->mac_addr3 = rule->mac_addr[3];
-	cmd_params->mac_addr2 = rule->mac_addr[2];
-	cmd_params->mac_addr1 = rule->mac_addr[1];
-	cmd_params->mac_addr0 = rule->mac_addr[0];
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_add_l2_rule() - Add L2 rule into DPDMUX table
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPDMUX object
- * @if_id:	Destination interface ID
- * @rule:	L2 rule
- *
- * Function adds an L2 rule into the DPDMUX table
- * or adds an interface to an existing multicast address
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_if_add_l2_rule(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t token,
-			  uint16_t if_id,
-			  const struct dpdmux_l2_rule *rule)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_if_l2_rule *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_ADD_L2_RULE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_if_l2_rule *)cmd.params;
-	cmd_params->if_id = cpu_to_le16(if_id);
-	cmd_params->vlan_id = cpu_to_le16(rule->vlan_id);
-	cmd_params->mac_addr5 = rule->mac_addr[5];
-	cmd_params->mac_addr4 = rule->mac_addr[4];
-	cmd_params->mac_addr3 = rule->mac_addr[3];
-	cmd_params->mac_addr2 = rule->mac_addr[2];
-	cmd_params->mac_addr1 = rule->mac_addr[1];
-	cmd_params->mac_addr0 = rule->mac_addr[0];
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_get_counter() - Function obtains a specific counter of an interface
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @if_id:  Interface Id
- * @counter_type: counter type
- * @counter: Returned specific counter information
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_if_get_counter(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t token,
-			  uint16_t if_id,
-			  enum dpdmux_counter_type counter_type,
-			  uint64_t *counter)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_if_get_counter *cmd_params;
-	struct dpdmux_rsp_if_get_counter *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_COUNTER,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_if_get_counter *)cmd.params;
-	cmd_params->if_id = cpu_to_le16(if_id);
-	cmd_params->counter_type = counter_type;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpdmux_rsp_if_get_counter *)cmd.params;
-	*counter = le64_to_cpu(rsp_params->counter);
-
-	return 0;
-}
-
-/**
- * dpdmux_if_set_link_cfg() - set the link configuration.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSW object
- * @if_id: interface id
- * @cfg: Link configuration
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpdmux_if_set_link_cfg(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t token,
-			   uint16_t if_id,
-			   struct dpdmux_link_cfg *cfg)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_if_set_link_cfg *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_SET_LINK_CFG,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_if_set_link_cfg *)cmd.params;
-	cmd_params->if_id = cpu_to_le16(if_id);
-	cmd_params->rate = cpu_to_le32(cfg->rate);
-	cmd_params->options = cpu_to_le64(cfg->options);
-	cmd_params->advertising = cpu_to_le64(cfg->advertising);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_get_link_state - Return the link state
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSW object
- * @if_id: interface id
- * @state: link state
- *
- * @returns	'0' on Success; Error code otherwise.
- */
-int dpdmux_if_get_link_state(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token,
-			     uint16_t if_id,
-			     struct dpdmux_link_state *state)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_cmd_if_get_link_state *cmd_params;
-	struct dpdmux_rsp_if_get_link_state *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_LINK_STATE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_if_get_link_state *)cmd.params;
-	cmd_params->if_id = cpu_to_le16(if_id);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpdmux_rsp_if_get_link_state *)cmd.params;
-	state->rate = le32_to_cpu(rsp_params->rate);
-	state->options = le64_to_cpu(rsp_params->options);
-	state->up = dpdmux_get_field(rsp_params->up, UP);
-	state->state_valid = dpdmux_get_field(rsp_params->up, STATE_VALID);
-	state->supported = le64_to_cpu(rsp_params->supported);
-	state->advertising = le64_to_cpu(rsp_params->advertising);
-
-	return 0;
-}
-
 /**
  * dpdmux_if_set_default - Set default interface
  * @mc_io:	Pointer to MC portal's I/O object
@@ -747,41 +125,6 @@ int dpdmux_if_set_default(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpdmux_if_get_default - Get default interface
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSW object
- * @if_id: interface id
- *
- * @returns	'0' on Success; Error code otherwise.
- */
-int dpdmux_if_get_default(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token,
-		uint16_t *if_id)
-{
-	struct dpdmux_cmd_if *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_DEFAULT,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpdmux_cmd_if *)cmd.params;
-	*if_id = le16_to_cpu(rsp_params->if_id);
-
-	return 0;
-}
-
 /**
  * dpdmux_set_custom_key - Set a custom classification key.
  *
@@ -859,71 +202,3 @@ int dpdmux_add_custom_cls_entry(struct fsl_mc_io *mc_io,
 	/* send command to mc*/
 	return mc_send_command(mc_io, &cmd);
 }
-
-/**
- * dpdmux_remove_custom_cls_entry - Removes a custom classification entry.
- *
- * This API is only available for DPDMUX instances created with
- * DPDMUX_METHOD_CUSTOM.  The API can be used to remove classification
- * entries previously inserted using dpdmux_add_custom_cls_entry.
- *
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSW object
- * @rule: Classification rule to remove
- *
- * @returns	'0' on Success; Error code otherwise.
- */
-int dpdmux_remove_custom_cls_entry(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token,
-		struct dpdmux_rule_cfg *rule)
-{
-	struct dpdmux_cmd_remove_custom_cls_entry *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_REMOVE_CUSTOM_CLS_ENTRY,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpdmux_cmd_remove_custom_cls_entry *)cmd.params;
-	cmd_params->key_size = rule->key_size;
-	cmd_params->key_iova = cpu_to_le64(rule->key_iova);
-	cmd_params->mask_iova = cpu_to_le64(rule->mask_iova);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_get_api_version() - Get Data Path Demux API version
- * @mc_io:  Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver:	Major version of data path demux API
- * @minor_ver:	Minor version of data path demux API
- *
- * Return:  '0' on Success; Error code otherwise.
- */
-int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t *major_ver,
-			   uint16_t *minor_ver)
-{
-	struct mc_command cmd = { 0 };
-	struct dpdmux_rsp_get_api_version *rsp_params;
-	int err;
-
-	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_GET_API_VERSION,
-					cmd_flags,
-					0);
-
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	rsp_params = (struct dpdmux_rsp_get_api_version *)cmd.params;
-	*major_ver = le16_to_cpu(rsp_params->major);
-	*minor_ver = le16_to_cpu(rsp_params->minor);
-
-	return 0;
-}
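
Every dpdmux accessor removed above follows the same MC portal request/response shape: encode a header from the command id, flags and object token, optionally fill cmd.params through a per-command struct, send the command, then byte-swap the little-endian response fields. A minimal sketch of that shape is below; dpdmux_rsp_example and DPDMUX_CMDID_EXAMPLE are placeholder names, not symbols from the real fsl_dpdmux headers. Setters run the same path in the opposite direction: they fill cmd.params with cpu_to_le*() conversions and return the result of mc_send_command() directly, with no response to parse.

/* Sketch only: placeholder command id and response struct. */
struct dpdmux_rsp_example {
	uint16_t value;		/* little-endian on the wire */
};

static int dpdmux_get_example(struct fsl_mc_io *mc_io,
			      uint32_t cmd_flags,
			      uint16_t token,
			      uint16_t *value)
{
	struct dpdmux_rsp_example *rsp_params;
	struct mc_command cmd = { 0 };
	int err;

	/* prepare command: command id + flags + object token */
	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_EXAMPLE,
					  cmd_flags,
					  token);

	/* send command to mc */
	err = mc_send_command(mc_io, &cmd);
	if (err)
		return err;

	/* retrieve response parameters, converting from little endian */
	rsp_params = (struct dpdmux_rsp_example *)cmd.params;
	*value = le16_to_cpu(rsp_params->value);

	return 0;
}
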
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index 683d7bcc17..ad4df05dfc 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -80,99 +80,6 @@ int dpni_close(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_create() - Create the DPNI object
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg:	Configuration structure
- * @obj_id:	Returned object id
- *
- * Create the DPNI object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_create(struct fsl_mc_io *mc_io,
-		uint16_t dprc_token,
-		uint32_t cmd_flags,
-		const struct dpni_cfg *cfg,
-		uint32_t *obj_id)
-{
-	struct dpni_cmd_create *cmd_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CREATE,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dpni_cmd_create *)cmd.params;
-	cmd_params->options = cpu_to_le32(cfg->options);
-	cmd_params->num_queues = cfg->num_queues;
-	cmd_params->num_tcs = cfg->num_tcs;
-	cmd_params->mac_filter_entries = cfg->mac_filter_entries;
-	cmd_params->num_rx_tcs = cfg->num_rx_tcs;
-	cmd_params->vlan_filter_entries =  cfg->vlan_filter_entries;
-	cmd_params->qos_entries = cfg->qos_entries;
-	cmd_params->fs_entries = cpu_to_le16(cfg->fs_entries);
-	cmd_params->num_cgs = cfg->num_cgs;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*obj_id = mc_cmd_read_object_id(&cmd);
-
-	return 0;
-}
-
-/**
- * dpni_destroy() - Destroy the DPNI object and release all its resources.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id:	The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpni_destroy(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 uint32_t object_id)
-{
-	struct dpni_cmd_destroy *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_DESTROY,
-					  cmd_flags,
-					  dprc_token);
-	/* set object id to destroy */
-	cmd_params = (struct dpni_cmd_destroy *)cmd.params;
-	cmd_params->dpsw_id = cpu_to_le32(object_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpni_set_pools() - Set buffer pools configuration
  * @mc_io:	Pointer to MC portal's I/O object
@@ -356,47 +263,6 @@ int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_get_irq_enable() - Get overall interrupt state
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @irq_index:	The interrupt index to configure
- * @en:		Returned interrupt state - enable = 1, disable = 0
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
-			uint32_t cmd_flags,
-			uint16_t token,
-			uint8_t irq_index,
-			uint8_t *en)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_cmd_get_irq_enable *cmd_params;
-	struct dpni_rsp_get_irq_enable *rsp_params;
-
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_ENABLE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_irq_enable *)cmd.params;
-	cmd_params->irq_index = irq_index;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_irq_enable *)cmd.params;
-	*en = dpni_get_field(rsp_params->enabled, ENABLE);
-
-	return 0;
-}
-
 /**
  * dpni_set_irq_mask() - Set interrupt mask.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -434,49 +300,6 @@ int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_get_irq_mask() - Get interrupt mask.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @irq_index:	The interrupt index to configure
- * @mask:	Returned event mask to trigger interrupt
- *
- * Every interrupt can have up to 32 causes and the interrupt model supports
- * masking/unmasking each cause independently
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      uint8_t irq_index,
-		      uint32_t *mask)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_cmd_get_irq_mask *cmd_params;
-	struct dpni_rsp_get_irq_mask *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_MASK,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_irq_mask *)cmd.params;
-	cmd_params->irq_index = irq_index;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_irq_mask *)cmd.params;
-	*mask = le32_to_cpu(rsp_params->mask);
-
-	return 0;
-}
-
 /**
  * dpni_get_irq_status() - Get the current status of any pending interrupts.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -633,57 +456,6 @@ int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_get_buffer_layout() - Retrieve buffer layout attributes.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @qtype:	Type of queue to retrieve configuration for
- * @layout:	Returns buffer layout attributes
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_buffer_layout(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t token,
-			   enum dpni_queue_type qtype,
-			   struct dpni_buffer_layout *layout)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_cmd_get_buffer_layout *cmd_params;
-	struct dpni_rsp_get_buffer_layout *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_BUFFER_LAYOUT,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_buffer_layout *)cmd.params;
-	cmd_params->qtype = qtype;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_buffer_layout *)cmd.params;
-	layout->pass_timestamp =
-				(int)dpni_get_field(rsp_params->flags, PASS_TS);
-	layout->pass_parser_result =
-				(int)dpni_get_field(rsp_params->flags, PASS_PR);
-	layout->pass_frame_status =
-				(int)dpni_get_field(rsp_params->flags, PASS_FS);
-	layout->pass_sw_opaque =
-			(int)dpni_get_field(rsp_params->flags, PASS_SWO);
-	layout->private_data_size = le16_to_cpu(rsp_params->private_data_size);
-	layout->data_align = le16_to_cpu(rsp_params->data_align);
-	layout->data_head_room = le16_to_cpu(rsp_params->head_room);
-	layout->data_tail_room = le16_to_cpu(rsp_params->tail_room);
-
-	return 0;
-}
-
 /**
  * dpni_set_buffer_layout() - Set buffer layout configuration.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -758,50 +530,6 @@ int dpni_set_offload(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_get_offload() - Get DPNI offload configuration.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @type:	Type of DPNI offload
- * @config:	Offload configuration.
- *			For checksum offloads, a value of 1 indicates that the
- *			offload is enabled.
- *
- * Return:	'0' on Success; Error code otherwise.
- *
- * @warning	Allowed only when DPNI is disabled
- */
-int dpni_get_offload(struct fsl_mc_io *mc_io,
-		     uint32_t cmd_flags,
-		     uint16_t token,
-		     enum dpni_offload type,
-		     uint32_t *config)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_cmd_get_offload *cmd_params;
-	struct dpni_rsp_get_offload *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_OFFLOAD,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_offload *)cmd.params;
-	cmd_params->dpni_offload = type;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_offload *)cmd.params;
-	*config = le32_to_cpu(rsp_params->config);
-
-	return 0;
-}
-
 /**
  * dpni_get_qdid() - Get the Queuing Destination ID (QDID) that should be used
  *			for enqueue operations
@@ -844,41 +572,6 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
-/**
- * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @data_offset: Tx data offset (from start of buffer)
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
-			    uint32_t cmd_flags,
-			    uint16_t token,
-			    uint16_t *data_offset)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_rsp_get_tx_data_offset *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_DATA_OFFSET,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_tx_data_offset *)cmd.params;
-	*data_offset = le16_to_cpu(rsp_params->data_offset);
-
-	return 0;
-}
-
 /**
  * dpni_set_link_cfg() - set the link configuration.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -978,42 +671,6 @@ int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_get_max_frame_length() - Get the maximum received frame length.
- * @mc_io:		Pointer to MC portal's I/O object
- * @cmd_flags:		Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:		Token of DPNI object
- * @max_frame_length:	Maximum received frame length (in bytes);
- *			frame is discarded if its length exceeds this value
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
-			      uint32_t cmd_flags,
-			      uint16_t token,
-			      uint16_t *max_frame_length)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_rsp_get_max_frame_length *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MAX_FRAME_LENGTH,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_max_frame_length *)cmd.params;
-	*max_frame_length = le16_to_cpu(rsp_params->max_frame_length);
-
-	return 0;
-}
-
 /**
  * dpni_set_multicast_promisc() - Enable/disable multicast promiscuous mode
  * @mc_io:	Pointer to MC portal's I/O object
@@ -1042,41 +699,6 @@ int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_get_multicast_promisc() - Get multicast promiscuous mode
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @en:		Returns '1' if enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
-			       uint32_t cmd_flags,
-			       uint16_t token,
-			       int *en)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_rsp_get_multicast_promisc *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MCAST_PROMISC,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_multicast_promisc *)cmd.params;
-	*en = dpni_get_field(rsp_params->enabled, ENABLE);
-
-	return 0;
-}
-
 /**
  * dpni_set_unicast_promisc() - Enable/disable unicast promiscuous mode
  * @mc_io:	Pointer to MC portal's I/O object
@@ -1096,48 +718,13 @@ int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
 
 	/* prepare command */
 	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_UNICAST_PROMISC,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_unicast_promisc *)cmd.params;
-	dpni_set_field(cmd_params->enable, ENABLE, en);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_unicast_promisc() - Get unicast promiscuous mode
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @en:		Returns '1' if enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token,
-			     int *en)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_rsp_get_unicast_promisc *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_UNICAST_PROMISC,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_unicast_promisc *)cmd.params;
-	*en = dpni_get_field(rsp_params->enabled, ENABLE);
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_unicast_promisc *)cmd.params;
+	dpni_set_field(cmd_params->enable, ENABLE, en);
 
-	return 0;
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
 }
 
 /**
@@ -1281,39 +868,6 @@ int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_clear_mac_filters() - Clear all unicast and/or multicast MAC filters
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @unicast:	Set to '1' to clear unicast addresses
- * @multicast:	Set to '1' to clear multicast addresses
- *
- * The primary MAC address is not cleared by this operation.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t token,
-			   int unicast,
-			   int multicast)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_cmd_clear_mac_filters *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_MAC_FILTERS,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_clear_mac_filters *)cmd.params;
-	dpni_set_field(cmd_params->flags, UNICAST_FILTERS, unicast);
-	dpni_set_field(cmd_params->flags, MULTICAST_FILTERS, multicast);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpni_get_port_mac_addr() - Retrieve MAC address associated to the physical
  *			port the DPNI is attached to
@@ -1453,29 +1007,6 @@ int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_clear_vlan_filters() - Clear all VLAN filters
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
-			    uint32_t cmd_flags,
-			    uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_VLAN_FILTERS,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpni_set_rx_tc_dist() - Set Rx traffic class distribution configuration
  * @mc_io:	Pointer to MC portal's I/O object
@@ -1675,32 +1206,6 @@ int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_clear_qos_table() - Clear all QoS mapping entries
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- *
- * Following this function call, all frames are directed to
- * the default traffic class (0)
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_QOS_TBL,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpni_add_fs_entry() - Add Flow Steering entry for a specific traffic class
  *			(to select a flow ID)
@@ -1779,35 +1284,6 @@ int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_clear_fs_entries() - Clear all Flow Steering entries of a specific
- *			traffic class
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @tc_id:	Traffic class selection (0-7)
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t token,
-			  uint8_t tc_id)
-{
-	struct dpni_cmd_clear_fs_entries *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_FS_ENT,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_clear_fs_entries *)cmd.params;
-	cmd_params->tc_id = tc_id;
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dpni_set_congestion_notification() - Set traffic class congestion
  *	notification configuration
@@ -1858,94 +1334,6 @@ int dpni_set_congestion_notification(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_get_congestion_notification() - Get traffic class congestion
- *	notification configuration
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @qtype:	Type of queue - Rx, Tx and Tx confirm types are supported
- * @tc_id:	Traffic class selection (0-7)
- * @cfg:	congestion notification configuration
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpni_get_congestion_notification(struct fsl_mc_io *mc_io,
-				     uint32_t cmd_flags,
-				     uint16_t token,
-				     enum dpni_queue_type qtype,
-				     uint8_t tc_id,
-				struct dpni_congestion_notification_cfg *cfg)
-{
-	struct dpni_rsp_get_congestion_notification *rsp_params;
-	struct dpni_cmd_get_congestion_notification *cmd_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(
-					DPNI_CMDID_GET_CONGESTION_NOTIFICATION,
-					cmd_flags,
-					token);
-	cmd_params = (struct dpni_cmd_get_congestion_notification *)cmd.params;
-	cmd_params->qtype = qtype;
-	cmd_params->tc = tc_id;
-	cmd_params->congestion_point = cfg->cg_point;
-	cmd_params->cgid = cfg->cgid;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	rsp_params = (struct dpni_rsp_get_congestion_notification *)cmd.params;
-	cfg->units = dpni_get_field(rsp_params->type_units, CONG_UNITS);
-	cfg->threshold_entry = le32_to_cpu(rsp_params->threshold_entry);
-	cfg->threshold_exit = le32_to_cpu(rsp_params->threshold_exit);
-	cfg->message_ctx = le64_to_cpu(rsp_params->message_ctx);
-	cfg->message_iova = le64_to_cpu(rsp_params->message_iova);
-	cfg->notification_mode = le16_to_cpu(rsp_params->notification_mode);
-	cfg->dest_cfg.dest_id = le32_to_cpu(rsp_params->dest_id);
-	cfg->dest_cfg.priority = rsp_params->dest_priority;
-	cfg->dest_cfg.dest_type = dpni_get_field(rsp_params->type_units,
-						 DEST_TYPE);
-
-	return 0;
-}
-
-/**
- * dpni_get_api_version() - Get Data Path Network Interface API version
- * @mc_io:  Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver:	Major version of data path network interface API
- * @minor_ver:	Minor version of data path network interface API
- *
- * Return:  '0' on Success; Error code otherwise.
- */
-int dpni_get_api_version(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t *major_ver,
-			 uint16_t *minor_ver)
-{
-	struct dpni_rsp_get_api_version *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_API_VERSION,
-					cmd_flags,
-					0);
-
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	rsp_params = (struct dpni_rsp_get_api_version *)cmd.params;
-	*major_ver = le16_to_cpu(rsp_params->major);
-	*minor_ver = le16_to_cpu(rsp_params->minor);
-
-	return 0;
-}
-
 /**
  * dpni_set_queue() - Set queue parameters
  * @mc_io:	Pointer to MC portal's I/O object
@@ -2184,67 +1572,6 @@ int dpni_set_taildrop(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_get_taildrop() - Get taildrop information
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @cg_point:	Congestion point
- * @q_type:	Queue type on which the taildrop is configured.
- *		Only Rx queues are supported for now
- * @tc:		Traffic class to apply this taildrop to
- * @q_index:	Index of the queue if the DPNI supports multiple queues for
- *		traffic distribution. Ignored if CONGESTION_POINT is not 0.
- * @taildrop:	Taildrop structure
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_taildrop(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      enum dpni_congestion_point cg_point,
-		      enum dpni_queue_type qtype,
-		      uint8_t tc,
-		      uint8_t index,
-		      struct dpni_taildrop *taildrop)
-{
-	struct mc_command cmd = { 0 };
-	struct dpni_cmd_get_taildrop *cmd_params;
-	struct dpni_rsp_get_taildrop *rsp_params;
-	uint8_t oal_lo, oal_hi;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TAILDROP,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_taildrop *)cmd.params;
-	cmd_params->congestion_point = cg_point;
-	cmd_params->qtype = qtype;
-	cmd_params->tc = tc;
-	cmd_params->index = index;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_taildrop *)cmd.params;
-	taildrop->enable = dpni_get_field(rsp_params->enable_oal_lo, ENABLE);
-	taildrop->units = rsp_params->units;
-	taildrop->threshold = le32_to_cpu(rsp_params->threshold);
-	oal_lo = dpni_get_field(rsp_params->enable_oal_lo, OAL_LO);
-	oal_hi = dpni_get_field(rsp_params->oal_hi, OAL_HI);
-	taildrop->oal = oal_hi << DPNI_OAL_LO_SIZE | oal_lo;
-
-	/* Fill the first 4 bits, 'oal' is a 2's complement value of 12 bits */
-	if (taildrop->oal >= 0x0800)
-		taildrop->oal |= 0xF000;
-
-	return 0;
-}
-
 /**
  * dpni_set_opr() - Set Order Restoration configuration.
  * @mc_io:	Pointer to MC portal's I/O object
@@ -2290,69 +1617,6 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
 	return mc_send_command(mc_io, &cmd);
 }
 
-/**
- * dpni_get_opr() - Retrieve Order Restoration config and query.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @tc:		Traffic class, in range 0 to NUM_TCS - 1
- * @index:	Selects the specific queue out of the set allocated
- *			for the same TC. Value must be in range 0 to
- *			NUM_QUEUES - 1
- * @cfg:	Returned OPR configuration
- * @qry:	Returned OPR query
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_opr(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token,
-		 uint8_t tc,
-		 uint8_t index,
-		 struct opr_cfg *cfg,
-		 struct opr_qry *qry)
-{
-	struct dpni_rsp_get_opr *rsp_params;
-	struct dpni_cmd_get_opr *cmd_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_OPR,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_opr *)cmd.params;
-	cmd_params->index = index;
-	cmd_params->tc_id = tc;
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_opr *)cmd.params;
-	cfg->oloe = rsp_params->oloe;
-	cfg->oeane = rsp_params->oeane;
-	cfg->olws = rsp_params->olws;
-	cfg->oa = rsp_params->oa;
-	cfg->oprrws = rsp_params->oprrws;
-	qry->rip = dpni_get_field(rsp_params->flags, RIP);
-	qry->enable = dpni_get_field(rsp_params->flags, OPR_ENABLE);
-	qry->nesn = le16_to_cpu(rsp_params->nesn);
-	qry->ndsn = le16_to_cpu(rsp_params->ndsn);
-	qry->ea_tseq = le16_to_cpu(rsp_params->ea_tseq);
-	qry->tseq_nlis = dpni_get_field(rsp_params->tseq_nlis, TSEQ_NLIS);
-	qry->ea_hseq = le16_to_cpu(rsp_params->ea_hseq);
-	qry->hseq_nlis = dpni_get_field(rsp_params->hseq_nlis, HSEQ_NLIS);
-	qry->ea_hptr = le16_to_cpu(rsp_params->ea_hptr);
-	qry->ea_tptr = le16_to_cpu(rsp_params->ea_tptr);
-	qry->opr_vid = le16_to_cpu(rsp_params->opr_vid);
-	qry->opr_id = le16_to_cpu(rsp_params->opr_id);
-
-	return 0;
-}
-
 /**
  * dpni_set_rx_fs_dist() - Set Rx traffic class FS distribution
  * @mc_io:	Pointer to MC portal's I/O object
@@ -2567,73 +1831,3 @@ int dpni_enable_sw_sequence(struct fsl_mc_io *mc_io,
 	/* send command to mc*/
 	return mc_send_command(mc_io, &cmd);
 }
-
-/**
- * dpni_get_sw_sequence_layout() - Get the soft sequence layout
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @src:	Source of the layout (WRIOP Rx or Tx)
- * @ss_layout_iova:  I/O virtual address of 264 bytes DMA-able memory
- *
- * warning: After calling this function, call dpni_extract_sw_sequence_layout()
- *		to get the layout.
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpni_get_sw_sequence_layout(struct fsl_mc_io *mc_io,
-	      uint32_t cmd_flags,
-	      uint16_t token,
-		  enum dpni_soft_sequence_dest src,
-		  uint64_t ss_layout_iova)
-{
-	struct dpni_get_sw_sequence_layout *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SW_SEQUENCE_LAYOUT,
-					  cmd_flags,
-					  token);
-
-	cmd_params = (struct dpni_get_sw_sequence_layout *)cmd.params;
-	cmd_params->src = src;
-	cmd_params->layout_iova = cpu_to_le64(ss_layout_iova);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_extract_sw_sequence_layout() - extract the software sequence layout
- * @layout:		software sequence layout
- * @sw_sequence_layout_buf:	Zeroed 264 bytes of memory before mapping it
- *				to DMA
- *
- * This function has to be called after dpni_get_sw_sequence_layout
- *
- */
-void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
-			     const uint8_t *sw_sequence_layout_buf)
-{
-	const struct dpni_sw_sequence_layout_entry *ext_params;
-	int i;
-	uint16_t ss_size, ss_offset;
-
-	ext_params = (const struct dpni_sw_sequence_layout_entry *)
-						sw_sequence_layout_buf;
-
-	for (i = 0; i < DPNI_SW_SEQUENCE_LAYOUT_SIZE; i++) {
-		ss_offset = le16_to_cpu(ext_params[i].ss_offset);
-		ss_size = le16_to_cpu(ext_params[i].ss_size);
-
-		if (ss_offset == 0 && ss_size == 0) {
-			layout->num_ss = i;
-			return;
-		}
-
-		layout->ss[i].ss_offset = ss_offset;
-		layout->ss[i].ss_size = ss_size;
-		layout->ss[i].param_offset = ext_params[i].param_offset;
-		layout->ss[i].param_size = ext_params[i].param_size;
-	}
-}
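
One detail worth noting in the removed dpni_get_taildrop(): the 'oal' field is a 12-bit two's complement value split across two response fields, so after reassembly the sign bit (bit 11) has to be replicated into the top four bits of the 16-bit holder. The same operation as a standalone sketch; sign_extend_12() is a hypothetical helper, not a DPAA2 symbol.

/* Sketch: 12-bit to 16-bit two's complement sign extension, mirroring the
 * removed dpni_get_taildrop() handling of 'oal'.
 */
static inline int16_t sign_extend_12(uint16_t v)
{
	v &= 0x0FFF;		/* keep the 12 payload bits */
	if (v & 0x0800)		/* bit 11 is the sign bit */
		v |= 0xF000;	/* replicate it into bits 12..15 */
	return (int16_t)v;
}

For example, a raw value of 0x0FF8 comes out as -8, while 0x0010 stays 16.
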
diff --git a/drivers/net/dpaa2/mc/dprtc.c b/drivers/net/dpaa2/mc/dprtc.c
index 42ac89150e..96e20bce81 100644
--- a/drivers/net/dpaa2/mc/dprtc.c
+++ b/drivers/net/dpaa2/mc/dprtc.c
@@ -54,213 +54,6 @@ int dprtc_open(struct fsl_mc_io *mc_io,
 	return err;
 }
 
-/**
- * dprtc_close() - Close the control session of the object
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPRTC object
- *
- * After this function is called, no further operations are
- * allowed on the object without opening a new control session.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_close(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_CLOSE, cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_create() - Create the DPRTC object.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token:	Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg:	Configuration structure
- * @obj_id:	Returned object id
- *
- * Create the DPRTC object, allocate required resources and
- * perform required initialization.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_create(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 const struct dprtc_cfg *cfg,
-		 uint32_t *obj_id)
-{
-	struct mc_command cmd = { 0 };
-	int err;
-
-	(void)(cfg); /* unused */
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_CREATE,
-					  cmd_flags,
-					  dprc_token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*obj_id = mc_cmd_read_object_id(&cmd);
-
-	return 0;
-}
-
-/**
- * dprtc_destroy() - Destroy the DPRTC object and release all its resources.
- * @mc_io:	Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id:	The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dprtc_destroy(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  uint32_t object_id)
-{
-	struct dprtc_cmd_destroy *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_DESTROY,
-					  cmd_flags,
-					  dprc_token);
-	cmd_params = (struct dprtc_cmd_destroy *)cmd.params;
-	cmd_params->object_id = cpu_to_le32(object_id);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_enable() - Enable the DPRTC.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPRTC object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_enable(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_ENABLE, cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_disable() - Disable the DPRTC.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPRTC object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_disable(struct fsl_mc_io *mc_io,
-		  uint32_t cmd_flags,
-		  uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_DISABLE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_is_enabled() - Check if the DPRTC is enabled.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPRTC object
- * @en:		Returns '1' if object is enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_is_enabled(struct fsl_mc_io *mc_io,
-		     uint32_t cmd_flags,
-		     uint16_t token,
-		     int *en)
-{
-	struct dprtc_rsp_is_enabled *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_IS_ENABLED, cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dprtc_rsp_is_enabled *)cmd.params;
-	*en = dprtc_get_field(rsp_params->en, ENABLE);
-
-	return 0;
-}
-
-/**
- * dprtc_reset() - Reset the DPRTC, returns the object to initial state.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPRTC object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_reset(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token)
-{
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_RESET,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
 /**
  * dprtc_get_attributes - Retrieve DPRTC attributes.
  *
@@ -299,101 +92,6 @@ int dprtc_get_attributes(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
-/**
- * dprtc_set_clock_offset() - Sets the clock's offset
- * (usually relative to another clock).
- *
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPRTC object
- * @offset:	New clock offset (in nanoseconds).
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_set_clock_offset(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t token,
-			   int64_t offset)
-{
-	struct dprtc_cmd_set_clock_offset *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_CLOCK_OFFSET,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dprtc_cmd_set_clock_offset *)cmd.params;
-	cmd_params->offset = cpu_to_le64(offset);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_set_freq_compensation() - Sets a new frequency compensation value.
- *
- * @mc_io:		Pointer to MC portal's I/O object
- * @cmd_flags:		Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:		Token of DPRTC object
- * @freq_compensation:	The new frequency compensation value to set.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_set_freq_compensation(struct fsl_mc_io *mc_io,
-				uint32_t cmd_flags,
-				uint16_t token,
-				uint32_t freq_compensation)
-{
-	struct dprtc_get_freq_compensation *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_FREQ_COMPENSATION,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dprtc_get_freq_compensation *)cmd.params;
-	cmd_params->freq_compensation = cpu_to_le32(freq_compensation);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_get_freq_compensation() - Retrieves the frequency compensation value
- *
- * @mc_io:		Pointer to MC portal's I/O object
- * @cmd_flags:		Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:		Token of DPRTC object
- * @freq_compensation:	Frequency compensation value
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_get_freq_compensation(struct fsl_mc_io *mc_io,
-				uint32_t cmd_flags,
-				uint16_t token,
-				uint32_t *freq_compensation)
-{
-	struct dprtc_get_freq_compensation *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_FREQ_COMPENSATION,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc*/
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dprtc_get_freq_compensation *)cmd.params;
-	*freq_compensation = le32_to_cpu(rsp_params->freq_compensation);
-
-	return 0;
-}
-
 /**
  * dprtc_get_time() - Returns the current RTC time.
  *
@@ -458,66 +156,3 @@ int dprtc_set_time(struct fsl_mc_io *mc_io,
 	/* send command to mc*/
 	return mc_send_command(mc_io, &cmd);
 }
-
-/**
- * dprtc_set_alarm() - Defines and sets alarm.
- *
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPRTC object
- * @time:	In nanoseconds, the time when the alarm
- *			should go off - must be a multiple of
- *			1 microsecond
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dprtc_set_alarm(struct fsl_mc_io *mc_io,
-		    uint32_t cmd_flags,
-		    uint16_t token, uint64_t time)
-{
-	struct dprtc_time *cmd_params;
-	struct mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_ALARM,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dprtc_time *)cmd.params;
-	cmd_params->time = cpu_to_le64(time);
-
-	/* send command to mc*/
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_get_api_version() - Get Data Path Real Time Counter API version
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver:	Major version of data path real time counter API
- * @minor_ver:	Minor version of data path real time counter API
- *
- * Return:  '0' on Success; Error code otherwise.
- */
-int dprtc_get_api_version(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t *major_ver,
-			  uint16_t *minor_ver)
-{
-	struct dprtc_rsp_get_api_version *rsp_params;
-	struct mc_command cmd = { 0 };
-	int err;
-
-	cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_API_VERSION,
-					cmd_flags,
-					0);
-
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	rsp_params = (struct dprtc_rsp_get_api_version *)cmd.params;
-	*major_ver = le16_to_cpu(rsp_params->major);
-	*minor_ver = le16_to_cpu(rsp_params->minor);
-
-	return 0;
-}
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index accd1ef5c1..eb768fafbb 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -21,10 +21,6 @@ int dpdmux_open(struct fsl_mc_io *mc_io,
 		int  dpdmux_id,
 		uint16_t  *token);
 
-int dpdmux_close(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token);
-
 /**
  * DPDMUX general options
  */
@@ -102,34 +98,6 @@ struct dpdmux_cfg {
 	} adv;
 };
 
-int dpdmux_create(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  const struct dpdmux_cfg *cfg,
-		  uint32_t *obj_id);
-
-int dpdmux_destroy(struct fsl_mc_io *mc_io,
-		   uint16_t dprc_token,
-		   uint32_t cmd_flags,
-		   uint32_t object_id);
-
-int dpdmux_enable(struct fsl_mc_io *mc_io,
-		  uint32_t cmd_flags,
-		  uint16_t token);
-
-int dpdmux_disable(struct fsl_mc_io *mc_io,
-		   uint32_t cmd_flags,
-		   uint16_t token);
-
-int dpdmux_is_enabled(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      int *en);
-
-int dpdmux_reset(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token);
-
 /**
  * struct dpdmux_attr - Structure representing DPDMUX attributes
  * @id: DPDMUX object ID
@@ -153,11 +121,6 @@ int dpdmux_get_attributes(struct fsl_mc_io *mc_io,
 			  uint16_t token,
 			  struct dpdmux_attr *attr);
 
-int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
-				uint32_t cmd_flags,
-				uint16_t token,
-				uint16_t max_frame_length);
-
 /**
  * enum dpdmux_counter_type - Counter types
  * @DPDMUX_CNT_ING_FRAME: Counts ingress frames
@@ -223,12 +186,6 @@ struct dpdmux_accepted_frames {
 	enum dpdmux_action unaccept_act;
 };
 
-int dpdmux_if_set_accepted_frames(struct fsl_mc_io *mc_io,
-				  uint32_t cmd_flags,
-				  uint16_t token,
-				  uint16_t if_id,
-				  const struct dpdmux_accepted_frames *cfg);
-
 /**
  * struct dpdmux_if_attr - Structure representing frame types configuration
  * @rate: Configured interface rate (in bits per second)
@@ -242,22 +199,6 @@ struct dpdmux_if_attr {
 	enum dpdmux_accepted_frames_type accept_frame_type;
 };
 
-int dpdmux_if_get_attributes(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token,
-			     uint16_t if_id,
-			     struct dpdmux_if_attr *attr);
-
-int dpdmux_if_enable(struct fsl_mc_io *mc_io,
-		     uint32_t cmd_flags,
-		     uint16_t token,
-		     uint16_t if_id);
-
-int dpdmux_if_disable(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      uint16_t if_id);
-
 /**
  * struct dpdmux_l2_rule - Structure representing L2 rule
  * @mac_addr: MAC address
@@ -268,29 +209,6 @@ struct dpdmux_l2_rule {
 	uint16_t vlan_id;
 };
 
-int dpdmux_if_remove_l2_rule(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token,
-			     uint16_t if_id,
-			     const struct dpdmux_l2_rule *rule);
-
-int dpdmux_if_add_l2_rule(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t token,
-			  uint16_t if_id,
-			  const struct dpdmux_l2_rule *rule);
-
-int dpdmux_if_get_counter(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t token,
-			  uint16_t if_id,
-			  enum dpdmux_counter_type counter_type,
-			  uint64_t *counter);
-
-int dpdmux_ul_reset_counters(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token);
-
 /**
  * Enable auto-negotiation
  */
@@ -319,11 +237,6 @@ struct dpdmux_link_cfg {
 	uint64_t advertising;
 };
 
-int dpdmux_if_set_link_cfg(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t token,
-			   uint16_t if_id,
-			   struct dpdmux_link_cfg *cfg);
 /**
  * struct dpdmux_link_state - Structure representing DPDMUX link state
  * @rate: Rate
@@ -342,22 +255,11 @@ struct dpdmux_link_state {
 	uint64_t advertising;
 };
 
-int dpdmux_if_get_link_state(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token,
-			     uint16_t if_id,
-			     struct dpdmux_link_state *state);
-
 int dpdmux_if_set_default(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token,
 		uint16_t if_id);
 
-int dpdmux_if_get_default(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token,
-		uint16_t *if_id);
-
 int dpdmux_set_custom_key(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -397,14 +299,4 @@ int dpdmux_add_custom_cls_entry(struct fsl_mc_io *mc_io,
 		struct dpdmux_rule_cfg *rule,
 		struct dpdmux_cls_action *action);
 
-int dpdmux_remove_custom_cls_entry(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token,
-		struct dpdmux_rule_cfg *rule);
-
-int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t *major_ver,
-			   uint16_t *minor_ver);
-
 #endif /* __FSL_DPDMUX_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index 598911ddd1..2e2012d0bf 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -185,17 +185,6 @@ struct dpni_cfg {
 	uint8_t  num_cgs;
 };
 
-int dpni_create(struct fsl_mc_io *mc_io,
-		uint16_t dprc_token,
-		uint32_t cmd_flags,
-		const struct dpni_cfg *cfg,
-		uint32_t *obj_id);
-
-int dpni_destroy(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 uint32_t object_id);
-
 /**
  * struct dpni_pools_cfg - Structure representing buffer pools configuration
  * @num_dpbp:	Number of DPBPs
@@ -265,24 +254,12 @@ int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
 			uint8_t irq_index,
 			uint8_t en);
 
-int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
-			uint32_t cmd_flags,
-			uint16_t token,
-			uint8_t irq_index,
-			uint8_t *en);
-
 int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
 		      uint32_t cmd_flags,
 		      uint16_t token,
 		      uint8_t irq_index,
 		      uint32_t mask);
 
-int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      uint8_t irq_index,
-		      uint32_t *mask);
-
 int dpni_get_irq_status(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -495,12 +472,6 @@ enum dpni_queue_type {
 	DPNI_QUEUE_RX_ERR,
 };
 
-int dpni_get_buffer_layout(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t token,
-			   enum dpni_queue_type qtype,
-			   struct dpni_buffer_layout *layout);
-
 int dpni_set_buffer_layout(struct fsl_mc_io *mc_io,
 			   uint32_t cmd_flags,
 			   uint16_t token,
@@ -530,23 +501,12 @@ int dpni_set_offload(struct fsl_mc_io *mc_io,
 		     enum dpni_offload type,
 		     uint32_t config);
 
-int dpni_get_offload(struct fsl_mc_io *mc_io,
-		     uint32_t cmd_flags,
-		     uint16_t token,
-		     enum dpni_offload type,
-		     uint32_t *config);
-
 int dpni_get_qdid(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token,
 		  enum dpni_queue_type qtype,
 		  uint16_t *qdid);
 
-int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
-			    uint32_t cmd_flags,
-			    uint16_t token,
-			    uint16_t *data_offset);
-
 #define DPNI_STATISTICS_CNT		7
 
 /**
@@ -736,11 +696,6 @@ int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
 			      uint16_t token,
 			      uint16_t max_frame_length);
 
-int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
-			      uint32_t cmd_flags,
-			      uint16_t token,
-			      uint16_t *max_frame_length);
-
 int dpni_set_mtu(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
@@ -756,21 +711,11 @@ int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
 			       uint16_t token,
 			       int en);
 
-int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
-			       uint32_t cmd_flags,
-			       uint16_t token,
-			       int *en);
-
 int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
 			     uint32_t cmd_flags,
 			     uint16_t token,
 			     int en);
 
-int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
-			     uint32_t cmd_flags,
-			     uint16_t token,
-			     int *en);
-
 int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
 			      uint32_t cmd_flags,
 			      uint16_t token,
@@ -794,12 +739,6 @@ int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
 			 uint16_t token,
 			 const uint8_t mac_addr[6]);
 
-int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t token,
-			   int unicast,
-			   int multicast);
-
 int dpni_get_port_mac_addr(struct fsl_mc_io *mc_io,
 			   uint32_t cmd_flags,
 			   uint16_t token,
@@ -828,10 +767,6 @@ int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
 			uint16_t token,
 			uint16_t vlan_id);
 
-int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
-			    uint32_t cmd_flags,
-			    uint16_t token);
-
 /**
  * enum dpni_dist_mode - DPNI distribution mode
  * @DPNI_DIST_MODE_NONE: No distribution
@@ -1042,13 +977,6 @@ int dpni_set_congestion_notification(struct fsl_mc_io *mc_io,
 				     uint8_t tc_id,
 			const struct dpni_congestion_notification_cfg *cfg);
 
-int dpni_get_congestion_notification(struct fsl_mc_io *mc_io,
-				     uint32_t cmd_flags,
-				     uint16_t token,
-				     enum dpni_queue_type qtype,
-				     uint8_t tc_id,
-				struct dpni_congestion_notification_cfg *cfg);
-
 /* DPNI FLC stash options */
 
 /**
@@ -1212,10 +1140,6 @@ int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
 			  uint16_t token,
 			  const struct dpni_rule_cfg *cfg);
 
-int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t token);
-
 /**
  * Discard matching traffic.  If set, this takes precedence over any other
  * configuration and matching traffic is always discarded.
@@ -1273,16 +1197,6 @@ int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
 			 uint8_t tc_id,
 			 const struct dpni_rule_cfg *cfg);
 
-int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t token,
-			  uint8_t tc_id);
-
-int dpni_get_api_version(struct fsl_mc_io *mc_io,
-			 uint32_t cmd_flags,
-			 uint16_t *major_ver,
-			 uint16_t *minor_ver);
-
 /**
  * Set User Context
  */
@@ -1372,15 +1286,6 @@ int dpni_set_taildrop(struct fsl_mc_io *mc_io,
 		      uint8_t q_index,
 		      struct dpni_taildrop *taildrop);
 
-int dpni_get_taildrop(struct fsl_mc_io *mc_io,
-		      uint32_t cmd_flags,
-		      uint16_t token,
-		      enum dpni_congestion_point cg_point,
-		      enum dpni_queue_type q_type,
-		      uint8_t tc,
-		      uint8_t q_index,
-		      struct dpni_taildrop *taildrop);
-
 int dpni_set_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
@@ -1389,14 +1294,6 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
 		 uint8_t options,
 		 struct opr_cfg *cfg);
 
-int dpni_get_opr(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token,
-		 uint8_t tc,
-		 uint8_t index,
-		 struct opr_cfg *cfg,
-		 struct opr_qry *qry);
-
 /**
  * When used for queue_idx in function dpni_set_rx_dist_default_queue will
  * signal to dpni to drop all unclassified frames
@@ -1550,35 +1447,4 @@ struct dpni_sw_sequence_layout {
 	} ss[DPNI_SW_SEQUENCE_LAYOUT_SIZE];
 };
 
-/**
- * dpni_get_sw_sequence_layout() - Get the soft sequence layout
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @src:	Source of the layout (WRIOP Rx or Tx)
- * @ss_layout_iova:  I/O virtual address of 264 bytes DMA-able memory
- *
- * warning: After calling this function, call dpni_extract_sw_sequence_layout()
- *		to get the layout
- *
- * Return:	'0' on Success; error code otherwise.
- */
-int dpni_get_sw_sequence_layout(struct fsl_mc_io *mc_io,
-				uint32_t cmd_flags,
-				uint16_t token,
-				enum dpni_soft_sequence_dest src,
-				uint64_t ss_layout_iova);
-
-/**
- * dpni_extract_sw_sequence_layout() - extract the software sequence layout
- * @layout:		software sequence layout
- * @sw_sequence_layout_buf:	Zeroed 264 bytes of memory before mapping it
- *				to DMA
- *
- * This function has to be called after dpni_get_sw_sequence_layout
- *
- */
-void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
-				     const uint8_t *sw_sequence_layout_buf);
-
 #endif /* __FSL_DPNI_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc.h b/drivers/net/dpaa2/mc/fsl_dprtc.h
index 49edb5a050..d8be107ef1 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc.h
@@ -16,10 +16,6 @@ int dprtc_open(struct fsl_mc_io *mc_io,
 	       int dprtc_id,
 	       uint16_t *token);
 
-int dprtc_close(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token);
-
 /**
  * struct dprtc_cfg - Structure representing DPRTC configuration
  * @options:	place holder
@@ -28,49 +24,6 @@ struct dprtc_cfg {
 	uint32_t options;
 };
 
-int dprtc_create(struct fsl_mc_io *mc_io,
-		 uint16_t dprc_token,
-		 uint32_t cmd_flags,
-		 const struct dprtc_cfg *cfg,
-		 uint32_t *obj_id);
-
-int dprtc_destroy(struct fsl_mc_io *mc_io,
-		  uint16_t dprc_token,
-		  uint32_t cmd_flags,
-		  uint32_t object_id);
-
-int dprtc_enable(struct fsl_mc_io *mc_io,
-		 uint32_t cmd_flags,
-		 uint16_t token);
-
-int dprtc_disable(struct fsl_mc_io *mc_io,
-		  uint32_t cmd_flags,
-		  uint16_t token);
-
-int dprtc_is_enabled(struct fsl_mc_io *mc_io,
-		     uint32_t cmd_flags,
-		     uint16_t token,
-		     int *en);
-
-int dprtc_reset(struct fsl_mc_io *mc_io,
-		uint32_t cmd_flags,
-		uint16_t token);
-
-int dprtc_set_clock_offset(struct fsl_mc_io *mc_io,
-			   uint32_t cmd_flags,
-			   uint16_t token,
-			   int64_t offset);
-
-int dprtc_set_freq_compensation(struct fsl_mc_io *mc_io,
-		  uint32_t cmd_flags,
-		  uint16_t token,
-		  uint32_t freq_compensation);
-
-int dprtc_get_freq_compensation(struct fsl_mc_io *mc_io,
-		  uint32_t cmd_flags,
-		  uint16_t token,
-		  uint32_t *freq_compensation);
-
 int dprtc_get_time(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint16_t token,
@@ -81,11 +34,6 @@ int dprtc_set_time(struct fsl_mc_io *mc_io,
 		   uint16_t token,
 		   uint64_t time);
 
-int dprtc_set_alarm(struct fsl_mc_io *mc_io,
-		    uint32_t cmd_flags,
-		    uint16_t token,
-		    uint64_t time);
-
 /**
  * struct dprtc_attr - Structure representing DPRTC attributes
  * @id:		DPRTC object ID
@@ -101,9 +49,4 @@ int dprtc_get_attributes(struct fsl_mc_io *mc_io,
 			 uint16_t token,
 			 struct dprtc_attr *attr);
 
-int dprtc_get_api_version(struct fsl_mc_io *mc_io,
-			  uint32_t cmd_flags,
-			  uint16_t *major_ver,
-			  uint16_t *minor_ver);
-
 #endif /* __FSL_DPRTC_H */
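
The header changes above mirror the .c removals: with both the definition and the prototype gone, any out-of-tree code still calling one of these symbols no longer builds against the tree. A hypothetical caller, for illustration only (not code from the tree):

#include "fsl_dprtc.h"

/* Hypothetical out-of-tree consumer: after this patch, dprtc_reset() is
 * neither declared in fsl_dprtc.h nor compiled into the driver, so this
 * call draws an implicit-declaration diagnostic at compile time and an
 * undefined-reference error at link time.
 */
int ptp_reset_clock(struct fsl_mc_io *mc_io, uint16_t token)
{
	return dprtc_reset(mc_io, 0, token);
}
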
diff --git a/drivers/net/e1000/base/e1000_82542.c b/drivers/net/e1000/base/e1000_82542.c
index fd473c1c6f..e14e9e9e58 100644
--- a/drivers/net/e1000/base/e1000_82542.c
+++ b/drivers/net/e1000/base/e1000_82542.c
@@ -406,103 +406,6 @@ STATIC int e1000_rar_set_82542(struct e1000_hw *hw, u8 *addr, u32 index)
 	return E1000_SUCCESS;
 }
 
-/**
- *  e1000_translate_register_82542 - Translate the proper register offset
- *  @reg: e1000 register to be read
- *
- *  Registers in 82542 are located in different offsets than other adapters
- *  even though they function in the same manner.  This function takes in
- *  the name of the register to read and returns the correct offset for
- *  82542 silicon.
- **/
-u32 e1000_translate_register_82542(u32 reg)
-{
-	/*
-	 * Some of the 82542 registers are located at different
-	 * offsets than they are in newer adapters.
-	 * Despite the difference in location, the registers
-	 * function in the same manner.
-	 */
-	switch (reg) {
-	case E1000_RA:
-		reg = 0x00040;
-		break;
-	case E1000_RDTR:
-		reg = 0x00108;
-		break;
-	case E1000_RDBAL(0):
-		reg = 0x00110;
-		break;
-	case E1000_RDBAH(0):
-		reg = 0x00114;
-		break;
-	case E1000_RDLEN(0):
-		reg = 0x00118;
-		break;
-	case E1000_RDH(0):
-		reg = 0x00120;
-		break;
-	case E1000_RDT(0):
-		reg = 0x00128;
-		break;
-	case E1000_RDBAL(1):
-		reg = 0x00138;
-		break;
-	case E1000_RDBAH(1):
-		reg = 0x0013C;
-		break;
-	case E1000_RDLEN(1):
-		reg = 0x00140;
-		break;
-	case E1000_RDH(1):
-		reg = 0x00148;
-		break;
-	case E1000_RDT(1):
-		reg = 0x00150;
-		break;
-	case E1000_FCRTH:
-		reg = 0x00160;
-		break;
-	case E1000_FCRTL:
-		reg = 0x00168;
-		break;
-	case E1000_MTA:
-		reg = 0x00200;
-		break;
-	case E1000_TDBAL(0):
-		reg = 0x00420;
-		break;
-	case E1000_TDBAH(0):
-		reg = 0x00424;
-		break;
-	case E1000_TDLEN(0):
-		reg = 0x00428;
-		break;
-	case E1000_TDH(0):
-		reg = 0x00430;
-		break;
-	case E1000_TDT(0):
-		reg = 0x00438;
-		break;
-	case E1000_TIDV:
-		reg = 0x00440;
-		break;
-	case E1000_VFTA:
-		reg = 0x00600;
-		break;
-	case E1000_TDFH:
-		reg = 0x08010;
-		break;
-	case E1000_TDFT:
-		reg = 0x08018;
-		break;
-	default:
-		break;
-	}
-
-	return reg;
-}
-
 /**
  *  e1000_clear_hw_cntrs_82542 - Clear device specific hardware counters
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_82543.c b/drivers/net/e1000/base/e1000_82543.c
index ca273b4368..992dffe1ff 100644
--- a/drivers/net/e1000/base/e1000_82543.c
+++ b/drivers/net/e1000/base/e1000_82543.c
@@ -364,84 +364,6 @@ STATIC bool e1000_init_phy_disabled_82543(struct e1000_hw *hw)
 	return ret_val;
 }
 
-/**
- *  e1000_tbi_adjust_stats_82543 - Adjust stats when TBI enabled
- *  @hw: pointer to the HW structure
- *  @stats: Struct containing statistic register values
- *  @frame_len: The length of the frame in question
- *  @mac_addr: The Ethernet destination address of the frame in question
- *  @max_frame_size: The maximum frame size
- *
- *  Adjusts the statistic counters when a frame is accepted by TBI_ACCEPT
- **/
-void e1000_tbi_adjust_stats_82543(struct e1000_hw *hw,
-				  struct e1000_hw_stats *stats, u32 frame_len,
-				  u8 *mac_addr, u32 max_frame_size)
-{
-	if (!(e1000_tbi_sbp_enabled_82543(hw)))
-		goto out;
-
-	/* First adjust the frame length. */
-	frame_len--;
-	/*
-	 * We need to adjust the statistics counters, since the hardware
-	 * counters overcount this packet as a CRC error and undercount
-	 * the packet as a good packet
-	 */
-	/* This packet should not be counted as a CRC error. */
-	stats->crcerrs--;
-	/* This packet does count as a Good Packet Received. */
-	stats->gprc++;
-
-	/* Adjust the Good Octets received counters */
-	stats->gorc += frame_len;
-
-	/*
-	 * Is this a broadcast or multicast?  Check broadcast first,
-	 * since the test for a multicast frame will test positive on
-	 * a broadcast frame.
-	 */
-	if ((mac_addr[0] == 0xff) && (mac_addr[1] == 0xff))
-		/* Broadcast packet */
-		stats->bprc++;
-	else if (*mac_addr & 0x01)
-		/* Multicast packet */
-		stats->mprc++;
-
-	/*
-	 * In this case, the hardware has over counted the number of
-	 * oversize frames.
-	 */
-	if ((frame_len == max_frame_size) && (stats->roc > 0))
-		stats->roc--;
-
-	/*
-	 * Adjust the bin counters when the extra byte put the frame in the
-	 * wrong bin. Remember that the frame_len was adjusted above.
-	 */
-	if (frame_len == 64) {
-		stats->prc64++;
-		stats->prc127--;
-	} else if (frame_len == 127) {
-		stats->prc127++;
-		stats->prc255--;
-	} else if (frame_len == 255) {
-		stats->prc255++;
-		stats->prc511--;
-	} else if (frame_len == 511) {
-		stats->prc511++;
-		stats->prc1023--;
-	} else if (frame_len == 1023) {
-		stats->prc1023++;
-		stats->prc1522--;
-	} else if (frame_len == 1522) {
-		stats->prc1522++;
-	}
-
-out:
-	return;
-}
-
 /**
  *  e1000_read_phy_reg_82543 - Read PHY register
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_82543.h b/drivers/net/e1000/base/e1000_82543.h
index cf81e4e848..8af412bc77 100644
--- a/drivers/net/e1000/base/e1000_82543.h
+++ b/drivers/net/e1000/base/e1000_82543.h
@@ -16,10 +16,6 @@
 /* If TBI_COMPAT_ENABLED, then this is the current state (on/off) */
 #define TBI_SBP_ENABLED		0x2
 
-void e1000_tbi_adjust_stats_82543(struct e1000_hw *hw,
-				  struct e1000_hw_stats *stats,
-				  u32 frame_len, u8 *mac_addr,
-				  u32 max_frame_size);
 void e1000_set_tbi_compatibility_82543(struct e1000_hw *hw,
 				       bool state);
 bool e1000_tbi_sbp_enabled_82543(struct e1000_hw *hw);
diff --git a/drivers/net/e1000/base/e1000_82571.c b/drivers/net/e1000/base/e1000_82571.c
index 9dc7f6025c..9da1fbf856 100644
--- a/drivers/net/e1000/base/e1000_82571.c
+++ b/drivers/net/e1000/base/e1000_82571.c
@@ -1467,41 +1467,6 @@ STATIC s32 e1000_led_on_82574(struct e1000_hw *hw)
 	return E1000_SUCCESS;
 }
 
-/**
- *  e1000_check_phy_82574 - check 82574 phy hung state
- *  @hw: pointer to the HW structure
- *
- *  Returns whether phy is hung or not
- **/
-bool e1000_check_phy_82574(struct e1000_hw *hw)
-{
-	u16 status_1kbt = 0;
-	u16 receive_errors = 0;
-	s32 ret_val;
-
-	DEBUGFUNC("e1000_check_phy_82574");
-
-	/* Read PHY Receive Error counter first, if its is max - all F's then
-	 * read the Base1000T status register If both are max then PHY is hung.
-	 */
-	ret_val = hw->phy.ops.read_reg(hw, E1000_RECEIVE_ERROR_COUNTER,
-				       &receive_errors);
-	if (ret_val)
-		return false;
-	if (receive_errors == E1000_RECEIVE_ERROR_MAX) {
-		ret_val = hw->phy.ops.read_reg(hw, E1000_BASE1000T_STATUS,
-					       &status_1kbt);
-		if (ret_val)
-			return false;
-		if ((status_1kbt & E1000_IDLE_ERROR_COUNT_MASK) ==
-		    E1000_IDLE_ERROR_COUNT_MASK)
-			return true;
-	}
-
-	return false;
-}
-
-
 /**
  *  e1000_setup_link_82571 - Setup flow control and link settings
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_82571.h b/drivers/net/e1000/base/e1000_82571.h
index 0d8412678d..3c1840d0e8 100644
--- a/drivers/net/e1000/base/e1000_82571.h
+++ b/drivers/net/e1000/base/e1000_82571.h
@@ -29,7 +29,6 @@
 #define E1000_IDLE_ERROR_COUNT_MASK	0xFF
 #define E1000_RECEIVE_ERROR_COUNTER	21
 #define E1000_RECEIVE_ERROR_MAX		0xFFFF
-bool e1000_check_phy_82574(struct e1000_hw *hw);
 bool e1000_get_laa_state_82571(struct e1000_hw *hw);
 void e1000_set_laa_state_82571(struct e1000_hw *hw, bool state);
 
diff --git a/drivers/net/e1000/base/e1000_82575.c b/drivers/net/e1000/base/e1000_82575.c
index 7c78649393..074bd34f11 100644
--- a/drivers/net/e1000/base/e1000_82575.c
+++ b/drivers/net/e1000/base/e1000_82575.c
@@ -2119,62 +2119,6 @@ void e1000_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf)
 	E1000_WRITE_REG(hw, reg_offset, reg_val);
 }
 
-/**
- *  e1000_vmdq_set_loopback_pf - enable or disable vmdq loopback
- *  @hw: pointer to the hardware struct
- *  @enable: state to enter, either enabled or disabled
- *
- *  enables/disables L2 switch loopback functionality.
- **/
-void e1000_vmdq_set_loopback_pf(struct e1000_hw *hw, bool enable)
-{
-	u32 dtxswc;
-
-	switch (hw->mac.type) {
-	case e1000_82576:
-		dtxswc = E1000_READ_REG(hw, E1000_DTXSWC);
-		if (enable)
-			dtxswc |= E1000_DTXSWC_VMDQ_LOOPBACK_EN;
-		else
-			dtxswc &= ~E1000_DTXSWC_VMDQ_LOOPBACK_EN;
-		E1000_WRITE_REG(hw, E1000_DTXSWC, dtxswc);
-		break;
-	case e1000_i350:
-	case e1000_i354:
-		dtxswc = E1000_READ_REG(hw, E1000_TXSWC);
-		if (enable)
-			dtxswc |= E1000_DTXSWC_VMDQ_LOOPBACK_EN;
-		else
-			dtxswc &= ~E1000_DTXSWC_VMDQ_LOOPBACK_EN;
-		E1000_WRITE_REG(hw, E1000_TXSWC, dtxswc);
-		break;
-	default:
-		/* Currently no other hardware supports loopback */
-		break;
-	}
-
-
-}
-
-/**
- *  e1000_vmdq_set_replication_pf - enable or disable vmdq replication
- *  @hw: pointer to the hardware struct
- *  @enable: state to enter, either enabled or disabled
- *
- *  enables/disables replication of packets across multiple pools.
- **/
-void e1000_vmdq_set_replication_pf(struct e1000_hw *hw, bool enable)
-{
-	u32 vt_ctl = E1000_READ_REG(hw, E1000_VT_CTL);
-
-	if (enable)
-		vt_ctl |= E1000_VT_CTL_VM_REPL_EN;
-	else
-		vt_ctl &= ~E1000_VT_CTL_VM_REPL_EN;
-
-	E1000_WRITE_REG(hw, E1000_VT_CTL, vt_ctl);
-}
-
 /**
  *  e1000_read_phy_reg_82580 - Read 82580 MDI control register
  *  @hw: pointer to the HW structure
@@ -2596,45 +2540,6 @@ STATIC s32 e1000_update_nvm_checksum_i350(struct e1000_hw *hw)
 	return ret_val;
 }
 
-/**
- *  __e1000_access_emi_reg - Read/write EMI register
- *  @hw: pointer to the HW structure
- *  @address: EMI address to program
- *  @data: pointer to value to read/write from/to the EMI address
- *  @read: boolean flag to indicate read or write
- **/
-STATIC s32 __e1000_access_emi_reg(struct e1000_hw *hw, u16 address,
-				  u16 *data, bool read)
-{
-	s32 ret_val;
-
-	DEBUGFUNC("__e1000_access_emi_reg");
-
-	ret_val = hw->phy.ops.write_reg(hw, E1000_EMIADD, address);
-	if (ret_val)
-		return ret_val;
-
-	if (read)
-		ret_val = hw->phy.ops.read_reg(hw, E1000_EMIDATA, data);
-	else
-		ret_val = hw->phy.ops.write_reg(hw, E1000_EMIDATA, *data);
-
-	return ret_val;
-}
-
-/**
- *  e1000_read_emi_reg - Read Extended Management Interface register
- *  @hw: pointer to the HW structure
- *  @addr: EMI address to program
- *  @data: value to be read from the EMI address
- **/
-s32 e1000_read_emi_reg(struct e1000_hw *hw, u16 addr, u16 *data)
-{
-	DEBUGFUNC("e1000_read_emi_reg");
-
-	return __e1000_access_emi_reg(hw, addr, data, true);
-}
-
 /**
  *  e1000_initialize_M88E1512_phy - Initialize M88E1512 PHY
  *  @hw: pointer to the HW structure
@@ -2823,179 +2728,6 @@ s32 e1000_initialize_M88E1543_phy(struct e1000_hw *hw)
 	return ret_val;
 }
 
-/**
- *  e1000_set_eee_i350 - Enable/disable EEE support
- *  @hw: pointer to the HW structure
- *  @adv1G: boolean flag enabling 1G EEE advertisement
- *  @adv100M: boolean flag enabling 100M EEE advertisement
- *
- *  Enable/disable EEE based on setting in dev_spec structure.
- *
- **/
-s32 e1000_set_eee_i350(struct e1000_hw *hw, bool adv1G, bool adv100M)
-{
-	u32 ipcnfg, eeer;
-
-	DEBUGFUNC("e1000_set_eee_i350");
-
-	if ((hw->mac.type < e1000_i350) ||
-	    (hw->phy.media_type != e1000_media_type_copper))
-		goto out;
-	ipcnfg = E1000_READ_REG(hw, E1000_IPCNFG);
-	eeer = E1000_READ_REG(hw, E1000_EEER);
-
-	/* enable or disable per user setting */
-	if (!(hw->dev_spec._82575.eee_disable)) {
-		u32 eee_su = E1000_READ_REG(hw, E1000_EEE_SU);
-
-		if (adv100M)
-			ipcnfg |= E1000_IPCNFG_EEE_100M_AN;
-		else
-			ipcnfg &= ~E1000_IPCNFG_EEE_100M_AN;
-
-		if (adv1G)
-			ipcnfg |= E1000_IPCNFG_EEE_1G_AN;
-		else
-			ipcnfg &= ~E1000_IPCNFG_EEE_1G_AN;
-
-		eeer |= (E1000_EEER_TX_LPI_EN | E1000_EEER_RX_LPI_EN |
-			 E1000_EEER_LPI_FC);
-
-		/* This bit should not be set in normal operation. */
-		if (eee_su & E1000_EEE_SU_LPI_CLK_STP)
-			DEBUGOUT("LPI Clock Stop Bit should not be set!\n");
-	} else {
-		ipcnfg &= ~(E1000_IPCNFG_EEE_1G_AN | E1000_IPCNFG_EEE_100M_AN);
-		eeer &= ~(E1000_EEER_TX_LPI_EN | E1000_EEER_RX_LPI_EN |
-			  E1000_EEER_LPI_FC);
-	}
-	E1000_WRITE_REG(hw, E1000_IPCNFG, ipcnfg);
-	E1000_WRITE_REG(hw, E1000_EEER, eeer);
-	E1000_READ_REG(hw, E1000_IPCNFG);
-	E1000_READ_REG(hw, E1000_EEER);
-out:
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_set_eee_i354 - Enable/disable EEE support
- *  @hw: pointer to the HW structure
- *  @adv1G: boolean flag enabling 1G EEE advertisement
- *  @adv100M: boolean flag enabling 100M EEE advertisement
- *
- *  Enable/disable EEE legacy mode based on setting in dev_spec structure.
- *
- **/
-s32 e1000_set_eee_i354(struct e1000_hw *hw, bool adv1G, bool adv100M)
-{
-	struct e1000_phy_info *phy = &hw->phy;
-	s32 ret_val = E1000_SUCCESS;
-	u16 phy_data;
-
-	DEBUGFUNC("e1000_set_eee_i354");
-
-	if ((hw->phy.media_type != e1000_media_type_copper) ||
-	    ((phy->id != M88E1543_E_PHY_ID) &&
-	    (phy->id != M88E1512_E_PHY_ID)))
-		goto out;
-
-	if (!hw->dev_spec._82575.eee_disable) {
-		/* Switch to PHY page 18. */
-		ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 18);
-		if (ret_val)
-			goto out;
-
-		ret_val = phy->ops.read_reg(hw, E1000_M88E1543_EEE_CTRL_1,
-					    &phy_data);
-		if (ret_val)
-			goto out;
-
-		phy_data |= E1000_M88E1543_EEE_CTRL_1_MS;
-		ret_val = phy->ops.write_reg(hw, E1000_M88E1543_EEE_CTRL_1,
-					     phy_data);
-		if (ret_val)
-			goto out;
-
-		/* Return the PHY to page 0. */
-		ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0);
-		if (ret_val)
-			goto out;
-
-		/* Turn on EEE advertisement. */
-		ret_val = e1000_read_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
-					       E1000_EEE_ADV_DEV_I354,
-					       &phy_data);
-		if (ret_val)
-			goto out;
-
-		if (adv100M)
-			phy_data |= E1000_EEE_ADV_100_SUPPORTED;
-		else
-			phy_data &= ~E1000_EEE_ADV_100_SUPPORTED;
-
-		if (adv1G)
-			phy_data |= E1000_EEE_ADV_1000_SUPPORTED;
-		else
-			phy_data &= ~E1000_EEE_ADV_1000_SUPPORTED;
-
-		ret_val = e1000_write_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
-						E1000_EEE_ADV_DEV_I354,
-						phy_data);
-	} else {
-		/* Turn off EEE advertisement. */
-		ret_val = e1000_read_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
-					       E1000_EEE_ADV_DEV_I354,
-					       &phy_data);
-		if (ret_val)
-			goto out;
-
-		phy_data &= ~(E1000_EEE_ADV_100_SUPPORTED |
-			      E1000_EEE_ADV_1000_SUPPORTED);
-		ret_val = e1000_write_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
-						E1000_EEE_ADV_DEV_I354,
-						phy_data);
-	}
-
-out:
-	return ret_val;
-}
-
-/**
- *  e1000_get_eee_status_i354 - Get EEE status
- *  @hw: pointer to the HW structure
- *  @status: EEE status
- *
- *  Get EEE status by guessing based on whether Tx or Rx LPI indications have
- *  been received.
- **/
-s32 e1000_get_eee_status_i354(struct e1000_hw *hw, bool *status)
-{
-	struct e1000_phy_info *phy = &hw->phy;
-	s32 ret_val = E1000_SUCCESS;
-	u16 phy_data;
-
-	DEBUGFUNC("e1000_get_eee_status_i354");
-
-	/* Check if EEE is supported on this device. */
-	if ((hw->phy.media_type != e1000_media_type_copper) ||
-	    ((phy->id != M88E1543_E_PHY_ID) &&
-	    (phy->id != M88E1512_E_PHY_ID)))
-		goto out;
-
-	ret_val = e1000_read_xmdio_reg(hw, E1000_PCS_STATUS_ADDR_I354,
-				       E1000_PCS_STATUS_DEV_I354,
-				       &phy_data);
-	if (ret_val)
-		goto out;
-
-	*status = phy_data & (E1000_PCS_STATUS_TX_LPI_RCVD |
-			      E1000_PCS_STATUS_RX_LPI_RCVD) ? true : false;
-
-out:
-	return ret_val;
-}
-
 /* Due to a hw errata, if the host tries to  configure the VFTA register
  * while performing queries from the BMC or DMA, then the VFTA in some
  * cases won't be written.
@@ -3044,36 +2776,6 @@ void e1000_write_vfta_i350(struct e1000_hw *hw, u32 offset, u32 value)
 	E1000_WRITE_FLUSH(hw);
 }
 
-
-/**
- *  e1000_set_i2c_bb - Enable I2C bit-bang
- *  @hw: pointer to the HW structure
- *
- *  Enable I2C bit-bang interface
- *
- **/
-s32 e1000_set_i2c_bb(struct e1000_hw *hw)
-{
-	s32 ret_val = E1000_SUCCESS;
-	u32 ctrl_ext, i2cparams;
-
-	DEBUGFUNC("e1000_set_i2c_bb");
-
-	ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
-	ctrl_ext |= E1000_CTRL_I2C_ENA;
-	E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
-	E1000_WRITE_FLUSH(hw);
-
-	i2cparams = E1000_READ_REG(hw, E1000_I2CPARAMS);
-	i2cparams |= E1000_I2CBB_EN;
-	i2cparams |= E1000_I2C_DATA_OE_N;
-	i2cparams |= E1000_I2C_CLK_OE_N;
-	E1000_WRITE_REG(hw, E1000_I2CPARAMS, i2cparams);
-	E1000_WRITE_FLUSH(hw);
-
-	return ret_val;
-}
-
 /**
  *  e1000_read_i2c_byte_generic - Reads 8 bit word over I2C
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/e1000/base/e1000_82575.h b/drivers/net/e1000/base/e1000_82575.h
index 006b37ae98..03284ca946 100644
--- a/drivers/net/e1000/base/e1000_82575.h
+++ b/drivers/net/e1000/base/e1000_82575.h
@@ -361,9 +361,7 @@ s32 e1000_init_nvm_params_82575(struct e1000_hw *hw);
 
 /* Rx packet buffer size defines */
 #define E1000_RXPBS_SIZE_MASK_82576	0x0000007F
-void e1000_vmdq_set_loopback_pf(struct e1000_hw *hw, bool enable);
 void e1000_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf);
-void e1000_vmdq_set_replication_pf(struct e1000_hw *hw, bool enable);
 
 enum e1000_promisc_type {
 	e1000_promisc_disabled = 0,   /* all promisc modes disabled */
@@ -373,15 +371,10 @@ enum e1000_promisc_type {
 	e1000_num_promisc_types
 };
 
-void e1000_vfta_set_vf(struct e1000_hw *, u16, bool);
 void e1000_rlpml_set_vf(struct e1000_hw *, u16);
 s32 e1000_promisc_set_vf(struct e1000_hw *, enum e1000_promisc_type type);
 void e1000_write_vfta_i350(struct e1000_hw *hw, u32 offset, u32 value);
 u16 e1000_rxpbs_adjust_82580(u32 data);
-s32 e1000_read_emi_reg(struct e1000_hw *hw, u16 addr, u16 *data);
-s32 e1000_set_eee_i350(struct e1000_hw *hw, bool adv1G, bool adv100M);
-s32 e1000_set_eee_i354(struct e1000_hw *hw, bool adv1G, bool adv100M);
-s32 e1000_get_eee_status_i354(struct e1000_hw *, bool *);
 s32 e1000_initialize_M88E1512_phy(struct e1000_hw *hw);
 s32 e1000_initialize_M88E1543_phy(struct e1000_hw *hw);
 
@@ -397,7 +390,6 @@ s32 e1000_initialize_M88E1543_phy(struct e1000_hw *hw);
 #define E1000_I2C_T_SU_STO	4
 #define E1000_I2C_T_BUF		5
 
-s32 e1000_set_i2c_bb(struct e1000_hw *hw);
 s32 e1000_read_i2c_byte_generic(struct e1000_hw *hw, u8 byte_offset,
 				u8 dev_addr, u8 *data);
 s32 e1000_write_i2c_byte_generic(struct e1000_hw *hw, u8 byte_offset,
diff --git a/drivers/net/e1000/base/e1000_api.c b/drivers/net/e1000/base/e1000_api.c
index 6a2376f40f..c3a8892c47 100644
--- a/drivers/net/e1000/base/e1000_api.c
+++ b/drivers/net/e1000/base/e1000_api.c
@@ -530,21 +530,6 @@ void e1000_clear_vfta(struct e1000_hw *hw)
 		hw->mac.ops.clear_vfta(hw);
 }
 
-/**
- *  e1000_write_vfta - Write value to VLAN filter table
- *  @hw: pointer to the HW structure
- *  @offset: the 32-bit offset in which to write the value to.
- *  @value: the 32-bit value to write at location offset.
- *
- *  This writes a 32-bit value to a 32-bit offset in the VLAN filter
- *  table. This is a function pointer entry point called by drivers.
- **/
-void e1000_write_vfta(struct e1000_hw *hw, u32 offset, u32 value)
-{
-	if (hw->mac.ops.write_vfta)
-		hw->mac.ops.write_vfta(hw, offset, value);
-}
-
 /**
  *  e1000_update_mc_addr_list - Update Multicast addresses
  *  @hw: pointer to the HW structure
@@ -562,19 +547,6 @@ void e1000_update_mc_addr_list(struct e1000_hw *hw, u8 *mc_addr_list,
 						mc_addr_count);
 }
 
-/**
- *  e1000_force_mac_fc - Force MAC flow control
- *  @hw: pointer to the HW structure
- *
- *  Force the MAC's flow control settings. Currently no func pointer exists
- *  and all implementations are handled in the generic version of this
- *  function.
- **/
-s32 e1000_force_mac_fc(struct e1000_hw *hw)
-{
-	return e1000_force_mac_fc_generic(hw);
-}
-
 /**
  *  e1000_check_for_link - Check/Store link connection
  *  @hw: pointer to the HW structure
@@ -591,34 +563,6 @@ s32 e1000_check_for_link(struct e1000_hw *hw)
 	return -E1000_ERR_CONFIG;
 }
 
-/**
- *  e1000_check_mng_mode - Check management mode
- *  @hw: pointer to the HW structure
- *
- *  This checks if the adapter has manageability enabled.
- *  This is a function pointer entry point called by drivers.
- **/
-bool e1000_check_mng_mode(struct e1000_hw *hw)
-{
-	if (hw->mac.ops.check_mng_mode)
-		return hw->mac.ops.check_mng_mode(hw);
-
-	return false;
-}
-
-/**
- *  e1000_mng_write_dhcp_info - Writes DHCP info to host interface
- *  @hw: pointer to the HW structure
- *  @buffer: pointer to the host interface
- *  @length: size of the buffer
- *
- *  Writes the DHCP information to the host interface.
- **/
-s32 e1000_mng_write_dhcp_info(struct e1000_hw *hw, u8 *buffer, u16 length)
-{
-	return e1000_mng_write_dhcp_info_generic(hw, buffer, length);
-}
-
 /**
  *  e1000_reset_hw - Reset hardware
  *  @hw: pointer to the HW structure
@@ -665,86 +609,6 @@ s32 e1000_setup_link(struct e1000_hw *hw)
 	return -E1000_ERR_CONFIG;
 }
 
-/**
- *  e1000_get_speed_and_duplex - Returns current speed and duplex
- *  @hw: pointer to the HW structure
- *  @speed: pointer to a 16-bit value to store the speed
- *  @duplex: pointer to a 16-bit value to store the duplex.
- *
- *  This returns the speed and duplex of the adapter in the two 'out'
- *  variables passed in. This is a function pointer entry point called
- *  by drivers.
- **/
-s32 e1000_get_speed_and_duplex(struct e1000_hw *hw, u16 *speed, u16 *duplex)
-{
-	if (hw->mac.ops.get_link_up_info)
-		return hw->mac.ops.get_link_up_info(hw, speed, duplex);
-
-	return -E1000_ERR_CONFIG;
-}
-
-/**
- *  e1000_setup_led - Configures SW controllable LED
- *  @hw: pointer to the HW structure
- *
- *  This prepares the SW controllable LED for use and saves the current state
- *  of the LED so it can be later restored. This is a function pointer entry
- *  point called by drivers.
- **/
-s32 e1000_setup_led(struct e1000_hw *hw)
-{
-	if (hw->mac.ops.setup_led)
-		return hw->mac.ops.setup_led(hw);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_cleanup_led - Restores SW controllable LED
- *  @hw: pointer to the HW structure
- *
- *  This restores the SW controllable LED to the value saved off by
- *  e1000_setup_led. This is a function pointer entry point called by drivers.
- **/
-s32 e1000_cleanup_led(struct e1000_hw *hw)
-{
-	if (hw->mac.ops.cleanup_led)
-		return hw->mac.ops.cleanup_led(hw);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_blink_led - Blink SW controllable LED
- *  @hw: pointer to the HW structure
- *
- *  This starts the adapter LED blinking. Request the LED to be setup first
- *  and cleaned up after. This is a function pointer entry point called by
- *  drivers.
- **/
-s32 e1000_blink_led(struct e1000_hw *hw)
-{
-	if (hw->mac.ops.blink_led)
-		return hw->mac.ops.blink_led(hw);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_id_led_init - store LED configurations in SW
- *  @hw: pointer to the HW structure
- *
- *  Initializes the LED config in SW. This is a function pointer entry point
- *  called by drivers.
- **/
-s32 e1000_id_led_init(struct e1000_hw *hw)
-{
-	if (hw->mac.ops.id_led_init)
-		return hw->mac.ops.id_led_init(hw);
-
-	return E1000_SUCCESS;
-}
-
 /**
  *  e1000_led_on - Turn on SW controllable LED
  *  @hw: pointer to the HW structure
@@ -775,43 +639,6 @@ s32 e1000_led_off(struct e1000_hw *hw)
 	return E1000_SUCCESS;
 }
 
-/**
- *  e1000_reset_adaptive - Reset adaptive IFS
- *  @hw: pointer to the HW structure
- *
- *  Resets the adaptive IFS. Currently no func pointer exists and all
- *  implementations are handled in the generic version of this function.
- **/
-void e1000_reset_adaptive(struct e1000_hw *hw)
-{
-	e1000_reset_adaptive_generic(hw);
-}
-
-/**
- *  e1000_update_adaptive - Update adaptive IFS
- *  @hw: pointer to the HW structure
- *
- *  Updates adapter IFS. Currently no func pointer exists and all
- *  implementations are handled in the generic version of this function.
- **/
-void e1000_update_adaptive(struct e1000_hw *hw)
-{
-	e1000_update_adaptive_generic(hw);
-}
-
-/**
- *  e1000_disable_pcie_master - Disable PCI-Express master access
- *  @hw: pointer to the HW structure
- *
- *  Disables PCI-Express master access and verifies there are no pending
- *  requests. Currently no func pointer exists and all implementations are
- *  handled in the generic version of this function.
- **/
-s32 e1000_disable_pcie_master(struct e1000_hw *hw)
-{
-	return e1000_disable_pcie_master_generic(hw);
-}
-
 /**
  *  e1000_config_collision_dist - Configure collision distance
  *  @hw: pointer to the HW structure
@@ -841,94 +668,6 @@ int e1000_rar_set(struct e1000_hw *hw, u8 *addr, u32 index)
 	return E1000_SUCCESS;
 }
 
-/**
- *  e1000_validate_mdi_setting - Ensures valid MDI/MDIX SW state
- *  @hw: pointer to the HW structure
- *
- *  Ensures that the MDI/MDIX SW state is valid.
- **/
-s32 e1000_validate_mdi_setting(struct e1000_hw *hw)
-{
-	if (hw->mac.ops.validate_mdi_setting)
-		return hw->mac.ops.validate_mdi_setting(hw);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_hash_mc_addr - Determines address location in multicast table
- *  @hw: pointer to the HW structure
- *  @mc_addr: Multicast address to hash.
- *
- *  This hashes an address to determine its location in the multicast
- *  table. Currently no func pointer exists and all implementations
- *  are handled in the generic version of this function.
- **/
-u32 e1000_hash_mc_addr(struct e1000_hw *hw, u8 *mc_addr)
-{
-	return e1000_hash_mc_addr_generic(hw, mc_addr);
-}
-
-/**
- *  e1000_enable_tx_pkt_filtering - Enable packet filtering on TX
- *  @hw: pointer to the HW structure
- *
- *  Enables packet filtering on transmit packets if manageability is enabled
- *  and host interface is enabled.
- *  Currently no func pointer exists and all implementations are handled in the
- *  generic version of this function.
- **/
-bool e1000_enable_tx_pkt_filtering(struct e1000_hw *hw)
-{
-	return e1000_enable_tx_pkt_filtering_generic(hw);
-}
-
-/**
- *  e1000_mng_host_if_write - Writes to the manageability host interface
- *  @hw: pointer to the HW structure
- *  @buffer: pointer to the host interface buffer
- *  @length: size of the buffer
- *  @offset: location in the buffer to write to
- *  @sum: sum of the data (not checksum)
- *
- *  This function writes the buffer content at the offset given on the host if.
- *  It also does alignment considerations to do the writes in most efficient
- *  way.  Also fills up the sum of the buffer in *buffer parameter.
- **/
-s32 e1000_mng_host_if_write(struct e1000_hw *hw, u8 *buffer, u16 length,
-			    u16 offset, u8 *sum)
-{
-	return e1000_mng_host_if_write_generic(hw, buffer, length, offset, sum);
-}
-
-/**
- *  e1000_mng_write_cmd_header - Writes manageability command header
- *  @hw: pointer to the HW structure
- *  @hdr: pointer to the host interface command header
- *
- *  Writes the command header after does the checksum calculation.
- **/
-s32 e1000_mng_write_cmd_header(struct e1000_hw *hw,
-			       struct e1000_host_mng_command_header *hdr)
-{
-	return e1000_mng_write_cmd_header_generic(hw, hdr);
-}
-
-/**
- *  e1000_mng_enable_host_if - Checks host interface is enabled
- *  @hw: pointer to the HW structure
- *
- *  Returns E1000_success upon success, else E1000_ERR_HOST_INTERFACE_COMMAND
- *
- *  This function checks whether the HOST IF is enabled for command operation
- *  and also checks whether the previous command is completed.  It busy waits
- *  in case of previous command is not completed.
- **/
-s32 e1000_mng_enable_host_if(struct e1000_hw *hw)
-{
-	return e1000_mng_enable_host_if_generic(hw);
-}
-
 /**
  *  e1000_check_reset_block - Verifies PHY can be reset
  *  @hw: pointer to the HW structure
@@ -944,126 +683,6 @@ s32 e1000_check_reset_block(struct e1000_hw *hw)
 	return E1000_SUCCESS;
 }
 
-/**
- *  e1000_read_phy_reg - Reads PHY register
- *  @hw: pointer to the HW structure
- *  @offset: the register to read
- *  @data: the buffer to store the 16-bit read.
- *
- *  Reads the PHY register and returns the value in data.
- *  This is a function pointer entry point called by drivers.
- **/
-s32 e1000_read_phy_reg(struct e1000_hw *hw, u32 offset, u16 *data)
-{
-	if (hw->phy.ops.read_reg)
-		return hw->phy.ops.read_reg(hw, offset, data);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_write_phy_reg - Writes PHY register
- *  @hw: pointer to the HW structure
- *  @offset: the register to write
- *  @data: the value to write.
- *
- *  Writes the PHY register at offset with the value in data.
- *  This is a function pointer entry point called by drivers.
- **/
-s32 e1000_write_phy_reg(struct e1000_hw *hw, u32 offset, u16 data)
-{
-	if (hw->phy.ops.write_reg)
-		return hw->phy.ops.write_reg(hw, offset, data);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_release_phy - Generic release PHY
- *  @hw: pointer to the HW structure
- *
- *  Return if silicon family does not require a semaphore when accessing the
- *  PHY.
- **/
-void e1000_release_phy(struct e1000_hw *hw)
-{
-	if (hw->phy.ops.release)
-		hw->phy.ops.release(hw);
-}
-
-/**
- *  e1000_acquire_phy - Generic acquire PHY
- *  @hw: pointer to the HW structure
- *
- *  Return success if silicon family does not require a semaphore when
- *  accessing the PHY.
- **/
-s32 e1000_acquire_phy(struct e1000_hw *hw)
-{
-	if (hw->phy.ops.acquire)
-		return hw->phy.ops.acquire(hw);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_cfg_on_link_up - Configure PHY upon link up
- *  @hw: pointer to the HW structure
- **/
-s32 e1000_cfg_on_link_up(struct e1000_hw *hw)
-{
-	if (hw->phy.ops.cfg_on_link_up)
-		return hw->phy.ops.cfg_on_link_up(hw);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_read_kmrn_reg - Reads register using Kumeran interface
- *  @hw: pointer to the HW structure
- *  @offset: the register to read
- *  @data: the location to store the 16-bit value read.
- *
- *  Reads a register out of the Kumeran interface. Currently no func pointer
- *  exists and all implementations are handled in the generic version of
- *  this function.
- **/
-s32 e1000_read_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 *data)
-{
-	return e1000_read_kmrn_reg_generic(hw, offset, data);
-}
-
-/**
- *  e1000_write_kmrn_reg - Writes register using Kumeran interface
- *  @hw: pointer to the HW structure
- *  @offset: the register to write
- *  @data: the value to write.
- *
- *  Writes a register to the Kumeran interface. Currently no func pointer
- *  exists and all implementations are handled in the generic version of
- *  this function.
- **/
-s32 e1000_write_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 data)
-{
-	return e1000_write_kmrn_reg_generic(hw, offset, data);
-}
-
-/**
- *  e1000_get_cable_length - Retrieves cable length estimation
- *  @hw: pointer to the HW structure
- *
- *  This function estimates the cable length and stores them in
- *  hw->phy.min_length and hw->phy.max_length. This is a function pointer
- *  entry point called by drivers.
- **/
-s32 e1000_get_cable_length(struct e1000_hw *hw)
-{
-	if (hw->phy.ops.get_cable_length)
-		return hw->phy.ops.get_cable_length(hw);
-
-	return E1000_SUCCESS;
-}
-
 /**
  *  e1000_get_phy_info - Retrieves PHY information from registers
  *  @hw: pointer to the HW structure
@@ -1095,65 +714,6 @@ s32 e1000_phy_hw_reset(struct e1000_hw *hw)
 	return E1000_SUCCESS;
 }
 
-/**
- *  e1000_phy_commit - Soft PHY reset
- *  @hw: pointer to the HW structure
- *
- *  Performs a soft PHY reset on those that apply. This is a function pointer
- *  entry point called by drivers.
- **/
-s32 e1000_phy_commit(struct e1000_hw *hw)
-{
-	if (hw->phy.ops.commit)
-		return hw->phy.ops.commit(hw);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_set_d0_lplu_state - Sets low power link up state for D0
- *  @hw: pointer to the HW structure
- *  @active: boolean used to enable/disable lplu
- *
- *  Success returns 0, Failure returns 1
- *
- *  The low power link up (lplu) state is set to the power management level D0
- *  and SmartSpeed is disabled when active is true, else clear lplu for D0
- *  and enable Smartspeed.  LPLU and Smartspeed are mutually exclusive.  LPLU
- *  is used during Dx states where the power conservation is most important.
- *  During driver activity, SmartSpeed should be enabled so performance is
- *  maintained.  This is a function pointer entry point called by drivers.
- **/
-s32 e1000_set_d0_lplu_state(struct e1000_hw *hw, bool active)
-{
-	if (hw->phy.ops.set_d0_lplu_state)
-		return hw->phy.ops.set_d0_lplu_state(hw, active);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_set_d3_lplu_state - Sets low power link up state for D3
- *  @hw: pointer to the HW structure
- *  @active: boolean used to enable/disable lplu
- *
- *  Success returns 0, Failure returns 1
- *
- *  The low power link up (lplu) state is set to the power management level D3
- *  and SmartSpeed is disabled when active is true, else clear lplu for D3
- *  and enable Smartspeed.  LPLU and Smartspeed are mutually exclusive.  LPLU
- *  is used during Dx states where the power conservation is most important.
- *  During driver activity, SmartSpeed should be enabled so performance is
- *  maintained.  This is a function pointer entry point called by drivers.
- **/
-s32 e1000_set_d3_lplu_state(struct e1000_hw *hw, bool active)
-{
-	if (hw->phy.ops.set_d3_lplu_state)
-		return hw->phy.ops.set_d3_lplu_state(hw, active);
-
-	return E1000_SUCCESS;
-}
-
 /**
  *  e1000_read_mac_addr - Reads MAC address
  *  @hw: pointer to the HW structure
@@ -1170,52 +730,6 @@ s32 e1000_read_mac_addr(struct e1000_hw *hw)
 	return e1000_read_mac_addr_generic(hw);
 }
 
-/**
- *  e1000_read_pba_string - Read device part number string
- *  @hw: pointer to the HW structure
- *  @pba_num: pointer to device part number
- *  @pba_num_size: size of part number buffer
- *
- *  Reads the product board assembly (PBA) number from the EEPROM and stores
- *  the value in pba_num.
- *  Currently no func pointer exists and all implementations are handled in the
- *  generic version of this function.
- **/
-s32 e1000_read_pba_string(struct e1000_hw *hw, u8 *pba_num, u32 pba_num_size)
-{
-	return e1000_read_pba_string_generic(hw, pba_num, pba_num_size);
-}
-
-/**
- *  e1000_read_pba_length - Read device part number string length
- *  @hw: pointer to the HW structure
- *  @pba_num_size: size of part number buffer
- *
- *  Reads the product board assembly (PBA) number length from the EEPROM and
- *  stores the value in pba_num.
- *  Currently no func pointer exists and all implementations are handled in the
- *  generic version of this function.
- **/
-s32 e1000_read_pba_length(struct e1000_hw *hw, u32 *pba_num_size)
-{
-	return e1000_read_pba_length_generic(hw, pba_num_size);
-}
-
-/**
- *  e1000_read_pba_num - Read device part number
- *  @hw: pointer to the HW structure
- *  @pba_num: pointer to device part number
- *
- *  Reads the product board assembly (PBA) number from the EEPROM and stores
- *  the value in pba_num.
- *  Currently no func pointer exists and all implementations are handled in the
- *  generic version of this function.
- **/
-s32 e1000_read_pba_num(struct e1000_hw *hw, u32 *pba_num)
-{
-	return e1000_read_pba_num_generic(hw, pba_num);
-}
-
 /**
  *  e1000_validate_nvm_checksum - Verifies NVM (EEPROM) checksum
  *  @hw: pointer to the HW structure
@@ -1231,34 +745,6 @@ s32 e1000_validate_nvm_checksum(struct e1000_hw *hw)
 	return -E1000_ERR_CONFIG;
 }
 
-/**
- *  e1000_update_nvm_checksum - Updates NVM (EEPROM) checksum
- *  @hw: pointer to the HW structure
- *
- *  Updates the NVM checksum. Currently no func pointer exists and all
- *  implementations are handled in the generic version of this function.
- **/
-s32 e1000_update_nvm_checksum(struct e1000_hw *hw)
-{
-	if (hw->nvm.ops.update)
-		return hw->nvm.ops.update(hw);
-
-	return -E1000_ERR_CONFIG;
-}
-
-/**
- *  e1000_reload_nvm - Reloads EEPROM
- *  @hw: pointer to the HW structure
- *
- *  Reloads the EEPROM by setting the "Reinitialize from EEPROM" bit in the
- *  extended control register.
- **/
-void e1000_reload_nvm(struct e1000_hw *hw)
-{
-	if (hw->nvm.ops.reload)
-		hw->nvm.ops.reload(hw);
-}
-
 /**
  *  e1000_read_nvm - Reads NVM (EEPROM)
  *  @hw: pointer to the HW structure
@@ -1295,22 +781,6 @@ s32 e1000_write_nvm(struct e1000_hw *hw, u16 offset, u16 words, u16 *data)
 	return E1000_SUCCESS;
 }
 
-/**
- *  e1000_write_8bit_ctrl_reg - Writes 8bit Control register
- *  @hw: pointer to the HW structure
- *  @reg: 32bit register offset
- *  @offset: the register to write
- *  @data: the value to write.
- *
- *  Writes the PHY register at offset with the value in data.
- *  This is a function pointer entry point called by drivers.
- **/
-s32 e1000_write_8bit_ctrl_reg(struct e1000_hw *hw, u32 reg, u32 offset,
-			      u8 data)
-{
-	return e1000_write_8bit_ctrl_reg_generic(hw, reg, offset, data);
-}
-
 /**
  * e1000_power_up_phy - Restores link in case of PHY power down
  * @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_api.h b/drivers/net/e1000/base/e1000_api.h
index 6b38e2b7bb..1c240dfcdf 100644
--- a/drivers/net/e1000/base/e1000_api.h
+++ b/drivers/net/e1000/base/e1000_api.h
@@ -29,65 +29,25 @@ s32 e1000_init_phy_params(struct e1000_hw *hw);
 s32 e1000_init_mbx_params(struct e1000_hw *hw);
 s32 e1000_get_bus_info(struct e1000_hw *hw);
 void e1000_clear_vfta(struct e1000_hw *hw);
-void e1000_write_vfta(struct e1000_hw *hw, u32 offset, u32 value);
-s32 e1000_force_mac_fc(struct e1000_hw *hw);
 s32 e1000_check_for_link(struct e1000_hw *hw);
 s32 e1000_reset_hw(struct e1000_hw *hw);
 s32 e1000_init_hw(struct e1000_hw *hw);
 s32 e1000_setup_link(struct e1000_hw *hw);
-s32 e1000_get_speed_and_duplex(struct e1000_hw *hw, u16 *speed, u16 *duplex);
-s32 e1000_disable_pcie_master(struct e1000_hw *hw);
 void e1000_config_collision_dist(struct e1000_hw *hw);
 int e1000_rar_set(struct e1000_hw *hw, u8 *addr, u32 index);
-u32 e1000_hash_mc_addr(struct e1000_hw *hw, u8 *mc_addr);
 void e1000_update_mc_addr_list(struct e1000_hw *hw, u8 *mc_addr_list,
 			       u32 mc_addr_count);
-s32 e1000_setup_led(struct e1000_hw *hw);
-s32 e1000_cleanup_led(struct e1000_hw *hw);
 s32 e1000_check_reset_block(struct e1000_hw *hw);
-s32 e1000_blink_led(struct e1000_hw *hw);
 s32 e1000_led_on(struct e1000_hw *hw);
 s32 e1000_led_off(struct e1000_hw *hw);
-s32 e1000_id_led_init(struct e1000_hw *hw);
-void e1000_reset_adaptive(struct e1000_hw *hw);
-void e1000_update_adaptive(struct e1000_hw *hw);
-s32 e1000_get_cable_length(struct e1000_hw *hw);
-s32 e1000_validate_mdi_setting(struct e1000_hw *hw);
-s32 e1000_read_phy_reg(struct e1000_hw *hw, u32 offset, u16 *data);
-s32 e1000_write_phy_reg(struct e1000_hw *hw, u32 offset, u16 data);
-s32 e1000_write_8bit_ctrl_reg(struct e1000_hw *hw, u32 reg, u32 offset,
-			      u8 data);
 s32 e1000_get_phy_info(struct e1000_hw *hw);
-void e1000_release_phy(struct e1000_hw *hw);
-s32 e1000_acquire_phy(struct e1000_hw *hw);
-s32 e1000_cfg_on_link_up(struct e1000_hw *hw);
 s32 e1000_phy_hw_reset(struct e1000_hw *hw);
-s32 e1000_phy_commit(struct e1000_hw *hw);
 void e1000_power_up_phy(struct e1000_hw *hw);
 void e1000_power_down_phy(struct e1000_hw *hw);
 s32 e1000_read_mac_addr(struct e1000_hw *hw);
-s32 e1000_read_pba_num(struct e1000_hw *hw, u32 *part_num);
-s32 e1000_read_pba_string(struct e1000_hw *hw, u8 *pba_num, u32 pba_num_size);
-s32 e1000_read_pba_length(struct e1000_hw *hw, u32 *pba_num_size);
-void e1000_reload_nvm(struct e1000_hw *hw);
-s32 e1000_update_nvm_checksum(struct e1000_hw *hw);
 s32 e1000_validate_nvm_checksum(struct e1000_hw *hw);
 s32 e1000_read_nvm(struct e1000_hw *hw, u16 offset, u16 words, u16 *data);
-s32 e1000_read_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 *data);
-s32 e1000_write_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 data);
 s32 e1000_write_nvm(struct e1000_hw *hw, u16 offset, u16 words, u16 *data);
-s32 e1000_set_d3_lplu_state(struct e1000_hw *hw, bool active);
-s32 e1000_set_d0_lplu_state(struct e1000_hw *hw, bool active);
-bool e1000_check_mng_mode(struct e1000_hw *hw);
-bool e1000_enable_tx_pkt_filtering(struct e1000_hw *hw);
-s32 e1000_mng_enable_host_if(struct e1000_hw *hw);
-s32 e1000_mng_host_if_write(struct e1000_hw *hw, u8 *buffer, u16 length,
-			    u16 offset, u8 *sum);
-s32 e1000_mng_write_cmd_header(struct e1000_hw *hw,
-			       struct e1000_host_mng_command_header *hdr);
-s32 e1000_mng_write_dhcp_info(struct e1000_hw *hw, u8 *buffer, u16 length);
-u32  e1000_translate_register_82542(u32 reg);
-
 
 
 /*
diff --git a/drivers/net/e1000/base/e1000_base.c b/drivers/net/e1000/base/e1000_base.c
index ab73e1e59e..958aca14b2 100644
--- a/drivers/net/e1000/base/e1000_base.c
+++ b/drivers/net/e1000/base/e1000_base.c
@@ -110,81 +110,3 @@ void e1000_power_down_phy_copper_base(struct e1000_hw *hw)
 	if (phy->ops.check_reset_block(hw))
 		e1000_power_down_phy_copper(hw);
 }
-
-/**
- *  e1000_rx_fifo_flush_base - Clean Rx FIFO after Rx enable
- *  @hw: pointer to the HW structure
- *
- *  After Rx enable, if manageability is enabled then there is likely some
- *  bad data at the start of the FIFO and possibly in the DMA FIFO.  This
- *  function clears the FIFOs and flushes any packets that came in as Rx was
- *  being enabled.
- **/
-void e1000_rx_fifo_flush_base(struct e1000_hw *hw)
-{
-	u32 rctl, rlpml, rxdctl[4], rfctl, temp_rctl, rx_enabled;
-	int i, ms_wait;
-
-	DEBUGFUNC("e1000_rx_fifo_flush_base");
-
-	/* disable IPv6 options as per hardware errata */
-	rfctl = E1000_READ_REG(hw, E1000_RFCTL);
-	rfctl |= E1000_RFCTL_IPV6_EX_DIS;
-	E1000_WRITE_REG(hw, E1000_RFCTL, rfctl);
-
-	if (!(E1000_READ_REG(hw, E1000_MANC) & E1000_MANC_RCV_TCO_EN))
-		return;
-
-	/* Disable all Rx queues */
-	for (i = 0; i < 4; i++) {
-		rxdctl[i] = E1000_READ_REG(hw, E1000_RXDCTL(i));
-		E1000_WRITE_REG(hw, E1000_RXDCTL(i),
-				rxdctl[i] & ~E1000_RXDCTL_QUEUE_ENABLE);
-	}
-	/* Poll all queues to verify they have shut down */
-	for (ms_wait = 0; ms_wait < 10; ms_wait++) {
-		msec_delay(1);
-		rx_enabled = 0;
-		for (i = 0; i < 4; i++)
-			rx_enabled |= E1000_READ_REG(hw, E1000_RXDCTL(i));
-		if (!(rx_enabled & E1000_RXDCTL_QUEUE_ENABLE))
-			break;
-	}
-
-	if (ms_wait == 10)
-		DEBUGOUT("Queue disable timed out after 10ms\n");
-
-	/* Clear RLPML, RCTL.SBP, RFCTL.LEF, and set RCTL.LPE so that all
-	 * incoming packets are rejected.  Set enable and wait 2ms so that
-	 * any packet that was coming in as RCTL.EN was set is flushed
-	 */
-	E1000_WRITE_REG(hw, E1000_RFCTL, rfctl & ~E1000_RFCTL_LEF);
-
-	rlpml = E1000_READ_REG(hw, E1000_RLPML);
-	E1000_WRITE_REG(hw, E1000_RLPML, 0);
-
-	rctl = E1000_READ_REG(hw, E1000_RCTL);
-	temp_rctl = rctl & ~(E1000_RCTL_EN | E1000_RCTL_SBP);
-	temp_rctl |= E1000_RCTL_LPE;
-
-	E1000_WRITE_REG(hw, E1000_RCTL, temp_rctl);
-	E1000_WRITE_REG(hw, E1000_RCTL, temp_rctl | E1000_RCTL_EN);
-	E1000_WRITE_FLUSH(hw);
-	msec_delay(2);
-
-	/* Enable Rx queues that were previously enabled and restore our
-	 * previous state
-	 */
-	for (i = 0; i < 4; i++)
-		E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl[i]);
-	E1000_WRITE_REG(hw, E1000_RCTL, rctl);
-	E1000_WRITE_FLUSH(hw);
-
-	E1000_WRITE_REG(hw, E1000_RLPML, rlpml);
-	E1000_WRITE_REG(hw, E1000_RFCTL, rfctl);
-
-	/* Flush receive errors generated by workaround */
-	E1000_READ_REG(hw, E1000_ROC);
-	E1000_READ_REG(hw, E1000_RNBC);
-	E1000_READ_REG(hw, E1000_MPC);
-}
diff --git a/drivers/net/e1000/base/e1000_base.h b/drivers/net/e1000/base/e1000_base.h
index 0d6172b6d8..16d7ca98a7 100644
--- a/drivers/net/e1000/base/e1000_base.h
+++ b/drivers/net/e1000/base/e1000_base.h
@@ -8,7 +8,6 @@
 /* forward declaration */
 s32 e1000_init_hw_base(struct e1000_hw *hw);
 void e1000_power_down_phy_copper_base(struct e1000_hw *hw);
-extern void e1000_rx_fifo_flush_base(struct e1000_hw *hw);
 s32 e1000_acquire_phy_base(struct e1000_hw *hw);
 void e1000_release_phy_base(struct e1000_hw *hw);
 
diff --git a/drivers/net/e1000/base/e1000_ich8lan.c b/drivers/net/e1000/base/e1000_ich8lan.c
index 14f86b7bdc..4f9a7bc3f1 100644
--- a/drivers/net/e1000/base/e1000_ich8lan.c
+++ b/drivers/net/e1000/base/e1000_ich8lan.c
@@ -5467,60 +5467,6 @@ void e1000_set_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw,
 	return;
 }
 
-/**
- *  e1000_ipg3_phy_powerdown_workaround_ich8lan - Power down workaround on D3
- *  @hw: pointer to the HW structure
- *
- *  Workaround for 82566 power-down on D3 entry:
- *    1) disable gigabit link
- *    2) write VR power-down enable
- *    3) read it back
- *  Continue if successful, else issue LCD reset and repeat
- **/
-void e1000_igp3_phy_powerdown_workaround_ich8lan(struct e1000_hw *hw)
-{
-	u32 reg;
-	u16 data;
-	u8  retry = 0;
-
-	DEBUGFUNC("e1000_igp3_phy_powerdown_workaround_ich8lan");
-
-	if (hw->phy.type != e1000_phy_igp_3)
-		return;
-
-	/* Try the workaround twice (if needed) */
-	do {
-		/* Disable link */
-		reg = E1000_READ_REG(hw, E1000_PHY_CTRL);
-		reg |= (E1000_PHY_CTRL_GBE_DISABLE |
-			E1000_PHY_CTRL_NOND0A_GBE_DISABLE);
-		E1000_WRITE_REG(hw, E1000_PHY_CTRL, reg);
-
-		/* Call gig speed drop workaround on Gig disable before
-		 * accessing any PHY registers
-		 */
-		if (hw->mac.type == e1000_ich8lan)
-			e1000_gig_downshift_workaround_ich8lan(hw);
-
-		/* Write VR power-down enable */
-		hw->phy.ops.read_reg(hw, IGP3_VR_CTRL, &data);
-		data &= ~IGP3_VR_CTRL_DEV_POWERDOWN_MODE_MASK;
-		hw->phy.ops.write_reg(hw, IGP3_VR_CTRL,
-				      data | IGP3_VR_CTRL_MODE_SHUTDOWN);
-
-		/* Read it back and test */
-		hw->phy.ops.read_reg(hw, IGP3_VR_CTRL, &data);
-		data &= IGP3_VR_CTRL_DEV_POWERDOWN_MODE_MASK;
-		if ((data == IGP3_VR_CTRL_MODE_SHUTDOWN) || retry)
-			break;
-
-		/* Issue PHY reset and repeat at most one more time */
-		reg = E1000_READ_REG(hw, E1000_CTRL);
-		E1000_WRITE_REG(hw, E1000_CTRL, reg | E1000_CTRL_PHY_RST);
-		retry++;
-	} while (retry);
-}
-
 /**
  *  e1000_gig_downshift_workaround_ich8lan - WoL from S5 stops working
  *  @hw: pointer to the HW structure
@@ -5557,218 +5503,6 @@ void e1000_gig_downshift_workaround_ich8lan(struct e1000_hw *hw)
 				     reg_data);
 }
 
-/**
- *  e1000_suspend_workarounds_ich8lan - workarounds needed during S0->Sx
- *  @hw: pointer to the HW structure
- *
- *  During S0 to Sx transition, it is possible the link remains at gig
- *  instead of negotiating to a lower speed.  Before going to Sx, set
- *  'Gig Disable' to force link speed negotiation to a lower speed based on
- *  the LPLU setting in the NVM or custom setting.  For PCH and newer parts,
- *  the OEM bits PHY register (LED, GbE disable and LPLU configurations) also
- *  needs to be written.
- *  Parts that support (and are linked to a partner which support) EEE in
- *  100Mbps should disable LPLU since 100Mbps w/ EEE requires less power
- *  than 10Mbps w/o EEE.
- **/
-void e1000_suspend_workarounds_ich8lan(struct e1000_hw *hw)
-{
-	struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
-	u32 phy_ctrl;
-	s32 ret_val;
-
-	DEBUGFUNC("e1000_suspend_workarounds_ich8lan");
-
-	phy_ctrl = E1000_READ_REG(hw, E1000_PHY_CTRL);
-	phy_ctrl |= E1000_PHY_CTRL_GBE_DISABLE;
-
-	if (hw->phy.type == e1000_phy_i217) {
-		u16 phy_reg, device_id = hw->device_id;
-
-		if ((device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) ||
-		    (device_id == E1000_DEV_ID_PCH_LPTLP_I218_V) ||
-		    (device_id == E1000_DEV_ID_PCH_I218_LM3) ||
-		    (device_id == E1000_DEV_ID_PCH_I218_V3) ||
-		    (hw->mac.type >= e1000_pch_spt)) {
-			u32 fextnvm6 = E1000_READ_REG(hw, E1000_FEXTNVM6);
-
-			E1000_WRITE_REG(hw, E1000_FEXTNVM6,
-					fextnvm6 & ~E1000_FEXTNVM6_REQ_PLL_CLK);
-		}
-
-		ret_val = hw->phy.ops.acquire(hw);
-		if (ret_val)
-			goto out;
-
-		if (!dev_spec->eee_disable) {
-			u16 eee_advert;
-
-			ret_val =
-			    e1000_read_emi_reg_locked(hw,
-						      I217_EEE_ADVERTISEMENT,
-						      &eee_advert);
-			if (ret_val)
-				goto release;
-
-			/* Disable LPLU if both link partners support 100BaseT
-			 * EEE and 100Full is advertised on both ends of the
-			 * link, and enable Auto Enable LPI since there will
-			 * be no driver to enable LPI while in Sx.
-			 */
-			if ((eee_advert & I82579_EEE_100_SUPPORTED) &&
-			    (dev_spec->eee_lp_ability &
-			     I82579_EEE_100_SUPPORTED) &&
-			    (hw->phy.autoneg_advertised & ADVERTISE_100_FULL)) {
-				phy_ctrl &= ~(E1000_PHY_CTRL_D0A_LPLU |
-					      E1000_PHY_CTRL_NOND0A_LPLU);
-
-				/* Set Auto Enable LPI after link up */
-				hw->phy.ops.read_reg_locked(hw,
-							    I217_LPI_GPIO_CTRL,
-							    &phy_reg);
-				phy_reg |= I217_LPI_GPIO_CTRL_AUTO_EN_LPI;
-				hw->phy.ops.write_reg_locked(hw,
-							     I217_LPI_GPIO_CTRL,
-							     phy_reg);
-			}
-		}
-
-		/* For i217 Intel Rapid Start Technology support,
-		 * when the system is going into Sx and no manageability engine
-		 * is present, the driver must configure proxy to reset only on
-		 * power good.  LPI (Low Power Idle) state must also reset only
-		 * on power good, as well as the MTA (Multicast table array).
-		 * The SMBus release must also be disabled on LCD reset.
-		 */
-		if (!(E1000_READ_REG(hw, E1000_FWSM) &
-		      E1000_ICH_FWSM_FW_VALID)) {
-			/* Enable proxy to reset only on power good. */
-			hw->phy.ops.read_reg_locked(hw, I217_PROXY_CTRL,
-						    &phy_reg);
-			phy_reg |= I217_PROXY_CTRL_AUTO_DISABLE;
-			hw->phy.ops.write_reg_locked(hw, I217_PROXY_CTRL,
-						     phy_reg);
-
-			/* Set bit enable LPI (EEE) to reset only on
-			 * power good.
-			*/
-			hw->phy.ops.read_reg_locked(hw, I217_SxCTRL, &phy_reg);
-			phy_reg |= I217_SxCTRL_ENABLE_LPI_RESET;
-			hw->phy.ops.write_reg_locked(hw, I217_SxCTRL, phy_reg);
-
-			/* Disable the SMB release on LCD reset. */
-			hw->phy.ops.read_reg_locked(hw, I217_MEMPWR, &phy_reg);
-			phy_reg &= ~I217_MEMPWR_DISABLE_SMB_RELEASE;
-			hw->phy.ops.write_reg_locked(hw, I217_MEMPWR, phy_reg);
-		}
-
-		/* Enable MTA to reset for Intel Rapid Start Technology
-		 * Support
-		 */
-		hw->phy.ops.read_reg_locked(hw, I217_CGFREG, &phy_reg);
-		phy_reg |= I217_CGFREG_ENABLE_MTA_RESET;
-		hw->phy.ops.write_reg_locked(hw, I217_CGFREG, phy_reg);
-
-release:
-		hw->phy.ops.release(hw);
-	}
-out:
-	E1000_WRITE_REG(hw, E1000_PHY_CTRL, phy_ctrl);
-
-	if (hw->mac.type == e1000_ich8lan)
-		e1000_gig_downshift_workaround_ich8lan(hw);
-
-	if (hw->mac.type >= e1000_pchlan) {
-		e1000_oem_bits_config_ich8lan(hw, false);
-
-		/* Reset PHY to activate OEM bits on 82577/8 */
-		if (hw->mac.type == e1000_pchlan)
-			e1000_phy_hw_reset_generic(hw);
-
-		ret_val = hw->phy.ops.acquire(hw);
-		if (ret_val)
-			return;
-		e1000_write_smbus_addr(hw);
-		hw->phy.ops.release(hw);
-	}
-
-	return;
-}
-
-/**
- *  e1000_resume_workarounds_pchlan - workarounds needed during Sx->S0
- *  @hw: pointer to the HW structure
- *
- *  During Sx to S0 transitions on non-managed devices or managed devices
- *  on which PHY resets are not blocked, if the PHY registers cannot be
- *  accessed properly by the s/w toggle the LANPHYPC value to power cycle
- *  the PHY.
- *  On i217, setup Intel Rapid Start Technology.
- **/
-u32 e1000_resume_workarounds_pchlan(struct e1000_hw *hw)
-{
-	s32 ret_val;
-
-	DEBUGFUNC("e1000_resume_workarounds_pchlan");
-	if (hw->mac.type < e1000_pch2lan)
-		return E1000_SUCCESS;
-
-	ret_val = e1000_init_phy_workarounds_pchlan(hw);
-	if (ret_val) {
-		DEBUGOUT1("Failed to init PHY flow ret_val=%d\n", ret_val);
-		return ret_val;
-	}
-
-	/* For i217 Intel Rapid Start Technology support when the system
-	 * is transitioning from Sx and no manageability engine is present
-	 * configure SMBus to restore on reset, disable proxy, and enable
-	 * the reset on MTA (Multicast table array).
-	 */
-	if (hw->phy.type == e1000_phy_i217) {
-		u16 phy_reg;
-
-		ret_val = hw->phy.ops.acquire(hw);
-		if (ret_val) {
-			DEBUGOUT("Failed to setup iRST\n");
-			return ret_val;
-		}
-
-		/* Clear Auto Enable LPI after link up */
-		hw->phy.ops.read_reg_locked(hw, I217_LPI_GPIO_CTRL, &phy_reg);
-		phy_reg &= ~I217_LPI_GPIO_CTRL_AUTO_EN_LPI;
-		hw->phy.ops.write_reg_locked(hw, I217_LPI_GPIO_CTRL, phy_reg);
-
-		if (!(E1000_READ_REG(hw, E1000_FWSM) &
-		    E1000_ICH_FWSM_FW_VALID)) {
-			/* Restore clear on SMB if no manageability engine
-			 * is present
-			 */
-			ret_val = hw->phy.ops.read_reg_locked(hw, I217_MEMPWR,
-							      &phy_reg);
-			if (ret_val)
-				goto release;
-			phy_reg |= I217_MEMPWR_DISABLE_SMB_RELEASE;
-			hw->phy.ops.write_reg_locked(hw, I217_MEMPWR, phy_reg);
-
-			/* Disable Proxy */
-			hw->phy.ops.write_reg_locked(hw, I217_PROXY_CTRL, 0);
-		}
-		/* Enable reset on MTA */
-		ret_val = hw->phy.ops.read_reg_locked(hw, I217_CGFREG,
-						      &phy_reg);
-		if (ret_val)
-			goto release;
-		phy_reg &= ~I217_CGFREG_ENABLE_MTA_RESET;
-		hw->phy.ops.write_reg_locked(hw, I217_CGFREG, phy_reg);
-release:
-		if (ret_val)
-			DEBUGOUT1("Error %d in resume workarounds\n", ret_val);
-		hw->phy.ops.release(hw);
-		return ret_val;
-	}
-	return E1000_SUCCESS;
-}
-
 /**
  *  e1000_cleanup_led_ich8lan - Restore the default LED operation
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_ich8lan.h b/drivers/net/e1000/base/e1000_ich8lan.h
index e456e5132e..e28ebb55ba 100644
--- a/drivers/net/e1000/base/e1000_ich8lan.h
+++ b/drivers/net/e1000/base/e1000_ich8lan.h
@@ -281,10 +281,7 @@
 #define E1000_PCI_REVISION_ID_REG	0x08
 void e1000_set_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw,
 						 bool state);
-void e1000_igp3_phy_powerdown_workaround_ich8lan(struct e1000_hw *hw);
 void e1000_gig_downshift_workaround_ich8lan(struct e1000_hw *hw);
-void e1000_suspend_workarounds_ich8lan(struct e1000_hw *hw);
-u32 e1000_resume_workarounds_pchlan(struct e1000_hw *hw);
 s32 e1000_configure_k1_ich8lan(struct e1000_hw *hw, bool k1_enable);
 s32 e1000_configure_k0s_lpt(struct e1000_hw *hw, u8 entry_latency, u8 min_time);
 void e1000_copy_rx_addrs_to_phy_ich8lan(struct e1000_hw *hw);
diff --git a/drivers/net/e1000/base/e1000_mac.c b/drivers/net/e1000/base/e1000_mac.c
index d3b3a6bac9..fe1516bd92 100644
--- a/drivers/net/e1000/base/e1000_mac.c
+++ b/drivers/net/e1000/base/e1000_mac.c
@@ -124,20 +124,6 @@ void e1000_null_write_vfta(struct e1000_hw E1000_UNUSEDARG *hw,
 	return;
 }
 
-/**
- *  e1000_null_rar_set - No-op function, return 0
- *  @hw: pointer to the HW structure
- *  @h: dummy variable
- *  @a: dummy variable
- **/
-int e1000_null_rar_set(struct e1000_hw E1000_UNUSEDARG *hw,
-			u8 E1000_UNUSEDARG *h, u32 E1000_UNUSEDARG a)
-{
-	DEBUGFUNC("e1000_null_rar_set");
-	UNREFERENCED_3PARAMETER(hw, h, a);
-	return E1000_SUCCESS;
-}
-
 /**
  *  e1000_get_bus_info_pci_generic - Get PCI(x) bus information
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_mac.h b/drivers/net/e1000/base/e1000_mac.h
index 86fcad23bb..0abaf2f452 100644
--- a/drivers/net/e1000/base/e1000_mac.h
+++ b/drivers/net/e1000/base/e1000_mac.h
@@ -13,7 +13,6 @@ s32  e1000_null_link_info(struct e1000_hw *hw, u16 *s, u16 *d);
 bool e1000_null_mng_mode(struct e1000_hw *hw);
 void e1000_null_update_mc(struct e1000_hw *hw, u8 *h, u32 a);
 void e1000_null_write_vfta(struct e1000_hw *hw, u32 a, u32 b);
-int  e1000_null_rar_set(struct e1000_hw *hw, u8 *h, u32 a);
 s32  e1000_blink_led_generic(struct e1000_hw *hw);
 s32  e1000_check_for_copper_link_generic(struct e1000_hw *hw);
 s32  e1000_check_for_fiber_link_generic(struct e1000_hw *hw);
diff --git a/drivers/net/e1000/base/e1000_manage.c b/drivers/net/e1000/base/e1000_manage.c
index 4b81028302..266bb9ec91 100644
--- a/drivers/net/e1000/base/e1000_manage.c
+++ b/drivers/net/e1000/base/e1000_manage.c
@@ -353,195 +353,3 @@ bool e1000_enable_mng_pass_thru(struct e1000_hw *hw)
 
 	return false;
 }
-
-/**
- *  e1000_host_interface_command - Writes buffer to host interface
- *  @hw: pointer to the HW structure
- *  @buffer: contains a command to write
- *  @length: the byte length of the buffer, must be multiple of 4 bytes
- *
- *  Writes a buffer to the Host Interface.  Upon success, returns E1000_SUCCESS
- *  else returns E1000_ERR_HOST_INTERFACE_COMMAND.
- **/
-s32 e1000_host_interface_command(struct e1000_hw *hw, u8 *buffer, u32 length)
-{
-	u32 hicr, i;
-
-	DEBUGFUNC("e1000_host_interface_command");
-
-	if (!(hw->mac.arc_subsystem_valid)) {
-		DEBUGOUT("Hardware doesn't support host interface command.\n");
-		return E1000_SUCCESS;
-	}
-
-	if (!hw->mac.asf_firmware_present) {
-		DEBUGOUT("Firmware is not present.\n");
-		return E1000_SUCCESS;
-	}
-
-	if (length == 0 || length & 0x3 ||
-	    length > E1000_HI_MAX_BLOCK_BYTE_LENGTH) {
-		DEBUGOUT("Buffer length failure.\n");
-		return -E1000_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	/* Check that the host interface is enabled. */
-	hicr = E1000_READ_REG(hw, E1000_HICR);
-	if (!(hicr & E1000_HICR_EN)) {
-		DEBUGOUT("E1000_HOST_EN bit disabled.\n");
-		return -E1000_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	/* Calculate length in DWORDs */
-	length >>= 2;
-
-	/* The device driver writes the relevant command block
-	 * into the ram area.
-	 */
-	for (i = 0; i < length; i++)
-		E1000_WRITE_REG_ARRAY_DWORD(hw, E1000_HOST_IF, i,
-					    *((u32 *)buffer + i));
-
-	/* Setting this bit tells the ARC that a new command is pending. */
-	E1000_WRITE_REG(hw, E1000_HICR, hicr | E1000_HICR_C);
-
-	for (i = 0; i < E1000_HI_COMMAND_TIMEOUT; i++) {
-		hicr = E1000_READ_REG(hw, E1000_HICR);
-		if (!(hicr & E1000_HICR_C))
-			break;
-		msec_delay(1);
-	}
-
-	/* Check command successful completion. */
-	if (i == E1000_HI_COMMAND_TIMEOUT ||
-	    (!(E1000_READ_REG(hw, E1000_HICR) & E1000_HICR_SV))) {
-		DEBUGOUT("Command has failed with no status valid.\n");
-		return -E1000_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	for (i = 0; i < length; i++)
-		*((u32 *)buffer + i) = E1000_READ_REG_ARRAY_DWORD(hw,
-								  E1000_HOST_IF,
-								  i);
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_load_firmware - Writes proxy FW code buffer to host interface
- *                        and execute.
- *  @hw: pointer to the HW structure
- *  @buffer: contains a firmware to write
- *  @length: the byte length of the buffer, must be multiple of 4 bytes
- *
- *  Upon success returns E1000_SUCCESS, returns E1000_ERR_CONFIG if not enabled
- *  in HW else returns E1000_ERR_HOST_INTERFACE_COMMAND.
- **/
-s32 e1000_load_firmware(struct e1000_hw *hw, u8 *buffer, u32 length)
-{
-	u32 hicr, hibba, fwsm, icr, i;
-
-	DEBUGFUNC("e1000_load_firmware");
-
-	if (hw->mac.type < e1000_i210) {
-		DEBUGOUT("Hardware doesn't support loading FW by the driver\n");
-		return -E1000_ERR_CONFIG;
-	}
-
-	/* Check that the host interface is enabled. */
-	hicr = E1000_READ_REG(hw, E1000_HICR);
-	if (!(hicr & E1000_HICR_EN)) {
-		DEBUGOUT("E1000_HOST_EN bit disabled.\n");
-		return -E1000_ERR_CONFIG;
-	}
-	if (!(hicr & E1000_HICR_MEMORY_BASE_EN)) {
-		DEBUGOUT("E1000_HICR_MEMORY_BASE_EN bit disabled.\n");
-		return -E1000_ERR_CONFIG;
-	}
-
-	if (length == 0 || length & 0x3 || length > E1000_HI_FW_MAX_LENGTH) {
-		DEBUGOUT("Buffer length failure.\n");
-		return -E1000_ERR_INVALID_ARGUMENT;
-	}
-
-	/* Clear notification from ROM-FW by reading ICR register */
-	icr = E1000_READ_REG(hw, E1000_ICR_V2);
-
-	/* Reset ROM-FW */
-	hicr = E1000_READ_REG(hw, E1000_HICR);
-	hicr |= E1000_HICR_FW_RESET_ENABLE;
-	E1000_WRITE_REG(hw, E1000_HICR, hicr);
-	hicr |= E1000_HICR_FW_RESET;
-	E1000_WRITE_REG(hw, E1000_HICR, hicr);
-	E1000_WRITE_FLUSH(hw);
-
-	/* Wait till MAC notifies about its readiness after ROM-FW reset */
-	for (i = 0; i < (E1000_HI_COMMAND_TIMEOUT * 2); i++) {
-		icr = E1000_READ_REG(hw, E1000_ICR_V2);
-		if (icr & E1000_ICR_MNG)
-			break;
-		msec_delay(1);
-	}
-
-	/* Check for timeout */
-	if (i == E1000_HI_COMMAND_TIMEOUT) {
-		DEBUGOUT("FW reset failed.\n");
-		return -E1000_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	/* Wait till MAC is ready to accept new FW code */
-	for (i = 0; i < E1000_HI_COMMAND_TIMEOUT; i++) {
-		fwsm = E1000_READ_REG(hw, E1000_FWSM);
-		if ((fwsm & E1000_FWSM_FW_VALID) &&
-		    ((fwsm & E1000_FWSM_MODE_MASK) >> E1000_FWSM_MODE_SHIFT ==
-		    E1000_FWSM_HI_EN_ONLY_MODE))
-			break;
-		msec_delay(1);
-	}
-
-	/* Check for timeout */
-	if (i == E1000_HI_COMMAND_TIMEOUT) {
-		DEBUGOUT("FW reset failed.\n");
-		return -E1000_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	/* Calculate length in DWORDs */
-	length >>= 2;
-
-	/* The device driver writes the relevant FW code block
-	 * into the ram area in DWORDs via 1kB ram addressing window.
-	 */
-	for (i = 0; i < length; i++) {
-		if (!(i % E1000_HI_FW_BLOCK_DWORD_LENGTH)) {
-			/* Point to correct 1kB ram window */
-			hibba = E1000_HI_FW_BASE_ADDRESS +
-				((E1000_HI_FW_BLOCK_DWORD_LENGTH << 2) *
-				(i / E1000_HI_FW_BLOCK_DWORD_LENGTH));
-
-			E1000_WRITE_REG(hw, E1000_HIBBA, hibba);
-		}
-
-		E1000_WRITE_REG_ARRAY_DWORD(hw, E1000_HOST_IF,
-					    i % E1000_HI_FW_BLOCK_DWORD_LENGTH,
-					    *((u32 *)buffer + i));
-	}
-
-	/* Setting this bit tells the ARC that a new FW is ready to execute. */
-	hicr = E1000_READ_REG(hw, E1000_HICR);
-	E1000_WRITE_REG(hw, E1000_HICR, hicr | E1000_HICR_C);
-
-	for (i = 0; i < E1000_HI_COMMAND_TIMEOUT; i++) {
-		hicr = E1000_READ_REG(hw, E1000_HICR);
-		if (!(hicr & E1000_HICR_C))
-			break;
-		msec_delay(1);
-	}
-
-	/* Check for successful FW start. */
-	if (i == E1000_HI_COMMAND_TIMEOUT) {
-		DEBUGOUT("New FW did not start within timeout period.\n");
-		return -E1000_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	return E1000_SUCCESS;
-}
diff --git a/drivers/net/e1000/base/e1000_manage.h b/drivers/net/e1000/base/e1000_manage.h
index 268a13381d..da0246b6a9 100644
--- a/drivers/net/e1000/base/e1000_manage.h
+++ b/drivers/net/e1000/base/e1000_manage.h
@@ -16,8 +16,6 @@ s32  e1000_mng_write_dhcp_info_generic(struct e1000_hw *hw,
 				       u8 *buffer, u16 length);
 bool e1000_enable_mng_pass_thru(struct e1000_hw *hw);
 u8 e1000_calculate_checksum(u8 *buffer, u32 length);
-s32 e1000_host_interface_command(struct e1000_hw *hw, u8 *buffer, u32 length);
-s32 e1000_load_firmware(struct e1000_hw *hw, u8 *buffer, u32 length);
 
 enum e1000_mng_mode {
 	e1000_mng_mode_none = 0,
diff --git a/drivers/net/e1000/base/e1000_nvm.c b/drivers/net/e1000/base/e1000_nvm.c
index 430fecaf6d..4b3ce7d634 100644
--- a/drivers/net/e1000/base/e1000_nvm.c
+++ b/drivers/net/e1000/base/e1000_nvm.c
@@ -947,135 +947,6 @@ s32 e1000_read_pba_num_generic(struct e1000_hw *hw, u32 *pba_num)
 	return E1000_SUCCESS;
 }
 
-
-/**
- *  e1000_read_pba_raw
- *  @hw: pointer to the HW structure
- *  @eeprom_buf: optional pointer to EEPROM image
- *  @eeprom_buf_size: size of EEPROM image in words
- *  @max_pba_block_size: PBA block size limit
- *  @pba: pointer to output PBA structure
- *
- *  Reads PBA from EEPROM image when eeprom_buf is not NULL.
- *  Reads PBA from physical EEPROM device when eeprom_buf is NULL.
- *
- **/
-s32 e1000_read_pba_raw(struct e1000_hw *hw, u16 *eeprom_buf,
-		       u32 eeprom_buf_size, u16 max_pba_block_size,
-		       struct e1000_pba *pba)
-{
-	s32 ret_val;
-	u16 pba_block_size;
-
-	if (pba == NULL)
-		return -E1000_ERR_PARAM;
-
-	if (eeprom_buf == NULL) {
-		ret_val = e1000_read_nvm(hw, NVM_PBA_OFFSET_0, 2,
-					 &pba->word[0]);
-		if (ret_val)
-			return ret_val;
-	} else {
-		if (eeprom_buf_size > NVM_PBA_OFFSET_1) {
-			pba->word[0] = eeprom_buf[NVM_PBA_OFFSET_0];
-			pba->word[1] = eeprom_buf[NVM_PBA_OFFSET_1];
-		} else {
-			return -E1000_ERR_PARAM;
-		}
-	}
-
-	if (pba->word[0] == NVM_PBA_PTR_GUARD) {
-		if (pba->pba_block == NULL)
-			return -E1000_ERR_PARAM;
-
-		ret_val = e1000_get_pba_block_size(hw, eeprom_buf,
-						   eeprom_buf_size,
-						   &pba_block_size);
-		if (ret_val)
-			return ret_val;
-
-		if (pba_block_size > max_pba_block_size)
-			return -E1000_ERR_PARAM;
-
-		if (eeprom_buf == NULL) {
-			ret_val = e1000_read_nvm(hw, pba->word[1],
-						 pba_block_size,
-						 pba->pba_block);
-			if (ret_val)
-				return ret_val;
-		} else {
-			if (eeprom_buf_size > (u32)(pba->word[1] +
-					      pba_block_size)) {
-				memcpy(pba->pba_block,
-				       &eeprom_buf[pba->word[1]],
-				       pba_block_size * sizeof(u16));
-			} else {
-				return -E1000_ERR_PARAM;
-			}
-		}
-	}
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_write_pba_raw
- *  @hw: pointer to the HW structure
- *  @eeprom_buf: optional pointer to EEPROM image
- *  @eeprom_buf_size: size of EEPROM image in words
- *  @pba: pointer to PBA structure
- *
- *  Writes PBA to EEPROM image when eeprom_buf is not NULL.
- *  Writes PBA to physical EEPROM device when eeprom_buf is NULL.
- *
- **/
-s32 e1000_write_pba_raw(struct e1000_hw *hw, u16 *eeprom_buf,
-			u32 eeprom_buf_size, struct e1000_pba *pba)
-{
-	s32 ret_val;
-
-	if (pba == NULL)
-		return -E1000_ERR_PARAM;
-
-	if (eeprom_buf == NULL) {
-		ret_val = e1000_write_nvm(hw, NVM_PBA_OFFSET_0, 2,
-					  &pba->word[0]);
-		if (ret_val)
-			return ret_val;
-	} else {
-		if (eeprom_buf_size > NVM_PBA_OFFSET_1) {
-			eeprom_buf[NVM_PBA_OFFSET_0] = pba->word[0];
-			eeprom_buf[NVM_PBA_OFFSET_1] = pba->word[1];
-		} else {
-			return -E1000_ERR_PARAM;
-		}
-	}
-
-	if (pba->word[0] == NVM_PBA_PTR_GUARD) {
-		if (pba->pba_block == NULL)
-			return -E1000_ERR_PARAM;
-
-		if (eeprom_buf == NULL) {
-			ret_val = e1000_write_nvm(hw, pba->word[1],
-						  pba->pba_block[0],
-						  pba->pba_block);
-			if (ret_val)
-				return ret_val;
-		} else {
-			if (eeprom_buf_size > (u32)(pba->word[1] +
-					      pba->pba_block[0])) {
-				memcpy(&eeprom_buf[pba->word[1]],
-				       pba->pba_block,
-				       pba->pba_block[0] * sizeof(u16));
-			} else {
-				return -E1000_ERR_PARAM;
-			}
-		}
-	}
-
-	return E1000_SUCCESS;
-}
-
 /**
  *  e1000_get_pba_block_size
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_nvm.h b/drivers/net/e1000/base/e1000_nvm.h
index 056f823537..e48d638795 100644
--- a/drivers/net/e1000/base/e1000_nvm.h
+++ b/drivers/net/e1000/base/e1000_nvm.h
@@ -40,11 +40,6 @@ s32  e1000_read_pba_num_generic(struct e1000_hw *hw, u32 *pba_num);
 s32  e1000_read_pba_string_generic(struct e1000_hw *hw, u8 *pba_num,
 				   u32 pba_num_size);
 s32  e1000_read_pba_length_generic(struct e1000_hw *hw, u32 *pba_num_size);
-s32 e1000_read_pba_raw(struct e1000_hw *hw, u16 *eeprom_buf,
-		       u32 eeprom_buf_size, u16 max_pba_block_size,
-		       struct e1000_pba *pba);
-s32 e1000_write_pba_raw(struct e1000_hw *hw, u16 *eeprom_buf,
-			u32 eeprom_buf_size, struct e1000_pba *pba);
 s32 e1000_get_pba_block_size(struct e1000_hw *hw, u16 *eeprom_buf,
 			     u32 eeprom_buf_size, u16 *pba_block_size);
 s32  e1000_read_nvm_spi(struct e1000_hw *hw, u16 offset, u16 words, u16 *data);
diff --git a/drivers/net/e1000/base/e1000_phy.c b/drivers/net/e1000/base/e1000_phy.c
index 62d0be5080..b3be39f7bd 100644
--- a/drivers/net/e1000/base/e1000_phy.c
+++ b/drivers/net/e1000/base/e1000_phy.c
@@ -545,79 +545,6 @@ s32 e1000_read_sfp_data_byte(struct e1000_hw *hw, u16 offset, u8 *data)
 	return E1000_SUCCESS;
 }
 
-/**
- *  e1000_write_sfp_data_byte - Writes SFP module data.
- *  @hw: pointer to the HW structure
- *  @offset: byte location offset to write to
- *  @data: data to write
- *
- *  Writes one byte to SFP module data stored
- *  in SFP resided EEPROM memory or SFP diagnostic area.
- *  Function should be called with
- *  E1000_I2CCMD_SFP_DATA_ADDR(<byte offset>) for SFP module database access
- *  E1000_I2CCMD_SFP_DIAG_ADDR(<byte offset>) for SFP diagnostics parameters
- *  access
- **/
-s32 e1000_write_sfp_data_byte(struct e1000_hw *hw, u16 offset, u8 data)
-{
-	u32 i = 0;
-	u32 i2ccmd = 0;
-	u32 data_local = 0;
-
-	DEBUGFUNC("e1000_write_sfp_data_byte");
-
-	if (offset > E1000_I2CCMD_SFP_DIAG_ADDR(255)) {
-		DEBUGOUT("I2CCMD command address exceeds upper limit\n");
-		return -E1000_ERR_PHY;
-	}
-	/* The programming interface is 16 bits wide
-	 * so we need to read the whole word first
-	 * then update appropriate byte lane and write
-	 * the updated word back.
-	 */
-	/* Set up Op-code, EEPROM Address,in the I2CCMD
-	 * register. The MAC will take care of interfacing
-	 * with an EEPROM to write the data given.
-	 */
-	i2ccmd = ((offset << E1000_I2CCMD_REG_ADDR_SHIFT) |
-		  E1000_I2CCMD_OPCODE_READ);
-	/* Set a command to read single word */
-	E1000_WRITE_REG(hw, E1000_I2CCMD, i2ccmd);
-	for (i = 0; i < E1000_I2CCMD_PHY_TIMEOUT; i++) {
-		usec_delay(50);
-		/* Poll the ready bit to see if lastly
-		 * launched I2C operation completed
-		 */
-		i2ccmd = E1000_READ_REG(hw, E1000_I2CCMD);
-		if (i2ccmd & E1000_I2CCMD_READY) {
-			/* Check if this is READ or WRITE phase */
-			if ((i2ccmd & E1000_I2CCMD_OPCODE_READ) ==
-			    E1000_I2CCMD_OPCODE_READ) {
-				/* Write the selected byte
-				 * lane and update whole word
-				 */
-				data_local = i2ccmd & 0xFF00;
-				data_local |= (u32)data;
-				i2ccmd = ((offset <<
-					E1000_I2CCMD_REG_ADDR_SHIFT) |
-					E1000_I2CCMD_OPCODE_WRITE | data_local);
-				E1000_WRITE_REG(hw, E1000_I2CCMD, i2ccmd);
-			} else {
-				break;
-			}
-		}
-	}
-	if (!(i2ccmd & E1000_I2CCMD_READY)) {
-		DEBUGOUT("I2CCMD Write did not complete\n");
-		return -E1000_ERR_PHY;
-	}
-	if (i2ccmd & E1000_I2CCMD_ERROR) {
-		DEBUGOUT("I2CCMD Error bit set\n");
-		return -E1000_ERR_PHY;
-	}
-	return E1000_SUCCESS;
-}
-
 /**
  *  e1000_read_phy_reg_m88 - Read m88 PHY register
  *  @hw: pointer to the HW structure
@@ -4083,134 +4010,6 @@ s32 e1000_read_phy_reg_gs40g(struct e1000_hw *hw, u32 offset, u16 *data)
 	return ret_val;
 }
 
-/**
- *  e1000_read_phy_reg_mphy - Read mPHY control register
- *  @hw: pointer to the HW structure
- *  @address: address to be read
- *  @data: pointer to the read data
- *
- *  Reads the mPHY control register in the PHY at offset and stores the
- *  information read to data.
- **/
-s32 e1000_read_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 *data)
-{
-	u32 mphy_ctrl = 0;
-	bool locked = false;
-	bool ready;
-
-	DEBUGFUNC("e1000_read_phy_reg_mphy");
-
-	/* Check if mPHY is ready to read/write operations */
-	ready = e1000_is_mphy_ready(hw);
-	if (!ready)
-		return -E1000_ERR_PHY;
-
-	/* Check if mPHY access is disabled and enable it if so */
-	mphy_ctrl = E1000_READ_REG(hw, E1000_MPHY_ADDR_CTRL);
-	if (mphy_ctrl & E1000_MPHY_DIS_ACCESS) {
-		locked = true;
-		ready = e1000_is_mphy_ready(hw);
-		if (!ready)
-			return -E1000_ERR_PHY;
-		mphy_ctrl |= E1000_MPHY_ENA_ACCESS;
-		E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL, mphy_ctrl);
-	}
-
-	/* Set the address that we want to read */
-	ready = e1000_is_mphy_ready(hw);
-	if (!ready)
-		return -E1000_ERR_PHY;
-
-	/* We mask address, because we want to use only current lane */
-	mphy_ctrl = (mphy_ctrl & ~E1000_MPHY_ADDRESS_MASK &
-		~E1000_MPHY_ADDRESS_FNC_OVERRIDE) |
-		(address & E1000_MPHY_ADDRESS_MASK);
-	E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL, mphy_ctrl);
-
-	/* Read data from the address */
-	ready = e1000_is_mphy_ready(hw);
-	if (!ready)
-		return -E1000_ERR_PHY;
-	*data = E1000_READ_REG(hw, E1000_MPHY_DATA);
-
-	/* Disable access to mPHY if it was originally disabled */
-	if (locked) {
-		ready = e1000_is_mphy_ready(hw);
-		if (!ready)
-			return -E1000_ERR_PHY;
-		E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL,
-				E1000_MPHY_DIS_ACCESS);
-	}
-
-	return E1000_SUCCESS;
-}
-
-/**
- *  e1000_write_phy_reg_mphy - Write mPHY control register
- *  @hw: pointer to the HW structure
- *  @address: address to write to
- *  @data: data to write to register at offset
- *  @line_override: used when we want to use different line than default one
- *
- *  Writes data to mPHY control register.
- **/
-s32 e1000_write_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 data,
-			     bool line_override)
-{
-	u32 mphy_ctrl = 0;
-	bool locked = false;
-	bool ready;
-
-	DEBUGFUNC("e1000_write_phy_reg_mphy");
-
-	/* Check if mPHY is ready to read/write operations */
-	ready = e1000_is_mphy_ready(hw);
-	if (!ready)
-		return -E1000_ERR_PHY;
-
-	/* Check if mPHY access is disabled and enable it if so */
-	mphy_ctrl = E1000_READ_REG(hw, E1000_MPHY_ADDR_CTRL);
-	if (mphy_ctrl & E1000_MPHY_DIS_ACCESS) {
-		locked = true;
-		ready = e1000_is_mphy_ready(hw);
-		if (!ready)
-			return -E1000_ERR_PHY;
-		mphy_ctrl |= E1000_MPHY_ENA_ACCESS;
-		E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL, mphy_ctrl);
-	}
-
-	/* Set the address that we want to read */
-	ready = e1000_is_mphy_ready(hw);
-	if (!ready)
-		return -E1000_ERR_PHY;
-
-	/* We mask address, because we want to use only current lane */
-	if (line_override)
-		mphy_ctrl |= E1000_MPHY_ADDRESS_FNC_OVERRIDE;
-	else
-		mphy_ctrl &= ~E1000_MPHY_ADDRESS_FNC_OVERRIDE;
-	mphy_ctrl = (mphy_ctrl & ~E1000_MPHY_ADDRESS_MASK) |
-		(address & E1000_MPHY_ADDRESS_MASK);
-	E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL, mphy_ctrl);
-
-	/* Read data from the address */
-	ready = e1000_is_mphy_ready(hw);
-	if (!ready)
-		return -E1000_ERR_PHY;
-	E1000_WRITE_REG(hw, E1000_MPHY_DATA, data);
-
-	/* Disable access to mPHY if it was originally disabled */
-	if (locked) {
-		ready = e1000_is_mphy_ready(hw);
-		if (!ready)
-			return -E1000_ERR_PHY;
-		E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL,
-				E1000_MPHY_DIS_ACCESS);
-	}
-
-	return E1000_SUCCESS;
-}
-
 /**
  *  e1000_is_mphy_ready - Check if mPHY control register is not busy
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_phy.h b/drivers/net/e1000/base/e1000_phy.h
index 81c5308589..fcd1e09f42 100644
--- a/drivers/net/e1000/base/e1000_phy.h
+++ b/drivers/net/e1000/base/e1000_phy.h
@@ -71,7 +71,6 @@ s32  e1000_write_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 data);
 s32  e1000_read_phy_reg_i2c(struct e1000_hw *hw, u32 offset, u16 *data);
 s32  e1000_write_phy_reg_i2c(struct e1000_hw *hw, u32 offset, u16 data);
 s32  e1000_read_sfp_data_byte(struct e1000_hw *hw, u16 offset, u8 *data);
-s32  e1000_write_sfp_data_byte(struct e1000_hw *hw, u16 offset, u8 data);
 s32  e1000_read_phy_reg_hv(struct e1000_hw *hw, u32 offset, u16 *data);
 s32  e1000_read_phy_reg_hv_locked(struct e1000_hw *hw, u32 offset, u16 *data);
 s32  e1000_read_phy_reg_page_hv(struct e1000_hw *hw, u32 offset, u16 *data);
@@ -86,9 +85,6 @@ s32  e1000_phy_force_speed_duplex_82577(struct e1000_hw *hw);
 s32  e1000_get_cable_length_82577(struct e1000_hw *hw);
 s32  e1000_write_phy_reg_gs40g(struct e1000_hw *hw, u32 offset, u16 data);
 s32  e1000_read_phy_reg_gs40g(struct e1000_hw *hw, u32 offset, u16 *data);
-s32 e1000_read_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 *data);
-s32 e1000_write_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 data,
-			     bool line_override);
 bool e1000_is_mphy_ready(struct e1000_hw *hw);
 
 s32 e1000_read_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr,
diff --git a/drivers/net/e1000/base/e1000_vf.c b/drivers/net/e1000/base/e1000_vf.c
index 44ebe07ee4..9b001f9c2e 100644
--- a/drivers/net/e1000/base/e1000_vf.c
+++ b/drivers/net/e1000/base/e1000_vf.c
@@ -411,25 +411,6 @@ void e1000_update_mc_addr_list_vf(struct e1000_hw *hw,
 	e1000_write_msg_read_ack(hw, msgbuf, E1000_VFMAILBOX_SIZE);
 }
 
-/**
- *  e1000_vfta_set_vf - Set/Unset vlan filter table address
- *  @hw: pointer to the HW structure
- *  @vid: determines the vfta register and bit to set/unset
- *  @set: if true then set bit, else clear bit
- **/
-void e1000_vfta_set_vf(struct e1000_hw *hw, u16 vid, bool set)
-{
-	u32 msgbuf[2];
-
-	msgbuf[0] = E1000_VF_SET_VLAN;
-	msgbuf[1] = vid;
-	/* Setting the 8 bit field MSG INFO to TRUE indicates "add" */
-	if (set)
-		msgbuf[0] |= E1000_VF_SET_VLAN_ADD;
-
-	e1000_write_msg_read_ack(hw, msgbuf, 2);
-}
-
 /** e1000_rlpml_set_vf - Set the maximum receive packet length
  *  @hw: pointer to the HW structure
  *  @max_size: value to assign to max frame size
diff --git a/drivers/net/e1000/base/e1000_vf.h b/drivers/net/e1000/base/e1000_vf.h
index 4bec21c935..ff62970132 100644
--- a/drivers/net/e1000/base/e1000_vf.h
+++ b/drivers/net/e1000/base/e1000_vf.h
@@ -260,7 +260,6 @@ enum e1000_promisc_type {
 
 /* These functions must be implemented by drivers */
 s32  e1000_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value);
-void e1000_vfta_set_vf(struct e1000_hw *, u16, bool);
 void e1000_rlpml_set_vf(struct e1000_hw *, u16);
 s32 e1000_promisc_set_vf(struct e1000_hw *, enum e1000_promisc_type);
 #endif /* _E1000_VF_H_ */
diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index aae68721fb..04fd15c998 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -1064,11 +1064,6 @@ static int ena_com_get_feature(struct ena_com_dev *ena_dev,
 				      feature_ver);
 }
 
-int ena_com_get_current_hash_function(struct ena_com_dev *ena_dev)
-{
-	return ena_dev->rss.hash_func;
-}
-
 static void ena_com_hash_key_fill_default_key(struct ena_com_dev *ena_dev)
 {
 	struct ena_admin_feature_rss_flow_hash_control *hash_key =
@@ -1318,31 +1313,6 @@ static int ena_com_ind_tbl_convert_to_device(struct ena_com_dev *ena_dev)
 	return 0;
 }
 
-static void ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev,
-						 u16 intr_delay_resolution)
-{
-	u16 prev_intr_delay_resolution = ena_dev->intr_delay_resolution;
-
-	if (unlikely(!intr_delay_resolution)) {
-		ena_trc_err("Illegal intr_delay_resolution provided. Going to use default 1 usec resolution\n");
-		intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION;
-	}
-
-	/* update Rx */
-	ena_dev->intr_moder_rx_interval =
-		ena_dev->intr_moder_rx_interval *
-		prev_intr_delay_resolution /
-		intr_delay_resolution;
-
-	/* update Tx */
-	ena_dev->intr_moder_tx_interval =
-		ena_dev->intr_moder_tx_interval *
-		prev_intr_delay_resolution /
-		intr_delay_resolution;
-
-	ena_dev->intr_delay_resolution = intr_delay_resolution;
-}
-
 /*****************************************************************************/
 /*******************************      API       ******************************/
 /*****************************************************************************/
@@ -1703,17 +1673,6 @@ void ena_com_set_admin_polling_mode(struct ena_com_dev *ena_dev, bool polling)
 	ena_dev->admin_queue.polling = polling;
 }
 
-bool ena_com_get_admin_polling_mode(struct ena_com_dev *ena_dev)
-{
-	return ena_dev->admin_queue.polling;
-}
-
-void ena_com_set_admin_auto_polling_mode(struct ena_com_dev *ena_dev,
-					 bool polling)
-{
-	ena_dev->admin_queue.auto_polling = polling;
-}
-
 int ena_com_mmio_reg_read_request_init(struct ena_com_dev *ena_dev)
 {
 	struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read;
@@ -1942,12 +1901,6 @@ void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid)
 	ena_com_io_queue_free(ena_dev, io_sq, io_cq);
 }
 
-int ena_com_get_link_params(struct ena_com_dev *ena_dev,
-			    struct ena_admin_get_feat_resp *resp)
-{
-	return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG, 0);
-}
-
 int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
 			      struct ena_com_dev_get_features_ctx *get_feat_ctx)
 {
@@ -2277,24 +2230,6 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu)
 	return ret;
 }
 
-int ena_com_get_offload_settings(struct ena_com_dev *ena_dev,
-				 struct ena_admin_feature_offload_desc *offload)
-{
-	int ret;
-	struct ena_admin_get_feat_resp resp;
-
-	ret = ena_com_get_feature(ena_dev, &resp,
-				  ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0);
-	if (unlikely(ret)) {
-		ena_trc_err("Failed to get offload capabilities %d\n", ret);
-		return ret;
-	}
-
-	memcpy(offload, &resp.u.offload, sizeof(resp.u.offload));
-
-	return 0;
-}
-
 int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
 {
 	struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue;
@@ -2416,44 +2351,6 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
 	return rc;
 }
 
-int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
-			      enum ena_admin_hash_functions *func)
-{
-	struct ena_rss *rss = &ena_dev->rss;
-	struct ena_admin_get_feat_resp get_resp;
-	int rc;
-
-	if (unlikely(!func))
-		return ENA_COM_INVAL;
-
-	rc = ena_com_get_feature_ex(ena_dev, &get_resp,
-				    ENA_ADMIN_RSS_HASH_FUNCTION,
-				    rss->hash_key_dma_addr,
-				    sizeof(*rss->hash_key), 0);
-	if (unlikely(rc))
-		return rc;
-
-	/* ENA_FFS() returns 1 in case the lsb is set */
-	rss->hash_func = ENA_FFS(get_resp.u.flow_hash_func.selected_func);
-	if (rss->hash_func)
-		rss->hash_func--;
-
-	*func = rss->hash_func;
-
-	return 0;
-}
-
-int ena_com_get_hash_key(struct ena_com_dev *ena_dev, u8 *key)
-{
-	struct ena_admin_feature_rss_flow_hash_control *hash_key =
-		ena_dev->rss.hash_key;
-
-	if (key)
-		memcpy(key, hash_key->key, (size_t)(hash_key->keys_num) << 2);
-
-	return 0;
-}
-
 int ena_com_get_hash_ctrl(struct ena_com_dev *ena_dev,
 			  enum ena_admin_flow_hash_proto proto,
 			  u16 *fields)
@@ -2582,43 +2479,6 @@ int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev)
 	return rc;
 }
 
-int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev,
-			   enum ena_admin_flow_hash_proto proto,
-			   u16 hash_fields)
-{
-	struct ena_rss *rss = &ena_dev->rss;
-	struct ena_admin_feature_rss_hash_control *hash_ctrl = rss->hash_ctrl;
-	u16 supported_fields;
-	int rc;
-
-	if (proto >= ENA_ADMIN_RSS_PROTO_NUM) {
-		ena_trc_err("Invalid proto num (%u)\n", proto);
-		return ENA_COM_INVAL;
-	}
-
-	/* Get the ctrl table */
-	rc = ena_com_get_hash_ctrl(ena_dev, proto, NULL);
-	if (unlikely(rc))
-		return rc;
-
-	/* Make sure all the fields are supported */
-	supported_fields = hash_ctrl->supported_fields[proto].fields;
-	if ((hash_fields & supported_fields) != hash_fields) {
-		ena_trc_err("proto %d doesn't support the required fields %x. supports only: %x\n",
-			    proto, hash_fields, supported_fields);
-	}
-
-	hash_ctrl->selected_fields[proto].fields = hash_fields;
-
-	rc = ena_com_set_hash_ctrl(ena_dev);
-
-	/* In case of failure, restore the old hash ctrl */
-	if (unlikely(rc))
-		ena_com_get_hash_ctrl(ena_dev, 0, NULL);
-
-	return 0;
-}
-
 int ena_com_indirect_table_fill_entry(struct ena_com_dev *ena_dev,
 				      u16 entry_idx, u16 entry_value)
 {
@@ -2874,88 +2734,6 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev)
 	return ret;
 }
 
-/* Interrupt moderation */
-bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev)
-{
-	return ena_com_check_supported_feature_id(ena_dev,
-						  ENA_ADMIN_INTERRUPT_MODERATION);
-}
-
-static int ena_com_update_nonadaptive_moderation_interval(u32 coalesce_usecs,
-							  u32 intr_delay_resolution,
-							  u32 *intr_moder_interval)
-{
-	if (!intr_delay_resolution) {
-		ena_trc_err("Illegal interrupt delay granularity value\n");
-		return ENA_COM_FAULT;
-	}
-
-	*intr_moder_interval = coalesce_usecs / intr_delay_resolution;
-
-	return 0;
-}
-
-
-int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev,
-						      u32 tx_coalesce_usecs)
-{
-	return ena_com_update_nonadaptive_moderation_interval(tx_coalesce_usecs,
-							      ena_dev->intr_delay_resolution,
-							      &ena_dev->intr_moder_tx_interval);
-}
-
-int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev,
-						      u32 rx_coalesce_usecs)
-{
-	return ena_com_update_nonadaptive_moderation_interval(rx_coalesce_usecs,
-							      ena_dev->intr_delay_resolution,
-							      &ena_dev->intr_moder_rx_interval);
-}
-
-int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev)
-{
-	struct ena_admin_get_feat_resp get_resp;
-	u16 delay_resolution;
-	int rc;
-
-	rc = ena_com_get_feature(ena_dev, &get_resp,
-				 ENA_ADMIN_INTERRUPT_MODERATION, 0);
-
-	if (rc) {
-		if (rc == ENA_COM_UNSUPPORTED) {
-			ena_trc_dbg("Feature %d isn't supported\n",
-				    ENA_ADMIN_INTERRUPT_MODERATION);
-			rc = 0;
-		} else {
-			ena_trc_err("Failed to get interrupt moderation admin cmd. rc: %d\n",
-				    rc);
-		}
-
-		/* no moderation supported, disable adaptive support */
-		ena_com_disable_adaptive_moderation(ena_dev);
-		return rc;
-	}
-
-	/* if moderation is supported by device we set adaptive moderation */
-	delay_resolution = get_resp.u.intr_moderation.intr_delay_resolution;
-	ena_com_update_intr_delay_resolution(ena_dev, delay_resolution);
-
-	/* Disable adaptive moderation by default - can be enabled later */
-	ena_com_disable_adaptive_moderation(ena_dev);
-
-	return 0;
-}
-
-unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev)
-{
-	return ena_dev->intr_moder_tx_interval;
-}
-
-unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev)
-{
-	return ena_dev->intr_moder_rx_interval;
-}
-
 int ena_com_config_dev_mode(struct ena_com_dev *ena_dev,
 			    struct ena_admin_feature_llq_desc *llq_features,
 			    struct ena_llq_configurations *llq_default_cfg)
diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h
index 64d8f247cb..f82c9f1876 100644
--- a/drivers/net/ena/base/ena_com.h
+++ b/drivers/net/ena/base/ena_com.h
@@ -483,29 +483,6 @@ bool ena_com_get_admin_running_state(struct ena_com_dev *ena_dev);
  */
 void ena_com_set_admin_polling_mode(struct ena_com_dev *ena_dev, bool polling);
 
-/* ena_com_get_admin_polling_mode - Get the admin completion queue polling mode
- * @ena_dev: ENA communication layer struct
- *
- * Get the admin completion mode.
- * If polling mode is on, ena_com_execute_admin_command will perform a
- * polling on the admin completion queue for the commands completion,
- * otherwise it will wait on wait event.
- *
- * @return state
- */
-bool ena_com_get_admin_polling_mode(struct ena_com_dev *ena_dev);
-
-/* ena_com_set_admin_auto_polling_mode - Enable autoswitch to polling mode
- * @ena_dev: ENA communication layer struct
- * @polling: Enable/Disable polling mode
- *
- * Set the autopolling mode.
- * If autopolling is on:
- * In case of missing interrupt when data is available switch to polling.
- */
-void ena_com_set_admin_auto_polling_mode(struct ena_com_dev *ena_dev,
-					 bool polling);
-
 /* ena_com_admin_q_comp_intr_handler - admin queue interrupt handler
  * @ena_dev: ENA communication layer struct
  *
@@ -552,18 +529,6 @@ void ena_com_wait_for_abort_completion(struct ena_com_dev *ena_dev);
  */
 int ena_com_validate_version(struct ena_com_dev *ena_dev);
 
-/* ena_com_get_link_params - Retrieve physical link parameters.
- * @ena_dev: ENA communication layer struct
- * @resp: Link parameters
- *
- * Retrieve the physical link parameters,
- * like speed, auto-negotiation and full duplex support.
- *
- * @return - 0 on Success negative value otherwise.
- */
-int ena_com_get_link_params(struct ena_com_dev *ena_dev,
-			    struct ena_admin_get_feat_resp *resp);
-
 /* ena_com_get_dma_width - Retrieve physical dma address width the device
  * supports.
  * @ena_dev: ENA communication layer struct
@@ -619,15 +584,6 @@ int ena_com_get_eni_stats(struct ena_com_dev *ena_dev,
  */
 int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu);
 
-/* ena_com_get_offload_settings - Retrieve the device offloads capabilities
- * @ena_dev: ENA communication layer struct
- * @offlad: offload return value
- *
- * @return: 0 on Success and negative value otherwise.
- */
-int ena_com_get_offload_settings(struct ena_com_dev *ena_dev,
-				 struct ena_admin_feature_offload_desc *offload);
-
 /* ena_com_rss_init - Init RSS
  * @ena_dev: ENA communication layer struct
  * @log_size: indirection log size
@@ -647,14 +603,6 @@ int ena_com_rss_init(struct ena_com_dev *ena_dev, u16 log_size);
  */
 void ena_com_rss_destroy(struct ena_com_dev *ena_dev);
 
-/* ena_com_get_current_hash_function - Get RSS hash function
- * @ena_dev: ENA communication layer struct
- *
- * Return the current hash function.
- * @return: 0 or one of the ena_admin_hash_functions values.
- */
-int ena_com_get_current_hash_function(struct ena_com_dev *ena_dev);
-
 /* ena_com_fill_hash_function - Fill RSS hash function
  * @ena_dev: ENA communication layer struct
  * @func: The hash function (Toeplitz or crc)
@@ -686,48 +634,6 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
  */
 int ena_com_set_hash_function(struct ena_com_dev *ena_dev);
 
-/* ena_com_get_hash_function - Retrieve the hash function from the device.
- * @ena_dev: ENA communication layer struct
- * @func: hash function
- *
- * Retrieve the hash function from the device.
- *
- * @note: If the caller called ena_com_fill_hash_function but didn't flush
- * it to the device, the new configuration will be lost.
- *
- * @return: 0 on Success and negative value otherwise.
- */
-int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
-			      enum ena_admin_hash_functions *func);
-
-/* ena_com_get_hash_key - Retrieve the hash key
- * @ena_dev: ENA communication layer struct
- * @key: hash key
- *
- * Retrieve the hash key.
- *
- * @note: If the caller called ena_com_fill_hash_key but didn't flush
- * it to the device, the new configuration will be lost.
- *
- * @return: 0 on Success and negative value otherwise.
- */
-int ena_com_get_hash_key(struct ena_com_dev *ena_dev, u8 *key);
-/* ena_com_fill_hash_ctrl - Fill RSS hash control
- * @ena_dev: ENA communication layer struct.
- * @proto: The protocol to configure.
- * @hash_fields: bit mask of ena_admin_flow_hash_fields
- *
- * Fill the ena_dev resources with the desire hash control (the ethernet
- * fields that take part of the hash) for a specific protocol.
- * To flush the hash control to the device, the caller should call
- * ena_com_set_hash_ctrl.
- *
- * @return: 0 on Success and negative value otherwise.
- */
-int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev,
-			   enum ena_admin_flow_hash_proto proto,
-			   u16 hash_fields);
-
 /* ena_com_set_hash_ctrl - Flush the hash control resources to the device.
  * @ena_dev: ENA communication layer struct
  *
@@ -884,56 +790,6 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue,
 				  struct ena_admin_acq_entry *cmd_comp,
 				  size_t cmd_comp_size);
 
-/* ena_com_init_interrupt_moderation - Init interrupt moderation
- * @ena_dev: ENA communication layer struct
- *
- * @return - 0 on success, negative value on failure.
- */
-int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev);
-
-/* ena_com_interrupt_moderation_supported - Return if interrupt moderation
- * capability is supported by the device.
- *
- * @return - supported or not.
- */
-bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev);
-
-/* ena_com_update_nonadaptive_moderation_interval_tx - Update the
- * non-adaptive interval in Tx direction.
- * @ena_dev: ENA communication layer struct
- * @tx_coalesce_usecs: Interval in usec.
- *
- * @return - 0 on success, negative value on failure.
- */
-int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev,
-						      u32 tx_coalesce_usecs);
-
-/* ena_com_update_nonadaptive_moderation_interval_rx - Update the
- * non-adaptive interval in Rx direction.
- * @ena_dev: ENA communication layer struct
- * @rx_coalesce_usecs: Interval in usec.
- *
- * @return - 0 on success, negative value on failure.
- */
-int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev,
-						      u32 rx_coalesce_usecs);
-
-/* ena_com_get_nonadaptive_moderation_interval_tx - Retrieve the
- * non-adaptive interval in Tx direction.
- * @ena_dev: ENA communication layer struct
- *
- * @return - interval in usec
- */
-unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev);
-
-/* ena_com_get_nonadaptive_moderation_interval_rx - Retrieve the
- * non-adaptive interval in Rx direction.
- * @ena_dev: ENA communication layer struct
- *
- * @return - interval in usec
- */
-unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev);
-
 /* ena_com_config_dev_mode - Configure the placement policy of the device.
  * @ena_dev: ENA communication layer struct
  * @llq_features: LLQ feature descriptor, retrieve via
diff --git a/drivers/net/ena/base/ena_eth_com.c b/drivers/net/ena/base/ena_eth_com.c
index a35d92fbd3..05ab030d07 100644
--- a/drivers/net/ena/base/ena_eth_com.c
+++ b/drivers/net/ena/base/ena_eth_com.c
@@ -613,14 +613,3 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
 
 	return ena_com_sq_update_tail(io_sq);
 }
-
-bool ena_com_cq_empty(struct ena_com_io_cq *io_cq)
-{
-	struct ena_eth_io_rx_cdesc_base *cdesc;
-
-	cdesc = ena_com_get_next_rx_cdesc(io_cq);
-	if (cdesc)
-		return false;
-	else
-		return true;
-}
diff --git a/drivers/net/ena/base/ena_eth_com.h b/drivers/net/ena/base/ena_eth_com.h
index 7dda16cd9f..3799f08bf4 100644
--- a/drivers/net/ena/base/ena_eth_com.h
+++ b/drivers/net/ena/base/ena_eth_com.h
@@ -64,8 +64,6 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
 			       struct ena_com_buf *ena_buf,
 			       u16 req_id);
 
-bool ena_com_cq_empty(struct ena_com_io_cq *io_cq);
-
 static inline void ena_com_unmask_intr(struct ena_com_io_cq *io_cq,
 				       struct ena_eth_io_intr_reg *intr_reg)
 {
diff --git a/drivers/net/fm10k/base/fm10k_api.c b/drivers/net/fm10k/base/fm10k_api.c
index dfb50a10d1..631babcdd6 100644
--- a/drivers/net/fm10k/base/fm10k_api.c
+++ b/drivers/net/fm10k/base/fm10k_api.c
@@ -140,34 +140,6 @@ s32 fm10k_start_hw(struct fm10k_hw *hw)
 			       FM10K_NOT_IMPLEMENTED);
 }
 
-/**
- *  fm10k_get_bus_info - Set PCI bus info
- *  @hw: pointer to hardware structure
- *
- *  Sets the PCI bus info (speed, width, type) within the fm10k_hw structure
- **/
-s32 fm10k_get_bus_info(struct fm10k_hw *hw)
-{
-	return fm10k_call_func(hw, hw->mac.ops.get_bus_info, (hw),
-			       FM10K_NOT_IMPLEMENTED);
-}
-
-#ifndef NO_IS_SLOT_APPROPRIATE_CHECK
-/**
- *  fm10k_is_slot_appropriate - Indicate appropriate slot for this SKU
- *  @hw: pointer to hardware structure
- *
- *  Looks at the PCIe bus info to confirm whether or not this slot can support
- *  the necessary bandwidth for this device.
- **/
-bool fm10k_is_slot_appropriate(struct fm10k_hw *hw)
-{
-	if (hw->mac.ops.is_slot_appropriate)
-		return hw->mac.ops.is_slot_appropriate(hw);
-	return true;
-}
-
-#endif
 /**
  *  fm10k_update_vlan - Clear VLAN ID to VLAN filter table
  *  @hw: pointer to hardware structure
@@ -233,36 +205,6 @@ void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats)
 	}
 }
 
-/**
- *  fm10k_configure_dglort_map - Configures GLORT entry and queues
- *  @hw: pointer to hardware structure
- *  @dglort: pointer to dglort configuration structure
- *
- *  Reads the configuration structure contained in dglort_cfg and uses
- *  that information to then populate a DGLORTMAP/DEC entry and the queues
- *  to which it has been assigned.
- **/
-s32 fm10k_configure_dglort_map(struct fm10k_hw *hw,
-			       struct fm10k_dglort_cfg *dglort)
-{
-	return fm10k_call_func(hw, hw->mac.ops.configure_dglort_map,
-			       (hw, dglort), FM10K_NOT_IMPLEMENTED);
-}
-
-/**
- *  fm10k_set_dma_mask - Configures PhyAddrSpace to limit DMA to system
- *  @hw: pointer to hardware structure
- *  @dma_mask: 64 bit DMA mask required for platform
- *
- *  This function configures the endpoint to limit the access to memory
- *  beyond what is physically in the system.
- **/
-void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask)
-{
-	if (hw->mac.ops.set_dma_mask)
-		hw->mac.ops.set_dma_mask(hw, dma_mask);
-}
-
 /**
  *  fm10k_get_fault - Record a fault in one of the interface units
  *  @hw: pointer to hardware structure
@@ -298,49 +240,3 @@ s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport,
 			       (hw, lport, mac, vid, add, flags),
 			       FM10K_NOT_IMPLEMENTED);
 }
-
-/**
- *  fm10k_update_mc_addr - Update device multicast address
- *  @hw: pointer to the HW structure
- *  @lport: logical port ID to update - unused
- *  @mac: MAC address to add/remove from table
- *  @vid: VLAN ID to add/remove from table
- *  @add: Indicates if this is an add or remove operation
- *
- *  This function is used to add or remove multicast MAC addresses
- **/
-s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport,
-			 const u8 *mac, u16 vid, bool add)
-{
-	return fm10k_call_func(hw, hw->mac.ops.update_mc_addr,
-			       (hw, lport, mac, vid, add),
-			       FM10K_NOT_IMPLEMENTED);
-}
-
-/**
- *  fm10k_adjust_systime - Adjust systime frequency
- *  @hw: pointer to hardware structure
- *  @ppb: adjustment rate in parts per billion
- *
- *  This function is meant to update the frequency of the clock represented
- *  by the SYSTIME register.
- **/
-s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb)
-{
-	return fm10k_call_func(hw, hw->mac.ops.adjust_systime,
-			       (hw, ppb), FM10K_NOT_IMPLEMENTED);
-}
-
-/**
- *  fm10k_notify_offset - Notify switch of change in PTP offset
- *  @hw: pointer to hardware structure
- *  @offset: 64bit unsigned offset from hardware SYSTIME value
- *
- *  This function is meant to notify switch of change in the PTP offset for
- *  the hardware SYSTIME registers.
- **/
-s32 fm10k_notify_offset(struct fm10k_hw *hw, u64 offset)
-{
-	return fm10k_call_func(hw, hw->mac.ops.notify_offset,
-			       (hw, offset), FM10K_NOT_IMPLEMENTED);
-}
diff --git a/drivers/net/fm10k/base/fm10k_api.h b/drivers/net/fm10k/base/fm10k_api.h
index d9593bba00..4ffe41cd08 100644
--- a/drivers/net/fm10k/base/fm10k_api.h
+++ b/drivers/net/fm10k/base/fm10k_api.h
@@ -14,22 +14,11 @@ s32 fm10k_init_hw(struct fm10k_hw *hw);
 s32 fm10k_stop_hw(struct fm10k_hw *hw);
 s32 fm10k_start_hw(struct fm10k_hw *hw);
 s32 fm10k_init_shared_code(struct fm10k_hw *hw);
-s32 fm10k_get_bus_info(struct fm10k_hw *hw);
-#ifndef NO_IS_SLOT_APPROPRIATE_CHECK
-bool fm10k_is_slot_appropriate(struct fm10k_hw *hw);
-#endif
 s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set);
 s32 fm10k_read_mac_addr(struct fm10k_hw *hw);
 void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats);
 void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats);
-s32 fm10k_configure_dglort_map(struct fm10k_hw *hw,
-			       struct fm10k_dglort_cfg *dglort);
-void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask);
 s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault);
 s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport,
 			  const u8 *mac, u16 vid, bool add, u8 flags);
-s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport,
-			 const u8 *mac, u16 vid, bool add);
-s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb);
-s32 fm10k_notify_offset(struct fm10k_hw *hw, u64 offset);
 #endif /* _FM10K_API_H_ */
diff --git a/drivers/net/fm10k/base/fm10k_tlv.c b/drivers/net/fm10k/base/fm10k_tlv.c
index adffc1bcef..72b0ffd4cb 100644
--- a/drivers/net/fm10k/base/fm10k_tlv.c
+++ b/drivers/net/fm10k/base/fm10k_tlv.c
@@ -24,59 +24,6 @@ s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id)
 	return FM10K_SUCCESS;
 }
 
-/**
- *  fm10k_tlv_attr_put_null_string - Place null terminated string on message
- *  @msg: Pointer to message block
- *  @attr_id: Attribute ID
- *  @string: Pointer to string to be stored in attribute
- *
- *  This function will reorder a string to be CPU endian and store it in
- *  the attribute buffer.  It will return success if provided with a valid
- *  pointers.
- **/
-static s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id,
-					  const unsigned char *string)
-{
-	u32 attr_data = 0, len = 0;
-	u32 *attr;
-
-	DEBUGFUNC("fm10k_tlv_attr_put_null_string");
-
-	/* verify pointers are not NULL */
-	if (!string || !msg)
-		return FM10K_ERR_PARAM;
-
-	attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
-
-	/* copy string into local variable and then write to msg */
-	do {
-		/* write data to message */
-		if (len && !(len % 4)) {
-			attr[len / 4] = attr_data;
-			attr_data = 0;
-		}
-
-		/* record character to offset location */
-		attr_data |= (u32)(*string) << (8 * (len % 4));
-		len++;
-
-		/* test for NULL and then increment */
-	} while (*(string++));
-
-	/* write last piece of data to message */
-	attr[(len + 3) / 4] = attr_data;
-
-	/* record attribute header, update message length */
-	len <<= FM10K_TLV_LEN_SHIFT;
-	attr[0] = len | attr_id;
-
-	/* add header length to length */
-	len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
-	*msg += FM10K_TLV_LEN_ALIGN(len);
-
-	return FM10K_SUCCESS;
-}
-
 /**
  *  fm10k_tlv_attr_get_null_string - Get null terminated string from attribute
  *  @attr: Pointer to attribute
@@ -346,68 +293,6 @@ s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len)
 	return FM10K_SUCCESS;
 }
 
-/**
- *  fm10k_tlv_attr_nest_start - Start a set of nested attributes
- *  @msg: Pointer to message block
- *  @attr_id: Attribute ID
- *
- *  This function will mark off a new nested region for encapsulating
- *  a given set of attributes.  The idea is if you wish to place a secondary
- *  structure within the message this mechanism allows for that.  The
- *  function will return NULL on failure, and a pointer to the start
- *  of the nested attributes on success.
- **/
-static u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id)
-{
-	u32 *attr;
-
-	DEBUGFUNC("fm10k_tlv_attr_nest_start");
-
-	/* verify pointer is not NULL */
-	if (!msg)
-		return NULL;
-
-	attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
-
-	attr[0] = attr_id;
-
-	/* return pointer to nest header */
-	return attr;
-}
-
-/**
- *  fm10k_tlv_attr_nest_stop - Stop a set of nested attributes
- *  @msg: Pointer to message block
- *
- *  This function closes off an existing set of nested attributes.  The
- *  message pointer should be pointing to the parent of the nest.  So in
- *  the case of a nest within the nest this would be the outer nest pointer.
- *  This function will return success provided all pointers are valid.
- **/
-static s32 fm10k_tlv_attr_nest_stop(u32 *msg)
-{
-	u32 *attr;
-	u32 len;
-
-	DEBUGFUNC("fm10k_tlv_attr_nest_stop");
-
-	/* verify pointer is not NULL */
-	if (!msg)
-		return FM10K_ERR_PARAM;
-
-	/* locate the nested header and retrieve its length */
-	attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
-	len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT;
-
-	/* only include nest if data was added to it */
-	if (len) {
-		len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
-		*msg += len;
-	}
-
-	return FM10K_SUCCESS;
-}
-
 /**
  *  fm10k_tlv_attr_validate - Validate attribute metadata
  *  @attr: Pointer to attribute
@@ -661,74 +546,6 @@ const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = {
 	FM10K_TLV_ATTR_LAST
 };
 
-/**
- *  fm10k_tlv_msg_test_generate_data - Stuff message with data
- *  @msg: Pointer to message
- *  @attr_flags: List of flags indicating what attributes to add
- *
- *  This function is meant to load a message buffer with attribute data
- **/
-STATIC void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags)
-{
-	DEBUGFUNC("fm10k_tlv_msg_test_generate_data");
-
-	if (attr_flags & BIT(FM10K_TEST_MSG_STRING))
-		fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING,
-					       test_str);
-	if (attr_flags & BIT(FM10K_TEST_MSG_MAC_ADDR))
-		fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR,
-					    test_mac, test_vlan);
-	if (attr_flags & BIT(FM10K_TEST_MSG_U8))
-		fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8,  test_u8);
-	if (attr_flags & BIT(FM10K_TEST_MSG_U16))
-		fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16);
-	if (attr_flags & BIT(FM10K_TEST_MSG_U32))
-		fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32);
-	if (attr_flags & BIT(FM10K_TEST_MSG_U64))
-		fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64);
-	if (attr_flags & BIT(FM10K_TEST_MSG_S8))
-		fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8,  test_s8);
-	if (attr_flags & BIT(FM10K_TEST_MSG_S16))
-		fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16);
-	if (attr_flags & BIT(FM10K_TEST_MSG_S32))
-		fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32);
-	if (attr_flags & BIT(FM10K_TEST_MSG_S64))
-		fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64);
-	if (attr_flags & BIT(FM10K_TEST_MSG_LE_STRUCT))
-		fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT,
-					     test_le, 8);
-}
-
-/**
- *  fm10k_tlv_msg_test_create - Create a test message testing all attributes
- *  @msg: Pointer to message
- *  @attr_flags: List of flags indicating what attributes to add
- *
- *  This function is meant to load a message buffer with all attribute types
- *  including a nested attribute.
- **/
-void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags)
-{
-	u32 *nest = NULL;
-
-	DEBUGFUNC("fm10k_tlv_msg_test_create");
-
-	fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST);
-
-	fm10k_tlv_msg_test_generate_data(msg, attr_flags);
-
-	/* check for nested attributes */
-	attr_flags >>= FM10K_TEST_MSG_NESTED;
-
-	if (attr_flags) {
-		nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED);
-
-		fm10k_tlv_msg_test_generate_data(nest, attr_flags);
-
-		fm10k_tlv_attr_nest_stop(msg);
-	}
-}
-
 /**
  *  fm10k_tlv_msg_test - Validate all results on test message receive
  *  @hw: Pointer to hardware structure
diff --git a/drivers/net/fm10k/base/fm10k_tlv.h b/drivers/net/fm10k/base/fm10k_tlv.h
index af2e4c76a3..1665709d3d 100644
--- a/drivers/net/fm10k/base/fm10k_tlv.h
+++ b/drivers/net/fm10k/base/fm10k_tlv.h
@@ -155,7 +155,6 @@ enum fm10k_tlv_test_attr_id {
 };
 
 extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[];
-void fm10k_tlv_msg_test_create(u32 *, u32);
 s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
 
 #define FM10K_TLV_MSG_TEST_HANDLER(func) \
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index e20bb9ac35..b93000a2aa 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -1115,32 +1115,6 @@ enum i40e_status_code i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr)
 	return status;
 }
 
-/**
- * i40e_get_port_mac_addr - get Port MAC address
- * @hw: pointer to the HW structure
- * @mac_addr: pointer to Port MAC address
- *
- * Reads the adapter's Port MAC address
- **/
-enum i40e_status_code i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr)
-{
-	struct i40e_aqc_mac_address_read_data addrs;
-	enum i40e_status_code status;
-	u16 flags = 0;
-
-	status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL);
-	if (status)
-		return status;
-
-	if (flags & I40E_AQC_PORT_ADDR_VALID)
-		i40e_memcpy(mac_addr, &addrs.port_mac, sizeof(addrs.port_mac),
-			I40E_NONDMA_TO_NONDMA);
-	else
-		status = I40E_ERR_INVALID_MAC_ADDR;
-
-	return status;
-}
-
 /**
  * i40e_pre_tx_queue_cfg - pre tx queue configure
  * @hw: pointer to the HW structure
@@ -1173,92 +1147,6 @@ void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable)
 	wr32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block), reg_val);
 }
 
-/**
- * i40e_get_san_mac_addr - get SAN MAC address
- * @hw: pointer to the HW structure
- * @mac_addr: pointer to SAN MAC address
- *
- * Reads the adapter's SAN MAC address from NVM
- **/
-enum i40e_status_code i40e_get_san_mac_addr(struct i40e_hw *hw,
-					    u8 *mac_addr)
-{
-	struct i40e_aqc_mac_address_read_data addrs;
-	enum i40e_status_code status;
-	u16 flags = 0;
-
-	status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL);
-	if (status)
-		return status;
-
-	if (flags & I40E_AQC_SAN_ADDR_VALID)
-		i40e_memcpy(mac_addr, &addrs.pf_san_mac, sizeof(addrs.pf_san_mac),
-			I40E_NONDMA_TO_NONDMA);
-	else
-		status = I40E_ERR_INVALID_MAC_ADDR;
-
-	return status;
-}
-
-/**
- *  i40e_read_pba_string - Reads part number string from EEPROM
- *  @hw: pointer to hardware structure
- *  @pba_num: stores the part number string from the EEPROM
- *  @pba_num_size: part number string buffer length
- *
- *  Reads the part number string from the EEPROM.
- **/
-enum i40e_status_code i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
-					    u32 pba_num_size)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	u16 pba_word = 0;
-	u16 pba_size = 0;
-	u16 pba_ptr = 0;
-	u16 i = 0;
-
-	status = i40e_read_nvm_word(hw, I40E_SR_PBA_FLAGS, &pba_word);
-	if ((status != I40E_SUCCESS) || (pba_word != 0xFAFA)) {
-		DEBUGOUT("Failed to read PBA flags or flag is invalid.\n");
-		return status;
-	}
-
-	status = i40e_read_nvm_word(hw, I40E_SR_PBA_BLOCK_PTR, &pba_ptr);
-	if (status != I40E_SUCCESS) {
-		DEBUGOUT("Failed to read PBA Block pointer.\n");
-		return status;
-	}
-
-	status = i40e_read_nvm_word(hw, pba_ptr, &pba_size);
-	if (status != I40E_SUCCESS) {
-		DEBUGOUT("Failed to read PBA Block size.\n");
-		return status;
-	}
-
-	/* Subtract one to get PBA word count (PBA Size word is included in
-	 * total size)
-	 */
-	pba_size--;
-	if (pba_num_size < (((u32)pba_size * 2) + 1)) {
-		DEBUGOUT("Buffer to small for PBA data.\n");
-		return I40E_ERR_PARAM;
-	}
-
-	for (i = 0; i < pba_size; i++) {
-		status = i40e_read_nvm_word(hw, (pba_ptr + 1) + i, &pba_word);
-		if (status != I40E_SUCCESS) {
-			DEBUGOUT1("Failed to read PBA Block word %d.\n", i);
-			return status;
-		}
-
-		pba_num[(i * 2)] = (pba_word >> 8) & 0xFF;
-		pba_num[(i * 2) + 1] = pba_word & 0xFF;
-	}
-	pba_num[(pba_size * 2)] = '\0';
-
-	return status;
-}
-
 /**
  * i40e_get_media_type - Gets media type
  * @hw: pointer to the hardware structure
@@ -1970,36 +1858,6 @@ enum i40e_status_code i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_aq_set_link_restart_an
- * @hw: pointer to the hw struct
- * @enable_link: if true: enable link, if false: disable link
- * @cmd_details: pointer to command details structure or NULL
- *
- * Sets up the link and restarts the Auto-Negotiation over the link.
- **/
-enum i40e_status_code i40e_aq_set_link_restart_an(struct i40e_hw *hw,
-		bool enable_link, struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_link_restart_an *cmd =
-		(struct i40e_aqc_set_link_restart_an *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_set_link_restart_an);
-
-	cmd->command = I40E_AQ_PHY_RESTART_AN;
-	if (enable_link)
-		cmd->command |= I40E_AQ_PHY_LINK_ENABLE;
-	else
-		cmd->command &= ~I40E_AQ_PHY_LINK_ENABLE;
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
 /**
  * i40e_aq_get_link_info
  * @hw: pointer to the hw struct
@@ -2127,98 +1985,6 @@ enum i40e_status_code i40e_aq_set_phy_int_mask(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_aq_get_local_advt_reg
- * @hw: pointer to the hw struct
- * @advt_reg: local AN advertisement register value
- * @cmd_details: pointer to command details structure or NULL
- *
- * Get the Local AN advertisement register value.
- **/
-enum i40e_status_code i40e_aq_get_local_advt_reg(struct i40e_hw *hw,
-				u64 *advt_reg,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_an_advt_reg *resp =
-		(struct i40e_aqc_an_advt_reg *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_get_local_advt_reg);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (status != I40E_SUCCESS)
-		goto aq_get_local_advt_reg_exit;
-
-	*advt_reg = (u64)(LE16_TO_CPU(resp->local_an_reg1)) << 32;
-	*advt_reg |= LE32_TO_CPU(resp->local_an_reg0);
-
-aq_get_local_advt_reg_exit:
-	return status;
-}
-
-/**
- * i40e_aq_set_local_advt_reg
- * @hw: pointer to the hw struct
- * @advt_reg: local AN advertisement register value
- * @cmd_details: pointer to command details structure or NULL
- *
- * Get the Local AN advertisement register value.
- **/
-enum i40e_status_code i40e_aq_set_local_advt_reg(struct i40e_hw *hw,
-				u64 advt_reg,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_an_advt_reg *cmd =
-		(struct i40e_aqc_an_advt_reg *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_get_local_advt_reg);
-
-	cmd->local_an_reg0 = CPU_TO_LE32(I40E_LO_DWORD(advt_reg));
-	cmd->local_an_reg1 = CPU_TO_LE16(I40E_HI_DWORD(advt_reg));
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_get_partner_advt
- * @hw: pointer to the hw struct
- * @advt_reg: AN partner advertisement register value
- * @cmd_details: pointer to command details structure or NULL
- *
- * Get the link partner AN advertisement register value.
- **/
-enum i40e_status_code i40e_aq_get_partner_advt(struct i40e_hw *hw,
-				u64 *advt_reg,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_an_advt_reg *resp =
-		(struct i40e_aqc_an_advt_reg *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_get_partner_advt);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (status != I40E_SUCCESS)
-		goto aq_get_partner_advt_exit;
-
-	*advt_reg = (u64)(LE16_TO_CPU(resp->local_an_reg1)) << 32;
-	*advt_reg |= LE32_TO_CPU(resp->local_an_reg0);
-
-aq_get_partner_advt_exit:
-	return status;
-}
-
 /**
  * i40e_aq_set_lb_modes
  * @hw: pointer to the hw struct
@@ -2246,32 +2012,6 @@ enum i40e_status_code i40e_aq_set_lb_modes(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_aq_set_phy_debug
- * @hw: pointer to the hw struct
- * @cmd_flags: debug command flags
- * @cmd_details: pointer to command details structure or NULL
- *
- * Reset the external PHY.
- **/
-enum i40e_status_code i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_phy_debug *cmd =
-		(struct i40e_aqc_set_phy_debug *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_set_phy_debug);
-
-	cmd->command_flags = cmd_flags;
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
 /**
  * i40e_hw_ver_ge
  * @hw: pointer to the hw struct
@@ -2333,62 +2073,6 @@ enum i40e_status_code i40e_aq_add_vsi(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_aq_set_default_vsi
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_set_default_vsi(struct i40e_hw *hw,
-				u16 seid,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
-		(struct i40e_aqc_set_vsi_promiscuous_modes *)
-		&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					i40e_aqc_opc_set_vsi_promiscuous_modes);
-
-	cmd->promiscuous_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT);
-	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT);
-	cmd->seid = CPU_TO_LE16(seid);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_clear_default_vsi
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_clear_default_vsi(struct i40e_hw *hw,
-				u16 seid,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
-		(struct i40e_aqc_set_vsi_promiscuous_modes *)
-		&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					i40e_aqc_opc_set_vsi_promiscuous_modes);
-
-	cmd->promiscuous_flags = CPU_TO_LE16(0);
-	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT);
-	cmd->seid = CPU_TO_LE16(seid);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
 /**
  * i40e_aq_set_vsi_unicast_promiscuous
  * @hw: pointer to the hw struct
@@ -2463,36 +2147,34 @@ enum i40e_status_code i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
 }
 
 /**
-* i40e_aq_set_vsi_full_promiscuous
-* @hw: pointer to the hw struct
-* @seid: VSI number
-* @set: set promiscuous enable/disable
-* @cmd_details: pointer to command details structure or NULL
-**/
-enum i40e_status_code i40e_aq_set_vsi_full_promiscuous(struct i40e_hw *hw,
-				u16 seid, bool set,
+ * i40e_aq_set_vsi_broadcast
+ * @hw: pointer to the hw struct
+ * @seid: vsi number
+ * @set_filter: true to set filter, false to clear filter
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set or clear the broadcast promiscuous flag (filter) for a given VSI.
+ **/
+enum i40e_status_code i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
+				u16 seid, bool set_filter,
 				struct i40e_asq_cmd_details *cmd_details)
 {
 	struct i40e_aq_desc desc;
 	struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
 		(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
 	enum i40e_status_code status;
-	u16 flags = 0;
 
 	i40e_fill_default_direct_cmd_desc(&desc,
-		i40e_aqc_opc_set_vsi_promiscuous_modes);
-
-	if (set)
-		flags = I40E_AQC_SET_VSI_PROMISC_UNICAST   |
-			I40E_AQC_SET_VSI_PROMISC_MULTICAST |
-			I40E_AQC_SET_VSI_PROMISC_BROADCAST;
-
-	cmd->promiscuous_flags = CPU_TO_LE16(flags);
+					i40e_aqc_opc_set_vsi_promiscuous_modes);
 
-	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_UNICAST   |
-				       I40E_AQC_SET_VSI_PROMISC_MULTICAST |
-				       I40E_AQC_SET_VSI_PROMISC_BROADCAST);
+	if (set_filter)
+		cmd->promiscuous_flags
+			    |= CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
+	else
+		cmd->promiscuous_flags
+			    &= CPU_TO_LE16(~I40E_AQC_SET_VSI_PROMISC_BROADCAST);
 
+	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
 	cmd->seid = CPU_TO_LE16(seid);
 	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 
@@ -2500,15 +2182,14 @@ enum i40e_status_code i40e_aq_set_vsi_full_promiscuous(struct i40e_hw *hw,
 }
 
 /**
- * i40e_aq_set_vsi_mc_promisc_on_vlan
+ * i40e_aq_set_vsi_vlan_promisc - control the VLAN promiscuous setting
  * @hw: pointer to the hw struct
  * @seid: vsi number
  * @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN
- * @vid: The VLAN tag filter - capture any multicast packet with this VLAN tag
  * @cmd_details: pointer to command details structure or NULL
  **/
-enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
-				u16 seid, bool enable, u16 vid,
+enum i40e_status_code i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
+				u16 seid, bool enable,
 				struct i40e_asq_cmd_details *cmd_details)
 {
 	struct i40e_aq_desc desc;
@@ -2519,14 +2200,12 @@ enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
 
 	i40e_fill_default_direct_cmd_desc(&desc,
 					i40e_aqc_opc_set_vsi_promiscuous_modes);
-
 	if (enable)
-		flags |= I40E_AQC_SET_VSI_PROMISC_MULTICAST;
+		flags |= I40E_AQC_SET_VSI_PROMISC_VLAN;
 
 	cmd->promiscuous_flags = CPU_TO_LE16(flags);
-	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_MULTICAST);
+	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_VLAN);
 	cmd->seid = CPU_TO_LE16(seid);
-	cmd->vlan_tag = CPU_TO_LE16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
 
 	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 
@@ -2534,166 +2213,26 @@ enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
 }
 
 /**
- * i40e_aq_set_vsi_uc_promisc_on_vlan
+ * i40e_get_vsi_params - get VSI configuration info
  * @hw: pointer to the hw struct
- * @seid: vsi number
- * @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN
- * @vid: The VLAN tag filter - capture any unicast packet with this VLAN tag
+ * @vsi_ctx: pointer to a vsi context struct
  * @cmd_details: pointer to command details structure or NULL
  **/
-enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
-				u16 seid, bool enable, u16 vid,
+enum i40e_status_code i40e_aq_get_vsi_params(struct i40e_hw *hw,
+				struct i40e_vsi_context *vsi_ctx,
 				struct i40e_asq_cmd_details *cmd_details)
 {
 	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
-		(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
+	struct i40e_aqc_add_get_update_vsi *cmd =
+		(struct i40e_aqc_add_get_update_vsi *)&desc.params.raw;
+	struct i40e_aqc_add_get_update_vsi_completion *resp =
+		(struct i40e_aqc_add_get_update_vsi_completion *)
+		&desc.params.raw;
 	enum i40e_status_code status;
-	u16 flags = 0;
 
+	UNREFERENCED_1PARAMETER(cmd_details);
 	i40e_fill_default_direct_cmd_desc(&desc,
-					i40e_aqc_opc_set_vsi_promiscuous_modes);
-
-	if (enable) {
-		flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
-		if (i40e_hw_ver_ge(hw, 1, 5))
-			flags |= I40E_AQC_SET_VSI_PROMISC_RX_ONLY;
-	}
-
-	cmd->promiscuous_flags = CPU_TO_LE16(flags);
-	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_UNICAST);
-	if (i40e_hw_ver_ge(hw, 1, 5))
-		cmd->valid_flags |=
-			CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_RX_ONLY);
-	cmd->seid = CPU_TO_LE16(seid);
-	cmd->vlan_tag = CPU_TO_LE16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_set_vsi_bc_promisc_on_vlan
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @enable: set broadcast promiscuous enable/disable for a given VLAN
- * @vid: The VLAN tag filter - capture any broadcast packet with this VLAN tag
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw,
-				u16 seid, bool enable, u16 vid,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
-		(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
-	enum i40e_status_code status;
-	u16 flags = 0;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					i40e_aqc_opc_set_vsi_promiscuous_modes);
-
-	if (enable)
-		flags |= I40E_AQC_SET_VSI_PROMISC_BROADCAST;
-
-	cmd->promiscuous_flags = CPU_TO_LE16(flags);
-	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
-	cmd->seid = CPU_TO_LE16(seid);
-	cmd->vlan_tag = CPU_TO_LE16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_set_vsi_broadcast
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @set_filter: true to set filter, false to clear filter
- * @cmd_details: pointer to command details structure or NULL
- *
- * Set or clear the broadcast promiscuous flag (filter) for a given VSI.
- **/
-enum i40e_status_code i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
-				u16 seid, bool set_filter,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
-		(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					i40e_aqc_opc_set_vsi_promiscuous_modes);
-
-	if (set_filter)
-		cmd->promiscuous_flags
-			    |= CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
-	else
-		cmd->promiscuous_flags
-			    &= CPU_TO_LE16(~I40E_AQC_SET_VSI_PROMISC_BROADCAST);
-
-	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
-	cmd->seid = CPU_TO_LE16(seid);
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_set_vsi_vlan_promisc - control the VLAN promiscuous setting
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
-				u16 seid, bool enable,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
-		(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
-	enum i40e_status_code status;
-	u16 flags = 0;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					i40e_aqc_opc_set_vsi_promiscuous_modes);
-	if (enable)
-		flags |= I40E_AQC_SET_VSI_PROMISC_VLAN;
-
-	cmd->promiscuous_flags = CPU_TO_LE16(flags);
-	cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_VLAN);
-	cmd->seid = CPU_TO_LE16(seid);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_get_vsi_params - get VSI configuration info
- * @hw: pointer to the hw struct
- * @vsi_ctx: pointer to a vsi context struct
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_get_vsi_params(struct i40e_hw *hw,
-				struct i40e_vsi_context *vsi_ctx,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_get_update_vsi *cmd =
-		(struct i40e_aqc_add_get_update_vsi *)&desc.params.raw;
-	struct i40e_aqc_add_get_update_vsi_completion *resp =
-		(struct i40e_aqc_add_get_update_vsi_completion *)
-		&desc.params.raw;
-	enum i40e_status_code status;
-
-	UNREFERENCED_1PARAMETER(cmd_details);
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_get_vsi_parameters);
+					  i40e_aqc_opc_get_vsi_parameters);
 
 	cmd->uplink_seid = CPU_TO_LE16(vsi_ctx->seid);
 
@@ -2867,73 +2406,6 @@ enum i40e_status_code i40e_aq_get_firmware_version(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_aq_send_driver_version
- * @hw: pointer to the hw struct
- * @dv: driver's major, minor version
- * @cmd_details: pointer to command details structure or NULL
- *
- * Send the driver version to the firmware
- **/
-enum i40e_status_code i40e_aq_send_driver_version(struct i40e_hw *hw,
-				struct i40e_driver_version *dv,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_driver_version *cmd =
-		(struct i40e_aqc_driver_version *)&desc.params.raw;
-	enum i40e_status_code status;
-	u16 len;
-
-	if (dv == NULL)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_driver_version);
-
-	desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD);
-	cmd->driver_major_ver = dv->major_version;
-	cmd->driver_minor_ver = dv->minor_version;
-	cmd->driver_build_ver = dv->build_version;
-	cmd->driver_subbuild_ver = dv->subbuild_version;
-
-	len = 0;
-	while (len < sizeof(dv->driver_string) &&
-	       (dv->driver_string[len] < 0x80) &&
-	       dv->driver_string[len])
-		len++;
-	status = i40e_asq_send_command(hw, &desc, dv->driver_string,
-				       len, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_get_link_status - get status of the HW network link
- * @hw: pointer to the hw struct
- * @link_up: pointer to bool (true/false = linkup/linkdown)
- *
- * Sets link_up to true if the link is up, false if it is down.
- * The value of link_up is invalid if the returned status != I40E_SUCCESS.
- *
- * Side effect: LinkStatusEvent reporting becomes enabled
- **/
-enum i40e_status_code i40e_get_link_status(struct i40e_hw *hw, bool *link_up)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-
-	if (hw->phy.get_link_info) {
-		status = i40e_update_link_info(hw);
-
-		if (status != I40E_SUCCESS)
-			i40e_debug(hw, I40E_DEBUG_LINK, "get link failed: status %d\n",
-				   status);
-	}
-
-	*link_up = hw->phy.link_info.link_info & I40E_AQ_LINK_UP;
-
-	return status;
-}
-
 /**
  * i40e_updatelink_status - update status of the HW network link
  * @hw: pointer to the hw struct
@@ -2973,31 +2445,6 @@ enum i40e_status_code i40e_update_link_info(struct i40e_hw *hw)
 	return status;
 }
 
-
-/**
- * i40e_get_link_speed
- * @hw: pointer to the hw struct
- *
- * Returns the link speed of the adapter.
- **/
-enum i40e_aq_link_speed i40e_get_link_speed(struct i40e_hw *hw)
-{
-	enum i40e_aq_link_speed speed = I40E_LINK_SPEED_UNKNOWN;
-	enum i40e_status_code status = I40E_SUCCESS;
-
-	if (hw->phy.get_link_info) {
-		status = i40e_aq_get_link_info(hw, true, NULL, NULL);
-
-		if (status != I40E_SUCCESS)
-			goto i40e_link_speed_exit;
-	}
-
-	speed = hw->phy.link_info.link_speed;
-
-i40e_link_speed_exit:
-	return speed;
-}
-
 /**
  * i40e_aq_add_veb - Insert a VEB between the VSI and the MAC
  * @hw: pointer to the hw struct
@@ -3204,134 +2651,6 @@ enum i40e_status_code i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid,
 	return status;
 }
 
-/**
- * i40e_mirrorrule_op - Internal helper function to add/delete mirror rule
- * @hw: pointer to the hw struct
- * @opcode: AQ opcode for add or delete mirror rule
- * @sw_seid: Switch SEID (to which rule refers)
- * @rule_type: Rule Type (ingress/egress/VLAN)
- * @id: Destination VSI SEID or Rule ID
- * @count: length of the list
- * @mr_list: list of mirrored VSI SEIDs or VLAN IDs
- * @cmd_details: pointer to command details structure or NULL
- * @rule_id: Rule ID returned from FW
- * @rules_used: Number of rules used in internal switch
- * @rules_free: Number of rules free in internal switch
- *
- * Add/Delete a mirror rule to a specific switch. Mirror rules are supported for
- * VEBs/VEPA elements only
- **/
-static enum i40e_status_code i40e_mirrorrule_op(struct i40e_hw *hw,
-			u16 opcode, u16 sw_seid, u16 rule_type, u16 id,
-			u16 count, __le16 *mr_list,
-			struct i40e_asq_cmd_details *cmd_details,
-			u16 *rule_id, u16 *rules_used, u16 *rules_free)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_delete_mirror_rule *cmd =
-		(struct i40e_aqc_add_delete_mirror_rule *)&desc.params.raw;
-	struct i40e_aqc_add_delete_mirror_rule_completion *resp =
-	(struct i40e_aqc_add_delete_mirror_rule_completion *)&desc.params.raw;
-	enum i40e_status_code status;
-	u16 buf_size;
-
-	buf_size = count * sizeof(*mr_list);
-
-	/* prep the rest of the request */
-	i40e_fill_default_direct_cmd_desc(&desc, opcode);
-	cmd->seid = CPU_TO_LE16(sw_seid);
-	cmd->rule_type = CPU_TO_LE16(rule_type &
-				     I40E_AQC_MIRROR_RULE_TYPE_MASK);
-	cmd->num_entries = CPU_TO_LE16(count);
-	/* Dest VSI for add, rule_id for delete */
-	cmd->destination = CPU_TO_LE16(id);
-	if (mr_list) {
-		desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF |
-						I40E_AQ_FLAG_RD));
-		if (buf_size > I40E_AQ_LARGE_BUF)
-			desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
-	}
-
-	status = i40e_asq_send_command(hw, &desc, mr_list, buf_size,
-				       cmd_details);
-	if (status == I40E_SUCCESS ||
-	    hw->aq.asq_last_status == I40E_AQ_RC_ENOSPC) {
-		if (rule_id)
-			*rule_id = LE16_TO_CPU(resp->rule_id);
-		if (rules_used)
-			*rules_used = LE16_TO_CPU(resp->mirror_rules_used);
-		if (rules_free)
-			*rules_free = LE16_TO_CPU(resp->mirror_rules_free);
-	}
-	return status;
-}
-
-/**
- * i40e_aq_add_mirrorrule - add a mirror rule
- * @hw: pointer to the hw struct
- * @sw_seid: Switch SEID (to which rule refers)
- * @rule_type: Rule Type (ingress/egress/VLAN)
- * @dest_vsi: SEID of VSI to which packets will be mirrored
- * @count: length of the list
- * @mr_list: list of mirrored VSI SEIDs or VLAN IDs
- * @cmd_details: pointer to command details structure or NULL
- * @rule_id: Rule ID returned from FW
- * @rules_used: Number of rules used in internal switch
- * @rules_free: Number of rules free in internal switch
- *
- * Add mirror rule. Mirror rules are supported for VEBs or VEPA elements only
- **/
-enum i40e_status_code i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
-			u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list,
-			struct i40e_asq_cmd_details *cmd_details,
-			u16 *rule_id, u16 *rules_used, u16 *rules_free)
-{
-	if (!(rule_type == I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS ||
-	    rule_type == I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS)) {
-		if (count == 0 || !mr_list)
-			return I40E_ERR_PARAM;
-	}
-
-	return i40e_mirrorrule_op(hw, i40e_aqc_opc_add_mirror_rule, sw_seid,
-				  rule_type, dest_vsi, count, mr_list,
-				  cmd_details, rule_id, rules_used, rules_free);
-}
-
-/**
- * i40e_aq_delete_mirrorrule - delete a mirror rule
- * @hw: pointer to the hw struct
- * @sw_seid: Switch SEID (to which rule refers)
- * @rule_type: Rule Type (ingress/egress/VLAN)
- * @count: length of the list
- * @rule_id: Rule ID that is returned in the receive desc as part of
- *		add_mirrorrule.
- * @mr_list: list of mirrored VLAN IDs to be removed
- * @cmd_details: pointer to command details structure or NULL
- * @rules_used: Number of rules used in internal switch
- * @rules_free: Number of rules free in internal switch
- *
- * Delete a mirror rule. Mirror rules are supported for VEBs/VEPA elements only
- **/
-enum i40e_status_code i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
-			u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list,
-			struct i40e_asq_cmd_details *cmd_details,
-			u16 *rules_used, u16 *rules_free)
-{
-	/* Rule ID has to be valid except rule_type: INGRESS VLAN mirroring */
-	if (rule_type == I40E_AQC_MIRROR_RULE_TYPE_VLAN) {
-		/* count and mr_list shall be valid for rule_type INGRESS VLAN
-		 * mirroring. For other rule_type, count and rule_type should
-		 * not matter.
-		 */
-		if (count == 0 || !mr_list)
-			return I40E_ERR_PARAM;
-	}
-
-	return i40e_mirrorrule_op(hw, i40e_aqc_opc_delete_mirror_rule, sw_seid,
-				  rule_type, rule_id, count, mr_list,
-				  cmd_details, NULL, rules_used, rules_free);
-}
-
 /**
  * i40e_aq_add_vlan - Add VLAN ids to the HW filtering
  * @hw: pointer to the hw struct
@@ -3638,196 +2957,41 @@ enum i40e_status_code i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer,
 }
 
 /**
- * i40e_aq_read_nvm_config - read an nvm config block
+ * i40e_aq_erase_nvm
  * @hw: pointer to the hw struct
- * @cmd_flags: NVM access admin command bits
- * @field_id: field or feature id
- * @data: buffer for result
- * @buf_size: buffer size
- * @element_count: pointer to count of elements read by FW
+ * @module_pointer: module pointer location in words from the NVM beginning
+ * @offset: offset in the module (expressed in 4 KB from module's beginning)
+ * @length: length of the section to be erased (expressed in 4 KB)
+ * @last_command: tells if this is the last command in a series
  * @cmd_details: pointer to command details structure or NULL
+ *
+ * Erase the NVM sector using the admin queue commands
  **/
-enum i40e_status_code i40e_aq_read_nvm_config(struct i40e_hw *hw,
-				u8 cmd_flags, u32 field_id, void *data,
-				u16 buf_size, u16 *element_count,
+enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
+				u32 offset, u16 length, bool last_command,
 				struct i40e_asq_cmd_details *cmd_details)
 {
 	struct i40e_aq_desc desc;
-	struct i40e_aqc_nvm_config_read *cmd =
-		(struct i40e_aqc_nvm_config_read *)&desc.params.raw;
+	struct i40e_aqc_nvm_update *cmd =
+		(struct i40e_aqc_nvm_update *)&desc.params.raw;
 	enum i40e_status_code status;
 
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_config_read);
-	desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF));
-	if (buf_size > I40E_AQ_LARGE_BUF)
-		desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
+	DEBUGFUNC("i40e_aq_erase_nvm");
 
-	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
-	cmd->element_id = CPU_TO_LE16((u16)(0xffff & field_id));
-	if (cmd_flags & I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK)
-		cmd->element_id_msw = CPU_TO_LE16((u16)(field_id >> 16));
-	else
-		cmd->element_id_msw = 0;
+	/* In offset the highest byte must be zeroed. */
+	if (offset & 0xFF000000) {
+		status = I40E_ERR_PARAM;
+		goto i40e_aq_erase_nvm_exit;
+	}
 
-	status = i40e_asq_send_command(hw, &desc, data, buf_size, cmd_details);
+	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_erase);
 
-	if (!status && element_count)
-		*element_count = LE16_TO_CPU(cmd->element_count);
-
-	return status;
-}
-
-/**
- * i40e_aq_write_nvm_config - write an nvm config block
- * @hw: pointer to the hw struct
- * @cmd_flags: NVM access admin command bits
- * @data: buffer for result
- * @buf_size: buffer size
- * @element_count: count of elements to be written
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_write_nvm_config(struct i40e_hw *hw,
-				u8 cmd_flags, void *data, u16 buf_size,
-				u16 element_count,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_nvm_config_write *cmd =
-		(struct i40e_aqc_nvm_config_write *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_config_write);
-	desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
-	if (buf_size > I40E_AQ_LARGE_BUF)
-		desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
-
-	cmd->element_count = CPU_TO_LE16(element_count);
-	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
-	status = i40e_asq_send_command(hw, &desc, data, buf_size, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_nvm_update_in_process
- * @hw: pointer to the hw struct
- * @update_flow_state: True indicates that update flow starts, false that ends
- * @cmd_details: pointer to command details structure or NULL
- *
- * Indicate NVM update in process.
- **/
-enum i40e_status_code
-i40e_aq_nvm_update_in_process(struct i40e_hw *hw,
-			      bool update_flow_state,
-			      struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_nvm_update_in_process *cmd =
-		(struct i40e_aqc_nvm_update_in_process *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_nvm_update_in_process);
-
-	cmd->command = I40E_AQ_UPDATE_FLOW_END;
-
-	if (update_flow_state)
-		cmd->command |= I40E_AQ_UPDATE_FLOW_START;
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_min_rollback_rev_update - triggers an ow after update
- * @hw: pointer to the hw struct
- * @mode: opt-in mode, 1b for single module update, 0b for bulk update
- * @module: module to be updated. Ignored if mode is 0b
- * @min_rrev: value of the new minimal version. Ignored if mode is 0b
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code
-i40e_aq_min_rollback_rev_update(struct i40e_hw *hw, u8 mode, u8 module,
-				u32 min_rrev,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_rollback_revision_update *cmd =
-		(struct i40e_aqc_rollback_revision_update *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-		i40e_aqc_opc_rollback_revision_update);
-	cmd->optin_mode = mode;
-	cmd->module_selected = module;
-	cmd->min_rrev = min_rrev;
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_oem_post_update - triggers an OEM specific flow after update
- * @hw: pointer to the hw struct
- * @buff: buffer for result
- * @buff_size: buffer size
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_oem_post_update(struct i40e_hw *hw,
-				void *buff, u16 buff_size,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	enum i40e_status_code status;
-
-	UNREFERENCED_2PARAMETER(buff, buff_size);
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_oem_post_update);
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-	if (status && LE16_TO_CPU(desc.retval) == I40E_AQ_RC_ESRCH)
-		status = I40E_ERR_NOT_IMPLEMENTED;
-
-	return status;
-}
-
-/**
- * i40e_aq_erase_nvm
- * @hw: pointer to the hw struct
- * @module_pointer: module pointer location in words from the NVM beginning
- * @offset: offset in the module (expressed in 4 KB from module's beginning)
- * @length: length of the section to be erased (expressed in 4 KB)
- * @last_command: tells if this is the last command in a series
- * @cmd_details: pointer to command details structure or NULL
- *
- * Erase the NVM sector using the admin queue commands
- **/
-enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
-				u32 offset, u16 length, bool last_command,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_nvm_update *cmd =
-		(struct i40e_aqc_nvm_update *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	DEBUGFUNC("i40e_aq_erase_nvm");
-
-	/* In offset the highest byte must be zeroed. */
-	if (offset & 0xFF000000) {
-		status = I40E_ERR_PARAM;
-		goto i40e_aq_erase_nvm_exit;
-	}
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_erase);
-
-	/* If this is the last command in a series, set the proper flag. */
-	if (last_command)
-		cmd->command_flags |= I40E_AQ_NVM_LAST_CMD;
-	cmd->module_pointer = module_pointer;
-	cmd->offset = CPU_TO_LE32(offset);
-	cmd->length = CPU_TO_LE16(length);
+	/* If this is the last command in a series, set the proper flag. */
+	if (last_command)
+		cmd->command_flags |= I40E_AQ_NVM_LAST_CMD;
+	cmd->module_pointer = module_pointer;
+	cmd->offset = CPU_TO_LE32(offset);
+	cmd->length = CPU_TO_LE16(length);
 
 	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 
@@ -4302,43 +3466,6 @@ enum i40e_status_code i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer,
 	return status;
 }
 
-/**
- * i40e_aq_rearrange_nvm
- * @hw: pointer to the hw struct
- * @rearrange_nvm: defines direction of rearrangement
- * @cmd_details: pointer to command details structure or NULL
- *
- * Rearrange NVM structure, available only for transition FW
- **/
-enum i40e_status_code i40e_aq_rearrange_nvm(struct i40e_hw *hw,
-				u8 rearrange_nvm,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aqc_nvm_update *cmd;
-	enum i40e_status_code status;
-	struct i40e_aq_desc desc;
-
-	DEBUGFUNC("i40e_aq_rearrange_nvm");
-
-	cmd = (struct i40e_aqc_nvm_update *)&desc.params.raw;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_update);
-
-	rearrange_nvm &= (I40E_AQ_NVM_REARRANGE_TO_FLAT |
-			 I40E_AQ_NVM_REARRANGE_TO_STRUCT);
-
-	if (!rearrange_nvm) {
-		status = I40E_ERR_PARAM;
-		goto i40e_aq_rearrange_nvm_exit;
-	}
-
-	cmd->command_flags |= rearrange_nvm;
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-i40e_aq_rearrange_nvm_exit:
-	return status;
-}
-
 /**
  * i40e_aq_get_lldp_mib
  * @hw: pointer to the hw struct
@@ -4459,44 +3586,6 @@ enum i40e_status_code i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_aq_restore_lldp
- * @hw: pointer to the hw struct
- * @setting: pointer to factory setting variable or NULL
- * @restore: True if factory settings should be restored
- * @cmd_details: pointer to command details structure or NULL
- *
- * Restore LLDP Agent factory settings if @restore is set to True. Otherwise,
- * only return the factory setting in the AQ response.
- **/
-enum i40e_status_code
-i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore,
-		     struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_lldp_restore *cmd =
-		(struct i40e_aqc_lldp_restore *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (!(hw->flags & I40E_HW_FLAG_FW_LLDP_PERSISTENT)) {
-		i40e_debug(hw, I40E_DEBUG_ALL,
-			   "Restore LLDP not supported by current FW version.\n");
-		return I40E_ERR_DEVICE_NOT_SUPPORTED;
-	}
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_restore);
-
-	if (restore)
-		cmd->command |= I40E_AQ_LLDP_AGENT_RESTORE;
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (setting)
-		*setting = cmd->command & 1;
-
-	return status;
-}
-
 /**
  * i40e_aq_stop_lldp
  * @hw: pointer to the hw struct
@@ -4567,37 +3656,6 @@ enum i40e_status_code i40e_aq_start_lldp(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_aq_set_dcb_parameters
- * @hw: pointer to the hw struct
- * @cmd_details: pointer to command details structure or NULL
- * @dcb_enable: True if DCB configuration needs to be applied
- *
- **/
-enum i40e_status_code
-i40e_aq_set_dcb_parameters(struct i40e_hw *hw, bool dcb_enable,
-			   struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_dcb_parameters *cmd =
-		(struct i40e_aqc_set_dcb_parameters *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (!(hw->flags & I40E_HW_FLAG_FW_LLDP_STOPPABLE))
-		return I40E_ERR_DEVICE_NOT_SUPPORTED;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_set_dcb_parameters);
-
-	if (dcb_enable) {
-		cmd->valid_flags = I40E_DCB_VALID;
-		cmd->command = I40E_AQ_DCB_SET_AGENT;
-	}
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
 /**
  * i40e_aq_get_cee_dcb_config
  * @hw: pointer to the hw struct
@@ -4626,36 +3684,6 @@ enum i40e_status_code i40e_aq_get_cee_dcb_config(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_aq_start_stop_dcbx - Start/Stop DCBx service in FW
- * @hw: pointer to the hw struct
- * @start_agent: True if DCBx Agent needs to be Started
- *				False if DCBx Agent needs to be Stopped
- * @cmd_details: pointer to command details structure or NULL
- *
- * Start/Stop the embedded dcbx Agent
- **/
-enum i40e_status_code i40e_aq_start_stop_dcbx(struct i40e_hw *hw,
-				bool start_agent,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_lldp_stop_start_specific_agent *cmd =
-		(struct i40e_aqc_lldp_stop_start_specific_agent *)
-				&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-				i40e_aqc_opc_lldp_stop_start_spec_agent);
-
-	if (start_agent)
-		cmd->command = I40E_AQC_START_SPECIFIC_AGENT_MASK;
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
 /**
  * i40e_aq_add_udp_tunnel
  * @hw: pointer to the hw struct
@@ -4716,45 +3744,6 @@ enum i40e_status_code i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
 	return status;
 }
 
-/**
- * i40e_aq_get_switch_resource_alloc (0x0204)
- * @hw: pointer to the hw struct
- * @num_entries: pointer to u8 to store the number of resource entries returned
- * @buf: pointer to a user supplied buffer.  This buffer must be large enough
- *        to store the resource information for all resource types.  Each
- *        resource type is a i40e_aqc_switch_resource_alloc_data structure.
- * @count: size, in bytes, of the buffer provided
- * @cmd_details: pointer to command details structure or NULL
- *
- * Query the resources allocated to a function.
- **/
-enum i40e_status_code i40e_aq_get_switch_resource_alloc(struct i40e_hw *hw,
-			u8 *num_entries,
-			struct i40e_aqc_switch_resource_alloc_element_resp *buf,
-			u16 count,
-			struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_get_switch_resource_alloc *cmd_resp =
-		(struct i40e_aqc_get_switch_resource_alloc *)&desc.params.raw;
-	enum i40e_status_code status;
-	u16 length = count * sizeof(*buf);
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					i40e_aqc_opc_get_switch_resource_alloc);
-
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
-	if (length > I40E_AQ_LARGE_BUF)
-		desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
-
-	status = i40e_asq_send_command(hw, &desc, buf, length, cmd_details);
-
-	if (!status && num_entries)
-		*num_entries = cmd_resp->num_entries;
-
-	return status;
-}
-
 /**
  * i40e_aq_delete_element - Delete switch element
  * @hw: pointer to the hw struct
@@ -4784,178 +3773,45 @@ enum i40e_status_code i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
 }
 
 /**
- * i40e_aq_add_pvirt - Instantiate a Port Virtualizer on a port
- * @hw: pointer to the hw struct
- * @flags: component flags
- * @mac_seid: uplink seid (MAC SEID)
- * @vsi_seid: connected vsi seid
- * @ret_seid: seid of create pv component
- *
- * This instantiates an i40e port virtualizer with specified flags.
- * Depending on specified flags the port virtualizer can act as a
- * 802.1Qbr port virtualizer or a 802.1Qbg S-component.
- */
-enum i40e_status_code i40e_aq_add_pvirt(struct i40e_hw *hw, u16 flags,
-				       u16 mac_seid, u16 vsi_seid,
-				       u16 *ret_seid)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_update_pv *cmd =
-		(struct i40e_aqc_add_update_pv *)&desc.params.raw;
-	struct i40e_aqc_add_update_pv_completion *resp =
-		(struct i40e_aqc_add_update_pv_completion *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (vsi_seid == 0)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_pv);
-	cmd->command_flags = CPU_TO_LE16(flags);
-	cmd->uplink_seid = CPU_TO_LE16(mac_seid);
-	cmd->connected_seid = CPU_TO_LE16(vsi_seid);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
-	if (!status && ret_seid)
-		*ret_seid = LE16_TO_CPU(resp->pv_seid);
-
-	return status;
-}
-
-/**
- * i40e_aq_add_tag - Add an S/E-tag
+ * i40e_aq_add_mcast_etag - Add a multicast E-tag
  * @hw: pointer to the hw struct
- * @direct_to_queue: should s-tag direct flow to a specific queue
- * @vsi_seid: VSI SEID to use this tag
- * @tag: value of the tag
- * @queue_num: queue number, only valid is direct_to_queue is true
- * @tags_used: return value, number of tags in use by this PF
- * @tags_free: return value, number of unallocated tags
+ * @pv_seid: Port Virtualizer of this SEID to associate E-tag with
+ * @etag: value of E-tag to add
+ * @num_tags_in_buf: number of unicast E-tags in indirect buffer
+ * @buf: address of indirect buffer
+ * @tags_used: return value, number of E-tags in use by this port
+ * @tags_free: return value, number of unallocated M-tags
  * @cmd_details: pointer to command details structure or NULL
  *
- * This associates an S- or E-tag to a VSI in the switch complex.  It returns
+ * This associates a multicast E-tag to a port virtualizer.  It will return
  * the number of tags allocated by the PF, and the number of unallocated
  * tags available.
+ *
+ * The indirect buffer pointed to by buf is a list of 2-byte E-tags,
+ * num_tags_in_buf long.
  **/
-enum i40e_status_code i40e_aq_add_tag(struct i40e_hw *hw, bool direct_to_queue,
-				u16 vsi_seid, u16 tag, u16 queue_num,
+enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pv_seid,
+				u16 etag, u8 num_tags_in_buf, void *buf,
 				u16 *tags_used, u16 *tags_free,
 				struct i40e_asq_cmd_details *cmd_details)
 {
 	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_tag *cmd =
-		(struct i40e_aqc_add_tag *)&desc.params.raw;
-	struct i40e_aqc_add_remove_tag_completion *resp =
-		(struct i40e_aqc_add_remove_tag_completion *)&desc.params.raw;
+	struct i40e_aqc_add_remove_mcast_etag *cmd =
+		(struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw;
+	struct i40e_aqc_add_remove_mcast_etag_completion *resp =
+	   (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw;
 	enum i40e_status_code status;
+	u16 length = sizeof(u16) * num_tags_in_buf;
 
-	if (vsi_seid == 0)
+	if ((pv_seid == 0) || (buf == NULL) || (num_tags_in_buf == 0))
 		return I40E_ERR_PARAM;
 
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_tag);
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_add_multicast_etag);
 
-	cmd->seid = CPU_TO_LE16(vsi_seid);
-	cmd->tag = CPU_TO_LE16(tag);
-	if (direct_to_queue) {
-		cmd->flags = CPU_TO_LE16(I40E_AQC_ADD_TAG_FLAG_TO_QUEUE);
-		cmd->queue_number = CPU_TO_LE16(queue_num);
-	}
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (!status) {
-		if (tags_used != NULL)
-			*tags_used = LE16_TO_CPU(resp->tags_used);
-		if (tags_free != NULL)
-			*tags_free = LE16_TO_CPU(resp->tags_free);
-	}
-
-	return status;
-}
-
-/**
- * i40e_aq_remove_tag - Remove an S- or E-tag
- * @hw: pointer to the hw struct
- * @vsi_seid: VSI SEID this tag is associated with
- * @tag: value of the S-tag to delete
- * @tags_used: return value, number of tags in use by this PF
- * @tags_free: return value, number of unallocated tags
- * @cmd_details: pointer to command details structure or NULL
- *
- * This deletes an S- or E-tag from a VSI in the switch complex.  It returns
- * the number of tags allocated by the PF, and the number of unallocated
- * tags available.
- **/
-enum i40e_status_code i40e_aq_remove_tag(struct i40e_hw *hw, u16 vsi_seid,
-				u16 tag, u16 *tags_used, u16 *tags_free,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_remove_tag *cmd =
-		(struct i40e_aqc_remove_tag *)&desc.params.raw;
-	struct i40e_aqc_add_remove_tag_completion *resp =
-		(struct i40e_aqc_add_remove_tag_completion *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (vsi_seid == 0)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_remove_tag);
-
-	cmd->seid = CPU_TO_LE16(vsi_seid);
-	cmd->tag = CPU_TO_LE16(tag);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (!status) {
-		if (tags_used != NULL)
-			*tags_used = LE16_TO_CPU(resp->tags_used);
-		if (tags_free != NULL)
-			*tags_free = LE16_TO_CPU(resp->tags_free);
-	}
-
-	return status;
-}
-
-/**
- * i40e_aq_add_mcast_etag - Add a multicast E-tag
- * @hw: pointer to the hw struct
- * @pv_seid: Port Virtualizer of this SEID to associate E-tag with
- * @etag: value of E-tag to add
- * @num_tags_in_buf: number of unicast E-tags in indirect buffer
- * @buf: address of indirect buffer
- * @tags_used: return value, number of E-tags in use by this port
- * @tags_free: return value, number of unallocated M-tags
- * @cmd_details: pointer to command details structure or NULL
- *
- * This associates a multicast E-tag to a port virtualizer.  It will return
- * the number of tags allocated by the PF, and the number of unallocated
- * tags available.
- *
- * The indirect buffer pointed to by buf is a list of 2-byte E-tags,
- * num_tags_in_buf long.
- **/
-enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pv_seid,
-				u16 etag, u8 num_tags_in_buf, void *buf,
-				u16 *tags_used, u16 *tags_free,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_remove_mcast_etag *cmd =
-		(struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw;
-	struct i40e_aqc_add_remove_mcast_etag_completion *resp =
-	   (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw;
-	enum i40e_status_code status;
-	u16 length = sizeof(u16) * num_tags_in_buf;
-
-	if ((pv_seid == 0) || (buf == NULL) || (num_tags_in_buf == 0))
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_add_multicast_etag);
-
-	cmd->pv_seid = CPU_TO_LE16(pv_seid);
-	cmd->etag = CPU_TO_LE16(etag);
-	cmd->num_unicast_etags = num_tags_in_buf;
+	cmd->pv_seid = CPU_TO_LE16(pv_seid);
+	cmd->etag = CPU_TO_LE16(etag);
+	cmd->num_unicast_etags = num_tags_in_buf;
 
 	desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
 
@@ -4971,239 +3827,6 @@ enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pv_seid,
 	return status;
 }
 
-/**
- * i40e_aq_remove_mcast_etag - Remove a multicast E-tag
- * @hw: pointer to the hw struct
- * @pv_seid: Port Virtualizer SEID this M-tag is associated with
- * @etag: value of the E-tag to remove
- * @tags_used: return value, number of tags in use by this port
- * @tags_free: return value, number of unallocated tags
- * @cmd_details: pointer to command details structure or NULL
- *
- * This deletes an E-tag from the port virtualizer.  It will return
- * the number of tags allocated by the port, and the number of unallocated
- * tags available.
- **/
-enum i40e_status_code i40e_aq_remove_mcast_etag(struct i40e_hw *hw, u16 pv_seid,
-				u16 etag, u16 *tags_used, u16 *tags_free,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_remove_mcast_etag *cmd =
-		(struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw;
-	struct i40e_aqc_add_remove_mcast_etag_completion *resp =
-	   (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw;
-	enum i40e_status_code status;
-
-
-	if (pv_seid == 0)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_remove_multicast_etag);
-
-	cmd->pv_seid = CPU_TO_LE16(pv_seid);
-	cmd->etag = CPU_TO_LE16(etag);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (!status) {
-		if (tags_used != NULL)
-			*tags_used = LE16_TO_CPU(resp->mcast_etags_used);
-		if (tags_free != NULL)
-			*tags_free = LE16_TO_CPU(resp->mcast_etags_free);
-	}
-
-	return status;
-}
-
-/**
- * i40e_aq_update_tag - Update an S/E-tag
- * @hw: pointer to the hw struct
- * @vsi_seid: VSI SEID using this S-tag
- * @old_tag: old tag value
- * @new_tag: new tag value
- * @tags_used: return value, number of tags in use by this PF
- * @tags_free: return value, number of unallocated tags
- * @cmd_details: pointer to command details structure or NULL
- *
- * This updates the value of the tag currently attached to this VSI
- * in the switch complex.  It will return the number of tags allocated
- * by the PF, and the number of unallocated tags available.
- **/
-enum i40e_status_code i40e_aq_update_tag(struct i40e_hw *hw, u16 vsi_seid,
-				u16 old_tag, u16 new_tag, u16 *tags_used,
-				u16 *tags_free,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_update_tag *cmd =
-		(struct i40e_aqc_update_tag *)&desc.params.raw;
-	struct i40e_aqc_update_tag_completion *resp =
-		(struct i40e_aqc_update_tag_completion *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (vsi_seid == 0)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_update_tag);
-
-	cmd->seid = CPU_TO_LE16(vsi_seid);
-	cmd->old_tag = CPU_TO_LE16(old_tag);
-	cmd->new_tag = CPU_TO_LE16(new_tag);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (!status) {
-		if (tags_used != NULL)
-			*tags_used = LE16_TO_CPU(resp->tags_used);
-		if (tags_free != NULL)
-			*tags_free = LE16_TO_CPU(resp->tags_free);
-	}
-
-	return status;
-}
-
-/**
- * i40e_aq_dcb_ignore_pfc - Ignore PFC for given TCs
- * @hw: pointer to the hw struct
- * @tcmap: TC map for request/release any ignore PFC condition
- * @request: request or release ignore PFC condition
- * @tcmap_ret: return TCs for which PFC is currently ignored
- * @cmd_details: pointer to command details structure or NULL
- *
- * This sends out request/release to ignore PFC condition for a TC.
- * It will return the TCs for which PFC is currently ignored.
- **/
-enum i40e_status_code i40e_aq_dcb_ignore_pfc(struct i40e_hw *hw, u8 tcmap,
-				bool request, u8 *tcmap_ret,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_pfc_ignore *cmd_resp =
-		(struct i40e_aqc_pfc_ignore *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_ignore_pfc);
-
-	if (request)
-		cmd_resp->command_flags = I40E_AQC_PFC_IGNORE_SET;
-
-	cmd_resp->tc_bitmap = tcmap;
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (!status) {
-		if (tcmap_ret != NULL)
-			*tcmap_ret = cmd_resp->tc_bitmap;
-	}
-
-	return status;
-}
-
-/**
- * i40e_aq_dcb_updated - DCB Updated Command
- * @hw: pointer to the hw struct
- * @cmd_details: pointer to command details structure or NULL
- *
- * When LLDP is handled in PF this command is used by the PF
- * to notify EMP that a DCB setting is modified.
- * When LLDP is handled in EMP this command is used by the PF
- * to notify EMP whenever one of the following parameters get
- * modified:
- *   - PFCLinkDelayAllowance in PRTDCB_GENC.PFCLDA
- *   - PCIRTT in PRTDCB_GENC.PCIRTT
- *   - Maximum Frame Size for non-FCoE TCs set by PRTDCB_TDPUC.MAX_TXFRAME.
- * EMP will return when the shared RPB settings have been
- * recomputed and modified. The retval field in the descriptor
- * will be set to 0 when RPB is modified.
- **/
-enum i40e_status_code i40e_aq_dcb_updated(struct i40e_hw *hw,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_updated);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_add_statistics - Add a statistics block to a VLAN in a switch.
- * @hw: pointer to the hw struct
- * @seid: defines the SEID of the switch for which the stats are requested
- * @vlan_id: the VLAN ID for which the statistics are requested
- * @stat_index: index of the statistics counters block assigned to this VLAN
- * @cmd_details: pointer to command details structure or NULL
- *
- * XL710 supports 128 smonVlanStats counters. This command is used to
- * allocate a set of smonVlanStats counters to a specific VLAN in a specific
- * switch.
- **/
-enum i40e_status_code i40e_aq_add_statistics(struct i40e_hw *hw, u16 seid,
-				u16 vlan_id, u16 *stat_index,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_remove_statistics *cmd_resp =
-		(struct i40e_aqc_add_remove_statistics *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if ((seid == 0) || (stat_index == NULL))
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_statistics);
-
-	cmd_resp->seid = CPU_TO_LE16(seid);
-	cmd_resp->vlan = CPU_TO_LE16(vlan_id);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (!status && stat_index)
-		*stat_index = LE16_TO_CPU(cmd_resp->stat_index);
-
-	return status;
-}
-
-/**
- * i40e_aq_remove_statistics - Remove a statistics block to a VLAN in a switch.
- * @hw: pointer to the hw struct
- * @seid: defines the SEID of the switch for which the stats are requested
- * @vlan_id: the VLAN ID for which the statistics are requested
- * @stat_index: index of the statistics counters block assigned to this VLAN
- * @cmd_details: pointer to command details structure or NULL
- *
- * XL710 supports 128 smonVlanStats counters. This command is used to
- * deallocate a set of smonVlanStats counters to a specific VLAN in a specific
- * switch.
- **/
-enum i40e_status_code i40e_aq_remove_statistics(struct i40e_hw *hw, u16 seid,
-				u16 vlan_id, u16 stat_index,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_remove_statistics *cmd =
-		(struct i40e_aqc_add_remove_statistics *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (seid == 0)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_remove_statistics);
-
-	cmd->seid = CPU_TO_LE16(seid);
-	cmd->vlan  = CPU_TO_LE16(vlan_id);
-	cmd->stat_index = CPU_TO_LE16(stat_index);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
 /**
  * i40e_aq_set_port_parameters - set physical port parameters.
  * @hw: pointer to the hw struct
@@ -5332,35 +3955,6 @@ enum i40e_status_code i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_aq_config_switch_comp_bw_limit - Configure Switching component BW Limit
- * @hw: pointer to the hw struct
- * @seid: switching component seid
- * @credit: BW limit credits (0 = disabled)
- * @max_bw: Max BW limit credits
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
-				u16 seid, u16 credit, u8 max_bw,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_configure_switching_comp_bw_limit *cmd =
-	  (struct i40e_aqc_configure_switching_comp_bw_limit *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-				i40e_aqc_opc_configure_switching_comp_bw_limit);
-
-	cmd->seid = CPU_TO_LE16(seid);
-	cmd->credit = CPU_TO_LE16(credit);
-	cmd->max_bw = max_bw;
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
 /**
  * i40e_aq_config_vsi_ets_sla_bw_limit - Config VSI BW Limit per TC
  * @hw: pointer to the hw struct
@@ -5430,23 +4024,6 @@ enum i40e_status_code i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw,
 			    cmd_details);
 }
 
-/**
- * i40e_aq_config_switch_comp_ets_bw_limit - Config Switch comp BW Limit per TC
- * @hw: pointer to the hw struct
- * @seid: seid of the switching component
- * @bw_data: Buffer holding enabled TCs, per TC BW limit/credits
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_config_switch_comp_ets_bw_limit(
-	struct i40e_hw *hw, u16 seid,
-	struct i40e_aqc_configure_switching_comp_ets_bw_limit_data *bw_data,
-	struct i40e_asq_cmd_details *cmd_details)
-{
-	return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
-			    i40e_aqc_opc_configure_switching_comp_ets_bw_limit,
-			    cmd_details);
-}
-
 /**
  * i40e_aq_query_vsi_bw_config - Query VSI BW configuration
  * @hw: pointer to the hw struct
@@ -5499,27 +4076,10 @@ enum i40e_status_code i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
 }
 
 /**
- * i40e_aq_query_port_ets_config - Query Physical Port ETS configuration
+ * i40e_aq_query_switch_comp_bw_config - Query Switch comp BW configuration
  * @hw: pointer to the hw struct
- * @seid: seid of the VSI or switching component connected to Physical Port
- * @bw_data: Buffer to hold current ETS configuration for the Physical Port
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_query_port_ets_config(struct i40e_hw *hw,
-			u16 seid,
-			struct i40e_aqc_query_port_ets_config_resp *bw_data,
-			struct i40e_asq_cmd_details *cmd_details)
-{
-	return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
-				    i40e_aqc_opc_query_port_ets_config,
-				    cmd_details);
-}
-
-/**
- * i40e_aq_query_switch_comp_bw_config - Query Switch comp BW configuration
- * @hw: pointer to the hw struct
- * @seid: seid of the switching component
- * @bw_data: Buffer to hold switching component's BW configuration
+ * @seid: seid of the switching component
+ * @bw_data: Buffer to hold switching component's BW configuration
  * @cmd_details: pointer to command details structure or NULL
  **/
 enum i40e_status_code i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
@@ -5758,28 +4318,6 @@ enum i40e_status_code i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
 	return status;
 }
 
-/**
- * i40e_add_filter_to_drop_tx_flow_control_frames- filter to drop flow control
- * @hw: pointer to the hw struct
- * @seid: VSI seid to add ethertype filter from
- **/
-void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw,
-						    u16 seid)
-{
-#define I40E_FLOW_CONTROL_ETHTYPE 0x8808
-	u16 flag = I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC |
-		   I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP |
-		   I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX;
-	u16 ethtype = I40E_FLOW_CONTROL_ETHTYPE;
-	enum i40e_status_code status;
-
-	status = i40e_aq_add_rem_control_packet_filter(hw, NULL, ethtype, flag,
-						       seid, 0, true, NULL,
-						       NULL);
-	if (status)
-		DEBUGOUT("Ethtype Filter Add failed: Error pruning Tx flow control frames\n");
-}
-
 /**
  * i40e_fix_up_geneve_vni - adjust Geneve VNI for HW issue
  * @filters: list of cloud filters
@@ -5900,649 +4438,195 @@ i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
 		}
 	}
 
-	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
-
-	return status;
-}
-
-/**
- * i40e_aq_rem_cloud_filters
- * @hw: pointer to the hardware structure
- * @seid: VSI seid to remove cloud filters from
- * @filters: Buffer which contains the filters to be removed
- * @filter_count: number of filters contained in the buffer
- *
- * Remove the cloud filters for a given VSI.  The contents of the
- * i40e_aqc_cloud_filters_element_data are filled in by the caller
- * of the function.
- *
- **/
-enum i40e_status_code
-i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
-			  struct i40e_aqc_cloud_filters_element_data *filters,
-			  u8 filter_count)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_remove_cloud_filters *cmd =
-	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
-	enum i40e_status_code status;
-	u16 buff_len;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_remove_cloud_filters);
-
-	buff_len = filter_count * sizeof(*filters);
-	desc.datalen = CPU_TO_LE16(buff_len);
-	desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
-	cmd->num_filters = filter_count;
-	cmd->seid = CPU_TO_LE16(seid);
-
-	i40e_fix_up_geneve_vni(filters, filter_count);
-
-	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
-
-	return status;
-}
-
-/**
- * i40e_aq_rem_cloud_filters_bb
- * @hw: pointer to the hardware structure
- * @seid: VSI seid to remove cloud filters from
- * @filters: Buffer which contains the filters in big buffer to be removed
- * @filter_count: number of filters contained in the buffer
- *
- * Remove the big buffer cloud filters for a given VSI.  The contents of the
- * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
- * function.
- *
- **/
-enum i40e_status_code
-i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
-			     struct i40e_aqc_cloud_filters_element_bb *filters,
-			     u8 filter_count)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_add_remove_cloud_filters *cmd =
-	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
-	enum i40e_status_code status;
-	u16 buff_len;
-	int i;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_remove_cloud_filters);
-
-	buff_len = filter_count * sizeof(*filters);
-	desc.datalen = CPU_TO_LE16(buff_len);
-	desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
-	cmd->num_filters = filter_count;
-	cmd->seid = CPU_TO_LE16(seid);
-	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
-
-	for (i = 0; i < filter_count; i++) {
-		u16 tnl_type;
-		u32 ti;
-
-		tnl_type = (LE16_TO_CPU(filters[i].element.flags) &
-			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
-			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
-
-		/* Due to hardware eccentricities, the VNI for Geneve is shifted
-		 * one more byte further than normally used for Tenant ID in
-		 * other tunnel types.
-		 */
-		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
-			ti = LE32_TO_CPU(filters[i].element.tenant_id);
-			filters[i].element.tenant_id = CPU_TO_LE32(ti << 8);
-		}
-	}
-
-	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
-
-	return status;
-}
-
-/**
- * i40e_aq_replace_cloud_filters - Replace cloud filter command
- * @hw: pointer to the hw struct
- * @filters: pointer to the i40e_aqc_replace_cloud_filter_cmd struct
- * @cmd_buf: pointer to the i40e_aqc_replace_cloud_filter_cmd_buf struct
- *
- **/
-enum
-i40e_status_code i40e_aq_replace_cloud_filters(struct i40e_hw *hw,
-	struct i40e_aqc_replace_cloud_filters_cmd *filters,
-	struct i40e_aqc_replace_cloud_filters_cmd_buf *cmd_buf)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_replace_cloud_filters_cmd *cmd =
-		(struct i40e_aqc_replace_cloud_filters_cmd *)&desc.params.raw;
-	enum i40e_status_code status = I40E_SUCCESS;
-	int i = 0;
-
-	/* X722 doesn't support this command */
-	if (hw->mac.type == I40E_MAC_X722)
-		return I40E_ERR_DEVICE_NOT_SUPPORTED;
-
-	/* need FW version greater than 6.00 */
-	if (hw->aq.fw_maj_ver < 6)
-		return I40E_NOT_SUPPORTED;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_replace_cloud_filters);
-
-	desc.datalen = CPU_TO_LE16(32);
-	desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
-	cmd->old_filter_type = filters->old_filter_type;
-	cmd->new_filter_type = filters->new_filter_type;
-	cmd->valid_flags = filters->valid_flags;
-	cmd->tr_bit = filters->tr_bit;
-	cmd->tr_bit2 = filters->tr_bit2;
-
-	status = i40e_asq_send_command(hw, &desc, cmd_buf,
-		sizeof(struct i40e_aqc_replace_cloud_filters_cmd_buf),  NULL);
-
-	/* for get cloud filters command */
-	for (i = 0; i < 32; i += 4) {
-		cmd_buf->filters[i / 4].filter_type = cmd_buf->data[i];
-		cmd_buf->filters[i / 4].input[0] = cmd_buf->data[i + 1];
-		cmd_buf->filters[i / 4].input[1] = cmd_buf->data[i + 2];
-		cmd_buf->filters[i / 4].input[2] = cmd_buf->data[i + 3];
-	}
-
-	return status;
-}
-
-
-/**
- * i40e_aq_alternate_write
- * @hw: pointer to the hardware structure
- * @reg_addr0: address of first dword to be read
- * @reg_val0: value to be written under 'reg_addr0'
- * @reg_addr1: address of second dword to be read
- * @reg_val1: value to be written under 'reg_addr1'
- *
- * Write one or two dwords to alternate structure. Fields are indicated
- * by 'reg_addr0' and 'reg_addr1' register numbers.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_write(struct i40e_hw *hw,
-				u32 reg_addr0, u32 reg_val0,
-				u32 reg_addr1, u32 reg_val1)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_alternate_write *cmd_resp =
-		(struct i40e_aqc_alternate_write *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_write);
-	cmd_resp->address0 = CPU_TO_LE32(reg_addr0);
-	cmd_resp->address1 = CPU_TO_LE32(reg_addr1);
-	cmd_resp->data0 = CPU_TO_LE32(reg_val0);
-	cmd_resp->data1 = CPU_TO_LE32(reg_val1);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
-
-	return status;
-}
-
-/**
- * i40e_aq_alternate_write_indirect
- * @hw: pointer to the hardware structure
- * @addr: address of a first register to be modified
- * @dw_count: number of alternate structure fields to write
- * @buffer: pointer to the command buffer
- *
- * Write 'dw_count' dwords from 'buffer' to alternate structure
- * starting at 'addr'.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_write_indirect(struct i40e_hw *hw,
-				u32 addr, u32 dw_count, void *buffer)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_alternate_ind_write *cmd_resp =
-		(struct i40e_aqc_alternate_ind_write *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (buffer == NULL)
-		return I40E_ERR_PARAM;
-
-	/* Indirect command */
-	i40e_fill_default_direct_cmd_desc(&desc,
-					 i40e_aqc_opc_alternate_write_indirect);
-
-	desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_RD);
-	desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF);
-	if (dw_count > (I40E_AQ_LARGE_BUF/4))
-		desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
-
-	cmd_resp->address = CPU_TO_LE32(addr);
-	cmd_resp->length = CPU_TO_LE32(dw_count);
-
-	status = i40e_asq_send_command(hw, &desc, buffer,
-				       I40E_LO_DWORD(4*dw_count), NULL);
-
-	return status;
-}
-
-/**
- * i40e_aq_alternate_read
- * @hw: pointer to the hardware structure
- * @reg_addr0: address of first dword to be read
- * @reg_val0: pointer for data read from 'reg_addr0'
- * @reg_addr1: address of second dword to be read
- * @reg_val1: pointer for data read from 'reg_addr1'
- *
- * Read one or two dwords from alternate structure. Fields are indicated
- * by 'reg_addr0' and 'reg_addr1' register numbers. If 'reg_val1' pointer
- * is not passed then only register at 'reg_addr0' is read.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw,
-				u32 reg_addr0, u32 *reg_val0,
-				u32 reg_addr1, u32 *reg_val1)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_alternate_write *cmd_resp =
-		(struct i40e_aqc_alternate_write *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (reg_val0 == NULL)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_read);
-	cmd_resp->address0 = CPU_TO_LE32(reg_addr0);
-	cmd_resp->address1 = CPU_TO_LE32(reg_addr1);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
-
-	if (status == I40E_SUCCESS) {
-		*reg_val0 = LE32_TO_CPU(cmd_resp->data0);
-
-		if (reg_val1 != NULL)
-			*reg_val1 = LE32_TO_CPU(cmd_resp->data1);
-	}
-
-	return status;
-}
-
-/**
- * i40e_aq_alternate_read_indirect
- * @hw: pointer to the hardware structure
- * @addr: address of the alternate structure field
- * @dw_count: number of alternate structure fields to read
- * @buffer: pointer to the command buffer
- *
- * Read 'dw_count' dwords from alternate structure starting at 'addr' and
- * place them in 'buffer'. The buffer should be allocated by caller.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_read_indirect(struct i40e_hw *hw,
-				u32 addr, u32 dw_count, void *buffer)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_alternate_ind_write *cmd_resp =
-		(struct i40e_aqc_alternate_ind_write *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (buffer == NULL)
-		return I40E_ERR_PARAM;
-
-	/* Indirect command */
-	i40e_fill_default_direct_cmd_desc(&desc,
-		i40e_aqc_opc_alternate_read_indirect);
-
-	desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_RD);
-	desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF);
-	if (dw_count > (I40E_AQ_LARGE_BUF/4))
-		desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
-
-	cmd_resp->address = CPU_TO_LE32(addr);
-	cmd_resp->length = CPU_TO_LE32(dw_count);
-
-	status = i40e_asq_send_command(hw, &desc, buffer,
-				       I40E_LO_DWORD(4*dw_count), NULL);
-
-	return status;
-}
-
-/**
- *  i40e_aq_alternate_clear
- *  @hw: pointer to the HW structure.
- *
- *  Clear the alternate structures of the port from which the function
- *  is called.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_clear(struct i40e_hw *hw)
-{
-	struct i40e_aq_desc desc;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_alternate_clear_port);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
-
-	return status;
-}
-
-/**
- *  i40e_aq_alternate_write_done
- *  @hw: pointer to the HW structure.
- *  @bios_mode: indicates whether the command is executed by UEFI or legacy BIOS
- *  @reset_needed: indicates the SW should trigger GLOBAL reset
- *
- *  Indicates to the FW that alternate structures have been changed.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_write_done(struct i40e_hw *hw,
-		u8 bios_mode, bool *reset_needed)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_alternate_write_done *cmd =
-		(struct i40e_aqc_alternate_write_done *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	if (reset_needed == NULL)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_alternate_write_done);
-
-	cmd->cmd_flags = CPU_TO_LE16(bios_mode);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
-	if (!status && reset_needed)
-		*reset_needed = ((LE16_TO_CPU(cmd->cmd_flags) &
-				 I40E_AQ_ALTERNATE_RESET_NEEDED) != 0);
-
-	return status;
-}
-
-/**
- *  i40e_aq_set_oem_mode
- *  @hw: pointer to the HW structure.
- *  @oem_mode: the OEM mode to be used
- *
- *  Sets the device to a specific operating mode. Currently the only supported
- *  mode is no_clp, which causes FW to refrain from using Alternate RAM.
- *
- **/
-enum i40e_status_code i40e_aq_set_oem_mode(struct i40e_hw *hw,
-		u8 oem_mode)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_alternate_write_done *cmd =
-		(struct i40e_aqc_alternate_write_done *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_alternate_set_mode);
-
-	cmd->cmd_flags = CPU_TO_LE16(oem_mode);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
 
 	return status;
 }
 
 /**
- * i40e_aq_resume_port_tx
+ * i40e_aq_rem_cloud_filters
  * @hw: pointer to the hardware structure
- * @cmd_details: pointer to command details structure or NULL
+ * @seid: VSI seid to remove cloud filters from
+ * @filters: Buffer which contains the filters to be removed
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Remove the cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
+ * of the function.
  *
- * Resume port's Tx traffic
  **/
-enum i40e_status_code i40e_aq_resume_port_tx(struct i40e_hw *hw,
-				struct i40e_asq_cmd_details *cmd_details)
+enum i40e_status_code
+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
+			  struct i40e_aqc_cloud_filters_element_data *filters,
+			  u8 filter_count)
 {
 	struct i40e_aq_desc desc;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
 	enum i40e_status_code status;
+	u16 buff_len;
 
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_resume_port_tx);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_remove_cloud_filters);
 
-	return status;
-}
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = CPU_TO_LE16(buff_len);
+	desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = CPU_TO_LE16(seid);
 
-/**
- * i40e_set_pci_config_data - store PCI bus info
- * @hw: pointer to hardware structure
- * @link_status: the link status word from PCI config space
- *
- * Stores the PCI bus info (speed, width, type) within the i40e_hw structure
- **/
-void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status)
-{
-	hw->bus.type = i40e_bus_type_pci_express;
+	i40e_fix_up_geneve_vni(filters, filter_count);
 
-	switch (link_status & I40E_PCI_LINK_WIDTH) {
-	case I40E_PCI_LINK_WIDTH_1:
-		hw->bus.width = i40e_bus_width_pcie_x1;
-		break;
-	case I40E_PCI_LINK_WIDTH_2:
-		hw->bus.width = i40e_bus_width_pcie_x2;
-		break;
-	case I40E_PCI_LINK_WIDTH_4:
-		hw->bus.width = i40e_bus_width_pcie_x4;
-		break;
-	case I40E_PCI_LINK_WIDTH_8:
-		hw->bus.width = i40e_bus_width_pcie_x8;
-		break;
-	default:
-		hw->bus.width = i40e_bus_width_unknown;
-		break;
-	}
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
 
-	switch (link_status & I40E_PCI_LINK_SPEED) {
-	case I40E_PCI_LINK_SPEED_2500:
-		hw->bus.speed = i40e_bus_speed_2500;
-		break;
-	case I40E_PCI_LINK_SPEED_5000:
-		hw->bus.speed = i40e_bus_speed_5000;
-		break;
-	case I40E_PCI_LINK_SPEED_8000:
-		hw->bus.speed = i40e_bus_speed_8000;
-		break;
-	default:
-		hw->bus.speed = i40e_bus_speed_unknown;
-		break;
-	}
+	return status;
 }
 
 /**
- * i40e_aq_debug_dump
+ * i40e_aq_rem_cloud_filters_bb
  * @hw: pointer to the hardware structure
- * @cluster_id: specific cluster to dump
- * @table_id: table id within cluster
- * @start_index: index of line in the block to read
- * @buff_size: dump buffer size
- * @buff: dump buffer
- * @ret_buff_size: actual buffer size returned
- * @ret_next_table: next block to read
- * @ret_next_index: next index to read
- * @cmd_details: pointer to command details structure or NULL
+ * @seid: VSI seid to remove cloud filters from
+ * @filters: Buffer which contains the filters in big buffer to be removed
+ * @filter_count: number of filters contained in the buffer
  *
- * Dump internal FW/HW data for debug purposes.
+ * Remove the big buffer cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
+ * function.
  *
  **/
-enum i40e_status_code i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id,
-				u8 table_id, u32 start_index, u16 buff_size,
-				void *buff, u16 *ret_buff_size,
-				u8 *ret_next_table, u32 *ret_next_index,
-				struct i40e_asq_cmd_details *cmd_details)
+enum i40e_status_code
+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+			     struct i40e_aqc_cloud_filters_element_bb *filters,
+			     u8 filter_count)
 {
 	struct i40e_aq_desc desc;
-	struct i40e_aqc_debug_dump_internals *cmd =
-		(struct i40e_aqc_debug_dump_internals *)&desc.params.raw;
-	struct i40e_aqc_debug_dump_internals *resp =
-		(struct i40e_aqc_debug_dump_internals *)&desc.params.raw;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
 	enum i40e_status_code status;
-
-	if (buff_size == 0 || !buff)
-		return I40E_ERR_PARAM;
+	u16 buff_len;
+	int i;
 
 	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_debug_dump_internals);
-	/* Indirect Command */
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
-	if (buff_size > I40E_AQ_LARGE_BUF)
-		desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
+					  i40e_aqc_opc_remove_cloud_filters);
+
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = CPU_TO_LE16(buff_len);
+	desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = CPU_TO_LE16(seid);
+	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
 
-	cmd->cluster_id = cluster_id;
-	cmd->table_id = table_id;
-	cmd->idx = CPU_TO_LE32(start_index);
+	for (i = 0; i < filter_count; i++) {
+		u16 tnl_type;
+		u32 ti;
 
-	desc.datalen = CPU_TO_LE16(buff_size);
+		tnl_type = (LE16_TO_CPU(filters[i].element.flags) &
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
 
-	status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
-	if (!status) {
-		if (ret_buff_size != NULL)
-			*ret_buff_size = LE16_TO_CPU(desc.datalen);
-		if (ret_next_table != NULL)
-			*ret_next_table = resp->table_id;
-		if (ret_next_index != NULL)
-			*ret_next_index = LE32_TO_CPU(resp->idx);
+		/* Due to hardware eccentricities, the VNI for Geneve is shifted
+		 * one more byte further than normally used for Tenant ID in
+		 * other tunnel types.
+		 */
+		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
+			ti = LE32_TO_CPU(filters[i].element.tenant_id);
+			filters[i].element.tenant_id = CPU_TO_LE32(ti << 8);
+		}
 	}
 
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
 	return status;
 }
 
-
 /**
- * i40e_enable_eee
- * @hw: pointer to the hardware structure
- * @enable: state of Energy Efficient Ethernet mode to be set
+ * i40e_aq_replace_cloud_filters - Replace cloud filter command
+ * @hw: pointer to the hw struct
+ * @filters: pointer to the i40e_aqc_replace_cloud_filter_cmd struct
+ * @cmd_buf: pointer to the i40e_aqc_replace_cloud_filter_cmd_buf struct
  *
- * Enables or disables Energy Efficient Ethernet (EEE) mode
- * accordingly to @enable parameter.
  **/
-enum i40e_status_code i40e_enable_eee(struct i40e_hw *hw, bool enable)
+enum
+i40e_status_code i40e_aq_replace_cloud_filters(struct i40e_hw *hw,
+	struct i40e_aqc_replace_cloud_filters_cmd *filters,
+	struct i40e_aqc_replace_cloud_filters_cmd_buf *cmd_buf)
 {
-	struct i40e_aq_get_phy_abilities_resp abilities;
-	struct i40e_aq_set_phy_config config;
-	enum i40e_status_code status;
-	__le16 eee_capability;
+	struct i40e_aq_desc desc;
+	struct i40e_aqc_replace_cloud_filters_cmd *cmd =
+		(struct i40e_aqc_replace_cloud_filters_cmd *)&desc.params.raw;
+	enum i40e_status_code status = I40E_SUCCESS;
+	int i = 0;
 
-	/* Get initial PHY capabilities */
-	status = i40e_aq_get_phy_capabilities(hw, false, true, &abilities,
-					      NULL);
-	if (status)
-		goto err;
+	/* X722 doesn't support this command */
+	if (hw->mac.type == I40E_MAC_X722)
+		return I40E_ERR_DEVICE_NOT_SUPPORTED;
 
-	/* Check whether NIC configuration is compatible with Energy Efficient
-	 * Ethernet (EEE) mode.
-	 */
-	if (abilities.eee_capability == 0) {
-		status = I40E_ERR_CONFIG;
-		goto err;
-	}
+	/* need FW version 6.00 or greater */
+	if (hw->aq.fw_maj_ver < 6)
+		return I40E_NOT_SUPPORTED;
 
-	/* Cache initial EEE capability */
-	eee_capability = abilities.eee_capability;
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_replace_cloud_filters);
 
-	/* Get current configuration */
-	status = i40e_aq_get_phy_capabilities(hw, false, false, &abilities,
-					      NULL);
-	if (status)
-		goto err;
+	desc.datalen = CPU_TO_LE16(32);
+	desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->old_filter_type = filters->old_filter_type;
+	cmd->new_filter_type = filters->new_filter_type;
+	cmd->valid_flags = filters->valid_flags;
+	cmd->tr_bit = filters->tr_bit;
+	cmd->tr_bit2 = filters->tr_bit2;
 
-	/* Cache current configuration */
-	config.phy_type = abilities.phy_type;
-	config.phy_type_ext = abilities.phy_type_ext;
-	config.link_speed = abilities.link_speed;
-	config.abilities = abilities.abilities |
-			   I40E_AQ_PHY_ENABLE_ATOMIC_LINK;
-	config.eeer = abilities.eeer_val;
-	config.low_power_ctrl = abilities.d3_lpan;
-	config.fec_config = abilities.fec_cfg_curr_mod_ext_info &
-			    I40E_AQ_PHY_FEC_CONFIG_MASK;
-
-	/* Set desired EEE state */
-	if (enable) {
-		config.eee_capability = eee_capability;
-		config.eeer |= I40E_PRTPM_EEER_TX_LPI_EN_MASK;
-	} else {
-		config.eee_capability = 0;
-		config.eeer &= ~I40E_PRTPM_EEER_TX_LPI_EN_MASK;
+	status = i40e_asq_send_command(hw, &desc, cmd_buf,
+		sizeof(struct i40e_aqc_replace_cloud_filters_cmd_buf),  NULL);
+
+	/* for get cloud filters command */
+	for (i = 0; i < 32; i += 4) {
+		cmd_buf->filters[i / 4].filter_type = cmd_buf->data[i];
+		cmd_buf->filters[i / 4].input[0] = cmd_buf->data[i + 1];
+		cmd_buf->filters[i / 4].input[1] = cmd_buf->data[i + 2];
+		cmd_buf->filters[i / 4].input[2] = cmd_buf->data[i + 3];
 	}
 
-	/* Save modified config */
-	status = i40e_aq_set_phy_config(hw, &config, NULL);
-err:
 	return status;
 }
 
 /**
- * i40e_read_bw_from_alt_ram
+ * i40e_aq_alternate_read
  * @hw: pointer to the hardware structure
- * @max_bw: pointer for max_bw read
- * @min_bw: pointer for min_bw read
- * @min_valid: pointer for bool that is true if min_bw is a valid value
- * @max_valid: pointer for bool that is true if max_bw is a valid value
+ * @reg_addr0: address of first dword to be read
+ * @reg_val0: pointer for data read from 'reg_addr0'
+ * @reg_addr1: address of second dword to be read
+ * @reg_val1: pointer for data read from 'reg_addr1'
  *
- * Read bw from the alternate ram for the given pf
- **/
-enum i40e_status_code i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
-					u32 *max_bw, u32 *min_bw,
-					bool *min_valid, bool *max_valid)
-{
-	enum i40e_status_code status;
-	u32 max_bw_addr, min_bw_addr;
-
-	/* Calculate the address of the min/max bw registers */
-	max_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET +
-		      I40E_ALT_STRUCT_MAX_BW_OFFSET +
-		      (I40E_ALT_STRUCT_DWORDS_PER_PF * hw->pf_id);
-	min_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET +
-		      I40E_ALT_STRUCT_MIN_BW_OFFSET +
-		      (I40E_ALT_STRUCT_DWORDS_PER_PF * hw->pf_id);
-
-	/* Read the bandwidths from alt ram */
-	status = i40e_aq_alternate_read(hw, max_bw_addr, max_bw,
-					min_bw_addr, min_bw);
-
-	if (*min_bw & I40E_ALT_BW_VALID_MASK)
-		*min_valid = true;
-	else
-		*min_valid = false;
-
-	if (*max_bw & I40E_ALT_BW_VALID_MASK)
-		*max_valid = true;
-	else
-		*max_valid = false;
-
-	return status;
-}
-
-/**
- * i40e_aq_configure_partition_bw
- * @hw: pointer to the hardware structure
- * @bw_data: Buffer holding valid pfs and bw limits
- * @cmd_details: pointer to command details
+ * Read one or two dwords from alternate structure. Fields are indicated
+ * by 'reg_addr0' and 'reg_addr1' register numbers. If 'reg_val1' pointer
+ * is not passed then only register at 'reg_addr0' is read.
  *
- * Configure partitions guaranteed/max bw
  **/
-enum i40e_status_code i40e_aq_configure_partition_bw(struct i40e_hw *hw,
-			struct i40e_aqc_configure_partition_bw_data *bw_data,
-			struct i40e_asq_cmd_details *cmd_details)
+enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw,
+				u32 reg_addr0, u32 *reg_val0,
+				u32 reg_addr1, u32 *reg_val1)
 {
-	enum i40e_status_code status;
 	struct i40e_aq_desc desc;
-	u16 bwd_size = sizeof(*bw_data);
+	struct i40e_aqc_alternate_write *cmd_resp =
+		(struct i40e_aqc_alternate_write *)&desc.params.raw;
+	enum i40e_status_code status;
 
-	i40e_fill_default_direct_cmd_desc(&desc,
-				i40e_aqc_opc_configure_partition_bw);
+	if (reg_val0 == NULL)
+		return I40E_ERR_PARAM;
 
-	/* Indirect command */
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD);
+	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_read);
+	cmd_resp->address0 = CPU_TO_LE32(reg_addr0);
+	cmd_resp->address1 = CPU_TO_LE32(reg_addr1);
+
+	status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
 
-	desc.datalen = CPU_TO_LE16(bwd_size);
+	if (status == I40E_SUCCESS) {
+		*reg_val0 = LE32_TO_CPU(cmd_resp->data0);
 
-	status = i40e_asq_send_command(hw, &desc, bw_data, bwd_size, cmd_details);
+		if (reg_val1 != NULL)
+			*reg_val1 = LE32_TO_CPU(cmd_resp->data1);
+	}
 
 	return status;
 }
@@ -6758,93 +4842,18 @@ enum i40e_status_code i40e_write_phy_register_clause45(struct i40e_hw *hw,
 		  (I40E_GLGEN_MSCA_MDIINPROGEN_MASK);
 	status = I40E_ERR_TIMEOUT;
 	retry = 1000;
-	wr32(hw, I40E_GLGEN_MSCA(port_num), command);
-	do {
-		command = rd32(hw, I40E_GLGEN_MSCA(port_num));
-		if (!(command & I40E_GLGEN_MSCA_MDICMD_MASK)) {
-			status = I40E_SUCCESS;
-			break;
-		}
-		i40e_usec_delay(10);
-		retry--;
-	} while (retry);
-
-phy_write_end:
-	return status;
-}
-
-/**
- * i40e_write_phy_register
- * @hw: pointer to the HW structure
- * @page: registers page number
- * @reg: register address in the page
- * @phy_addr: PHY address on MDIO interface
- * @value: PHY register value
- *
- * Writes value to specified PHY register
- **/
-enum i40e_status_code i40e_write_phy_register(struct i40e_hw *hw,
-				u8 page, u16 reg, u8 phy_addr, u16 value)
-{
-	enum i40e_status_code status;
-
-	switch (hw->device_id) {
-	case I40E_DEV_ID_1G_BASE_T_X722:
-		status = i40e_write_phy_register_clause22(hw,
-			reg, phy_addr, value);
-		break;
-	case I40E_DEV_ID_10G_BASE_T:
-	case I40E_DEV_ID_10G_BASE_T4:
-	case I40E_DEV_ID_10G_BASE_T_BC:
-	case I40E_DEV_ID_5G_BASE_T_BC:
-	case I40E_DEV_ID_10G_BASE_T_X722:
-	case I40E_DEV_ID_25G_B:
-	case I40E_DEV_ID_25G_SFP28:
-		status = i40e_write_phy_register_clause45(hw,
-			page, reg, phy_addr, value);
-		break;
-	default:
-		status = I40E_ERR_UNKNOWN_PHY;
-		break;
-	}
-
-	return status;
-}
-
-/**
- * i40e_read_phy_register
- * @hw: pointer to the HW structure
- * @page: registers page number
- * @reg: register address in the page
- * @phy_addr: PHY address on MDIO interface
- * @value: PHY register value
- *
- * Reads specified PHY register value
- **/
-enum i40e_status_code i40e_read_phy_register(struct i40e_hw *hw,
-				u8 page, u16 reg, u8 phy_addr, u16 *value)
-{
-	enum i40e_status_code status;
-
-	switch (hw->device_id) {
-	case I40E_DEV_ID_1G_BASE_T_X722:
-		status = i40e_read_phy_register_clause22(hw, reg, phy_addr,
-							 value);
-		break;
-	case I40E_DEV_ID_10G_BASE_T:
-	case I40E_DEV_ID_10G_BASE_T4:
-	case I40E_DEV_ID_5G_BASE_T_BC:
-	case I40E_DEV_ID_10G_BASE_T_X722:
-	case I40E_DEV_ID_25G_B:
-	case I40E_DEV_ID_25G_SFP28:
-		status = i40e_read_phy_register_clause45(hw, page, reg,
-							 phy_addr, value);
-		break;
-	default:
-		status = I40E_ERR_UNKNOWN_PHY;
-		break;
-	}
+	wr32(hw, I40E_GLGEN_MSCA(port_num), command);
+	do {
+		command = rd32(hw, I40E_GLGEN_MSCA(port_num));
+		if (!(command & I40E_GLGEN_MSCA_MDICMD_MASK)) {
+			status = I40E_SUCCESS;
+			break;
+		}
+		i40e_usec_delay(10);
+		retry--;
+	} while (retry);
 
+phy_write_end:
 	return status;
 }
 
@@ -6863,80 +4872,6 @@ u8 i40e_get_phy_address(struct i40e_hw *hw, u8 dev_num)
 	return (u8)(reg_val >> ((dev_num + 1) * 5)) & 0x1f;
 }
 
-/**
- * i40e_blink_phy_link_led
- * @hw: pointer to the HW structure
- * @time: time in seconds for which the LED will blink
- * @interval: gap between LED on and off in msecs
- *
- * Blinks PHY link LED
- **/
-enum i40e_status_code i40e_blink_phy_link_led(struct i40e_hw *hw,
-					      u32 time, u32 interval)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	u32 i;
-	u16 led_ctl = 0;
-	u16 gpio_led_port;
-	u16 led_reg;
-	u16 led_addr = I40E_PHY_LED_PROV_REG_1;
-	u8 phy_addr = 0;
-	u8 port_num;
-
-	i = rd32(hw, I40E_PFGEN_PORTNUM);
-	port_num = (u8)(i & I40E_PFGEN_PORTNUM_PORT_NUM_MASK);
-	phy_addr = i40e_get_phy_address(hw, port_num);
-
-	for (gpio_led_port = 0; gpio_led_port < 3; gpio_led_port++,
-	     led_addr++) {
-		status = i40e_read_phy_register_clause45(hw,
-							 I40E_PHY_COM_REG_PAGE,
-							 led_addr, phy_addr,
-							 &led_reg);
-		if (status)
-			goto phy_blinking_end;
-		led_ctl = led_reg;
-		if (led_reg & I40E_PHY_LED_LINK_MODE_MASK) {
-			led_reg = 0;
-			status = i40e_write_phy_register_clause45(hw,
-							 I40E_PHY_COM_REG_PAGE,
-							 led_addr, phy_addr,
-							 led_reg);
-			if (status)
-				goto phy_blinking_end;
-			break;
-		}
-	}
-
-	if (time > 0 && interval > 0) {
-		for (i = 0; i < time * 1000; i += interval) {
-			status = i40e_read_phy_register_clause45(hw,
-						I40E_PHY_COM_REG_PAGE,
-						led_addr, phy_addr, &led_reg);
-			if (status)
-				goto restore_config;
-			if (led_reg & I40E_PHY_LED_MANUAL_ON)
-				led_reg = 0;
-			else
-				led_reg = I40E_PHY_LED_MANUAL_ON;
-			status = i40e_write_phy_register_clause45(hw,
-						I40E_PHY_COM_REG_PAGE,
-						led_addr, phy_addr, led_reg);
-			if (status)
-				goto restore_config;
-			i40e_msec_delay(interval);
-		}
-	}
-
-restore_config:
-	status = i40e_write_phy_register_clause45(hw,
-						  I40E_PHY_COM_REG_PAGE,
-						  led_addr, phy_addr, led_ctl);
-
-phy_blinking_end:
-	return status;
-}
-
 /**
  * i40e_led_get_reg - read LED register
  * @hw: pointer to the HW structure
@@ -6995,153 +4930,7 @@ enum i40e_status_code i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr,
 	return status;
 }
 
-/**
- * i40e_led_get_phy - return current on/off mode
- * @hw: pointer to the hw struct
- * @led_addr: address of led register to use
- * @val: original value of register to use
- *
- **/
-enum i40e_status_code i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr,
-				       u16 *val)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	u16 gpio_led_port;
-	u32 reg_val_aq;
-	u16 temp_addr;
-	u8 phy_addr = 0;
-	u16 reg_val;
-
-	if (hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE) {
-		status = i40e_aq_get_phy_register(hw,
-						I40E_AQ_PHY_REG_ACCESS_EXTERNAL,
-						I40E_PHY_COM_REG_PAGE, true,
-						I40E_PHY_LED_PROV_REG_1,
-						&reg_val_aq, NULL);
-		if (status == I40E_SUCCESS)
-			*val = (u16)reg_val_aq;
-		return status;
-	}
-	temp_addr = I40E_PHY_LED_PROV_REG_1;
-	phy_addr = i40e_get_phy_address(hw, hw->port);
-	for (gpio_led_port = 0; gpio_led_port < 3; gpio_led_port++,
-	     temp_addr++) {
-		status = i40e_read_phy_register_clause45(hw,
-							 I40E_PHY_COM_REG_PAGE,
-							 temp_addr, phy_addr,
-							 &reg_val);
-		if (status)
-			return status;
-		*val = reg_val;
-		if (reg_val & I40E_PHY_LED_LINK_MODE_MASK) {
-			*led_addr = temp_addr;
-			break;
-		}
-	}
-	return status;
-}
-
-/**
- * i40e_led_set_phy
- * @hw: pointer to the HW structure
- * @on: true or false
- * @led_addr: address of led register to use
- * @mode: original val plus bit for set or ignore
- *
- * Set led's on or off when controlled by the PHY
- *
- **/
-enum i40e_status_code i40e_led_set_phy(struct i40e_hw *hw, bool on,
-				       u16 led_addr, u32 mode)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	u32 led_ctl = 0;
-	u32 led_reg = 0;
-
-	status = i40e_led_get_reg(hw, led_addr, &led_reg);
-	if (status)
-		return status;
-	led_ctl = led_reg;
-	if (led_reg & I40E_PHY_LED_LINK_MODE_MASK) {
-		led_reg = 0;
-		status = i40e_led_set_reg(hw, led_addr, led_reg);
-		if (status)
-			return status;
-	}
-	status = i40e_led_get_reg(hw, led_addr, &led_reg);
-	if (status)
-		goto restore_config;
-	if (on)
-		led_reg = I40E_PHY_LED_MANUAL_ON;
-	else
-		led_reg = 0;
-	status = i40e_led_set_reg(hw, led_addr, led_reg);
-	if (status)
-		goto restore_config;
-	if (mode & I40E_PHY_LED_MODE_ORIG) {
-		led_ctl = (mode & I40E_PHY_LED_MODE_MASK);
-		status = i40e_led_set_reg(hw, led_addr, led_ctl);
-	}
-	return status;
-
-restore_config:
-	status = i40e_led_set_reg(hw, led_addr, led_ctl);
-	return status;
-}
 #endif /* PF_DRIVER */
-/**
- * i40e_get_phy_lpi_status - read LPI status from PHY or MAC register
- * @hw: pointer to the hw struct
- * @stat: pointer to structure with status of rx and tx lpi
- *
- * Read LPI state directly from external PHY register or from MAC
- * register, depending on device ID and current link speed.
- */
-enum i40e_status_code i40e_get_phy_lpi_status(struct i40e_hw *hw,
-					      struct i40e_hw_port_stats *stat)
-{
-	enum i40e_status_code ret = I40E_SUCCESS;
-	bool eee_mrvl_phy;
-	bool eee_bcm_phy;
-	u32 val;
-
-	stat->rx_lpi_status = 0;
-	stat->tx_lpi_status = 0;
-
-	eee_bcm_phy =
-		(hw->device_id == I40E_DEV_ID_10G_BASE_T_BC ||
-		 hw->device_id == I40E_DEV_ID_5G_BASE_T_BC) &&
-		(hw->phy.link_info.link_speed == I40E_LINK_SPEED_2_5GB ||
-		 hw->phy.link_info.link_speed == I40E_LINK_SPEED_5GB);
-	eee_mrvl_phy =
-		hw->device_id == I40E_DEV_ID_1G_BASE_T_X722;
-
-	if (eee_bcm_phy || eee_mrvl_phy) {
-		/* read Clause 45 PCS Status 1 register */
-		ret = i40e_aq_get_phy_register(hw,
-					       I40E_AQ_PHY_REG_ACCESS_EXTERNAL,
-					       I40E_BCM_PHY_PCS_STATUS1_PAGE,
-					       true,
-					       I40E_BCM_PHY_PCS_STATUS1_REG,
-					       &val, NULL);
-
-		if (ret != I40E_SUCCESS)
-			return ret;
-
-		stat->rx_lpi_status = !!(val & I40E_BCM_PHY_PCS_STATUS1_RX_LPI);
-		stat->tx_lpi_status = !!(val & I40E_BCM_PHY_PCS_STATUS1_TX_LPI);
-
-		return ret;
-	}
-
-	val = rd32(hw, I40E_PRTPM_EEE_STAT);
-	stat->rx_lpi_status = (val & I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_MASK) >>
-			       I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_SHIFT;
-	stat->tx_lpi_status = (val & I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_MASK) >>
-			       I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_SHIFT;
-
-	return ret;
-}
 
 /**
  * i40e_get_lpi_counters - read LPI counters from EEE statistics
@@ -7185,108 +4974,6 @@ enum i40e_status_code i40e_get_lpi_counters(struct i40e_hw *hw,
 	return I40E_SUCCESS;
 }
 
-/**
- * i40e_get_lpi_duration - read LPI time duration from EEE statistics
- * @hw: pointer to the hw struct
- * @stat: pointer to structure with status of rx and tx lpi
- * @tx_duration: pointer to memory for TX LPI time duration
- * @rx_duration: pointer to memory for RX LPI time duration
- *
- * Read Low Power Idle (LPI) mode time duration from Energy Efficient
- * Ethernet (EEE) statistics.
- */
-enum i40e_status_code i40e_get_lpi_duration(struct i40e_hw *hw,
-					    struct i40e_hw_port_stats *stat,
-					    u64 *tx_duration, u64 *rx_duration)
-{
-	u32 tx_time_dur, rx_time_dur;
-	enum i40e_status_code retval;
-	u32 cmd_status;
-
-	if (hw->device_id != I40E_DEV_ID_10G_BASE_T_BC &&
-	    hw->device_id != I40E_DEV_ID_5G_BASE_T_BC)
-		return I40E_ERR_NOT_IMPLEMENTED;
-
-	retval = i40e_aq_run_phy_activity
-		(hw, I40E_AQ_RUN_PHY_ACT_ID_USR_DFND,
-		I40E_AQ_RUN_PHY_ACT_DNL_OPCODE_GET_EEE_DUR,
-		&cmd_status, &tx_time_dur, &rx_time_dur, NULL);
-
-	if (retval)
-		return retval;
-	if ((cmd_status & I40E_AQ_RUN_PHY_ACT_CMD_STAT_MASK) !=
-	    I40E_AQ_RUN_PHY_ACT_CMD_STAT_SUCC)
-		return I40E_ERR_ADMIN_QUEUE_ERROR;
-
-	if (hw->phy.link_info.link_speed == I40E_LINK_SPEED_1GB &&
-	    !tx_time_dur && !rx_time_dur &&
-	    stat->tx_lpi_status && stat->rx_lpi_status) {
-		retval = i40e_aq_run_phy_activity
-			(hw, I40E_AQ_RUN_PHY_ACT_ID_USR_DFND,
-			I40E_AQ_RUN_PHY_ACT_DNL_OPCODE_GET_EEE_STAT_DUR,
-			&cmd_status,
-			&tx_time_dur, &rx_time_dur, NULL);
-
-		if (retval)
-			return retval;
-		if ((cmd_status & I40E_AQ_RUN_PHY_ACT_CMD_STAT_MASK) !=
-		    I40E_AQ_RUN_PHY_ACT_CMD_STAT_SUCC)
-			return I40E_ERR_ADMIN_QUEUE_ERROR;
-		tx_time_dur = 0;
-		rx_time_dur = 0;
-	}
-
-	*tx_duration = tx_time_dur;
-	*rx_duration = rx_time_dur;
-
-	return retval;
-}
-
-/**
- * i40e_lpi_stat_update - update LPI counters with values relative to offset
- * @hw: pointer to the hw struct
- * @offset_loaded: flag indicating need of writing current value to offset
- * @tx_offset: pointer to offset of TX LPI counter
- * @tx_stat: pointer to value of TX LPI counter
- * @rx_offset: pointer to offset of RX LPI counter
- * @rx_stat: pointer to value of RX LPI counter
- *
- * Update Low Power Idle (LPI) mode counters while having regard to passed
- * offsets.
- **/
-enum i40e_status_code i40e_lpi_stat_update(struct i40e_hw *hw,
-					   bool offset_loaded, u64 *tx_offset,
-					   u64 *tx_stat, u64 *rx_offset,
-					   u64 *rx_stat)
-{
-	enum i40e_status_code retval;
-	u32 tx_counter, rx_counter;
-	bool is_clear;
-
-	retval = i40e_get_lpi_counters(hw, &tx_counter, &rx_counter, &is_clear);
-	if (retval)
-		goto err;
-
-	if (is_clear) {
-		*tx_stat += tx_counter;
-		*rx_stat += rx_counter;
-	} else {
-		if (!offset_loaded) {
-			*tx_offset = tx_counter;
-			*rx_offset = rx_counter;
-		}
-
-		*tx_stat = (tx_counter >= *tx_offset) ?
-			(u32)(tx_counter - *tx_offset) :
-			(u32)((tx_counter + BIT_ULL(32)) - *tx_offset);
-		*rx_stat = (rx_counter >= *rx_offset) ?
-			(u32)(rx_counter - *rx_offset) :
-			(u32)((rx_counter + BIT_ULL(32)) - *rx_offset);
-	}
-err:
-	return retval;
-}
-
 /**
  * i40e_aq_rx_ctl_read_register - use FW to read from an Rx control register
  * @hw: pointer to the hw struct
@@ -7674,195 +5361,6 @@ enum i40e_status_code i40e_vf_reset(struct i40e_hw *hw)
 }
 #endif /* VF_DRIVER */
 
-/**
- * i40e_aq_set_arp_proxy_config
- * @hw: pointer to the HW structure
- * @proxy_config: pointer to proxy config command table struct
- * @cmd_details: pointer to command details
- *
- * Set ARP offload parameters from pre-populated
- * i40e_aqc_arp_proxy_data struct
- **/
-enum i40e_status_code i40e_aq_set_arp_proxy_config(struct i40e_hw *hw,
-				struct i40e_aqc_arp_proxy_data *proxy_config,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	enum i40e_status_code status;
-
-	if (!proxy_config)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_proxy_config);
-
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD);
-	desc.params.external.addr_high =
-				  CPU_TO_LE32(I40E_HI_DWORD((u64)proxy_config));
-	desc.params.external.addr_low =
-				  CPU_TO_LE32(I40E_LO_DWORD((u64)proxy_config));
-	desc.datalen = CPU_TO_LE16(sizeof(struct i40e_aqc_arp_proxy_data));
-
-	status = i40e_asq_send_command(hw, &desc, proxy_config,
-				       sizeof(struct i40e_aqc_arp_proxy_data),
-				       cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_opc_set_ns_proxy_table_entry
- * @hw: pointer to the HW structure
- * @ns_proxy_table_entry: pointer to NS table entry command struct
- * @cmd_details: pointer to command details
- *
- * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
- * from pre-populated i40e_aqc_ns_proxy_data struct
- **/
-enum i40e_status_code i40e_aq_set_ns_proxy_table_entry(struct i40e_hw *hw,
-			struct i40e_aqc_ns_proxy_data *ns_proxy_table_entry,
-			struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	enum i40e_status_code status;
-
-	if (!ns_proxy_table_entry)
-		return I40E_ERR_PARAM;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-				i40e_aqc_opc_set_ns_proxy_table_entry);
-
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD);
-	desc.params.external.addr_high =
-		CPU_TO_LE32(I40E_HI_DWORD((u64)ns_proxy_table_entry));
-	desc.params.external.addr_low =
-		CPU_TO_LE32(I40E_LO_DWORD((u64)ns_proxy_table_entry));
-	desc.datalen = CPU_TO_LE16(sizeof(struct i40e_aqc_ns_proxy_data));
-
-	status = i40e_asq_send_command(hw, &desc, ns_proxy_table_entry,
-				       sizeof(struct i40e_aqc_ns_proxy_data),
-				       cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_set_clear_wol_filter
- * @hw: pointer to the hw struct
- * @filter_index: index of filter to modify (0-7)
- * @filter: buffer containing filter to be set
- * @set_filter: true to set filter, false to clear filter
- * @no_wol_tco: if true, pass through packets cannot cause wake-up
- *		if false, pass through packets may cause wake-up
- * @filter_valid: true if filter action is valid
- * @no_wol_tco_valid: true if no WoL in TCO traffic action valid
- * @cmd_details: pointer to command details structure or NULL
- *
- * Set or clear WoL filter for port attached to the PF
- **/
-enum i40e_status_code i40e_aq_set_clear_wol_filter(struct i40e_hw *hw,
-				u8 filter_index,
-				struct i40e_aqc_set_wol_filter_data *filter,
-				bool set_filter, bool no_wol_tco,
-				bool filter_valid, bool no_wol_tco_valid,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_set_wol_filter *cmd =
-		(struct i40e_aqc_set_wol_filter *)&desc.params.raw;
-	enum i40e_status_code status;
-	u16 cmd_flags = 0;
-	u16 valid_flags = 0;
-	u16 buff_len = 0;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_wol_filter);
-
-	if (filter_index >= I40E_AQC_MAX_NUM_WOL_FILTERS)
-		return  I40E_ERR_PARAM;
-	cmd->filter_index = CPU_TO_LE16(filter_index);
-
-	if (set_filter) {
-		if (!filter)
-			return  I40E_ERR_PARAM;
-
-		cmd_flags |= I40E_AQC_SET_WOL_FILTER;
-		cmd_flags |= I40E_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
-	}
-
-	if (no_wol_tco)
-		cmd_flags |= I40E_AQC_SET_WOL_FILTER_NO_TCO_WOL;
-	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
-
-	if (filter_valid)
-		valid_flags |= I40E_AQC_SET_WOL_FILTER_ACTION_VALID;
-	if (no_wol_tco_valid)
-		valid_flags |= I40E_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
-	cmd->valid_flags = CPU_TO_LE16(valid_flags);
-
-	buff_len = sizeof(*filter);
-	desc.datalen = CPU_TO_LE16(buff_len);
-
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
-	desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD);
-
-	cmd->address_high = CPU_TO_LE32(I40E_HI_DWORD((u64)filter));
-	cmd->address_low = CPU_TO_LE32(I40E_LO_DWORD((u64)filter));
-
-	status = i40e_asq_send_command(hw, &desc, filter,
-				       buff_len, cmd_details);
-
-	return status;
-}
-
-/**
- * i40e_aq_get_wake_event_reason
- * @hw: pointer to the hw struct
- * @wake_reason: return value, index of matching filter
- * @cmd_details: pointer to command details structure or NULL
- *
- * Get information for the reason of a Wake Up event
- **/
-enum i40e_status_code i40e_aq_get_wake_event_reason(struct i40e_hw *hw,
-				u16 *wake_reason,
-				struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	struct i40e_aqc_get_wake_reason_completion *resp =
-		(struct i40e_aqc_get_wake_reason_completion *)&desc.params.raw;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_wake_reason);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	if (status == I40E_SUCCESS)
-		*wake_reason = LE16_TO_CPU(resp->wake_reason);
-
-	return status;
-}
-
-/**
-* i40e_aq_clear_all_wol_filters
-* @hw: pointer to the hw struct
-* @cmd_details: pointer to command details structure or NULL
-*
-* Clear all WoL filters for the port attached to the PF
-**/
-enum i40e_status_code i40e_aq_clear_all_wol_filters(struct i40e_hw *hw,
-	struct i40e_asq_cmd_details *cmd_details)
-{
-	struct i40e_aq_desc desc;
-	enum i40e_status_code status;
-
-	i40e_fill_default_direct_cmd_desc(&desc,
-					  i40e_aqc_opc_clear_all_wol_filters);
-
-	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-	return status;
-}
-
 /**
  * i40e_aq_write_ddp - Write dynamic device personalization (ddp)
  * @hw: pointer to the hw struct
@@ -8243,42 +5741,3 @@ i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
 	}
 	return status;
 }
-
-/**
- * i40e_add_pinfo_to_list
- * @hw: pointer to the hardware structure
- * @profile: pointer to the profile segment of the package
- * @profile_info_sec: buffer for information section
- * @track_id: package tracking id
- *
- * Register a profile to the list of loaded profiles.
- */
-enum i40e_status_code
-i40e_add_pinfo_to_list(struct i40e_hw *hw,
-		       struct i40e_profile_segment *profile,
-		       u8 *profile_info_sec, u32 track_id)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	struct i40e_profile_section_header *sec = NULL;
-	struct i40e_profile_info *pinfo;
-	u32 offset = 0, info = 0;
-
-	sec = (struct i40e_profile_section_header *)profile_info_sec;
-	sec->tbl_size = 1;
-	sec->data_end = sizeof(struct i40e_profile_section_header) +
-			sizeof(struct i40e_profile_info);
-	sec->section.type = SECTION_TYPE_INFO;
-	sec->section.offset = sizeof(struct i40e_profile_section_header);
-	sec->section.size = sizeof(struct i40e_profile_info);
-	pinfo = (struct i40e_profile_info *)(profile_info_sec +
-					     sec->section.offset);
-	pinfo->track_id = track_id;
-	pinfo->version = profile->version;
-	pinfo->op = I40E_DDP_ADD_TRACKID;
-	i40e_memcpy(pinfo->name, profile->name, I40E_DDP_NAME_SIZE,
-		    I40E_NONDMA_TO_NONDMA);
-
-	status = i40e_aq_write_ddp(hw, (void *)sec, sec->data_end,
-				   track_id, &offset, &info, NULL);
-	return status;
-}
diff --git a/drivers/net/i40e/base/i40e_dcb.c b/drivers/net/i40e/base/i40e_dcb.c
index 388af3d64d..ceb2f37927 100644
--- a/drivers/net/i40e/base/i40e_dcb.c
+++ b/drivers/net/i40e/base/i40e_dcb.c
@@ -932,49 +932,6 @@ enum i40e_status_code i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change)
 	return ret;
 }
 
-/**
- * i40e_get_fw_lldp_status
- * @hw: pointer to the hw struct
- * @lldp_status: pointer to the status enum
- *
- * Get status of FW Link Layer Discovery Protocol (LLDP) Agent.
- * Status of agent is reported via @lldp_status parameter.
- **/
-enum i40e_status_code
-i40e_get_fw_lldp_status(struct i40e_hw *hw,
-			enum i40e_get_fw_lldp_status_resp *lldp_status)
-{
-	enum i40e_status_code ret;
-	struct i40e_virt_mem mem;
-	u8 *lldpmib;
-
-	if (!lldp_status)
-		return I40E_ERR_PARAM;
-
-	/* Allocate buffer for the LLDPDU */
-	ret = i40e_allocate_virt_mem(hw, &mem, I40E_LLDPDU_SIZE);
-	if (ret)
-		return ret;
-
-	lldpmib = (u8 *)mem.va;
-	ret = i40e_aq_get_lldp_mib(hw, 0, 0, (void *)lldpmib,
-				   I40E_LLDPDU_SIZE, NULL, NULL, NULL);
-
-	if (ret == I40E_SUCCESS) {
-		*lldp_status = I40E_GET_FW_LLDP_STATUS_ENABLED;
-	} else if (hw->aq.asq_last_status == I40E_AQ_RC_ENOENT) {
-		/* MIB is not available yet but the agent is running */
-		*lldp_status = I40E_GET_FW_LLDP_STATUS_ENABLED;
-		ret = I40E_SUCCESS;
-	} else if (hw->aq.asq_last_status == I40E_AQ_RC_EPERM) {
-		*lldp_status = I40E_GET_FW_LLDP_STATUS_DISABLED;
-		ret = I40E_SUCCESS;
-	}
-
-	i40e_free_virt_mem(hw, &mem);
-	return ret;
-}
-
 /**
  * i40e_add_ieee_ets_tlv - Prepare ETS TLV in IEEE format
  * @tlv: Fill the ETS config data in IEEE format
diff --git a/drivers/net/i40e/base/i40e_dcb.h b/drivers/net/i40e/base/i40e_dcb.h
index 0409fd3e1a..01c1d8af11 100644
--- a/drivers/net/i40e/base/i40e_dcb.h
+++ b/drivers/net/i40e/base/i40e_dcb.h
@@ -199,9 +199,6 @@ enum i40e_status_code i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type,
 enum i40e_status_code i40e_get_dcb_config(struct i40e_hw *hw);
 enum i40e_status_code i40e_init_dcb(struct i40e_hw *hw,
 				    bool enable_mib_change);
-enum i40e_status_code
-i40e_get_fw_lldp_status(struct i40e_hw *hw,
-			enum i40e_get_fw_lldp_status_resp *lldp_status);
 enum i40e_status_code i40e_set_dcb_config(struct i40e_hw *hw);
 enum i40e_status_code i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen,
 					      struct i40e_dcbx_config *dcbcfg);
diff --git a/drivers/net/i40e/base/i40e_diag.c b/drivers/net/i40e/base/i40e_diag.c
deleted file mode 100644
index b3c4cfd3aa..0000000000
--- a/drivers/net/i40e/base/i40e_diag.c
+++ /dev/null
@@ -1,146 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2020 Intel Corporation
- */
-
-#include "i40e_diag.h"
-#include "i40e_prototype.h"
-
-/**
- * i40e_diag_set_loopback
- * @hw: pointer to the hw struct
- * @mode: loopback mode
- *
- * Set chosen loopback mode
- **/
-enum i40e_status_code i40e_diag_set_loopback(struct i40e_hw *hw,
-					     enum i40e_lb_mode mode)
-{
-	enum i40e_status_code ret_code = I40E_SUCCESS;
-
-	if (i40e_aq_set_lb_modes(hw, mode, NULL))
-		ret_code = I40E_ERR_DIAG_TEST_FAILED;
-
-	return ret_code;
-}
-
-/**
- * i40e_diag_reg_pattern_test
- * @hw: pointer to the hw struct
- * @reg: reg to be tested
- * @mask: bits to be touched
- **/
-static enum i40e_status_code i40e_diag_reg_pattern_test(struct i40e_hw *hw,
-							u32 reg, u32 mask)
-{
-	const u32 patterns[] = {0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF};
-	u32 pat, val, orig_val;
-	int i;
-
-	orig_val = rd32(hw, reg);
-	for (i = 0; i < ARRAY_SIZE(patterns); i++) {
-		pat = patterns[i];
-		wr32(hw, reg, (pat & mask));
-		val = rd32(hw, reg);
-		if ((val & mask) != (pat & mask)) {
-			return I40E_ERR_DIAG_TEST_FAILED;
-		}
-	}
-
-	wr32(hw, reg, orig_val);
-	val = rd32(hw, reg);
-	if (val != orig_val) {
-		return I40E_ERR_DIAG_TEST_FAILED;
-	}
-
-	return I40E_SUCCESS;
-}
-
-static struct i40e_diag_reg_test_info i40e_reg_list[] = {
-	/* offset               mask         elements   stride */
-	{I40E_QTX_CTL(0),       0x0000FFBF, 1, I40E_QTX_CTL(1) - I40E_QTX_CTL(0)},
-	{I40E_PFINT_ITR0(0),    0x00000FFF, 3, I40E_PFINT_ITR0(1) - I40E_PFINT_ITR0(0)},
-	{I40E_PFINT_ITRN(0, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(0, 1) - I40E_PFINT_ITRN(0, 0)},
-	{I40E_PFINT_ITRN(1, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(1, 1) - I40E_PFINT_ITRN(1, 0)},
-	{I40E_PFINT_ITRN(2, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(2, 1) - I40E_PFINT_ITRN(2, 0)},
-	{I40E_PFINT_STAT_CTL0,  0x0000000C, 1, 0},
-	{I40E_PFINT_LNKLST0,    0x00001FFF, 1, 0},
-	{I40E_PFINT_LNKLSTN(0), 0x000007FF, 1, I40E_PFINT_LNKLSTN(1) - I40E_PFINT_LNKLSTN(0)},
-	{I40E_QINT_TQCTL(0),    0x000000FF, 1, I40E_QINT_TQCTL(1) - I40E_QINT_TQCTL(0)},
-	{I40E_QINT_RQCTL(0),    0x000000FF, 1, I40E_QINT_RQCTL(1) - I40E_QINT_RQCTL(0)},
-	{I40E_PFINT_ICR0_ENA,   0xF7F20000, 1, 0},
-	{ 0 }
-};
-
-/**
- * i40e_diag_reg_test
- * @hw: pointer to the hw struct
- *
- * Perform registers diagnostic test
- **/
-enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw)
-{
-	enum i40e_status_code ret_code = I40E_SUCCESS;
-	u32 reg, mask;
-	u32 i, j;
-
-	for (i = 0; i40e_reg_list[i].offset != 0 &&
-					     ret_code == I40E_SUCCESS; i++) {
-
-		/* set actual reg range for dynamically allocated resources */
-		if (i40e_reg_list[i].offset == I40E_QTX_CTL(0) &&
-		    hw->func_caps.num_tx_qp != 0)
-			i40e_reg_list[i].elements = hw->func_caps.num_tx_qp;
-		if ((i40e_reg_list[i].offset == I40E_PFINT_ITRN(0, 0) ||
-		     i40e_reg_list[i].offset == I40E_PFINT_ITRN(1, 0) ||
-		     i40e_reg_list[i].offset == I40E_PFINT_ITRN(2, 0) ||
-		     i40e_reg_list[i].offset == I40E_QINT_TQCTL(0) ||
-		     i40e_reg_list[i].offset == I40E_QINT_RQCTL(0)) &&
-		    hw->func_caps.num_msix_vectors != 0)
-			i40e_reg_list[i].elements =
-				hw->func_caps.num_msix_vectors - 1;
-
-		/* test register access */
-		mask = i40e_reg_list[i].mask;
-		for (j = 0; j < i40e_reg_list[i].elements &&
-			    ret_code == I40E_SUCCESS; j++) {
-			reg = i40e_reg_list[i].offset
-				+ (j * i40e_reg_list[i].stride);
-			ret_code = i40e_diag_reg_pattern_test(hw, reg, mask);
-		}
-	}
-
-	return ret_code;
-}
-
-/**
- * i40e_diag_eeprom_test
- * @hw: pointer to the hw struct
- *
- * Perform EEPROM diagnostic test
- **/
-enum i40e_status_code i40e_diag_eeprom_test(struct i40e_hw *hw)
-{
-	enum i40e_status_code ret_code;
-	u16 reg_val;
-
-	/* read NVM control word and if NVM valid, validate EEPROM checksum*/
-	ret_code = i40e_read_nvm_word(hw, I40E_SR_NVM_CONTROL_WORD, &reg_val);
-	if ((ret_code == I40E_SUCCESS) &&
-	    ((reg_val & I40E_SR_CONTROL_WORD_1_MASK) ==
-	     BIT(I40E_SR_CONTROL_WORD_1_SHIFT)))
-		return i40e_validate_nvm_checksum(hw, NULL);
-	else
-		return I40E_ERR_DIAG_TEST_FAILED;
-}
-
-/**
- * i40e_diag_fw_alive_test
- * @hw: pointer to the hw struct
- *
- * Perform FW alive diagnostic test
- **/
-enum i40e_status_code i40e_diag_fw_alive_test(struct i40e_hw *hw)
-{
-	UNREFERENCED_1PARAMETER(hw);
-	return I40E_SUCCESS;
-}
diff --git a/drivers/net/i40e/base/i40e_diag.h b/drivers/net/i40e/base/i40e_diag.h
deleted file mode 100644
index cb59285d9c..0000000000
--- a/drivers/net/i40e/base/i40e_diag.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2020 Intel Corporation
- */
-
-#ifndef _I40E_DIAG_H_
-#define _I40E_DIAG_H_
-
-#include "i40e_type.h"
-
-enum i40e_lb_mode {
-	I40E_LB_MODE_NONE       = 0x0,
-	I40E_LB_MODE_PHY_LOCAL  = I40E_AQ_LB_PHY_LOCAL,
-	I40E_LB_MODE_PHY_REMOTE = I40E_AQ_LB_PHY_REMOTE,
-	I40E_LB_MODE_MAC_LOCAL  = I40E_AQ_LB_MAC_LOCAL,
-};
-
-struct i40e_diag_reg_test_info {
-	u32 offset;	/* the base register */
-	u32 mask;	/* bits that can be tested */
-	u32 elements;	/* number of elements if array */
-	u32 stride;	/* bytes between each element */
-};
-
-enum i40e_status_code i40e_diag_set_loopback(struct i40e_hw *hw,
-					     enum i40e_lb_mode mode);
-enum i40e_status_code i40e_diag_fw_alive_test(struct i40e_hw *hw);
-enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw);
-enum i40e_status_code i40e_diag_eeprom_test(struct i40e_hw *hw);
-
-#endif /* _I40E_DIAG_H_ */
diff --git a/drivers/net/i40e/base/i40e_lan_hmc.c b/drivers/net/i40e/base/i40e_lan_hmc.c
index d3969396f0..5242ba8deb 100644
--- a/drivers/net/i40e/base/i40e_lan_hmc.c
+++ b/drivers/net/i40e/base/i40e_lan_hmc.c
@@ -914,228 +914,6 @@ static void i40e_write_qword(u8 *hmc_bits,
 	i40e_memcpy(dest, &dest_qword, sizeof(dest_qword), I40E_NONDMA_TO_DMA);
 }
 
-/**
- * i40e_read_byte - read HMC context byte into struct
- * @hmc_bits: pointer to the HMC memory
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static void i40e_read_byte(u8 *hmc_bits,
-			   struct i40e_context_ele *ce_info,
-			   u8 *dest)
-{
-	u8 dest_byte, mask;
-	u8 *src, *target;
-	u16 shift_width;
-
-	/* prepare the bits and mask */
-	shift_width = ce_info->lsb % 8;
-	mask = (u8)(BIT(ce_info->width) - 1);
-
-	/* shift to correct alignment */
-	mask <<= shift_width;
-
-	/* get the current bits from the src bit string */
-	src = hmc_bits + (ce_info->lsb / 8);
-
-	i40e_memcpy(&dest_byte, src, sizeof(dest_byte), I40E_DMA_TO_NONDMA);
-
-	dest_byte &= ~(mask);
-
-	dest_byte >>= shift_width;
-
-	/* get the address from the struct field */
-	target = dest + ce_info->offset;
-
-	/* put it back in the struct */
-	i40e_memcpy(target, &dest_byte, sizeof(dest_byte), I40E_NONDMA_TO_DMA);
-}
-
-/**
- * i40e_read_word - read HMC context word into struct
- * @hmc_bits: pointer to the HMC memory
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static void i40e_read_word(u8 *hmc_bits,
-			   struct i40e_context_ele *ce_info,
-			   u8 *dest)
-{
-	u16 dest_word, mask;
-	u8 *src, *target;
-	u16 shift_width;
-	__le16 src_word;
-
-	/* prepare the bits and mask */
-	shift_width = ce_info->lsb % 8;
-	mask = BIT(ce_info->width) - 1;
-
-	/* shift to correct alignment */
-	mask <<= shift_width;
-
-	/* get the current bits from the src bit string */
-	src = hmc_bits + (ce_info->lsb / 8);
-
-	i40e_memcpy(&src_word, src, sizeof(src_word), I40E_DMA_TO_NONDMA);
-
-	/* the data in the memory is stored as little endian so mask it
-	 * correctly
-	 */
-	src_word &= ~(CPU_TO_LE16(mask));
-
-	/* get the data back into host order before shifting */
-	dest_word = LE16_TO_CPU(src_word);
-
-	dest_word >>= shift_width;
-
-	/* get the address from the struct field */
-	target = dest + ce_info->offset;
-
-	/* put it back in the struct */
-	i40e_memcpy(target, &dest_word, sizeof(dest_word), I40E_NONDMA_TO_DMA);
-}
-
-/**
- * i40e_read_dword - read HMC context dword into struct
- * @hmc_bits: pointer to the HMC memory
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static void i40e_read_dword(u8 *hmc_bits,
-			    struct i40e_context_ele *ce_info,
-			    u8 *dest)
-{
-	u32 dest_dword, mask;
-	u8 *src, *target;
-	u16 shift_width;
-	__le32 src_dword;
-
-	/* prepare the bits and mask */
-	shift_width = ce_info->lsb % 8;
-
-	/* if the field width is exactly 32 on an x86 machine, then the shift
-	 * operation will not work because the SHL instruction's count is masked
-	 * to 5 bits so the shift will do nothing
-	 */
-	if (ce_info->width < 32)
-		mask = BIT(ce_info->width) - 1;
-	else
-		mask = ~(u32)0;
-
-	/* shift to correct alignment */
-	mask <<= shift_width;
-
-	/* get the current bits from the src bit string */
-	src = hmc_bits + (ce_info->lsb / 8);
-
-	i40e_memcpy(&src_dword, src, sizeof(src_dword), I40E_DMA_TO_NONDMA);
-
-	/* the data in the memory is stored as little endian so mask it
-	 * correctly
-	 */
-	src_dword &= ~(CPU_TO_LE32(mask));
-
-	/* get the data back into host order before shifting */
-	dest_dword = LE32_TO_CPU(src_dword);
-
-	dest_dword >>= shift_width;
-
-	/* get the address from the struct field */
-	target = dest + ce_info->offset;
-
-	/* put it back in the struct */
-	i40e_memcpy(target, &dest_dword, sizeof(dest_dword),
-		    I40E_NONDMA_TO_DMA);
-}
-
-/**
- * i40e_read_qword - read HMC context qword into struct
- * @hmc_bits: pointer to the HMC memory
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static void i40e_read_qword(u8 *hmc_bits,
-			    struct i40e_context_ele *ce_info,
-			    u8 *dest)
-{
-	u64 dest_qword, mask;
-	u8 *src, *target;
-	u16 shift_width;
-	__le64 src_qword;
-
-	/* prepare the bits and mask */
-	shift_width = ce_info->lsb % 8;
-
-	/* if the field width is exactly 64 on an x86 machine, then the shift
-	 * operation will not work because the SHL instruction's count is masked
-	 * to 6 bits so the shift will do nothing
-	 */
-	if (ce_info->width < 64)
-		mask = BIT_ULL(ce_info->width) - 1;
-	else
-		mask = ~(u64)0;
-
-	/* shift to correct alignment */
-	mask <<= shift_width;
-
-	/* get the current bits from the src bit string */
-	src = hmc_bits + (ce_info->lsb / 8);
-
-	i40e_memcpy(&src_qword, src, sizeof(src_qword), I40E_DMA_TO_NONDMA);
-
-	/* the data in the memory is stored as little endian so mask it
-	 * correctly
-	 */
-	src_qword &= ~(CPU_TO_LE64(mask));
-
-	/* get the data back into host order before shifting */
-	dest_qword = LE64_TO_CPU(src_qword);
-
-	dest_qword >>= shift_width;
-
-	/* get the address from the struct field */
-	target = dest + ce_info->offset;
-
-	/* put it back in the struct */
-	i40e_memcpy(target, &dest_qword, sizeof(dest_qword),
-		    I40E_NONDMA_TO_DMA);
-}
-
-/**
- * i40e_get_hmc_context - extract HMC context bits
- * @context_bytes: pointer to the context bit array
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static enum i40e_status_code i40e_get_hmc_context(u8 *context_bytes,
-					struct i40e_context_ele *ce_info,
-					u8 *dest)
-{
-	int f;
-
-	for (f = 0; ce_info[f].width != 0; f++) {
-		switch (ce_info[f].size_of) {
-		case 1:
-			i40e_read_byte(context_bytes, &ce_info[f], dest);
-			break;
-		case 2:
-			i40e_read_word(context_bytes, &ce_info[f], dest);
-			break;
-		case 4:
-			i40e_read_dword(context_bytes, &ce_info[f], dest);
-			break;
-		case 8:
-			i40e_read_qword(context_bytes, &ce_info[f], dest);
-			break;
-		default:
-			/* nothing to do, just keep going */
-			break;
-		}
-	}
-
-	return I40E_SUCCESS;
-}
-
 /**
  * i40e_clear_hmc_context - zero out the HMC context bits
  * @hw:       the hardware struct
@@ -1261,27 +1039,6 @@ enum i40e_status_code i40e_hmc_get_object_va(struct i40e_hw *hw,
 	return ret_code;
 }
 
-/**
- * i40e_get_lan_tx_queue_context - return the HMC context for the queue
- * @hw:    the hardware struct
- * @queue: the queue we care about
- * @s:     the struct to be filled
- **/
-enum i40e_status_code i40e_get_lan_tx_queue_context(struct i40e_hw *hw,
-						    u16 queue,
-						    struct i40e_hmc_obj_txq *s)
-{
-	enum i40e_status_code err;
-	u8 *context_bytes;
-
-	err = i40e_hmc_get_object_va(hw, &context_bytes, I40E_HMC_LAN_TX, queue);
-	if (err < 0)
-		return err;
-
-	return i40e_get_hmc_context(context_bytes,
-				    i40e_hmc_txq_ce_info, (u8 *)s);
-}
-
 /**
  * i40e_clear_lan_tx_queue_context - clear the HMC context for the queue
  * @hw:    the hardware struct
@@ -1321,27 +1078,6 @@ enum i40e_status_code i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
 				    i40e_hmc_txq_ce_info, (u8 *)s);
 }
 
-/**
- * i40e_get_lan_rx_queue_context - return the HMC context for the queue
- * @hw:    the hardware struct
- * @queue: the queue we care about
- * @s:     the struct to be filled
- **/
-enum i40e_status_code i40e_get_lan_rx_queue_context(struct i40e_hw *hw,
-						    u16 queue,
-						    struct i40e_hmc_obj_rxq *s)
-{
-	enum i40e_status_code err;
-	u8 *context_bytes;
-
-	err = i40e_hmc_get_object_va(hw, &context_bytes, I40E_HMC_LAN_RX, queue);
-	if (err < 0)
-		return err;
-
-	return i40e_get_hmc_context(context_bytes,
-				    i40e_hmc_rxq_ce_info, (u8 *)s);
-}
-
 /**
  * i40e_clear_lan_rx_queue_context - clear the HMC context for the queue
  * @hw:    the hardware struct
diff --git a/drivers/net/i40e/base/i40e_lan_hmc.h b/drivers/net/i40e/base/i40e_lan_hmc.h
index aa5dceb792..1d2707e5ad 100644
--- a/drivers/net/i40e/base/i40e_lan_hmc.h
+++ b/drivers/net/i40e/base/i40e_lan_hmc.h
@@ -147,17 +147,11 @@ enum i40e_status_code i40e_shutdown_lan_hmc(struct i40e_hw *hw);
 
 u64 i40e_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
 			      u32 fcoe_cntx_num, u32 fcoe_filt_num);
-enum i40e_status_code i40e_get_lan_tx_queue_context(struct i40e_hw *hw,
-						    u16 queue,
-						    struct i40e_hmc_obj_txq *s);
 enum i40e_status_code i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
 						      u16 queue);
 enum i40e_status_code i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
 						    u16 queue,
 						    struct i40e_hmc_obj_txq *s);
-enum i40e_status_code i40e_get_lan_rx_queue_context(struct i40e_hw *hw,
-						    u16 queue,
-						    struct i40e_hmc_obj_rxq *s);
 enum i40e_status_code i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
 						      u16 queue);
 enum i40e_status_code i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
diff --git a/drivers/net/i40e/base/i40e_nvm.c b/drivers/net/i40e/base/i40e_nvm.c
index 561ed21136..f1d1ff3685 100644
--- a/drivers/net/i40e/base/i40e_nvm.c
+++ b/drivers/net/i40e/base/i40e_nvm.c
@@ -599,61 +599,6 @@ enum i40e_status_code i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer,
 	return ret_code;
 }
 
-/**
- * __i40e_write_nvm_word - Writes Shadow RAM word
- * @hw: pointer to the HW structure
- * @offset: offset of the Shadow RAM word to write
- * @data: word to write to the Shadow RAM
- *
- * Writes a 16 bit word to the SR using the i40e_write_nvm_aq() method.
- * NVM ownership have to be acquired and released (on ARQ completion event
- * reception) by caller. To commit SR to NVM update checksum function
- * should be called.
- **/
-enum i40e_status_code __i40e_write_nvm_word(struct i40e_hw *hw, u32 offset,
-					    void *data)
-{
-	DEBUGFUNC("i40e_write_nvm_word");
-
-	*((__le16 *)data) = CPU_TO_LE16(*((u16 *)data));
-
-	/* Value 0x00 below means that we treat SR as a flat mem */
-	return i40e_write_nvm_aq(hw, 0x00, offset, 1, data, false);
-}
-
-/**
- * __i40e_write_nvm_buffer - Writes Shadow RAM buffer
- * @hw: pointer to the HW structure
- * @module_pointer: module pointer location in words from the NVM beginning
- * @offset: offset of the Shadow RAM buffer to write
- * @words: number of words to write
- * @data: words to write to the Shadow RAM
- *
- * Writes a 16 bit words buffer to the Shadow RAM using the admin command.
- * NVM ownership must be acquired before calling this function and released
- * on ARQ completion event reception by caller. To commit SR to NVM update
- * checksum function should be called.
- **/
-enum i40e_status_code __i40e_write_nvm_buffer(struct i40e_hw *hw,
-					      u8 module_pointer, u32 offset,
-					      u16 words, void *data)
-{
-	__le16 *le_word_ptr = (__le16 *)data;
-	u16 *word_ptr = (u16 *)data;
-	u32 i = 0;
-
-	DEBUGFUNC("i40e_write_nvm_buffer");
-
-	for (i = 0; i < words; i++)
-		le_word_ptr[i] = CPU_TO_LE16(word_ptr[i]);
-
-	/* Here we will only write one buffer as the size of the modules
-	 * mirrored in the Shadow RAM is always less than 4K.
-	 */
-	return i40e_write_nvm_aq(hw, module_pointer, offset, words,
-				 data, false);
-}
-
 /**
  * i40e_calc_nvm_checksum - Calculates and returns the checksum
  * @hw: pointer to hardware structure
@@ -807,521 +752,6 @@ enum i40e_status_code i40e_validate_nvm_checksum(struct i40e_hw *hw,
 	return ret_code;
 }
 
-STATIC enum i40e_status_code i40e_nvmupd_state_init(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_state_reading(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_state_writing(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno);
-STATIC enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_nvm_erase(struct i40e_hw *hw,
-						   struct i40e_nvm_access *cmd,
-						   int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_nvm_write(struct i40e_hw *hw,
-						   struct i40e_nvm_access *cmd,
-						   u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_nvm_read(struct i40e_hw *hw,
-						  struct i40e_nvm_access *cmd,
-						  u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_exec_aq(struct i40e_hw *hw,
-						 struct i40e_nvm_access *cmd,
-						 u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_get_aq_result(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_get_aq_event(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno);
-STATIC INLINE u8 i40e_nvmupd_get_module(u32 val)
-{
-	return (u8)(val & I40E_NVM_MOD_PNT_MASK);
-}
-STATIC INLINE u8 i40e_nvmupd_get_transaction(u32 val)
-{
-	return (u8)((val & I40E_NVM_TRANS_MASK) >> I40E_NVM_TRANS_SHIFT);
-}
-
-STATIC INLINE u8 i40e_nvmupd_get_preservation_flags(u32 val)
-{
-	return (u8)((val & I40E_NVM_PRESERVATION_FLAGS_MASK) >>
-		    I40E_NVM_PRESERVATION_FLAGS_SHIFT);
-}
-
-STATIC const char *i40e_nvm_update_state_str[] = {
-	"I40E_NVMUPD_INVALID",
-	"I40E_NVMUPD_READ_CON",
-	"I40E_NVMUPD_READ_SNT",
-	"I40E_NVMUPD_READ_LCB",
-	"I40E_NVMUPD_READ_SA",
-	"I40E_NVMUPD_WRITE_ERA",
-	"I40E_NVMUPD_WRITE_CON",
-	"I40E_NVMUPD_WRITE_SNT",
-	"I40E_NVMUPD_WRITE_LCB",
-	"I40E_NVMUPD_WRITE_SA",
-	"I40E_NVMUPD_CSUM_CON",
-	"I40E_NVMUPD_CSUM_SA",
-	"I40E_NVMUPD_CSUM_LCB",
-	"I40E_NVMUPD_STATUS",
-	"I40E_NVMUPD_EXEC_AQ",
-	"I40E_NVMUPD_GET_AQ_RESULT",
-	"I40E_NVMUPD_GET_AQ_EVENT",
-	"I40E_NVMUPD_GET_FEATURES",
-};
-
-/**
- * i40e_nvmupd_command - Process an NVM update command
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * Dispatches command depending on what update state is current
- **/
-enum i40e_status_code i40e_nvmupd_command(struct i40e_hw *hw,
-					  struct i40e_nvm_access *cmd,
-					  u8 *bytes, int *perrno)
-{
-	enum i40e_status_code status;
-	enum i40e_nvmupd_cmd upd_cmd;
-
-	DEBUGFUNC("i40e_nvmupd_command");
-
-	/* assume success */
-	*perrno = 0;
-
-	/* early check for status command and debug msgs */
-	upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
-
-	i40e_debug(hw, I40E_DEBUG_NVM, "%s state %d nvm_release_on_hold %d opc 0x%04x cmd 0x%08x config 0x%08x offset 0x%08x data_size 0x%08x\n",
-		   i40e_nvm_update_state_str[upd_cmd],
-		   hw->nvmupd_state,
-		   hw->nvm_release_on_done, hw->nvm_wait_opcode,
-		   cmd->command, cmd->config, cmd->offset, cmd->data_size);
-
-	if (upd_cmd == I40E_NVMUPD_INVALID) {
-		*perrno = -EFAULT;
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "i40e_nvmupd_validate_command returns %d errno %d\n",
-			   upd_cmd, *perrno);
-	}
-
-	/* a status request returns immediately rather than
-	 * going into the state machine
-	 */
-	if (upd_cmd == I40E_NVMUPD_STATUS) {
-		if (!cmd->data_size) {
-			*perrno = -EFAULT;
-			return I40E_ERR_BUF_TOO_SHORT;
-		}
-
-		bytes[0] = hw->nvmupd_state;
-
-		if (cmd->data_size >= 4) {
-			bytes[1] = 0;
-			*((u16 *)&bytes[2]) = hw->nvm_wait_opcode;
-		}
-
-		/* Clear error status on read */
-		if (hw->nvmupd_state == I40E_NVMUPD_STATE_ERROR)
-			hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
-
-		return I40E_SUCCESS;
-	}
-
-	/*
-	 * A supported features request returns immediately
-	 * rather than going into state machine
-	 */
-	if (upd_cmd == I40E_NVMUPD_FEATURES) {
-		if (cmd->data_size < hw->nvmupd_features.size) {
-			*perrno = -EFAULT;
-			return I40E_ERR_BUF_TOO_SHORT;
-		}
-
-		/*
-		 * If buffer is bigger than i40e_nvmupd_features structure,
-		 * make sure the trailing bytes are set to 0x0.
-		 */
-		if (cmd->data_size > hw->nvmupd_features.size)
-			i40e_memset(bytes + hw->nvmupd_features.size, 0x0,
-				    cmd->data_size - hw->nvmupd_features.size,
-				    I40E_NONDMA_MEM);
-
-		i40e_memcpy(bytes, &hw->nvmupd_features,
-			    hw->nvmupd_features.size, I40E_NONDMA_MEM);
-
-		return I40E_SUCCESS;
-	}
-
-	/* Clear status even it is not read and log */
-	if (hw->nvmupd_state == I40E_NVMUPD_STATE_ERROR) {
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "Clearing I40E_NVMUPD_STATE_ERROR state without reading\n");
-		hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
-	}
-
-	/* Acquire lock to prevent race condition where adminq_task
-	 * can execute after i40e_nvmupd_nvm_read/write but before state
-	 * variables (nvm_wait_opcode, nvm_release_on_done) are updated.
-	 *
-	 * During NVMUpdate, it is observed that lock could be held for
-	 * ~5ms for most commands. However lock is held for ~60ms for
-	 * NVMUPD_CSUM_LCB command.
-	 */
-	i40e_acquire_spinlock(&hw->aq.arq_spinlock);
-	switch (hw->nvmupd_state) {
-	case I40E_NVMUPD_STATE_INIT:
-		status = i40e_nvmupd_state_init(hw, cmd, bytes, perrno);
-		break;
-
-	case I40E_NVMUPD_STATE_READING:
-		status = i40e_nvmupd_state_reading(hw, cmd, bytes, perrno);
-		break;
-
-	case I40E_NVMUPD_STATE_WRITING:
-		status = i40e_nvmupd_state_writing(hw, cmd, bytes, perrno);
-		break;
-
-	case I40E_NVMUPD_STATE_INIT_WAIT:
-	case I40E_NVMUPD_STATE_WRITE_WAIT:
-		/* if we need to stop waiting for an event, clear
-		 * the wait info and return before doing anything else
-		 */
-		if (cmd->offset == 0xffff) {
-			i40e_nvmupd_clear_wait_state(hw);
-			status = I40E_SUCCESS;
-			break;
-		}
-
-		status = I40E_ERR_NOT_READY;
-		*perrno = -EBUSY;
-		break;
-
-	default:
-		/* invalid state, should never happen */
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "NVMUPD: no such state %d\n", hw->nvmupd_state);
-		status = I40E_NOT_SUPPORTED;
-		*perrno = -ESRCH;
-		break;
-	}
-
-	i40e_release_spinlock(&hw->aq.arq_spinlock);
-	return status;
-}
-
-/**
- * i40e_nvmupd_state_init - Handle NVM update state Init
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * Process legitimate commands of the Init state and conditionally set next
- * state. Reject all other commands.
- **/
-STATIC enum i40e_status_code i40e_nvmupd_state_init(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	enum i40e_nvmupd_cmd upd_cmd;
-
-	DEBUGFUNC("i40e_nvmupd_state_init");
-
-	upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
-
-	switch (upd_cmd) {
-	case I40E_NVMUPD_READ_SA:
-		status = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
-		if (status) {
-			*perrno = i40e_aq_rc_to_posix(status,
-						     hw->aq.asq_last_status);
-		} else {
-			status = i40e_nvmupd_nvm_read(hw, cmd, bytes, perrno);
-			i40e_release_nvm(hw);
-		}
-		break;
-
-	case I40E_NVMUPD_READ_SNT:
-		status = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
-		if (status) {
-			*perrno = i40e_aq_rc_to_posix(status,
-						     hw->aq.asq_last_status);
-		} else {
-			status = i40e_nvmupd_nvm_read(hw, cmd, bytes, perrno);
-			if (status)
-				i40e_release_nvm(hw);
-			else
-				hw->nvmupd_state = I40E_NVMUPD_STATE_READING;
-		}
-		break;
-
-	case I40E_NVMUPD_WRITE_ERA:
-		status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
-		if (status) {
-			*perrno = i40e_aq_rc_to_posix(status,
-						     hw->aq.asq_last_status);
-		} else {
-			status = i40e_nvmupd_nvm_erase(hw, cmd, perrno);
-			if (status) {
-				i40e_release_nvm(hw);
-			} else {
-				hw->nvm_release_on_done = true;
-				hw->nvm_wait_opcode = i40e_aqc_opc_nvm_erase;
-				hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
-			}
-		}
-		break;
-
-	case I40E_NVMUPD_WRITE_SA:
-		status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
-		if (status) {
-			*perrno = i40e_aq_rc_to_posix(status,
-						     hw->aq.asq_last_status);
-		} else {
-			status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
-			if (status) {
-				i40e_release_nvm(hw);
-			} else {
-				hw->nvm_release_on_done = true;
-				hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
-				hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
-			}
-		}
-		break;
-
-	case I40E_NVMUPD_WRITE_SNT:
-		status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
-		if (status) {
-			*perrno = i40e_aq_rc_to_posix(status,
-						     hw->aq.asq_last_status);
-		} else {
-			status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
-			if (status) {
-				i40e_release_nvm(hw);
-			} else {
-				hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
-				hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
-			}
-		}
-		break;
-
-	case I40E_NVMUPD_CSUM_SA:
-		status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
-		if (status) {
-			*perrno = i40e_aq_rc_to_posix(status,
-						     hw->aq.asq_last_status);
-		} else {
-			status = i40e_update_nvm_checksum(hw);
-			if (status) {
-				*perrno = hw->aq.asq_last_status ?
-				   i40e_aq_rc_to_posix(status,
-						       hw->aq.asq_last_status) :
-				   -EIO;
-				i40e_release_nvm(hw);
-			} else {
-				hw->nvm_release_on_done = true;
-				hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
-				hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
-			}
-		}
-		break;
-
-	case I40E_NVMUPD_EXEC_AQ:
-		status = i40e_nvmupd_exec_aq(hw, cmd, bytes, perrno);
-		break;
-
-	case I40E_NVMUPD_GET_AQ_RESULT:
-		status = i40e_nvmupd_get_aq_result(hw, cmd, bytes, perrno);
-		break;
-
-	case I40E_NVMUPD_GET_AQ_EVENT:
-		status = i40e_nvmupd_get_aq_event(hw, cmd, bytes, perrno);
-		break;
-
-	default:
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "NVMUPD: bad cmd %s in init state\n",
-			   i40e_nvm_update_state_str[upd_cmd]);
-		status = I40E_ERR_NVM;
-		*perrno = -ESRCH;
-		break;
-	}
-	return status;
-}
-
-/**
- * i40e_nvmupd_state_reading - Handle NVM update state Reading
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * NVM ownership is already held.  Process legitimate commands and set any
- * change in state; reject all other commands.
- **/
-STATIC enum i40e_status_code i40e_nvmupd_state_reading(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	enum i40e_nvmupd_cmd upd_cmd;
-
-	DEBUGFUNC("i40e_nvmupd_state_reading");
-
-	upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
-
-	switch (upd_cmd) {
-	case I40E_NVMUPD_READ_SA:
-	case I40E_NVMUPD_READ_CON:
-		status = i40e_nvmupd_nvm_read(hw, cmd, bytes, perrno);
-		break;
-
-	case I40E_NVMUPD_READ_LCB:
-		status = i40e_nvmupd_nvm_read(hw, cmd, bytes, perrno);
-		i40e_release_nvm(hw);
-		hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
-		break;
-
-	default:
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "NVMUPD: bad cmd %s in reading state.\n",
-			   i40e_nvm_update_state_str[upd_cmd]);
-		status = I40E_NOT_SUPPORTED;
-		*perrno = -ESRCH;
-		break;
-	}
-	return status;
-}
-
-/**
- * i40e_nvmupd_state_writing - Handle NVM update state Writing
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * NVM ownership is already held.  Process legitimate commands and set any
- * change in state; reject all other commands
- **/
-STATIC enum i40e_status_code i40e_nvmupd_state_writing(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	enum i40e_nvmupd_cmd upd_cmd;
-	bool retry_attempt = false;
-
-	DEBUGFUNC("i40e_nvmupd_state_writing");
-
-	upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
-
-retry:
-	switch (upd_cmd) {
-	case I40E_NVMUPD_WRITE_CON:
-		status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
-		if (!status) {
-			hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
-			hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
-		}
-		break;
-
-	case I40E_NVMUPD_WRITE_LCB:
-		status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
-		if (status) {
-			*perrno = hw->aq.asq_last_status ?
-				   i40e_aq_rc_to_posix(status,
-						       hw->aq.asq_last_status) :
-				   -EIO;
-			hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
-		} else {
-			hw->nvm_release_on_done = true;
-			hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
-			hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
-		}
-		break;
-
-	case I40E_NVMUPD_CSUM_CON:
-		/* Assumes the caller has acquired the nvm */
-		status = i40e_update_nvm_checksum(hw);
-		if (status) {
-			*perrno = hw->aq.asq_last_status ?
-				   i40e_aq_rc_to_posix(status,
-						       hw->aq.asq_last_status) :
-				   -EIO;
-			hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
-		} else {
-			hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
-			hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
-		}
-		break;
-
-	case I40E_NVMUPD_CSUM_LCB:
-		/* Assumes the caller has acquired the nvm */
-		status = i40e_update_nvm_checksum(hw);
-		if (status) {
-			*perrno = hw->aq.asq_last_status ?
-				   i40e_aq_rc_to_posix(status,
-						       hw->aq.asq_last_status) :
-				   -EIO;
-			hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
-		} else {
-			hw->nvm_release_on_done = true;
-			hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
-			hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
-		}
-		break;
-
-	default:
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "NVMUPD: bad cmd %s in writing state.\n",
-			   i40e_nvm_update_state_str[upd_cmd]);
-		status = I40E_NOT_SUPPORTED;
-		*perrno = -ESRCH;
-		break;
-	}
-
-	/* In some circumstances, a multi-write transaction takes longer
-	 * than the default 3 minute timeout on the write semaphore.  If
-	 * the write failed with an EBUSY status, this is likely the problem,
-	 * so here we try to reacquire the semaphore then retry the write.
-	 * We only do one retry, then give up.
-	 */
-	if (status && (hw->aq.asq_last_status == I40E_AQ_RC_EBUSY) &&
-	    !retry_attempt) {
-		enum i40e_status_code old_status = status;
-		u32 old_asq_status = hw->aq.asq_last_status;
-		u32 gtime;
-
-		gtime = rd32(hw, I40E_GLVFGEN_TIMER);
-		if (gtime >= hw->nvm.hw_semaphore_timeout) {
-			i40e_debug(hw, I40E_DEBUG_ALL,
-				   "NVMUPD: write semaphore expired (%d >= %" PRIu64 "), retrying\n",
-				   gtime, hw->nvm.hw_semaphore_timeout);
-			i40e_release_nvm(hw);
-			status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
-			if (status) {
-				i40e_debug(hw, I40E_DEBUG_ALL,
-					   "NVMUPD: write semaphore reacquire failed aq_err = %d\n",
-					   hw->aq.asq_last_status);
-				status = old_status;
-				hw->aq.asq_last_status = old_asq_status;
-			} else {
-				retry_attempt = true;
-				goto retry;
-			}
-		}
-	}
-
-	return status;
-}
-
 /**
  * i40e_nvmupd_clear_wait_state - clear wait state on hw
  * @hw: pointer to the hardware structure
@@ -1374,421 +804,3 @@ void i40e_nvmupd_check_wait_event(struct i40e_hw *hw, u16 opcode,
 		i40e_nvmupd_clear_wait_state(hw);
 	}
 }
-
-/**
- * i40e_nvmupd_validate_command - Validate given command
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @perrno: pointer to return error code
- *
- * Return one of the valid command types or I40E_NVMUPD_INVALID
- **/
-STATIC enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    int *perrno)
-{
-	enum i40e_nvmupd_cmd upd_cmd;
-	u8 module, transaction;
-
-	DEBUGFUNC("i40e_nvmupd_validate_command\n");
-
-	/* anything that doesn't match a recognized case is an error */
-	upd_cmd = I40E_NVMUPD_INVALID;
-
-	transaction = i40e_nvmupd_get_transaction(cmd->config);
-	module = i40e_nvmupd_get_module(cmd->config);
-
-	/* limits on data size */
-	if ((cmd->data_size < 1) ||
-	    (cmd->data_size > I40E_NVMUPD_MAX_DATA)) {
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "i40e_nvmupd_validate_command data_size %d\n",
-			   cmd->data_size);
-		*perrno = -EFAULT;
-		return I40E_NVMUPD_INVALID;
-	}
-
-	switch (cmd->command) {
-	case I40E_NVM_READ:
-		switch (transaction) {
-		case I40E_NVM_CON:
-			upd_cmd = I40E_NVMUPD_READ_CON;
-			break;
-		case I40E_NVM_SNT:
-			upd_cmd = I40E_NVMUPD_READ_SNT;
-			break;
-		case I40E_NVM_LCB:
-			upd_cmd = I40E_NVMUPD_READ_LCB;
-			break;
-		case I40E_NVM_SA:
-			upd_cmd = I40E_NVMUPD_READ_SA;
-			break;
-		case I40E_NVM_EXEC:
-			switch (module) {
-			case I40E_NVM_EXEC_GET_AQ_RESULT:
-				upd_cmd = I40E_NVMUPD_GET_AQ_RESULT;
-				break;
-			case I40E_NVM_EXEC_FEATURES:
-				upd_cmd = I40E_NVMUPD_FEATURES;
-				break;
-			case I40E_NVM_EXEC_STATUS:
-				upd_cmd = I40E_NVMUPD_STATUS;
-				break;
-			default:
-				*perrno = -EFAULT;
-				return I40E_NVMUPD_INVALID;
-			}
-			break;
-		case I40E_NVM_AQE:
-			upd_cmd = I40E_NVMUPD_GET_AQ_EVENT;
-			break;
-		}
-		break;
-
-	case I40E_NVM_WRITE:
-		switch (transaction) {
-		case I40E_NVM_CON:
-			upd_cmd = I40E_NVMUPD_WRITE_CON;
-			break;
-		case I40E_NVM_SNT:
-			upd_cmd = I40E_NVMUPD_WRITE_SNT;
-			break;
-		case I40E_NVM_LCB:
-			upd_cmd = I40E_NVMUPD_WRITE_LCB;
-			break;
-		case I40E_NVM_SA:
-			upd_cmd = I40E_NVMUPD_WRITE_SA;
-			break;
-		case I40E_NVM_ERA:
-			upd_cmd = I40E_NVMUPD_WRITE_ERA;
-			break;
-		case I40E_NVM_CSUM:
-			upd_cmd = I40E_NVMUPD_CSUM_CON;
-			break;
-		case (I40E_NVM_CSUM|I40E_NVM_SA):
-			upd_cmd = I40E_NVMUPD_CSUM_SA;
-			break;
-		case (I40E_NVM_CSUM|I40E_NVM_LCB):
-			upd_cmd = I40E_NVMUPD_CSUM_LCB;
-			break;
-		case I40E_NVM_EXEC:
-			if (module == 0)
-				upd_cmd = I40E_NVMUPD_EXEC_AQ;
-			break;
-		}
-		break;
-	}
-
-	return upd_cmd;
-}
-
-/**
- * i40e_nvmupd_exec_aq - Run an AQ command
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * cmd structure contains identifiers and data buffer
- **/
-STATIC enum i40e_status_code i40e_nvmupd_exec_aq(struct i40e_hw *hw,
-						 struct i40e_nvm_access *cmd,
-						 u8 *bytes, int *perrno)
-{
-	struct i40e_asq_cmd_details cmd_details;
-	enum i40e_status_code status;
-	struct i40e_aq_desc *aq_desc;
-	u32 buff_size = 0;
-	u8 *buff = NULL;
-	u32 aq_desc_len;
-	u32 aq_data_len;
-
-	i40e_debug(hw, I40E_DEBUG_NVM, "NVMUPD: %s\n", __func__);
-	if (cmd->offset == 0xffff)
-		return I40E_SUCCESS;
-
-	memset(&cmd_details, 0, sizeof(cmd_details));
-	cmd_details.wb_desc = &hw->nvm_wb_desc;
-
-	aq_desc_len = sizeof(struct i40e_aq_desc);
-	memset(&hw->nvm_wb_desc, 0, aq_desc_len);
-
-	/* get the aq descriptor */
-	if (cmd->data_size < aq_desc_len) {
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "NVMUPD: not enough aq desc bytes for exec, size %d < %d\n",
-			   cmd->data_size, aq_desc_len);
-		*perrno = -EINVAL;
-		return I40E_ERR_PARAM;
-	}
-	aq_desc = (struct i40e_aq_desc *)bytes;
-
-	/* if data buffer needed, make sure it's ready */
-	aq_data_len = cmd->data_size - aq_desc_len;
-	buff_size = max(aq_data_len, (u32)LE16_TO_CPU(aq_desc->datalen));
-	if (buff_size) {
-		if (!hw->nvm_buff.va) {
-			status = i40e_allocate_virt_mem(hw, &hw->nvm_buff,
-							hw->aq.asq_buf_size);
-			if (status)
-				i40e_debug(hw, I40E_DEBUG_NVM,
-					   "NVMUPD: i40e_allocate_virt_mem for exec buff failed, %d\n",
-					   status);
-		}
-
-		if (hw->nvm_buff.va) {
-			buff = hw->nvm_buff.va;
-			i40e_memcpy(buff, &bytes[aq_desc_len], aq_data_len,
-				I40E_NONDMA_TO_NONDMA);
-		}
-	}
-
-	if (cmd->offset)
-		memset(&hw->nvm_aq_event_desc, 0, aq_desc_len);
-
-	/* and away we go! */
-	status = i40e_asq_send_command(hw, aq_desc, buff,
-				       buff_size, &cmd_details);
-	if (status) {
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "i40e_nvmupd_exec_aq err %s aq_err %s\n",
-			   i40e_stat_str(hw, status),
-			   i40e_aq_str(hw, hw->aq.asq_last_status));
-		*perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
-		return status;
-	}
-
-	/* should we wait for a followup event? */
-	if (cmd->offset) {
-		hw->nvm_wait_opcode = cmd->offset;
-		hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
-	}
-
-	return status;
-}
-
-/**
- * i40e_nvmupd_get_aq_result - Get the results from the previous exec_aq
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * cmd structure contains identifiers and data buffer
- **/
-STATIC enum i40e_status_code i40e_nvmupd_get_aq_result(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno)
-{
-	u32 aq_total_len;
-	u32 aq_desc_len;
-	int remainder;
-	u8 *buff;
-
-	i40e_debug(hw, I40E_DEBUG_NVM, "NVMUPD: %s\n", __func__);
-
-	aq_desc_len = sizeof(struct i40e_aq_desc);
-	aq_total_len = aq_desc_len + LE16_TO_CPU(hw->nvm_wb_desc.datalen);
-
-	/* check offset range */
-	if (cmd->offset > aq_total_len) {
-		i40e_debug(hw, I40E_DEBUG_NVM, "%s: offset too big %d > %d\n",
-			   __func__, cmd->offset, aq_total_len);
-		*perrno = -EINVAL;
-		return I40E_ERR_PARAM;
-	}
-
-	/* check copylength range */
-	if (cmd->data_size > (aq_total_len - cmd->offset)) {
-		int new_len = aq_total_len - cmd->offset;
-
-		i40e_debug(hw, I40E_DEBUG_NVM, "%s: copy length %d too big, trimming to %d\n",
-			   __func__, cmd->data_size, new_len);
-		cmd->data_size = new_len;
-	}
-
-	remainder = cmd->data_size;
-	if (cmd->offset < aq_desc_len) {
-		u32 len = aq_desc_len - cmd->offset;
-
-		len = min(len, cmd->data_size);
-		i40e_debug(hw, I40E_DEBUG_NVM, "%s: aq_desc bytes %d to %d\n",
-			   __func__, cmd->offset, cmd->offset + len);
-
-		buff = ((u8 *)&hw->nvm_wb_desc) + cmd->offset;
-		i40e_memcpy(bytes, buff, len, I40E_NONDMA_TO_NONDMA);
-
-		bytes += len;
-		remainder -= len;
-		buff = hw->nvm_buff.va;
-	} else {
-		buff = (u8 *)hw->nvm_buff.va + (cmd->offset - aq_desc_len);
-	}
-
-	if (remainder > 0) {
-		int start_byte = buff - (u8 *)hw->nvm_buff.va;
-
-		i40e_debug(hw, I40E_DEBUG_NVM, "%s: databuf bytes %d to %d\n",
-			   __func__, start_byte, start_byte + remainder);
-		i40e_memcpy(bytes, buff, remainder, I40E_NONDMA_TO_NONDMA);
-	}
-
-	return I40E_SUCCESS;
-}
-
-/**
- * i40e_nvmupd_get_aq_event - Get the Admin Queue event from previous exec_aq
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * cmd structure contains identifiers and data buffer
- **/
-STATIC enum i40e_status_code i40e_nvmupd_get_aq_event(struct i40e_hw *hw,
-						    struct i40e_nvm_access *cmd,
-						    u8 *bytes, int *perrno)
-{
-	u32 aq_total_len;
-	u32 aq_desc_len;
-
-	i40e_debug(hw, I40E_DEBUG_NVM, "NVMUPD: %s\n", __func__);
-
-	aq_desc_len = sizeof(struct i40e_aq_desc);
-	aq_total_len = aq_desc_len + LE16_TO_CPU(hw->nvm_aq_event_desc.datalen);
-
-	/* check copylength range */
-	if (cmd->data_size > aq_total_len) {
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "%s: copy length %d too big, trimming to %d\n",
-			   __func__, cmd->data_size, aq_total_len);
-		cmd->data_size = aq_total_len;
-	}
-
-	i40e_memcpy(bytes, &hw->nvm_aq_event_desc, cmd->data_size,
-		    I40E_NONDMA_TO_NONDMA);
-
-	return I40E_SUCCESS;
-}
-
-/**
- * i40e_nvmupd_nvm_read - Read NVM
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * cmd structure contains identifiers and data buffer
- **/
-STATIC enum i40e_status_code i40e_nvmupd_nvm_read(struct i40e_hw *hw,
-						  struct i40e_nvm_access *cmd,
-						  u8 *bytes, int *perrno)
-{
-	struct i40e_asq_cmd_details cmd_details;
-	enum i40e_status_code status;
-	u8 module, transaction;
-	bool last;
-
-	transaction = i40e_nvmupd_get_transaction(cmd->config);
-	module = i40e_nvmupd_get_module(cmd->config);
-	last = (transaction == I40E_NVM_LCB) || (transaction == I40E_NVM_SA);
-
-	memset(&cmd_details, 0, sizeof(cmd_details));
-	cmd_details.wb_desc = &hw->nvm_wb_desc;
-
-	status = i40e_aq_read_nvm(hw, module, cmd->offset, (u16)cmd->data_size,
-				  bytes, last, &cmd_details);
-	if (status) {
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "i40e_nvmupd_nvm_read mod 0x%x  off 0x%x  len 0x%x\n",
-			   module, cmd->offset, cmd->data_size);
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "i40e_nvmupd_nvm_read status %d aq %d\n",
-			   status, hw->aq.asq_last_status);
-		*perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
-	}
-
-	return status;
-}
-
-/**
- * i40e_nvmupd_nvm_erase - Erase an NVM module
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @perrno: pointer to return error code
- *
- * module, offset, data_size and data are in cmd structure
- **/
-STATIC enum i40e_status_code i40e_nvmupd_nvm_erase(struct i40e_hw *hw,
-						   struct i40e_nvm_access *cmd,
-						   int *perrno)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	struct i40e_asq_cmd_details cmd_details;
-	u8 module, transaction;
-	bool last;
-
-	transaction = i40e_nvmupd_get_transaction(cmd->config);
-	module = i40e_nvmupd_get_module(cmd->config);
-	last = (transaction & I40E_NVM_LCB);
-
-	memset(&cmd_details, 0, sizeof(cmd_details));
-	cmd_details.wb_desc = &hw->nvm_wb_desc;
-
-	status = i40e_aq_erase_nvm(hw, module, cmd->offset, (u16)cmd->data_size,
-				   last, &cmd_details);
-	if (status) {
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "i40e_nvmupd_nvm_erase mod 0x%x  off 0x%x len 0x%x\n",
-			   module, cmd->offset, cmd->data_size);
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "i40e_nvmupd_nvm_erase status %d aq %d\n",
-			   status, hw->aq.asq_last_status);
-		*perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
-	}
-
-	return status;
-}
-
-/**
- * i40e_nvmupd_nvm_write - Write NVM
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * module, offset, data_size and data are in cmd structure
- **/
-STATIC enum i40e_status_code i40e_nvmupd_nvm_write(struct i40e_hw *hw,
-						   struct i40e_nvm_access *cmd,
-						   u8 *bytes, int *perrno)
-{
-	enum i40e_status_code status = I40E_SUCCESS;
-	struct i40e_asq_cmd_details cmd_details;
-	u8 module, transaction;
-	u8 preservation_flags;
-	bool last;
-
-	transaction = i40e_nvmupd_get_transaction(cmd->config);
-	module = i40e_nvmupd_get_module(cmd->config);
-	last = (transaction & I40E_NVM_LCB);
-	preservation_flags = i40e_nvmupd_get_preservation_flags(cmd->config);
-
-	memset(&cmd_details, 0, sizeof(cmd_details));
-	cmd_details.wb_desc = &hw->nvm_wb_desc;
-
-	status = i40e_aq_update_nvm(hw, module, cmd->offset,
-				    (u16)cmd->data_size, bytes, last,
-				    preservation_flags, &cmd_details);
-	if (status) {
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "i40e_nvmupd_nvm_write mod 0x%x off 0x%x len 0x%x\n",
-			   module, cmd->offset, cmd->data_size);
-		i40e_debug(hw, I40E_DEBUG_NVM,
-			   "i40e_nvmupd_nvm_write status %d aq %d\n",
-			   status, hw->aq.asq_last_status);
-		*perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
-	}
-
-	return status;
-}
diff --git a/drivers/net/i40e/base/i40e_prototype.h b/drivers/net/i40e/base/i40e_prototype.h
index 124222e476..73ec0e340a 100644
--- a/drivers/net/i40e/base/i40e_prototype.h
+++ b/drivers/net/i40e/base/i40e_prototype.h
@@ -67,27 +67,12 @@ const char *i40e_stat_str(struct i40e_hw *hw, enum i40e_status_code stat_err);
 
 u32 i40e_led_get(struct i40e_hw *hw);
 void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink);
-enum i40e_status_code i40e_led_set_phy(struct i40e_hw *hw, bool on,
-				       u16 led_addr, u32 mode);
-enum i40e_status_code i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr,
-				       u16 *val);
-enum i40e_status_code i40e_blink_phy_link_led(struct i40e_hw *hw,
-					      u32 time, u32 interval);
 enum i40e_status_code i40e_led_get_reg(struct i40e_hw *hw, u16 led_addr,
 				       u32 *reg_val);
 enum i40e_status_code i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr,
 				       u32 reg_val);
-enum i40e_status_code i40e_get_phy_lpi_status(struct i40e_hw *hw,
-					      struct i40e_hw_port_stats *stats);
 enum i40e_status_code i40e_get_lpi_counters(struct i40e_hw *hw, u32 *tx_counter,
 					    u32 *rx_counter, bool *is_clear);
-enum i40e_status_code i40e_lpi_stat_update(struct i40e_hw *hw,
-					   bool offset_loaded, u64 *tx_offset,
-					   u64 *tx_stat, u64 *rx_offset,
-					   u64 *rx_stat);
-enum i40e_status_code i40e_get_lpi_duration(struct i40e_hw *hw,
-					    struct i40e_hw_port_stats *stat,
-					    u64 *tx_duration, u64 *rx_duration);
 /* admin send queue commands */
 
 enum i40e_status_code i40e_aq_get_firmware_version(struct i40e_hw *hw,
@@ -101,12 +86,6 @@ enum i40e_status_code i40e_aq_debug_write_register(struct i40e_hw *hw,
 enum i40e_status_code i40e_aq_debug_read_register(struct i40e_hw *hw,
 				u32  reg_addr, u64 *reg_val,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_clear_default_vsi(struct i40e_hw *hw, u16 vsi_id,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_get_phy_capabilities(struct i40e_hw *hw,
 			bool qualified_modules, bool report_init,
 			struct i40e_aq_get_phy_abilities_resp *abilities,
@@ -122,27 +101,13 @@ enum i40e_status_code i40e_aq_set_mac_config(struct i40e_hw *hw,
 				u16 max_frame_size, bool crc_en, u16 pacing,
 				bool auto_drop_blocking_packets,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_local_advt_reg(struct i40e_hw *hw,
-				u64 *advt_reg,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_partner_advt(struct i40e_hw *hw,
-				u64 *advt_reg,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_set_lb_modes(struct i40e_hw *hw, u16 lb_modes,
 				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
 			struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_link_restart_an(struct i40e_hw *hw,
-		bool enable_link, struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_get_link_info(struct i40e_hw *hw,
 				bool enable_lse, struct i40e_link_status *link,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_local_advt_reg(struct i40e_hw *hw,
-				u64 advt_reg,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_send_driver_version(struct i40e_hw *hw,
-				struct i40e_driver_version *dv,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_add_vsi(struct i40e_hw *hw,
 				struct i40e_vsi_context *vsi_ctx,
 				struct i40e_asq_cmd_details *cmd_details);
@@ -154,18 +119,6 @@ enum i40e_status_code i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
 		bool rx_only_promisc);
 enum i40e_status_code i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
 		u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_full_promiscuous(struct i40e_hw *hw,
-				u16 seid, bool set,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
-				u16 seid, bool enable, u16 vid,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
-				u16 seid, bool enable, u16 vid,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw,
-				u16 seid, bool enable, u16 vid,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
 				u16 seid, bool enable,
 				struct i40e_asq_cmd_details *cmd_details);
@@ -191,15 +144,6 @@ enum i40e_status_code i40e_aq_add_macvlan(struct i40e_hw *hw, u16 vsi_id,
 enum i40e_status_code i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 vsi_id,
 			struct i40e_aqc_remove_macvlan_element_data *mv_list,
 			u16 count, struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
-			u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list,
-			struct i40e_asq_cmd_details *cmd_details,
-			u16 *rule_id, u16 *rules_used, u16 *rules_free);
-enum i40e_status_code i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
-			u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list,
-			struct i40e_asq_cmd_details *cmd_details,
-			u16 *rules_used, u16 *rules_free);
-
 enum i40e_status_code i40e_aq_add_vlan(struct i40e_hw *hw, u16 vsi_id,
 			struct i40e_aqc_add_remove_vlan_element_data *v_list,
 			u8 count, struct i40e_asq_cmd_details *cmd_details);
@@ -232,21 +176,6 @@ enum i40e_status_code i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer,
 enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
 				u32 offset, u16 length, bool last_command,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_read_nvm_config(struct i40e_hw *hw,
-				u8 cmd_flags, u32 field_id, void *data,
-				u16 buf_size, u16 *element_count,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_write_nvm_config(struct i40e_hw *hw,
-				u8 cmd_flags, void *data, u16 buf_size,
-				u16 element_count,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
-i40e_aq_min_rollback_rev_update(struct i40e_hw *hw, u8 mode, u8 module,
-				u32 min_rrev,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_oem_post_update(struct i40e_hw *hw,
-				void *buff, u16 buff_size,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_discover_capabilities(struct i40e_hw *hw,
 				void *buff, u16 buff_size, u16 *data_size,
 				enum i40e_admin_queue_opc list_type_opc,
@@ -255,13 +184,6 @@ enum i40e_status_code i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer,
 				u32 offset, u16 length, void *data,
 				bool last_command, u8 preservation_flags,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_rearrange_nvm(struct i40e_hw *hw,
-				u8 rearrange_nvm,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
-i40e_aq_nvm_update_in_process(struct i40e_hw *hw,
-			      bool update_flow_state,
-			      struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type,
 				u8 mib_type, void *buff, u16 buff_size,
 				u16 *local_len, u16 *remote_len,
@@ -272,63 +194,25 @@ enum i40e_status_code i40e_aq_set_lldp_mib(struct i40e_hw *hw,
 enum i40e_status_code i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
 				bool enable_update,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
-i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore,
-		     struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent,
 				bool persist,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_dcb_parameters(struct i40e_hw *hw,
-						 bool dcb_enable,
-						 struct i40e_asq_cmd_details
-						 *cmd_details);
 enum i40e_status_code i40e_aq_start_lldp(struct i40e_hw *hw,
 				bool persist,
 				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_get_cee_dcb_config(struct i40e_hw *hw,
 				void *buff, u16 buff_size,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_start_stop_dcbx(struct i40e_hw *hw,
-				bool start_agent,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_add_udp_tunnel(struct i40e_hw *hw,
 				u16 udp_port, u8 protocol_index,
 				u8 *filter_index,
 				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_switch_resource_alloc(struct i40e_hw *hw,
-			u8 *num_entries,
-			struct i40e_aqc_switch_resource_alloc_element_resp *buf,
-			u16 count,
-			struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_pvirt(struct i40e_hw *hw, u16 flags,
-				       u16 mac_seid, u16 vsi_seid,
-				       u16 *ret_seid);
-enum i40e_status_code i40e_aq_add_tag(struct i40e_hw *hw, bool direct_to_queue,
-				u16 vsi_seid, u16 tag, u16 queue_num,
-				u16 *tags_used, u16 *tags_free,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_remove_tag(struct i40e_hw *hw, u16 vsi_seid,
-				u16 tag, u16 *tags_used, u16 *tags_free,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pe_seid,
 				u16 etag, u8 num_tags_in_buf, void *buf,
 				u16 *tags_used, u16 *tags_free,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_remove_mcast_etag(struct i40e_hw *hw, u16 pe_seid,
-				u16 etag, u16 *tags_used, u16 *tags_free,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_update_tag(struct i40e_hw *hw, u16 vsi_seid,
-				u16 old_tag, u16 new_tag, u16 *tags_used,
-				u16 *tags_free,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_statistics(struct i40e_hw *hw, u16 seid,
-				u16 vlan_id, u16 *stat_index,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_remove_statistics(struct i40e_hw *hw, u16 seid,
-				u16 vlan_id, u16 stat_index,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_set_port_parameters(struct i40e_hw *hw,
 				u16 bad_frame_vsi, bool save_bad_pac,
 				bool pad_short_pac, bool double_vlan,
@@ -341,22 +225,10 @@ enum i40e_status_code i40e_aq_mac_address_write(struct i40e_hw *hw,
 enum i40e_status_code i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
 				u16 seid, u16 credit, u8 max_credit,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_dcb_ignore_pfc(struct i40e_hw *hw,
-				u8 tcmap, bool request, u8 *tcmap_ret,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_config_switch_comp_ets_bw_limit(
-	struct i40e_hw *hw, u16 seid,
-	struct i40e_aqc_configure_switching_comp_ets_bw_limit_data *bw_data,
-	struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_config_vsi_ets_sla_bw_limit(struct i40e_hw *hw,
 			u16 seid,
 			struct i40e_aqc_configure_vsi_ets_sla_bw_data *bw_data,
 			struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_dcb_updated(struct i40e_hw *hw,
-				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
-				u16 seid, u16 credit, u8 max_bw,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, u16 seid,
 			struct i40e_aqc_configure_vsi_tc_bw_data *bw_data,
 			struct i40e_asq_cmd_details *cmd_details);
@@ -381,16 +253,10 @@ enum i40e_status_code i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
 		u16 seid,
 		struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data,
 		struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_query_port_ets_config(struct i40e_hw *hw,
-		u16 seid,
-		struct i40e_aqc_query_port_ets_config_resp *bw_data,
-		struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
 		u16 seid,
 		struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data,
 		struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_resume_port_tx(struct i40e_hw *hw,
-				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code
 i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
 			     struct i40e_aqc_cloud_filters_element_bb *filters,
@@ -415,38 +281,15 @@ enum i40e_status_code i40e_aq_replace_cloud_filters(struct i40e_hw *hw,
 enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw,
 				u32 reg_addr0, u32 *reg_val0,
 				u32 reg_addr1, u32 *reg_val1);
-enum i40e_status_code i40e_aq_alternate_read_indirect(struct i40e_hw *hw,
-				u32 addr, u32 dw_count, void *buffer);
-enum i40e_status_code i40e_aq_alternate_write(struct i40e_hw *hw,
-				u32 reg_addr0, u32 reg_val0,
-				u32 reg_addr1, u32 reg_val1);
-enum i40e_status_code i40e_aq_alternate_write_indirect(struct i40e_hw *hw,
-				u32 addr, u32 dw_count, void *buffer);
-enum i40e_status_code i40e_aq_alternate_clear(struct i40e_hw *hw);
-enum i40e_status_code i40e_aq_alternate_write_done(struct i40e_hw *hw,
-				u8 bios_mode, bool *reset_needed);
-enum i40e_status_code i40e_aq_set_oem_mode(struct i40e_hw *hw,
-				u8 oem_mode);
 
 /* i40e_common */
 enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw);
 enum i40e_status_code i40e_pf_reset(struct i40e_hw *hw);
 void i40e_clear_hw(struct i40e_hw *hw);
 void i40e_clear_pxe_mode(struct i40e_hw *hw);
-enum i40e_status_code i40e_get_link_status(struct i40e_hw *hw, bool *link_up);
 enum i40e_status_code i40e_update_link_info(struct i40e_hw *hw);
 enum i40e_status_code i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-enum i40e_status_code i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
-		u32 *max_bw, u32 *min_bw, bool *min_valid, bool *max_valid);
-enum i40e_status_code i40e_aq_configure_partition_bw(struct i40e_hw *hw,
-			struct i40e_aqc_configure_partition_bw_data *bw_data,
-			struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-enum i40e_status_code i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
-					    u32 pba_num_size);
 void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable);
-enum i40e_status_code i40e_get_san_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-enum i40e_aq_link_speed i40e_get_link_speed(struct i40e_hw *hw);
 /* prototype for functions used for NVM access */
 enum i40e_status_code i40e_init_nvm(struct i40e_hw *hw);
 enum i40e_status_code i40e_acquire_nvm(struct i40e_hw *hw,
@@ -466,24 +309,14 @@ enum i40e_status_code __i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
 					   u16 *data);
 enum i40e_status_code __i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
 					     u16 *words, u16 *data);
-enum i40e_status_code __i40e_write_nvm_word(struct i40e_hw *hw, u32 offset,
-					  void *data);
-enum i40e_status_code __i40e_write_nvm_buffer(struct i40e_hw *hw, u8 module,
-					    u32 offset, u16 words, void *data);
 enum i40e_status_code i40e_calc_nvm_checksum(struct i40e_hw *hw, u16 *checksum);
 enum i40e_status_code i40e_update_nvm_checksum(struct i40e_hw *hw);
 enum i40e_status_code i40e_validate_nvm_checksum(struct i40e_hw *hw,
 						 u16 *checksum);
-enum i40e_status_code i40e_nvmupd_command(struct i40e_hw *hw,
-					  struct i40e_nvm_access *cmd,
-					  u8 *bytes, int *);
 void i40e_nvmupd_check_wait_event(struct i40e_hw *hw, u16 opcode,
 				  struct i40e_aq_desc *desc);
 void i40e_nvmupd_clear_wait_state(struct i40e_hw *hw);
-void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status);
 #endif /* PF_DRIVER */
-enum i40e_status_code i40e_enable_eee(struct i40e_hw *hw, bool enable);
-
 enum i40e_status_code i40e_set_mac_type(struct i40e_hw *hw);
 
 extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[];
@@ -551,13 +384,6 @@ enum i40e_status_code i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
 				u16 vsi_seid, u16 queue, bool is_add,
 				struct i40e_control_filter_stats *stats,
 				struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id,
-				u8 table_id, u32 start_index, u16 buff_size,
-				void *buff, u16 *ret_buff_size,
-				u8 *ret_next_table, u32 *ret_next_index,
-				struct i40e_asq_cmd_details *cmd_details);
-void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw,
-						    u16 vsi_seid);
 enum i40e_status_code i40e_aq_rx_ctl_read_register(struct i40e_hw *hw,
 				u32 reg_addr, u32 *reg_val,
 				struct i40e_asq_cmd_details *cmd_details);
@@ -589,24 +415,6 @@ enum i40e_status_code
 i40e_aq_run_phy_activity(struct i40e_hw *hw, u16 activity_id, u32 opcode,
 			 u32 *cmd_status, u32 *data0, u32 *data1,
 			 struct i40e_asq_cmd_details *cmd_details);
-
-enum i40e_status_code i40e_aq_set_arp_proxy_config(struct i40e_hw *hw,
-			struct i40e_aqc_arp_proxy_data *proxy_config,
-			struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_ns_proxy_table_entry(struct i40e_hw *hw,
-			struct i40e_aqc_ns_proxy_data *ns_proxy_table_entry,
-			struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_clear_wol_filter(struct i40e_hw *hw,
-			u8 filter_index,
-			struct i40e_aqc_set_wol_filter_data *filter,
-			bool set_filter, bool no_wol_tco,
-			bool filter_valid, bool no_wol_tco_valid,
-			struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_wake_event_reason(struct i40e_hw *hw,
-			u16 *wake_reason,
-			struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_clear_all_wol_filters(struct i40e_hw *hw,
-			struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_read_phy_register_clause22(struct i40e_hw *hw,
 					u16 reg, u8 phy_addr, u16 *value);
 enum i40e_status_code i40e_write_phy_register_clause22(struct i40e_hw *hw,
@@ -615,13 +423,7 @@ enum i40e_status_code i40e_read_phy_register_clause45(struct i40e_hw *hw,
 				u8 page, u16 reg, u8 phy_addr, u16 *value);
 enum i40e_status_code i40e_write_phy_register_clause45(struct i40e_hw *hw,
 				u8 page, u16 reg, u8 phy_addr, u16 value);
-enum i40e_status_code i40e_read_phy_register(struct i40e_hw *hw,
-				u8 page, u16 reg, u8 phy_addr, u16 *value);
-enum i40e_status_code i40e_write_phy_register(struct i40e_hw *hw,
-				u8 page, u16 reg, u8 phy_addr, u16 value);
 u8 i40e_get_phy_address(struct i40e_hw *hw, u8 dev_num);
-enum i40e_status_code i40e_blink_phy_link_led(struct i40e_hw *hw,
-					      u32 time, u32 interval);
 enum i40e_status_code i40e_aq_write_ddp(struct i40e_hw *hw, void *buff,
 					u16 buff_size, u32 track_id,
 					u32 *error_offset, u32 *error_info,
@@ -643,8 +445,4 @@ i40e_write_profile(struct i40e_hw *hw, struct i40e_profile_segment *i40e_seg,
 enum i40e_status_code
 i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *i40e_seg,
 		      u32 track_id);
-enum i40e_status_code
-i40e_add_pinfo_to_list(struct i40e_hw *hw,
-		       struct i40e_profile_segment *profile,
-		       u8 *profile_info_sec, u32 track_id);
 #endif /* _I40E_PROTOTYPE_H_ */
diff --git a/drivers/net/i40e/base/meson.build b/drivers/net/i40e/base/meson.build
index 8bc6a0fa0b..1a07449fa5 100644
--- a/drivers/net/i40e/base/meson.build
+++ b/drivers/net/i40e/base/meson.build
@@ -5,7 +5,6 @@ sources = [
 	'i40e_adminq.c',
 	'i40e_common.c',
 	'i40e_dcb.c',
-	'i40e_diag.c',
 	'i40e_hmc.c',
 	'i40e_lan_hmc.c',
 	'i40e_nvm.c'
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 6d5912d8c1..db3dbbda48 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -293,8 +293,6 @@ int iavf_switch_queue(struct iavf_adapter *adapter, uint16_t qid,
 		     bool rx, bool on);
 int iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
 		     bool rx, bool on);
-int iavf_enable_queues(struct iavf_adapter *adapter);
-int iavf_enable_queues_lv(struct iavf_adapter *adapter);
 int iavf_disable_queues(struct iavf_adapter *adapter);
 int iavf_disable_queues_lv(struct iavf_adapter *adapter);
 int iavf_configure_rss_lut(struct iavf_adapter *adapter);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 33d03af653..badcd312cc 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -521,34 +521,6 @@ iavf_get_supported_rxdid(struct iavf_adapter *adapter)
 	return 0;
 }
 
-int
-iavf_enable_queues(struct iavf_adapter *adapter)
-{
-	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
-	struct virtchnl_queue_select queue_select;
-	struct iavf_cmd_info args;
-	int err;
-
-	memset(&queue_select, 0, sizeof(queue_select));
-	queue_select.vsi_id = vf->vsi_res->vsi_id;
-
-	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
-	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
-
-	args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
-	args.in_args = (u8 *)&queue_select;
-	args.in_args_size = sizeof(queue_select);
-	args.out_buffer = vf->aq_resp;
-	args.out_size = IAVF_AQ_BUF_SZ;
-	err = iavf_execute_vf_cmd(adapter, &args);
-	if (err) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of OP_ENABLE_QUEUES");
-		return err;
-	}
-	return 0;
-}
-
 int
 iavf_disable_queues(struct iavf_adapter *adapter)
 {
@@ -608,50 +580,6 @@ iavf_switch_queue(struct iavf_adapter *adapter, uint16_t qid,
 	return err;
 }
 
-int
-iavf_enable_queues_lv(struct iavf_adapter *adapter)
-{
-	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
-	struct virtchnl_del_ena_dis_queues *queue_select;
-	struct virtchnl_queue_chunk *queue_chunk;
-	struct iavf_cmd_info args;
-	int err, len;
-
-	len = sizeof(struct virtchnl_del_ena_dis_queues) +
-		  sizeof(struct virtchnl_queue_chunk) *
-		  (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
-	queue_select = rte_zmalloc("queue_select", len, 0);
-	if (!queue_select)
-		return -ENOMEM;
-
-	queue_chunk = queue_select->chunks.chunks;
-	queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
-	queue_select->vport_id = vf->vsi_res->vsi_id;
-
-	queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].type = VIRTCHNL_QUEUE_TYPE_TX;
-	queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].start_queue_id = 0;
-	queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].num_queues =
-		adapter->eth_dev->data->nb_tx_queues;
-
-	queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].type = VIRTCHNL_QUEUE_TYPE_RX;
-	queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].start_queue_id = 0;
-	queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].num_queues =
-		adapter->eth_dev->data->nb_rx_queues;
-
-	args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
-	args.in_args = (u8 *)queue_select;
-	args.in_args_size = len;
-	args.out_buffer = vf->aq_resp;
-	args.out_size = IAVF_AQ_BUF_SZ;
-	err = iavf_execute_vf_cmd(adapter, &args);
-	if (err) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of OP_ENABLE_QUEUES_V2");
-		return err;
-	}
-	return 0;
-}
-
 int
 iavf_disable_queues_lv(struct iavf_adapter *adapter)
 {
diff --git a/drivers/net/ice/base/ice_acl.c b/drivers/net/ice/base/ice_acl.c
index 763cd2af9e..0f73f4a0e7 100644
--- a/drivers/net/ice/base/ice_acl.c
+++ b/drivers/net/ice/base/ice_acl.c
@@ -115,79 +115,6 @@ ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
 				entry_idx, buf, cd);
 }
 
-/**
- * ice_aq_query_acl_entry - query ACL entry
- * @hw: pointer to the HW struct
- * @tcam_idx: Updated TCAM block index
- * @entry_idx: updated entry index
- * @buf: address of indirect data buffer
- * @cd: pointer to command details structure or NULL
- *
- * Query ACL entry (direct 0x0C24)
- *
- * NOTE: Caller of this API to parse 'buf' appropriately since it contains
- * response (key and key invert)
- */
-enum ice_status
-ice_aq_query_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
-		       struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd)
-{
-	return ice_aq_acl_entry(hw, ice_aqc_opc_query_acl_entry, tcam_idx,
-				entry_idx, buf, cd);
-}
-
-/* Helper function to alloc/dealloc ACL action pair */
-static enum ice_status
-ice_aq_actpair_a_d(struct ice_hw *hw, u16 opcode, u16 alloc_id,
-		   struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd)
-{
-	struct ice_aqc_acl_tbl_actpair *cmd;
-	struct ice_aq_desc desc;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
-	cmd = &desc.params.tbl_actpair;
-	cmd->alloc_id = CPU_TO_LE16(alloc_id);
-
-	return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
-}
-
-/**
- * ice_aq_alloc_actpair - allocate actionpair for specified ACL table
- * @hw: pointer to the HW struct
- * @alloc_id: allocation ID of the table being associated with the actionpair
- * @buf: address of indirect data buffer
- * @cd: pointer to command details structure or NULL
- *
- * Allocate ACL actionpair (direct 0x0C12)
- *
- * This command doesn't need and doesn't have its own command buffer
- * but for response format is as specified in 'struct ice_aqc_acl_generic'
- */
-enum ice_status
-ice_aq_alloc_actpair(struct ice_hw *hw, u16 alloc_id,
-		     struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd)
-{
-	return ice_aq_actpair_a_d(hw, ice_aqc_opc_alloc_acl_actpair, alloc_id,
-				  buf, cd);
-}
-
-/**
- * ice_aq_dealloc_actpair - dealloc actionpair for specified ACL table
- * @hw: pointer to the HW struct
- * @alloc_id: allocation ID of the table being associated with the actionpair
- * @buf: address of indirect data buffer
- * @cd: pointer to command details structure or NULL
- *
- *  Deallocate ACL actionpair (direct 0x0C13)
- */
-enum ice_status
-ice_aq_dealloc_actpair(struct ice_hw *hw, u16 alloc_id,
-		       struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd)
-{
-	return ice_aq_actpair_a_d(hw, ice_aqc_opc_dealloc_acl_actpair, alloc_id,
-				  buf, cd);
-}
-
 /* Helper function to program/query ACL action pair */
 static enum ice_status
 ice_aq_actpair_p_q(struct ice_hw *hw, u16 opcode, u8 act_mem_idx,
@@ -227,41 +154,6 @@ ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
 				  act_mem_idx, act_entry_idx, buf, cd);
 }
 
-/**
- * ice_aq_query_actpair - query ACL actionpair
- * @hw: pointer to the HW struct
- * @act_mem_idx: action memory index to program/update/query
- * @act_entry_idx: the entry index in action memory to be programmed/updated
- * @buf: address of indirect data buffer
- * @cd: pointer to command details structure or NULL
- *
- * Query ACL actionpair (indirect 0x0C25)
- */
-enum ice_status
-ice_aq_query_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
-		     struct ice_aqc_actpair *buf, struct ice_sq_cd *cd)
-{
-	return ice_aq_actpair_p_q(hw, ice_aqc_opc_query_acl_actpair,
-				  act_mem_idx, act_entry_idx, buf, cd);
-}
-
-/**
- * ice_aq_dealloc_acl_res - deallocate ACL resources
- * @hw: pointer to the HW struct
- * @cd: pointer to command details structure or NULL
- *
- * De-allocate ACL resources (direct 0x0C1A). Used by SW to release all the
- * resources allocated for it using a single command
- */
-enum ice_status ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd)
-{
-	struct ice_aq_desc desc;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dealloc_acl_res);
-
-	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
 /**
  * ice_acl_prof_aq_send - sending ACL profile AQ commands
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ice/base/ice_acl.h b/drivers/net/ice/base/ice_acl.h
index 21aa5088f7..ef5a8245a3 100644
--- a/drivers/net/ice/base/ice_acl.h
+++ b/drivers/net/ice/base/ice_acl.h
@@ -142,22 +142,9 @@ enum ice_status
 ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
 			 struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd);
 enum ice_status
-ice_aq_query_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
-		       struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_alloc_actpair(struct ice_hw *hw, u16 alloc_id,
-		     struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_dealloc_actpair(struct ice_hw *hw, u16 alloc_id,
-		       struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd);
-enum ice_status
 ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
 		       struct ice_aqc_actpair *buf, struct ice_sq_cd *cd);
 enum ice_status
-ice_aq_query_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
-		     struct ice_aqc_actpair *buf, struct ice_sq_cd *cd);
-enum ice_status ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd);
-enum ice_status
 ice_prgm_acl_prof_xtrct(struct ice_hw *hw, u8 prof_id,
 			struct ice_aqc_acl_prof_generic_frmt *buf,
 			struct ice_sq_cd *cd);
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 304e55e210..b6d80fd383 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -844,36 +844,6 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	return status;
 }
 
-/**
- * ice_deinit_hw - unroll initialization operations done by ice_init_hw
- * @hw: pointer to the hardware structure
- *
- * This should be called only during nominal operation, not as a result of
- * ice_init_hw() failing since ice_init_hw() will take care of unrolling
- * applicable initializations if it fails for any reason.
- */
-void ice_deinit_hw(struct ice_hw *hw)
-{
-	ice_free_fd_res_cntr(hw, hw->fd_ctr_base);
-	ice_cleanup_fltr_mgmt_struct(hw);
-
-	ice_sched_cleanup_all(hw);
-	ice_sched_clear_agg(hw);
-	ice_free_seg(hw);
-	ice_free_hw_tbls(hw);
-	ice_destroy_lock(&hw->tnl_lock);
-
-	if (hw->port_info) {
-		ice_free(hw, hw->port_info);
-		hw->port_info = NULL;
-	}
-
-	ice_destroy_all_ctrlq(hw);
-
-	/* Clear VSI contexts if not already cleared */
-	ice_clear_all_vsi_ctx(hw);
-}
-
 /**
  * ice_check_reset - Check to see if a global reset is complete
  * @hw: pointer to the hardware structure
@@ -1157,38 +1127,6 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = {
 	{ 0 }
 };
 
-/**
- * ice_copy_tx_cmpltnq_ctx_to_hw
- * @hw: pointer to the hardware structure
- * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
- * @tx_cmpltnq_index: the index of the completion queue
- *
- * Copies Tx completion queue context from dense structure to HW register space
- */
-static enum ice_status
-ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
-			      u32 tx_cmpltnq_index)
-{
-	u8 i;
-
-	if (!ice_tx_cmpltnq_ctx)
-		return ICE_ERR_BAD_PTR;
-
-	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
-		return ICE_ERR_PARAM;
-
-	/* Copy each dword separately to HW */
-	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
-		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
-		     *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
-
-		ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
-			  *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
-	}
-
-	return ICE_SUCCESS;
-}
-
 /* LAN Tx Completion Queue Context */
 static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
 				       /* Field			Width   LSB */
@@ -1205,80 +1143,6 @@ static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
 	{ 0 }
 };
 
-/**
- * ice_write_tx_cmpltnq_ctx
- * @hw: pointer to the hardware structure
- * @tx_cmpltnq_ctx: pointer to the completion queue context
- * @tx_cmpltnq_index: the index of the completion queue
- *
- * Converts completion queue context from sparse to dense structure and then
- * writes it to HW register space
- */
-enum ice_status
-ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
-			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
-			 u32 tx_cmpltnq_index)
-{
-	u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
-
-	ice_set_ctx(hw, (u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
-	return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
-}
-
-/**
- * ice_clear_tx_cmpltnq_ctx
- * @hw: pointer to the hardware structure
- * @tx_cmpltnq_index: the index of the completion queue to clear
- *
- * Clears Tx completion queue context in HW register space
- */
-enum ice_status
-ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
-{
-	u8 i;
-
-	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
-		return ICE_ERR_PARAM;
-
-	/* Clear each dword register separately */
-	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
-		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
-
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_copy_tx_drbell_q_ctx_to_hw
- * @hw: pointer to the hardware structure
- * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
- * @tx_drbell_q_index: the index of the doorbell queue
- *
- * Copies doorbell queue context from dense structure to HW register space
- */
-static enum ice_status
-ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
-			       u32 tx_drbell_q_index)
-{
-	u8 i;
-
-	if (!ice_tx_drbell_q_ctx)
-		return ICE_ERR_BAD_PTR;
-
-	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
-		return ICE_ERR_PARAM;
-
-	/* Copy each dword separately to HW */
-	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
-		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
-		     *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
-
-		ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
-			  *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
-	}
-
-	return ICE_SUCCESS;
-}
-
 /* LAN Tx Doorbell Queue Context info */
 static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
 					/* Field		Width   LSB */
@@ -1296,49 +1160,6 @@ static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
 	{ 0 }
 };
 
-/**
- * ice_write_tx_drbell_q_ctx
- * @hw: pointer to the hardware structure
- * @tx_drbell_q_ctx: pointer to the doorbell queue context
- * @tx_drbell_q_index: the index of the doorbell queue
- *
- * Converts doorbell queue context from sparse to dense structure and then
- * writes it to HW register space
- */
-enum ice_status
-ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
-			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
-			  u32 tx_drbell_q_index)
-{
-	u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
-
-	ice_set_ctx(hw, (u8 *)tx_drbell_q_ctx, ctx_buf,
-		    ice_tx_drbell_q_ctx_info);
-	return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
-}
-
-/**
- * ice_clear_tx_drbell_q_ctx
- * @hw: pointer to the hardware structure
- * @tx_drbell_q_index: the index of the doorbell queue to clear
- *
- * Clears doorbell queue context in HW register space
- */
-enum ice_status
-ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
-{
-	u8 i;
-
-	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
-		return ICE_ERR_PARAM;
-
-	/* Clear each dword register separately */
-	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
-		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
-
-	return ICE_SUCCESS;
-}
-
 /* FW Admin Queue command wrappers */
 
 /**
@@ -2238,69 +2059,6 @@ ice_discover_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_caps)
 	return status;
 }
 
-/**
- * ice_set_safe_mode_caps - Override dev/func capabilities when in safe mode
- * @hw: pointer to the hardware structure
- */
-void ice_set_safe_mode_caps(struct ice_hw *hw)
-{
-	struct ice_hw_func_caps *func_caps = &hw->func_caps;
-	struct ice_hw_dev_caps *dev_caps = &hw->dev_caps;
-	struct ice_hw_common_caps cached_caps;
-	u32 num_funcs;
-
-	/* cache some func_caps values that should be restored after memset */
-	cached_caps = func_caps->common_cap;
-
-	/* unset func capabilities */
-	memset(func_caps, 0, sizeof(*func_caps));
-
-#define ICE_RESTORE_FUNC_CAP(name) \
-	func_caps->common_cap.name = cached_caps.name
-
-	/* restore cached values */
-	ICE_RESTORE_FUNC_CAP(valid_functions);
-	ICE_RESTORE_FUNC_CAP(txq_first_id);
-	ICE_RESTORE_FUNC_CAP(rxq_first_id);
-	ICE_RESTORE_FUNC_CAP(msix_vector_first_id);
-	ICE_RESTORE_FUNC_CAP(max_mtu);
-	ICE_RESTORE_FUNC_CAP(nvm_unified_update);
-
-	/* one Tx and one Rx queue in safe mode */
-	func_caps->common_cap.num_rxq = 1;
-	func_caps->common_cap.num_txq = 1;
-
-	/* two MSIX vectors, one for traffic and one for misc causes */
-	func_caps->common_cap.num_msix_vectors = 2;
-	func_caps->guar_num_vsi = 1;
-
-	/* cache some dev_caps values that should be restored after memset */
-	cached_caps = dev_caps->common_cap;
-	num_funcs = dev_caps->num_funcs;
-
-	/* unset dev capabilities */
-	memset(dev_caps, 0, sizeof(*dev_caps));
-
-#define ICE_RESTORE_DEV_CAP(name) \
-	dev_caps->common_cap.name = cached_caps.name
-
-	/* restore cached values */
-	ICE_RESTORE_DEV_CAP(valid_functions);
-	ICE_RESTORE_DEV_CAP(txq_first_id);
-	ICE_RESTORE_DEV_CAP(rxq_first_id);
-	ICE_RESTORE_DEV_CAP(msix_vector_first_id);
-	ICE_RESTORE_DEV_CAP(max_mtu);
-	ICE_RESTORE_DEV_CAP(nvm_unified_update);
-	dev_caps->num_funcs = num_funcs;
-
-	/* one Tx and one Rx queue per function in safe mode */
-	dev_caps->common_cap.num_rxq = num_funcs;
-	dev_caps->common_cap.num_txq = num_funcs;
-
-	/* two MSIX vectors per function */
-	dev_caps->common_cap.num_msix_vectors = 2 * num_funcs;
-}
-
 /**
  * ice_get_caps - get info about the HW
  * @hw: pointer to the hardware structure
@@ -2370,182 +2128,6 @@ void ice_clear_pxe_mode(struct ice_hw *hw)
 		ice_aq_clear_pxe_mode(hw);
 }
 
-/**
- * ice_get_link_speed_based_on_phy_type - returns link speed
- * @phy_type_low: lower part of phy_type
- * @phy_type_high: higher part of phy_type
- *
- * This helper function will convert an entry in PHY type structure
- * [phy_type_low, phy_type_high] to its corresponding link speed.
- * Note: In the structure of [phy_type_low, phy_type_high], there should
- * be one bit set, as this function will convert one PHY type to its
- * speed.
- * If no bit gets set, ICE_LINK_SPEED_UNKNOWN will be returned
- * If more than one bit gets set, ICE_LINK_SPEED_UNKNOWN will be returned
- */
-static u16
-ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high)
-{
-	u16 speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
-	u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
-
-	switch (phy_type_low) {
-	case ICE_PHY_TYPE_LOW_100BASE_TX:
-	case ICE_PHY_TYPE_LOW_100M_SGMII:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
-		break;
-	case ICE_PHY_TYPE_LOW_1000BASE_T:
-	case ICE_PHY_TYPE_LOW_1000BASE_SX:
-	case ICE_PHY_TYPE_LOW_1000BASE_LX:
-	case ICE_PHY_TYPE_LOW_1000BASE_KX:
-	case ICE_PHY_TYPE_LOW_1G_SGMII:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
-		break;
-	case ICE_PHY_TYPE_LOW_2500BASE_T:
-	case ICE_PHY_TYPE_LOW_2500BASE_X:
-	case ICE_PHY_TYPE_LOW_2500BASE_KX:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
-		break;
-	case ICE_PHY_TYPE_LOW_5GBASE_T:
-	case ICE_PHY_TYPE_LOW_5GBASE_KR:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
-		break;
-	case ICE_PHY_TYPE_LOW_10GBASE_T:
-	case ICE_PHY_TYPE_LOW_10G_SFI_DA:
-	case ICE_PHY_TYPE_LOW_10GBASE_SR:
-	case ICE_PHY_TYPE_LOW_10GBASE_LR:
-	case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
-	case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
-	case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
-		break;
-	case ICE_PHY_TYPE_LOW_25GBASE_T:
-	case ICE_PHY_TYPE_LOW_25GBASE_CR:
-	case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
-	case ICE_PHY_TYPE_LOW_25GBASE_CR1:
-	case ICE_PHY_TYPE_LOW_25GBASE_SR:
-	case ICE_PHY_TYPE_LOW_25GBASE_LR:
-	case ICE_PHY_TYPE_LOW_25GBASE_KR:
-	case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
-	case ICE_PHY_TYPE_LOW_25GBASE_KR1:
-	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
-	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
-		break;
-	case ICE_PHY_TYPE_LOW_40GBASE_CR4:
-	case ICE_PHY_TYPE_LOW_40GBASE_SR4:
-	case ICE_PHY_TYPE_LOW_40GBASE_LR4:
-	case ICE_PHY_TYPE_LOW_40GBASE_KR4:
-	case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
-	case ICE_PHY_TYPE_LOW_40G_XLAUI:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
-		break;
-	case ICE_PHY_TYPE_LOW_50GBASE_CR2:
-	case ICE_PHY_TYPE_LOW_50GBASE_SR2:
-	case ICE_PHY_TYPE_LOW_50GBASE_LR2:
-	case ICE_PHY_TYPE_LOW_50GBASE_KR2:
-	case ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC:
-	case ICE_PHY_TYPE_LOW_50G_LAUI2:
-	case ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC:
-	case ICE_PHY_TYPE_LOW_50G_AUI2:
-	case ICE_PHY_TYPE_LOW_50GBASE_CP:
-	case ICE_PHY_TYPE_LOW_50GBASE_SR:
-	case ICE_PHY_TYPE_LOW_50GBASE_FR:
-	case ICE_PHY_TYPE_LOW_50GBASE_LR:
-	case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
-	case ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC:
-	case ICE_PHY_TYPE_LOW_50G_AUI1:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_50GB;
-		break;
-	case ICE_PHY_TYPE_LOW_100GBASE_CR4:
-	case ICE_PHY_TYPE_LOW_100GBASE_SR4:
-	case ICE_PHY_TYPE_LOW_100GBASE_LR4:
-	case ICE_PHY_TYPE_LOW_100GBASE_KR4:
-	case ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC:
-	case ICE_PHY_TYPE_LOW_100G_CAUI4:
-	case ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC:
-	case ICE_PHY_TYPE_LOW_100G_AUI4:
-	case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
-	case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
-	case ICE_PHY_TYPE_LOW_100GBASE_CP2:
-	case ICE_PHY_TYPE_LOW_100GBASE_SR2:
-	case ICE_PHY_TYPE_LOW_100GBASE_DR:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_100GB;
-		break;
-	default:
-		speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
-		break;
-	}
-
-	switch (phy_type_high) {
-	case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
-	case ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC:
-	case ICE_PHY_TYPE_HIGH_100G_CAUI2:
-	case ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC:
-	case ICE_PHY_TYPE_HIGH_100G_AUI2:
-		speed_phy_type_high = ICE_AQ_LINK_SPEED_100GB;
-		break;
-	default:
-		speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
-		break;
-	}
-
-	if (speed_phy_type_low == ICE_AQ_LINK_SPEED_UNKNOWN &&
-	    speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
-		return ICE_AQ_LINK_SPEED_UNKNOWN;
-	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
-		 speed_phy_type_high != ICE_AQ_LINK_SPEED_UNKNOWN)
-		return ICE_AQ_LINK_SPEED_UNKNOWN;
-	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
-		 speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
-		return speed_phy_type_low;
-	else
-		return speed_phy_type_high;
-}
-
-/**
- * ice_update_phy_type
- * @phy_type_low: pointer to the lower part of phy_type
- * @phy_type_high: pointer to the higher part of phy_type
- * @link_speeds_bitmap: targeted link speeds bitmap
- *
- * Note: For the link_speeds_bitmap structure, you can check it at
- * [ice_aqc_get_link_status->link_speed]. Caller can pass in
- * link_speeds_bitmap include multiple speeds.
- *
- * Each entry in this [phy_type_low, phy_type_high] structure will
- * present a certain link speed. This helper function will turn on bits
- * in [phy_type_low, phy_type_high] structure based on the value of
- * link_speeds_bitmap input parameter.
- */
-void
-ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
-		    u16 link_speeds_bitmap)
-{
-	u64 pt_high;
-	u64 pt_low;
-	int index;
-	u16 speed;
-
-	/* We first check with low part of phy_type */
-	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
-		pt_low = BIT_ULL(index);
-		speed = ice_get_link_speed_based_on_phy_type(pt_low, 0);
-
-		if (link_speeds_bitmap & speed)
-			*phy_type_low |= BIT_ULL(index);
-	}
-
-	/* We then check with high part of phy_type */
-	for (index = 0; index <= ICE_PHY_TYPE_HIGH_MAX_INDEX; index++) {
-		pt_high = BIT_ULL(index);
-		speed = ice_get_link_speed_based_on_phy_type(0, pt_high);
-
-		if (link_speeds_bitmap & speed)
-			*phy_type_high |= BIT_ULL(index);
-	}
-}
-
 /**
  * ice_aq_set_phy_cfg
  * @hw: pointer to the HW struct
@@ -2642,787 +2224,279 @@ enum ice_status ice_update_link_info(struct ice_port_info *pi)
 }
 
 /**
- * ice_cache_phy_user_req
+ * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
  * @pi: port information structure
- * @cache_data: PHY logging data
- * @cache_mode: PHY logging mode
+ * @caps: PHY ability structure to copy date from
+ * @cfg: PHY configuration structure to copy data to
  *
- * Log the user request on (FC, FEC, SPEED) for later user.
+ * Helper function to copy AQC PHY get ability data to PHY set configuration
+ * data structure
  */
-static void
-ice_cache_phy_user_req(struct ice_port_info *pi,
-		       struct ice_phy_cache_mode_data cache_data,
-		       enum ice_phy_cache_mode cache_mode)
+void
+ice_copy_phy_caps_to_cfg(struct ice_port_info *pi,
+			 struct ice_aqc_get_phy_caps_data *caps,
+			 struct ice_aqc_set_phy_cfg_data *cfg)
 {
-	if (!pi)
+	if (!pi || !caps || !cfg)
 		return;
 
-	switch (cache_mode) {
-	case ICE_FC_MODE:
-		pi->phy.curr_user_fc_req = cache_data.data.curr_user_fc_req;
-		break;
-	case ICE_SPEED_MODE:
-		pi->phy.curr_user_speed_req =
-			cache_data.data.curr_user_speed_req;
-		break;
-	case ICE_FEC_MODE:
-		pi->phy.curr_user_fec_req = cache_data.data.curr_user_fec_req;
-		break;
-	default:
-		break;
-	}
-}
-
-/**
- * ice_caps_to_fc_mode
- * @caps: PHY capabilities
- *
- * Convert PHY FC capabilities to ice FC mode
- */
-enum ice_fc_mode ice_caps_to_fc_mode(u8 caps)
-{
-	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE &&
-	    caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
-		return ICE_FC_FULL;
+	ice_memset(cfg, 0, sizeof(*cfg), ICE_NONDMA_MEM);
+	cfg->phy_type_low = caps->phy_type_low;
+	cfg->phy_type_high = caps->phy_type_high;
+	cfg->caps = caps->caps;
+	cfg->low_power_ctrl_an = caps->low_power_ctrl_an;
+	cfg->eee_cap = caps->eee_cap;
+	cfg->eeer_value = caps->eeer_value;
+	cfg->link_fec_opt = caps->link_fec_options;
+	cfg->module_compliance_enforcement =
+		caps->module_compliance_enforcement;
 
-	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE)
-		return ICE_FC_TX_PAUSE;
+	if (ice_fw_supports_link_override(pi->hw)) {
+		struct ice_link_default_override_tlv tlv;
 
-	if (caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
-		return ICE_FC_RX_PAUSE;
+		if (ice_get_link_default_override(&tlv, pi))
+			return;
 
-	return ICE_FC_NONE;
+		if (tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE)
+			cfg->module_compliance_enforcement |=
+				ICE_LINK_OVERRIDE_STRICT_MODE;
+	}
 }
 
 /**
- * ice_caps_to_fec_mode
- * @caps: PHY capabilities
- * @fec_options: Link FEC options
+ * ice_aq_set_event_mask
+ * @hw: pointer to the HW struct
+ * @port_num: port number of the physical function
+ * @mask: event mask to be set
+ * @cd: pointer to command details structure or NULL
  *
- * Convert PHY FEC capabilities to ice FEC mode
+ * Set event mask (0x0613)
  */
-enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options)
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd)
 {
-	if (caps & ICE_AQC_PHY_EN_AUTO_FEC)
-		return ICE_FEC_AUTO;
+	struct ice_aqc_set_event_mask *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_event_mask;
 
-	if (fec_options & (ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
-			   ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
-			   ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN |
-			   ICE_AQC_PHY_FEC_25G_KR_REQ))
-		return ICE_FEC_BASER;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
 
-	if (fec_options & (ICE_AQC_PHY_FEC_25G_RS_528_REQ |
-			   ICE_AQC_PHY_FEC_25G_RS_544_REQ |
-			   ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN))
-		return ICE_FEC_RS;
+	cmd->lport_num = port_num;
 
-	return ICE_FEC_NONE;
+	cmd->event_mask = CPU_TO_LE16(mask);
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
 /**
- * ice_cfg_phy_fc - Configure PHY FC data based on FC mode
- * @pi: port information structure
- * @cfg: PHY configuration data to set FC mode
- * @req_mode: FC mode to configure
+ * __ice_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @params: RSS LUT parameters
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
  */
 static enum ice_status
-ice_cfg_phy_fc(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
-	       enum ice_fc_mode req_mode)
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *params, bool set)
 {
-	struct ice_phy_cache_mode_data cache_data;
-	u8 pause_mask = 0x0;
+	u16 flags = 0, vsi_id, lut_type, lut_size, glob_lut_idx, vsi_handle;
+	struct ice_aqc_get_set_rss_lut *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u8 *lut;
 
-	if (!pi || !cfg)
-		return ICE_ERR_BAD_PTR;
+	if (!params)
+		return ICE_ERR_PARAM;
 
-	switch (req_mode) {
-	case ICE_FC_AUTO:
-	{
-		struct ice_aqc_get_phy_caps_data *pcaps;
-		enum ice_status status;
+	vsi_handle = params->vsi_handle;
+	lut = params->lut;
 
-		pcaps = (struct ice_aqc_get_phy_caps_data *)
-			ice_malloc(pi->hw, sizeof(*pcaps));
-		if (!pcaps)
-			return ICE_ERR_NO_MEMORY;
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
 
-		/* Query the value of FC that both the NIC and attached media
-		 * can do.
-		 */
-		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
-					     pcaps, NULL);
-		if (status) {
-			ice_free(pi->hw, pcaps);
-			return status;
-		}
+	lut_size = params->lut_size;
+	lut_type = params->lut_type;
+	glob_lut_idx = params->global_lut_id;
+	vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
 
-		pause_mask |= pcaps->caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE;
-		pause_mask |= pcaps->caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+	cmd_resp = &desc.params.get_set_rss_lut;
 
-		ice_free(pi->hw, pcaps);
-		break;
-	}
-	case ICE_FC_FULL:
-		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
-		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
-		break;
-	case ICE_FC_RX_PAUSE:
-		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
-		break;
-	case ICE_FC_TX_PAUSE:
-		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
-		break;
-	default:
-		break;
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
 	}
 
-	/* clear the old pause settings */
-	cfg->caps &= ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
-		ICE_AQC_PHY_EN_RX_LINK_PAUSE);
-
-	/* set the new capabilities */
-	cfg->caps |= pause_mask;
-
-	/* Cache user FC request */
-	cache_data.data.curr_user_fc_req = req_mode;
-	ice_cache_phy_user_req(pi, cache_data, ICE_FC_MODE);
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
 
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_set_fc
- * @pi: port information structure
- * @aq_failures: pointer to status code, specific to ice_set_fc routine
- * @ena_auto_link_update: enable automatic link update
- *
- * Set the requested flow control mode.
- */
-enum ice_status
-ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
-{
-	struct ice_aqc_set_phy_cfg_data  cfg = { 0 };
-	struct ice_aqc_get_phy_caps_data *pcaps;
-	enum ice_status status;
-	struct ice_hw *hw;
-
-	if (!pi || !aq_failures)
-		return ICE_ERR_BAD_PTR;
-
-	*aq_failures = 0;
-	hw = pi->hw;
-
-	pcaps = (struct ice_aqc_get_phy_caps_data *)
-		ice_malloc(hw, sizeof(*pcaps));
-	if (!pcaps)
-		return ICE_ERR_NO_MEMORY;
-
-	/* Get the current PHY config */
-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
-				     NULL);
-	if (status) {
-		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
-		goto out;
+	switch (lut_type) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
+		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
+		break;
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
 	}
 
-	ice_copy_phy_caps_to_cfg(pi, pcaps, &cfg);
-
-	/* Configure the set PHY data */
-	status = ice_cfg_phy_fc(pi, &cfg, pi->fc.req_mode);
-	if (status) {
-		if (status != ICE_ERR_BAD_PTR)
-			*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
+		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
+			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
 
-		goto out;
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else {
+		goto ice_aq_get_set_rss_lut_send;
 	}
 
-	/* If the capabilities have changed, then set the new config */
-	if (cfg.caps != pcaps->caps) {
-		int retry_count, retry_max = 10;
-
-		/* Auto restart link so settings take effect */
-		if (ena_auto_link_update)
-			cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
-
-		status = ice_aq_set_phy_cfg(hw, pi, &cfg, NULL);
-		if (status) {
-			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
-			goto out;
-		}
-
-		/* Update the link info
-		 * It sometimes takes a really long time for link to
-		 * come back from the atomic reset. Thus, we wait a
-		 * little bit.
-		 */
-		for (retry_count = 0; retry_count < retry_max; retry_count++) {
-			status = ice_update_link_info(pi);
-
-			if (status == ICE_SUCCESS)
-				break;
-
-			ice_msec_delay(100, true);
+	/* LUT size is only valid for Global and PF table types */
+	switch (lut_size) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
+		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+			break;
 		}
-
-		if (status)
-			*aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
+		/* fall-through */
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
 	}
 
-out:
-	ice_free(hw, pcaps);
+ice_aq_get_set_rss_lut_send:
+	cmd_resp->flags = CPU_TO_LE16(flags);
+	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
+
+ice_aq_get_set_rss_lut_exit:
 	return status;
 }
 
 /**
- * ice_phy_caps_equals_cfg
- * @phy_caps: PHY capabilities
- * @phy_cfg: PHY configuration
+ * ice_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @get_params: RSS LUT parameters used to specify which RSS LUT to get
  *
- * Helper function to determine if PHY capabilities matches PHY
- * configuration
+ * get the RSS lookup table, PF or VSI type
  */
-bool
-ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *phy_caps,
-			struct ice_aqc_set_phy_cfg_data *phy_cfg)
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params)
 {
-	u8 caps_mask, cfg_mask;
-
-	if (!phy_caps || !phy_cfg)
-		return false;
-
-	/* These bits are not common between capabilities and configuration.
-	 * Do not use them to determine equality.
-	 */
-	caps_mask = ICE_AQC_PHY_CAPS_MASK & ~(ICE_AQC_PHY_AN_MODE |
-					      ICE_AQC_PHY_EN_MOD_QUAL);
-	cfg_mask = ICE_AQ_PHY_ENA_VALID_MASK & ~ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
-
-	if (phy_caps->phy_type_low != phy_cfg->phy_type_low ||
-	    phy_caps->phy_type_high != phy_cfg->phy_type_high ||
-	    ((phy_caps->caps & caps_mask) != (phy_cfg->caps & cfg_mask)) ||
-	    phy_caps->low_power_ctrl_an != phy_cfg->low_power_ctrl_an ||
-	    phy_caps->eee_cap != phy_cfg->eee_cap ||
-	    phy_caps->eeer_value != phy_cfg->eeer_value ||
-	    phy_caps->link_fec_options != phy_cfg->link_fec_opt)
-		return false;
-
-	return true;
+	return __ice_aq_get_set_rss_lut(hw, get_params, false);
 }
 
 /**
- * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
- * @pi: port information structure
- * @caps: PHY ability structure to copy date from
- * @cfg: PHY configuration structure to copy data to
+ * ice_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @set_params: RSS LUT parameters used to specify how to set the RSS LUT
  *
- * Helper function to copy AQC PHY get ability data to PHY set configuration
- * data structure
- */
-void
-ice_copy_phy_caps_to_cfg(struct ice_port_info *pi,
-			 struct ice_aqc_get_phy_caps_data *caps,
-			 struct ice_aqc_set_phy_cfg_data *cfg)
-{
-	if (!pi || !caps || !cfg)
-		return;
-
-	ice_memset(cfg, 0, sizeof(*cfg), ICE_NONDMA_MEM);
-	cfg->phy_type_low = caps->phy_type_low;
-	cfg->phy_type_high = caps->phy_type_high;
-	cfg->caps = caps->caps;
-	cfg->low_power_ctrl_an = caps->low_power_ctrl_an;
-	cfg->eee_cap = caps->eee_cap;
-	cfg->eeer_value = caps->eeer_value;
-	cfg->link_fec_opt = caps->link_fec_options;
-	cfg->module_compliance_enforcement =
-		caps->module_compliance_enforcement;
-
-	if (ice_fw_supports_link_override(pi->hw)) {
-		struct ice_link_default_override_tlv tlv;
-
-		if (ice_get_link_default_override(&tlv, pi))
-			return;
-
-		if (tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE)
-			cfg->module_compliance_enforcement |=
-				ICE_LINK_OVERRIDE_STRICT_MODE;
-	}
-}
-
-/**
- * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode
- * @pi: port information structure
- * @cfg: PHY configuration data to set FEC mode
- * @fec: FEC mode to configure
+ * set the RSS lookup table, PF or VSI type
  */
 enum ice_status
-ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
-		enum ice_fec_mode fec)
+ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_params)
 {
-	struct ice_aqc_get_phy_caps_data *pcaps;
-	enum ice_status status = ICE_SUCCESS;
-	struct ice_hw *hw;
-
-	if (!pi || !cfg)
-		return ICE_ERR_BAD_PTR;
-
-	hw = pi->hw;
-
-	pcaps = (struct ice_aqc_get_phy_caps_data *)
-		ice_malloc(hw, sizeof(*pcaps));
-	if (!pcaps)
-		return ICE_ERR_NO_MEMORY;
-
-	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP, pcaps,
-				     NULL);
-	if (status)
-		goto out;
-
-	cfg->caps |= (pcaps->caps & ICE_AQC_PHY_EN_AUTO_FEC);
-	cfg->link_fec_opt = pcaps->link_fec_options;
-
-	switch (fec) {
-	case ICE_FEC_BASER:
-		/* Clear RS bits, and AND BASE-R ability
-		 * bits and OR request bits.
-		 */
-		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
-			ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
-		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
-			ICE_AQC_PHY_FEC_25G_KR_REQ;
-		break;
-	case ICE_FEC_RS:
-		/* Clear BASE-R bits, and AND RS ability
-		 * bits and OR request bits.
-		 */
-		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
-		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
-			ICE_AQC_PHY_FEC_25G_RS_544_REQ;
-		break;
-	case ICE_FEC_NONE:
-		/* Clear all FEC option bits. */
-		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
-		break;
-	case ICE_FEC_AUTO:
-		/* AND auto FEC bit, and all caps bits. */
-		cfg->caps &= ICE_AQC_PHY_CAPS_MASK;
-		cfg->link_fec_opt |= pcaps->link_fec_options;
-		break;
-	default:
-		status = ICE_ERR_PARAM;
-		break;
-	}
-
-	if (fec == ICE_FEC_AUTO && ice_fw_supports_link_override(pi->hw)) {
-		struct ice_link_default_override_tlv tlv;
-
-		if (ice_get_link_default_override(&tlv, pi))
-			goto out;
-
-		if (!(tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE) &&
-		    (tlv.options & ICE_LINK_OVERRIDE_EN))
-			cfg->link_fec_opt = tlv.fec_options;
-	}
-
-out:
-	ice_free(hw, pcaps);
-
-	return status;
+	return __ice_aq_get_set_rss_lut(hw, set_params, true);
 }
 
 /**
- * ice_get_link_status - get status of the HW network link
- * @pi: port information structure
- * @link_up: pointer to bool (true/false = linkup/linkdown)
+ * __ice_aq_get_set_rss_key
+ * @hw: pointer to the HW struct
+ * @vsi_id: VSI FW index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
  *
- * Variable link_up is true if link is up, false if link is down.
- * The variable link_up is invalid if status is non zero. As a
- * result of this call, link status reporting becomes enabled
+ * get (0x0B04) or set (0x0B02) the RSS key per VSI
  */
-enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+static enum
+ice_status __ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
+				    struct ice_aqc_get_set_rss_keys *key,
+				    bool set)
 {
-	struct ice_phy_info *phy_info;
-	enum ice_status status = ICE_SUCCESS;
-
-	if (!pi || !link_up)
-		return ICE_ERR_PARAM;
-
-	phy_info = &pi->phy;
+	struct ice_aqc_get_set_rss_key *cmd_resp;
+	u16 key_size = sizeof(*key);
+	struct ice_aq_desc desc;
 
-	if (phy_info->get_link_info) {
-		status = ice_update_link_info(pi);
+	cmd_resp = &desc.params.get_set_rss_key;
 
-		if (status)
-			ice_debug(pi->hw, ICE_DBG_LINK, "get link status error, status = %d\n",
-				  status);
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
 	}
 
-	*link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
 
-	return status;
+	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
 }
 
 /**
- * ice_aq_set_link_restart_an
- * @pi: pointer to the port information structure
- * @ena_link: if true: enable link, if false: disable link
- * @cd: pointer to command details structure or NULL
+ * ice_aq_get_rss_key
+ * @hw: pointer to the HW struct
+ * @vsi_handle: software VSI handle
+ * @key: pointer to key info struct
  *
- * Sets up the link and restarts the Auto-Negotiation over the link.
+ * get the RSS key per VSI
  */
 enum ice_status
-ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
-			   struct ice_sq_cd *cd)
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *key)
 {
-	struct ice_aqc_restart_an *cmd;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.restart_an;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
-
-	cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
-	cmd->lport_num = pi->lport;
-	if (ena_link)
-		cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
-	else
-		cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
+		return ICE_ERR_PARAM;
 
-	return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					key, false);
 }
 
 /**
- * ice_aq_set_event_mask
+ * ice_aq_set_rss_key
  * @hw: pointer to the HW struct
- * @port_num: port number of the physical function
- * @mask: event mask to be set
- * @cd: pointer to command details structure or NULL
+ * @vsi_handle: software VSI handle
+ * @keys: pointer to key info struct
  *
- * Set event mask (0x0613)
+ * set the RSS key per VSI
  */
 enum ice_status
-ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
-		      struct ice_sq_cd *cd)
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys)
 {
-	struct ice_aqc_set_event_mask *cmd;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.set_event_mask;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
-
-	cmd->lport_num = port_num;
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
+		return ICE_ERR_PARAM;
 
-	cmd->event_mask = CPU_TO_LE16(mask);
-	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					keys, true);
 }
 
 /**
- * ice_aq_set_mac_loopback
- * @hw: pointer to the HW struct
- * @ena_lpbk: Enable or Disable loopback
- * @cd: pointer to command details structure or NULL
- *
- * Enable/disable loopback on a given port
- */
-enum ice_status
-ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
-{
-	struct ice_aqc_set_mac_lb *cmd;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.set_mac_lb;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
-	if (ena_lpbk)
-		cmd->lb_mode = ICE_AQ_MAC_LB_EN;
-
-	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
-/**
- * ice_aq_set_port_id_led
- * @pi: pointer to the port information
- * @is_orig_mode: is this LED set to original mode (by the net-list)
- * @cd: pointer to command details structure or NULL
- *
- * Set LED value for the given port (0x06e9)
- */
-enum ice_status
-ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
-		       struct ice_sq_cd *cd)
-{
-	struct ice_aqc_set_port_id_led *cmd;
-	struct ice_hw *hw = pi->hw;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.set_port_id_led;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
-
-	if (is_orig_mode)
-		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
-	else
-		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
-
-	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
-/**
- * ice_aq_sff_eeprom
- * @hw: pointer to the HW struct
- * @lport: bits [7:0] = logical port, bit [8] = logical port valid
- * @bus_addr: I2C bus address of the eeprom (typically 0xA0, 0=topo default)
- * @mem_addr: I2C offset. lower 8 bits for address, 8 upper bits zero padding.
- * @page: QSFP page
- * @set_page: set or ignore the page
- * @data: pointer to data buffer to be read/written to the I2C device.
- * @length: 1-16 for read, 1 for write.
- * @write: 0 read, 1 for write.
- * @cd: pointer to command details structure or NULL
- *
- * Read/Write SFF EEPROM (0x06EE)
- */
-enum ice_status
-ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
-		  u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length,
-		  bool write, struct ice_sq_cd *cd)
-{
-	struct ice_aqc_sff_eeprom *cmd;
-	struct ice_aq_desc desc;
-	enum ice_status status;
-
-	if (!data || (mem_addr & 0xff00))
-		return ICE_ERR_PARAM;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_sff_eeprom);
-	cmd = &desc.params.read_write_sff_param;
-	desc.flags = CPU_TO_LE16(ICE_AQ_FLAG_RD);
-	cmd->lport_num = (u8)(lport & 0xff);
-	cmd->lport_num_valid = (u8)((lport >> 8) & 0x01);
-	cmd->i2c_bus_addr = CPU_TO_LE16(((bus_addr >> 1) &
-					 ICE_AQC_SFF_I2CBUS_7BIT_M) |
-					((set_page <<
-					  ICE_AQC_SFF_SET_EEPROM_PAGE_S) &
-					 ICE_AQC_SFF_SET_EEPROM_PAGE_M));
-	cmd->i2c_mem_addr = CPU_TO_LE16(mem_addr & 0xff);
-	cmd->eeprom_page = CPU_TO_LE16((u16)page << ICE_AQC_SFF_EEPROM_PAGE_S);
-	if (write)
-		cmd->i2c_bus_addr |= CPU_TO_LE16(ICE_AQC_SFF_IS_WRITE);
-
-	status = ice_aq_send_cmd(hw, &desc, data, length, cd);
-	return status;
-}
-
-/**
- * __ice_aq_get_set_rss_lut
- * @hw: pointer to the hardware structure
- * @params: RSS LUT parameters
- * @set: set true to set the table, false to get the table
- *
- * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
- */
-static enum ice_status
-__ice_aq_get_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *params, bool set)
-{
-	u16 flags = 0, vsi_id, lut_type, lut_size, glob_lut_idx, vsi_handle;
-	struct ice_aqc_get_set_rss_lut *cmd_resp;
-	struct ice_aq_desc desc;
-	enum ice_status status;
-	u8 *lut;
-
-	if (!params)
-		return ICE_ERR_PARAM;
-
-	vsi_handle = params->vsi_handle;
-	lut = params->lut;
-
-	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
-		return ICE_ERR_PARAM;
-
-	lut_size = params->lut_size;
-	lut_type = params->lut_type;
-	glob_lut_idx = params->global_lut_id;
-	vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
-
-	cmd_resp = &desc.params.get_set_rss_lut;
-
-	if (set) {
-		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
-		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
-	} else {
-		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
-	}
-
-	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
-					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
-					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
-				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
-
-	switch (lut_type) {
-	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
-	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
-	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
-		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
-			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
-		break;
-	default:
-		status = ICE_ERR_PARAM;
-		goto ice_aq_get_set_rss_lut_exit;
-	}
-
-	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
-		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
-			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
-
-		if (!set)
-			goto ice_aq_get_set_rss_lut_send;
-	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
-		if (!set)
-			goto ice_aq_get_set_rss_lut_send;
-	} else {
-		goto ice_aq_get_set_rss_lut_send;
-	}
-
-	/* LUT size is only valid for Global and PF table types */
-	switch (lut_size) {
-	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
-		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
-			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
-			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
-		break;
-	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
-		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
-			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
-			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
-		break;
-	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
-		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
-			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
-				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
-				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
-			break;
-		}
-		/* fall-through */
-	default:
-		status = ICE_ERR_PARAM;
-		goto ice_aq_get_set_rss_lut_exit;
-	}
-
-ice_aq_get_set_rss_lut_send:
-	cmd_resp->flags = CPU_TO_LE16(flags);
-	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
-
-ice_aq_get_set_rss_lut_exit:
-	return status;
-}
-
-/**
- * ice_aq_get_rss_lut
- * @hw: pointer to the hardware structure
- * @get_params: RSS LUT parameters used to specify which RSS LUT to get
- *
- * get the RSS lookup table, PF or VSI type
- */
-enum ice_status
-ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params)
-{
-	return __ice_aq_get_set_rss_lut(hw, get_params, false);
-}
-
-/**
- * ice_aq_set_rss_lut
- * @hw: pointer to the hardware structure
- * @set_params: RSS LUT parameters used to specify how to set the RSS LUT
- *
- * set the RSS lookup table, PF or VSI type
- */
-enum ice_status
-ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_params)
-{
-	return __ice_aq_get_set_rss_lut(hw, set_params, true);
-}
-
-/**
- * __ice_aq_get_set_rss_key
- * @hw: pointer to the HW struct
- * @vsi_id: VSI FW index
- * @key: pointer to key info struct
- * @set: set true to set the key, false to get the key
- *
- * get (0x0B04) or set (0x0B02) the RSS key per VSI
- */
-static enum
-ice_status __ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
-				    struct ice_aqc_get_set_rss_keys *key,
-				    bool set)
-{
-	struct ice_aqc_get_set_rss_key *cmd_resp;
-	u16 key_size = sizeof(*key);
-	struct ice_aq_desc desc;
-
-	cmd_resp = &desc.params.get_set_rss_key;
-
-	if (set) {
-		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
-		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
-	} else {
-		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
-	}
-
-	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
-					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
-					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
-				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
-
-	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
-}
-
-/**
- * ice_aq_get_rss_key
- * @hw: pointer to the HW struct
- * @vsi_handle: software VSI handle
- * @key: pointer to key info struct
- *
- * get the RSS key per VSI
- */
-enum ice_status
-ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
-		   struct ice_aqc_get_set_rss_keys *key)
-{
-	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
-		return ICE_ERR_PARAM;
-
-	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
-					key, false);
-}
-
-/**
- * ice_aq_set_rss_key
- * @hw: pointer to the HW struct
- * @vsi_handle: software VSI handle
- * @keys: pointer to key info struct
- *
- * set the RSS key per VSI
- */
-enum ice_status
-ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
-		   struct ice_aqc_get_set_rss_keys *keys)
-{
-	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
-		return ICE_ERR_PARAM;
-
-	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
-					keys, true);
-}
-
-/**
- * ice_aq_add_lan_txq
- * @hw: pointer to the hardware structure
- * @num_qgrps: Number of added queue groups
- * @qg_list: list of queue groups to be added
- * @buf_size: size of buffer for indirect command
+ * ice_aq_add_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: Number of added queue groups
+ * @qg_list: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
  * @cd: pointer to command details structure or NULL
  *
  * Add Tx LAN queue (0x0C30)
@@ -3567,400 +2641,107 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
 	return status;
 }
 
-/**
- * ice_aq_move_recfg_lan_txq
- * @hw: pointer to the hardware structure
- * @num_qs: number of queues to move/reconfigure
- * @is_move: true if this operation involves node movement
- * @is_tc_change: true if this operation involves a TC change
- * @subseq_call: true if this operation is a subsequent call
- * @flush_pipe: on timeout, true to flush pipe, false to return EAGAIN
- * @timeout: timeout in units of 100 usec (valid values 0-50)
- * @blocked_cgds: out param, bitmap of CGDs that timed out if returning EAGAIN
- * @buf: struct containing src/dest TEID and per-queue info
- * @buf_size: size of buffer for indirect command
- * @txqs_moved: out param, number of queues successfully moved
- * @cd: pointer to command details structure or NULL
- *
- * Move / Reconfigure Tx LAN queues (0x0C32)
- */
-enum ice_status
-ice_aq_move_recfg_lan_txq(struct ice_hw *hw, u8 num_qs, bool is_move,
-			  bool is_tc_change, bool subseq_call, bool flush_pipe,
-			  u8 timeout, u32 *blocked_cgds,
-			  struct ice_aqc_move_txqs_data *buf, u16 buf_size,
-			  u8 *txqs_moved, struct ice_sq_cd *cd)
-{
-	struct ice_aqc_move_txqs *cmd;
-	struct ice_aq_desc desc;
-	enum ice_status status;
-
-	cmd = &desc.params.move_txqs;
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_move_recfg_txqs);
-
-#define ICE_LAN_TXQ_MOVE_TIMEOUT_MAX 50
-	if (timeout > ICE_LAN_TXQ_MOVE_TIMEOUT_MAX)
-		return ICE_ERR_PARAM;
-
-	if (is_tc_change && !flush_pipe && !blocked_cgds)
-		return ICE_ERR_PARAM;
-
-	if (!is_move && !is_tc_change)
-		return ICE_ERR_PARAM;
-
-	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
-
-	if (is_move)
-		cmd->cmd_type |= ICE_AQC_Q_CMD_TYPE_MOVE;
-
-	if (is_tc_change)
-		cmd->cmd_type |= ICE_AQC_Q_CMD_TYPE_TC_CHANGE;
-
-	if (subseq_call)
-		cmd->cmd_type |= ICE_AQC_Q_CMD_SUBSEQ_CALL;
-
-	if (flush_pipe)
-		cmd->cmd_type |= ICE_AQC_Q_CMD_FLUSH_PIPE;
-
-	cmd->num_qs = num_qs;
-	cmd->timeout = ((timeout << ICE_AQC_Q_CMD_TIMEOUT_S) &
-			ICE_AQC_Q_CMD_TIMEOUT_M);
-
-	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
-
-	if (!status && txqs_moved)
-		*txqs_moved = cmd->num_qs;
-
-	if (hw->adminq.sq_last_status == ICE_AQ_RC_EAGAIN &&
-	    is_tc_change && !flush_pipe)
-		*blocked_cgds = LE32_TO_CPU(cmd->blocked_cgds);
-
-	return status;
-}
-
 /* End of FW Admin Queue command wrappers */
 
 /**
- * ice_write_byte - write a byte to a packed context structure
- * @src_ctx:  the context structure to read from
- * @dest_ctx: the context to be written to
- * @ce_info:  a description of the struct to be filled
- */
-static void
-ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
-{
-	u8 src_byte, dest_byte, mask;
-	u8 *from, *dest;
-	u16 shift_width;
-
-	/* copy from the next struct field */
-	from = src_ctx + ce_info->offset;
-
-	/* prepare the bits and mask */
-	shift_width = ce_info->lsb % 8;
-	mask = (u8)(BIT(ce_info->width) - 1);
-
-	src_byte = *from;
-	src_byte &= mask;
-
-	/* shift to correct alignment */
-	mask <<= shift_width;
-	src_byte <<= shift_width;
-
-	/* get the current bits from the target bit string */
-	dest = dest_ctx + (ce_info->lsb / 8);
-
-	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
-
-	dest_byte &= ~mask;	/* get the bits not changing */
-	dest_byte |= src_byte;	/* add in the new bits */
-
-	/* put it all back */
-	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
-}
-
-/**
- * ice_write_word - write a word to a packed context structure
- * @src_ctx:  the context structure to read from
- * @dest_ctx: the context to be written to
- * @ce_info:  a description of the struct to be filled
- */
-static void
-ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
-{
-	u16 src_word, mask;
-	__le16 dest_word;
-	u8 *from, *dest;
-	u16 shift_width;
-
-	/* copy from the next struct field */
-	from = src_ctx + ce_info->offset;
-
-	/* prepare the bits and mask */
-	shift_width = ce_info->lsb % 8;
-	mask = BIT(ce_info->width) - 1;
-
-	/* don't swizzle the bits until after the mask because the mask bits
-	 * will be in a different bit position on big endian machines
-	 */
-	src_word = *(u16 *)from;
-	src_word &= mask;
-
-	/* shift to correct alignment */
-	mask <<= shift_width;
-	src_word <<= shift_width;
-
-	/* get the current bits from the target bit string */
-	dest = dest_ctx + (ce_info->lsb / 8);
-
-	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
-
-	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
-	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
-
-	/* put it all back */
-	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
-}
-
-/**
- * ice_write_dword - write a dword to a packed context structure
- * @src_ctx:  the context structure to read from
- * @dest_ctx: the context to be written to
- * @ce_info:  a description of the struct to be filled
- */
-static void
-ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
-{
-	u32 src_dword, mask;
-	__le32 dest_dword;
-	u8 *from, *dest;
-	u16 shift_width;
-
-	/* copy from the next struct field */
-	from = src_ctx + ce_info->offset;
-
-	/* prepare the bits and mask */
-	shift_width = ce_info->lsb % 8;
-
-	/* if the field width is exactly 32 on an x86 machine, then the shift
-	 * operation will not work because the SHL instructions count is masked
-	 * to 5 bits so the shift will do nothing
-	 */
-	if (ce_info->width < 32)
-		mask = BIT(ce_info->width) - 1;
-	else
-		mask = (u32)~0;
-
-	/* don't swizzle the bits until after the mask because the mask bits
-	 * will be in a different bit position on big endian machines
-	 */
-	src_dword = *(u32 *)from;
-	src_dword &= mask;
-
-	/* shift to correct alignment */
-	mask <<= shift_width;
-	src_dword <<= shift_width;
-
-	/* get the current bits from the target bit string */
-	dest = dest_ctx + (ce_info->lsb / 8);
-
-	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
-
-	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
-	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
-
-	/* put it all back */
-	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
-}
-
-/**
- * ice_write_qword - write a qword to a packed context structure
- * @src_ctx:  the context structure to read from
- * @dest_ctx: the context to be written to
- * @ce_info:  a description of the struct to be filled
- */
-static void
-ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
-{
-	u64 src_qword, mask;
-	__le64 dest_qword;
-	u8 *from, *dest;
-	u16 shift_width;
-
-	/* copy from the next struct field */
-	from = src_ctx + ce_info->offset;
-
-	/* prepare the bits and mask */
-	shift_width = ce_info->lsb % 8;
-
-	/* if the field width is exactly 64 on an x86 machine, then the shift
-	 * operation will not work because the SHL instructions count is masked
-	 * to 6 bits so the shift will do nothing
-	 */
-	if (ce_info->width < 64)
-		mask = BIT_ULL(ce_info->width) - 1;
-	else
-		mask = (u64)~0;
-
-	/* don't swizzle the bits until after the mask because the mask bits
-	 * will be in a different bit position on big endian machines
-	 */
-	src_qword = *(u64 *)from;
-	src_qword &= mask;
-
-	/* shift to correct alignment */
-	mask <<= shift_width;
-	src_qword <<= shift_width;
-
-	/* get the current bits from the target bit string */
-	dest = dest_ctx + (ce_info->lsb / 8);
-
-	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
-
-	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
-	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
-
-	/* put it all back */
-	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
-}
-
-/**
- * ice_set_ctx - set context bits in packed structure
- * @hw: pointer to the hardware structure
- * @src_ctx:  pointer to a generic non-packed context structure
- * @dest_ctx: pointer to memory for the packed structure
- * @ce_info:  a description of the structure to be transformed
- */
-enum ice_status
-ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
-	    const struct ice_ctx_ele *ce_info)
-{
-	int f;
-
-	for (f = 0; ce_info[f].width; f++) {
-		/* We have to deal with each element of the FW response
-		 * using the correct size so that we are correct regardless
-		 * of the endianness of the machine.
-		 */
-		if (ce_info[f].width > (ce_info[f].size_of * BITS_PER_BYTE)) {
-			ice_debug(hw, ICE_DBG_QCTX, "Field %d width of %d bits larger than size of %d byte(s) ... skipping write\n",
-				  f, ce_info[f].width, ce_info[f].size_of);
-			continue;
-		}
-		switch (ce_info[f].size_of) {
-		case sizeof(u8):
-			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
-			break;
-		case sizeof(u16):
-			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
-			break;
-		case sizeof(u32):
-			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
-			break;
-		case sizeof(u64):
-			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
-			break;
-		default:
-			return ICE_ERR_INVAL_SIZE;
-		}
-	}
-
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_read_byte - read context byte into struct
+ * ice_write_byte - write a byte to a packed context structure
  * @src_ctx:  the context structure to read from
  * @dest_ctx: the context to be written to
  * @ce_info:  a description of the struct to be filled
  */
 static void
-ice_read_byte(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
 {
-	u8 dest_byte, mask;
-	u8 *src, *target;
+	u8 src_byte, dest_byte, mask;
+	u8 *from, *dest;
 	u16 shift_width;
 
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
 	/* prepare the bits and mask */
 	shift_width = ce_info->lsb % 8;
 	mask = (u8)(BIT(ce_info->width) - 1);
 
+	src_byte = *from;
+	src_byte &= mask;
+
 	/* shift to correct alignment */
 	mask <<= shift_width;
+	src_byte <<= shift_width;
 
-	/* get the current bits from the src bit string */
-	src = src_ctx + (ce_info->lsb / 8);
-
-	ice_memcpy(&dest_byte, src, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
-
-	dest_byte &= ~(mask);
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
 
-	dest_byte >>= shift_width;
+	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
 
-	/* get the address from the struct field */
-	target = dest_ctx + ce_info->offset;
+	dest_byte &= ~mask;	/* get the bits not changing */
+	dest_byte |= src_byte;	/* add in the new bits */
 
-	/* put it back in the struct */
-	ice_memcpy(target, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
+	/* put it all back */
+	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
 }
 
 /**
- * ice_read_word - read context word into struct
+ * ice_write_word - write a word to a packed context structure
  * @src_ctx:  the context structure to read from
  * @dest_ctx: the context to be written to
  * @ce_info:  a description of the struct to be filled
  */
 static void
-ice_read_word(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
 {
-	u16 dest_word, mask;
-	u8 *src, *target;
-	__le16 src_word;
+	u16 src_word, mask;
+	__le16 dest_word;
+	u8 *from, *dest;
 	u16 shift_width;
 
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
 	/* prepare the bits and mask */
 	shift_width = ce_info->lsb % 8;
 	mask = BIT(ce_info->width) - 1;
 
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_word = *(u16 *)from;
+	src_word &= mask;
+
 	/* shift to correct alignment */
 	mask <<= shift_width;
+	src_word <<= shift_width;
 
-	/* get the current bits from the src bit string */
-	src = src_ctx + (ce_info->lsb / 8);
-
-	ice_memcpy(&src_word, src, sizeof(src_word), ICE_DMA_TO_NONDMA);
-
-	/* the data in the memory is stored as little endian so mask it
-	 * correctly
-	 */
-	src_word &= ~(CPU_TO_LE16(mask));
-
-	/* get the data back into host order before shifting */
-	dest_word = LE16_TO_CPU(src_word);
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
 
-	dest_word >>= shift_width;
+	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
 
-	/* get the address from the struct field */
-	target = dest_ctx + ce_info->offset;
+	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
+	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
 
-	/* put it back in the struct */
-	ice_memcpy(target, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
+	/* put it all back */
+	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
 }
 
 /**
- * ice_read_dword - read context dword into struct
+ * ice_write_dword - write a dword to a packed context structure
  * @src_ctx:  the context structure to read from
  * @dest_ctx: the context to be written to
  * @ce_info:  a description of the struct to be filled
  */
 static void
-ice_read_dword(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
 {
-	u32 dest_dword, mask;
-	__le32 src_dword;
-	u8 *src, *target;
+	u32 src_dword, mask;
+	__le32 dest_dword;
+	u8 *from, *dest;
 	u16 shift_width;
 
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
 	/* prepare the bits and mask */
 	shift_width = ce_info->lsb % 8;
 
@@ -3973,45 +2754,45 @@ ice_read_dword(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
 	else
 		mask = (u32)~0;
 
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_dword = *(u32 *)from;
+	src_dword &= mask;
+
 	/* shift to correct alignment */
 	mask <<= shift_width;
+	src_dword <<= shift_width;
 
-	/* get the current bits from the src bit string */
-	src = src_ctx + (ce_info->lsb / 8);
-
-	ice_memcpy(&src_dword, src, sizeof(src_dword), ICE_DMA_TO_NONDMA);
-
-	/* the data in the memory is stored as little endian so mask it
-	 * correctly
-	 */
-	src_dword &= ~(CPU_TO_LE32(mask));
-
-	/* get the data back into host order before shifting */
-	dest_dword = LE32_TO_CPU(src_dword);
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
 
-	dest_dword >>= shift_width;
+	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
 
-	/* get the address from the struct field */
-	target = dest_ctx + ce_info->offset;
+	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
+	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
 
-	/* put it back in the struct */
-	ice_memcpy(target, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
+	/* put it all back */
+	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
 }
 
 /**
- * ice_read_qword - read context qword into struct
+ * ice_write_qword - write a qword to a packed context structure
  * @src_ctx:  the context structure to read from
  * @dest_ctx: the context to be written to
  * @ce_info:  a description of the struct to be filled
  */
 static void
-ice_read_qword(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
 {
-	u64 dest_qword, mask;
-	__le64 src_qword;
-	u8 *src, *target;
+	u64 src_qword, mask;
+	__le64 dest_qword;
+	u8 *from, *dest;
 	u16 shift_width;
 
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
 	/* prepare the bits and mask */
 	shift_width = ce_info->lsb % 8;
 
@@ -4024,59 +2805,66 @@ ice_read_qword(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
 	else
 		mask = (u64)~0;
 
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_qword = *(u64 *)from;
+	src_qword &= mask;
+
 	/* shift to correct alignment */
 	mask <<= shift_width;
+	src_qword <<= shift_width;
 
-	/* get the current bits from the src bit string */
-	src = src_ctx + (ce_info->lsb / 8);
-
-	ice_memcpy(&src_qword, src, sizeof(src_qword), ICE_DMA_TO_NONDMA);
-
-	/* the data in the memory is stored as little endian so mask it
-	 * correctly
-	 */
-	src_qword &= ~(CPU_TO_LE64(mask));
-
-	/* get the data back into host order before shifting */
-	dest_qword = LE64_TO_CPU(src_qword);
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
 
-	dest_qword >>= shift_width;
+	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
 
-	/* get the address from the struct field */
-	target = dest_ctx + ce_info->offset;
+	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
+	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
 
-	/* put it back in the struct */
-	ice_memcpy(target, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
+	/* put it all back */
+	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
 }
 
 /**
- * ice_get_ctx - extract context bits from a packed structure
- * @src_ctx:  pointer to a generic packed context structure
- * @dest_ctx: pointer to a generic non-packed context structure
- * @ce_info:  a description of the structure to be read from
+ * ice_set_ctx - set context bits in packed structure
+ * @hw: pointer to the hardware structure
+ * @src_ctx:  pointer to a generic non-packed context structure
+ * @dest_ctx: pointer to memory for the packed structure
+ * @ce_info:  a description of the structure to be transformed
  */
 enum ice_status
-ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
+	    const struct ice_ctx_ele *ce_info)
 {
 	int f;
 
 	for (f = 0; ce_info[f].width; f++) {
+		/* We have to deal with each element of the FW response
+		 * using the correct size so that we are correct regardless
+		 * of the endianness of the machine.
+		 */
+		if (ce_info[f].width > (ce_info[f].size_of * BITS_PER_BYTE)) {
+			ice_debug(hw, ICE_DBG_QCTX, "Field %d width of %d bits larger than size of %d byte(s) ... skipping write\n",
+				  f, ce_info[f].width, ce_info[f].size_of);
+			continue;
+		}
 		switch (ce_info[f].size_of) {
-		case 1:
-			ice_read_byte(src_ctx, dest_ctx, &ce_info[f]);
+		case sizeof(u8):
+			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
 			break;
-		case 2:
-			ice_read_word(src_ctx, dest_ctx, &ce_info[f]);
+		case sizeof(u16):
+			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
 			break;
-		case 4:
-			ice_read_dword(src_ctx, dest_ctx, &ce_info[f]);
+		case sizeof(u32):
+			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
 			break;
-		case 8:
-			ice_read_qword(src_ctx, dest_ctx, &ce_info[f]);
+		case sizeof(u64):
+			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
 			break;
 		default:
-			/* nothing to do, just keep going */
-			break;
+			return ICE_ERR_INVAL_SIZE;
 		}
 	}
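
As context for reviewers: the write helpers introduced above form a small generic bit-packer. Each ice_ctx_ele entry carries the source struct offset, the field width in bits and the lsb position of the field inside the packed destination buffer; the helper masks the host-order value, shifts it into place and merges it into the existing destination bytes. A rough standalone sketch of the byte-sized case, with hypothetical names that are not part of the patch or of any DPDK API:

#include <stdint.h>

/* Illustrative only: mirrors the ice_write_byte() logic shown in the hunk above. */
struct ctx_field {
	uint16_t offset;	/* byte offset of the field in the unpacked struct */
	uint16_t width;		/* field width in bits, at most 8 here */
	uint16_t lsb;		/* bit position of the field in the packed buffer */
};

static void pack_field_u8(const uint8_t *src_ctx, uint8_t *dest_ctx,
			  const struct ctx_field *f)
{
	uint8_t shift = f->lsb % 8;			/* alignment inside the byte */
	uint8_t mask = (uint8_t)((1u << f->width) - 1);
	uint8_t src = src_ctx[f->offset] & mask;	/* keep only the field bits */
	uint8_t *dest = dest_ctx + f->lsb / 8;		/* target byte in the packed buffer */

	mask <<= shift;
	src <<= shift;
	*dest = (uint8_t)((*dest & ~mask) | src);	/* merge, preserving the other bits */
}

The word/dword/qword variants follow the same pattern, with the extra step of masking in host order first and only then converting to little endian, since the packed hardware context is little endian regardless of the host byte order.
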
 
@@ -4350,224 +3138,6 @@ ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap,
 			      ICE_SCHED_NODE_OWNER_LAN);
 }
 
-/**
- * ice_is_main_vsi - checks whether the VSI is main VSI
- * @hw: pointer to the HW struct
- * @vsi_handle: VSI handle
- *
- * Checks whether the VSI is the main VSI (the first PF VSI created on
- * given PF).
- */
-static bool ice_is_main_vsi(struct ice_hw *hw, u16 vsi_handle)
-{
-	return vsi_handle == ICE_MAIN_VSI_HANDLE && hw->vsi_ctx[vsi_handle];
-}
-
-/**
- * ice_replay_pre_init - replay pre initialization
- * @hw: pointer to the HW struct
- * @sw: pointer to switch info struct for which function initializes filters
- *
- * Initializes required config data for VSI, FD, ACL, and RSS before replay.
- */
-static enum ice_status
-ice_replay_pre_init(struct ice_hw *hw, struct ice_switch_info *sw)
-{
-	enum ice_status status;
-	u8 i;
-
-	/* Delete old entries from replay filter list head if there is any */
-	ice_rm_sw_replay_rule_info(hw, sw);
-	/* In start of replay, move entries into replay_rules list, it
-	 * will allow adding rules entries back to filt_rules list,
-	 * which is operational list.
-	 */
-	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
-		LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
-				  &sw->recp_list[i].filt_replay_rules);
-	ice_sched_replay_agg_vsi_preinit(hw);
-
-	status = ice_sched_replay_root_node_bw(hw->port_info);
-	if (status)
-		return status;
-
-	return ice_sched_replay_tc_node_bw(hw->port_info);
-}
-
-/**
- * ice_replay_vsi - replay VSI configuration
- * @hw: pointer to the HW struct
- * @vsi_handle: driver VSI handle
- *
- * Restore all VSI configuration after reset. It is required to call this
- * function with main VSI first.
- */
-enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
-{
-	struct ice_switch_info *sw = hw->switch_info;
-	struct ice_port_info *pi = hw->port_info;
-	enum ice_status status;
-
-	if (!ice_is_vsi_valid(hw, vsi_handle))
-		return ICE_ERR_PARAM;
-
-	/* Replay pre-initialization if there is any */
-	if (ice_is_main_vsi(hw, vsi_handle)) {
-		status = ice_replay_pre_init(hw, sw);
-		if (status)
-			return status;
-	}
-	/* Replay per VSI all RSS configurations */
-	status = ice_replay_rss_cfg(hw, vsi_handle);
-	if (status)
-		return status;
-	/* Replay per VSI all filters */
-	status = ice_replay_vsi_all_fltr(hw, pi, vsi_handle);
-	if (!status)
-		status = ice_replay_vsi_agg(hw, vsi_handle);
-	return status;
-}
-
-/**
- * ice_replay_post - post replay configuration cleanup
- * @hw: pointer to the HW struct
- *
- * Post replay cleanup.
- */
-void ice_replay_post(struct ice_hw *hw)
-{
-	/* Delete old entries from replay filter list head */
-	ice_rm_all_sw_replay_rule_info(hw);
-	ice_sched_replay_agg(hw);
-}
-
-/**
- * ice_stat_update40 - read 40 bit stat from the chip and update stat values
- * @hw: ptr to the hardware info
- * @reg: offset of 64 bit HW register to read from
- * @prev_stat_loaded: bool to specify if previous stats are loaded
- * @prev_stat: ptr to previous loaded stat value
- * @cur_stat: ptr to current stat value
- */
-void
-ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
-		  u64 *prev_stat, u64 *cur_stat)
-{
-	u64 new_data = rd64(hw, reg) & (BIT_ULL(40) - 1);
-
-	/* device stats are not reset at PFR, they likely will not be zeroed
-	 * when the driver starts. Thus, save the value from the first read
-	 * without adding to the statistic value so that we report stats which
-	 * count up from zero.
-	 */
-	if (!prev_stat_loaded) {
-		*prev_stat = new_data;
-		return;
-	}
-
-	/* Calculate the difference between the new and old values, and then
-	 * add it to the software stat value.
-	 */
-	if (new_data >= *prev_stat)
-		*cur_stat += new_data - *prev_stat;
-	else
-		/* to manage the potential roll-over */
-		*cur_stat += (new_data + BIT_ULL(40)) - *prev_stat;
-
-	/* Update the previously stored value to prepare for next read */
-	*prev_stat = new_data;
-}
-
-/**
- * ice_stat_update32 - read 32 bit stat from the chip and update stat values
- * @hw: ptr to the hardware info
- * @reg: offset of HW register to read from
- * @prev_stat_loaded: bool to specify if previous stats are loaded
- * @prev_stat: ptr to previous loaded stat value
- * @cur_stat: ptr to current stat value
- */
-void
-ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
-		  u64 *prev_stat, u64 *cur_stat)
-{
-	u32 new_data;
-
-	new_data = rd32(hw, reg);
-
-	/* device stats are not reset at PFR, they likely will not be zeroed
-	 * when the driver starts. Thus, save the value from the first read
-	 * without adding to the statistic value so that we report stats which
-	 * count up from zero.
-	 */
-	if (!prev_stat_loaded) {
-		*prev_stat = new_data;
-		return;
-	}
-
-	/* Calculate the difference between the new and old values, and then
-	 * add it to the software stat value.
-	 */
-	if (new_data >= *prev_stat)
-		*cur_stat += new_data - *prev_stat;
-	else
-		/* to manage the potential roll-over */
-		*cur_stat += (new_data + BIT_ULL(32)) - *prev_stat;
-
-	/* Update the previously stored value to prepare for next read */
-	*prev_stat = new_data;
-}
-
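
The two stat_update helpers removed just above share one wrap-safe pattern: the hardware counters are 32 or 40 bits wide, are not cleared on PF reset, and can roll over between reads, so the driver keeps the previous raw reading and accumulates a delta that accounts for the wrap. A minimal sketch of that pattern, using a hypothetical helper name:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only; nbits is 32 or 40 for the helpers above and must be < 64. */
static void stat_update_nbits(uint64_t new_data, unsigned int nbits,
			      bool prev_loaded, uint64_t *prev, uint64_t *cur)
{
	uint64_t span = 1ULL << nbits;	/* the counter wraps at 2^nbits */

	new_data &= span - 1;

	if (!prev_loaded) {		/* first read only establishes the baseline */
		*prev = new_data;
		return;
	}

	if (new_data >= *prev)
		*cur += new_data - *prev;
	else				/* counter rolled over since the last read */
		*cur += (new_data + span) - *prev;

	*prev = new_data;
}

For a 40-bit counter, a previous reading of 2^40 - 5 followed by a reading of 3 gives a delta of (3 + 2^40) - (2^40 - 5) = 8 rather than a bogus underflow.
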
-/**
- * ice_stat_update_repc - read GLV_REPC stats from chip and update stat values
- * @hw: ptr to the hardware info
- * @vsi_handle: VSI handle
- * @prev_stat_loaded: bool to specify if the previous stat values are loaded
- * @cur_stats: ptr to current stats structure
- *
- * The GLV_REPC statistic register actually tracks two 16bit statistics, and
- * thus cannot be read using the normal ice_stat_update32 function.
- *
- * Read the GLV_REPC register associated with the given VSI, and update the
- * rx_no_desc and rx_error values in the ice_eth_stats structure.
- *
- * Because the statistics in GLV_REPC stick at 0xFFFF, the register must be
- * cleared each time it's read.
- *
- * Note that the GLV_RDPC register also counts the causes that would trigger
- * GLV_REPC. However, it does not give the finer grained detail about why the
- * packets are being dropped. The GLV_REPC values can be used to distinguish
- * whether Rx packets are dropped due to errors or due to no available
- * descriptors.
- */
-void
-ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
-		     struct ice_eth_stats *cur_stats)
-{
-	u16 vsi_num, no_desc, error_cnt;
-	u32 repc;
-
-	if (!ice_is_vsi_valid(hw, vsi_handle))
-		return;
-
-	vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
-
-	/* If we haven't loaded stats yet, just clear the current value */
-	if (!prev_stat_loaded) {
-		wr32(hw, GLV_REPC(vsi_num), 0);
-		return;
-	}
-
-	repc = rd32(hw, GLV_REPC(vsi_num));
-	no_desc = (repc & GLV_REPC_NO_DESC_CNT_M) >> GLV_REPC_NO_DESC_CNT_S;
-	error_cnt = (repc & GLV_REPC_ERROR_CNT_M) >> GLV_REPC_ERROR_CNT_S;
-
-	/* Clear the count by writing to the stats register */
-	wr32(hw, GLV_REPC(vsi_num), 0);
-
-	cur_stats->rx_no_desc += no_desc;
-	cur_stats->rx_errors += error_cnt;
-}
-
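
ice_stat_update_repc(), also removed above, is a slightly different case: one 32-bit register carries two 16-bit statistics that stick at 0xFFFF, so the helper extracts both halves with mask and shift and then clears the register by writing zero. A compact sketch of that read/split/clear idea, with hypothetical names and an assumed low/high placement of the two fields:

#include <stdint.h>

/* Illustrative only; the exact field layout is an assumption, the pattern is the point. */
#define REPC_NO_DESC_M	0x0000FFFFu	/* assumed: no-descriptor count in the low half */
#define REPC_ERROR_M	0xFFFF0000u	/* assumed: error count in the high half */
#define REPC_ERROR_S	16

static void split_and_clear_repc(volatile uint32_t *reg,
				 uint32_t *rx_no_desc, uint32_t *rx_errors)
{
	uint32_t repc = *reg;		/* read both 16-bit counters in one access */

	*rx_no_desc += repc & REPC_NO_DESC_M;
	*rx_errors += (repc & REPC_ERROR_M) >> REPC_ERROR_S;

	*reg = 0;			/* the counters stick at 0xFFFF, so clear after reading */
}
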
 /**
  * ice_sched_query_elem - query element information from HW
  * @hw: pointer to the HW struct
@@ -4711,21 +3281,6 @@ ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
 	return status;
 }
 
-/**
- * ice_is_phy_caps_an_enabled - check if PHY capabilities autoneg is enabled
- * @caps: get PHY capability data
- */
-bool ice_is_phy_caps_an_enabled(struct ice_aqc_get_phy_caps_data *caps)
-{
-	if (caps->caps & ICE_AQC_PHY_AN_MODE ||
-	    caps->low_power_ctrl_an & (ICE_AQC_PHY_AN_EN_CLAUSE28 |
-				       ICE_AQC_PHY_AN_EN_CLAUSE73 |
-				       ICE_AQC_PHY_AN_EN_CLAUSE37))
-		return true;
-
-	return false;
-}
-
 /**
  * ice_aq_set_lldp_mib - Set the LLDP MIB
  * @hw: pointer to the HW struct
@@ -4758,50 +3313,3 @@ ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
 
 	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
 }
-
-/**
- * ice_fw_supports_lldp_fltr - check NVM version supports lldp_fltr_ctrl
- * @hw: pointer to HW struct
- */
-bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw)
-{
-	if (hw->mac_type != ICE_MAC_E810)
-		return false;
-
-	if (hw->api_maj_ver == ICE_FW_API_LLDP_FLTR_MAJ) {
-		if (hw->api_min_ver > ICE_FW_API_LLDP_FLTR_MIN)
-			return true;
-		if (hw->api_min_ver == ICE_FW_API_LLDP_FLTR_MIN &&
-		    hw->api_patch >= ICE_FW_API_LLDP_FLTR_PATCH)
-			return true;
-	} else if (hw->api_maj_ver > ICE_FW_API_LLDP_FLTR_MAJ) {
-		return true;
-	}
-	return false;
-}
-
-/**
- * ice_lldp_fltr_add_remove - add or remove a LLDP Rx switch filter
- * @hw: pointer to HW struct
- * @vsi_num: absolute HW index for VSI
- * @add: boolean for if adding or removing a filter
- */
-enum ice_status
-ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add)
-{
-	struct ice_aqc_lldp_filter_ctrl *cmd;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.lldp_filter_ctrl;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_filter_ctrl);
-
-	if (add)
-		cmd->cmd_flags = ICE_AQC_LLDP_FILTER_ACTION_ADD;
-	else
-		cmd->cmd_flags = ICE_AQC_LLDP_FILTER_ACTION_DELETE;
-
-	cmd->vsi_num = CPU_TO_LE16(vsi_num);
-
-	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
-}
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 8c16c7a024..1cf03e52e7 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -21,7 +21,6 @@ enum ice_fw_modes {
 enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw);
 void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw);
 enum ice_status ice_init_hw(struct ice_hw *hw);
-void ice_deinit_hw(struct ice_hw *hw);
 enum ice_status ice_check_reset(struct ice_hw *hw);
 enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
 
@@ -32,8 +31,6 @@ void ice_destroy_all_ctrlq(struct ice_hw *hw);
 enum ice_status
 ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		  struct ice_rq_event_info *e, u16 *pending);
-enum ice_status
-ice_get_link_status(struct ice_port_info *pi, bool *link_up);
 enum ice_status ice_update_link_info(struct ice_port_info *pi);
 enum ice_status
 ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
@@ -55,8 +52,6 @@ void ice_clear_pxe_mode(struct ice_hw *hw);
 
 enum ice_status ice_get_caps(struct ice_hw *hw);
 
-void ice_set_safe_mode_caps(struct ice_hw *hw);
-
 /* Define a macro that will align a pointer to point to the next memory address
  * that falls on the given power of 2 (i.e., 2, 4, 8, 16, 32, 64...). For
  * example, given the variable pointer = 0x1006, then after the following call:
@@ -72,18 +67,6 @@ enum ice_status
 ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 		  u32 rxq_index);
 enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
-enum ice_status
-ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
-enum ice_status
-ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
-			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
-			 u32 tx_cmpltnq_index);
-enum ice_status
-ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
-enum ice_status
-ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
-			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
-			  u32 tx_drbell_q_index);
 
 enum ice_status
 ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params);
@@ -99,13 +82,6 @@ enum ice_status
 ice_aq_add_lan_txq(struct ice_hw *hw, u8 count,
 		   struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
 		   struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_move_recfg_lan_txq(struct ice_hw *hw, u8 num_qs, bool is_move,
-			  bool is_tc_change, bool subseq_call, bool flush_pipe,
-			  u8 timeout, u32 *blocked_cgds,
-			  struct ice_aqc_move_txqs_data *buf, u16 buf_size,
-			  u8 *txqs_moved, struct ice_sq_cd *cd);
-
 bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
 enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
 void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
@@ -126,9 +102,6 @@ enum ice_status
 ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
 		    struct ice_aqc_get_phy_caps_data *caps,
 		    struct ice_sq_cd *cd);
-void
-ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
-		    u16 link_speeds_bitmap);
 enum ice_status
 ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
 			struct ice_sq_cd *cd);
@@ -141,27 +114,11 @@ bool ice_fw_supports_link_override(struct ice_hw *hw);
 enum ice_status
 ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
 			      struct ice_port_info *pi);
-bool ice_is_phy_caps_an_enabled(struct ice_aqc_get_phy_caps_data *caps);
-
-enum ice_fc_mode ice_caps_to_fc_mode(u8 caps);
-enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options);
-enum ice_status
-ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
-	   bool ena_auto_link_update);
-bool
-ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *caps,
-			struct ice_aqc_set_phy_cfg_data *cfg);
 void
 ice_copy_phy_caps_to_cfg(struct ice_port_info *pi,
 			 struct ice_aqc_get_phy_caps_data *caps,
 			 struct ice_aqc_set_phy_cfg_data *cfg);
 enum ice_status
-ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
-		enum ice_fec_mode fec);
-enum ice_status
-ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
-			   struct ice_sq_cd *cd);
-enum ice_status
 ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd);
 enum ice_status
 ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
@@ -170,19 +127,6 @@ enum ice_status
 ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
 		      struct ice_sq_cd *cd);
 enum ice_status
-ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
-
-enum ice_status
-ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
-		       struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
-		  u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length,
-		  bool write, struct ice_sq_cd *cd);
-
-enum ice_status
-ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info);
-enum ice_status
 ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 		u16 *q_handle, u16 *q_ids, u32 *q_teids,
 		enum ice_disq_rst_src rst_src, u16 vmvf_num,
@@ -194,19 +138,8 @@ enum ice_status
 ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
 		u8 num_qgrps, struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
 		struct ice_sq_cd *cd);
-enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
-void ice_replay_post(struct ice_hw *hw);
 struct ice_q_ctx *
 ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle);
-void
-ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
-		  u64 *prev_stat, u64 *cur_stat);
-void
-ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
-		  u64 *prev_stat, u64 *cur_stat);
-void
-ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
-		     struct ice_eth_stats *cur_stats);
 enum ice_fw_modes ice_get_fw_mode(struct ice_hw *hw);
 void ice_print_rollback_msg(struct ice_hw *hw);
 enum ice_status
@@ -215,7 +148,4 @@ ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
 enum ice_status
 ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
 		    struct ice_sq_cd *cd);
-bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw);
-enum ice_status
-ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add);
 #endif /* _ICE_COMMON_H_ */
diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 351038528b..09b5d89bc0 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -109,32 +109,6 @@ ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
-/**
- * ice_aq_start_lldp
- * @hw: pointer to the HW struct
- * @persist: True if Start of LLDP Agent needs to be persistent across reboots
- * @cd: pointer to command details structure or NULL
- *
- * Start the embedded LLDP Agent on all ports. (0x0A06)
- */
-enum ice_status
-ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd)
-{
-	struct ice_aqc_lldp_start *cmd;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.lldp_start;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_start);
-
-	cmd->command = ICE_AQ_LLDP_AGENT_START;
-
-	if (persist)
-		cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_ENA;
-
-	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
 /**
  * ice_get_dcbx_status
  * @hw: pointer to the HW struct
@@ -672,49 +646,6 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 	return ret;
 }
 
-/**
- * ice_aq_start_stop_dcbx - Start/Stop DCBX service in FW
- * @hw: pointer to the HW struct
- * @start_dcbx_agent: True if DCBX Agent needs to be started
- *		      False if DCBX Agent needs to be stopped
- * @dcbx_agent_status: FW indicates back the DCBX agent status
- *		       True if DCBX Agent is active
- *		       False if DCBX Agent is stopped
- * @cd: pointer to command details structure or NULL
- *
- * Start/Stop the embedded dcbx Agent. In case that this wrapper function
- * returns ICE_SUCCESS, caller will need to check if FW returns back the same
- * value as stated in dcbx_agent_status, and react accordingly. (0x0A09)
- */
-enum ice_status
-ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
-		       bool *dcbx_agent_status, struct ice_sq_cd *cd)
-{
-	struct ice_aqc_lldp_stop_start_specific_agent *cmd;
-	enum ice_status status;
-	struct ice_aq_desc desc;
-	u16 opcode;
-
-	cmd = &desc.params.lldp_agent_ctrl;
-
-	opcode = ice_aqc_opc_lldp_stop_start_specific_agent;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
-
-	if (start_dcbx_agent)
-		cmd->command = ICE_AQC_START_STOP_AGENT_START_DCBX;
-
-	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-
-	*dcbx_agent_status = false;
-
-	if (status == ICE_SUCCESS &&
-	    cmd->command == ICE_AQC_START_STOP_AGENT_START_DCBX)
-		*dcbx_agent_status = true;
-
-	return status;
-}
-
 /**
  * ice_aq_get_cee_dcb_cfg
  * @hw: pointer to the HW struct
@@ -969,34 +900,6 @@ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
 	return ret;
 }
 
-/**
- * ice_cfg_lldp_mib_change
- * @hw: pointer to the HW struct
- * @ena_mib: enable/disable MIB change event
- *
- * Configure (disable/enable) MIB
- */
-enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib)
-{
-	struct ice_qos_cfg *qos_cfg = &hw->port_info->qos_cfg;
-	enum ice_status ret;
-
-	if (!hw->func_caps.common_cap.dcb)
-		return ICE_ERR_NOT_SUPPORTED;
-
-	/* Get DCBX status */
-	qos_cfg->dcbx_status = ice_get_dcbx_status(hw);
-
-	if (qos_cfg->dcbx_status == ICE_DCBX_STATUS_DIS)
-		return ICE_ERR_NOT_READY;
-
-	ret = ice_aq_cfg_lldp_mib_change(hw, ena_mib, NULL);
-	if (!ret)
-		qos_cfg->is_sw_lldp = !ena_mib;
-
-	return ret;
-}
-
 /**
  * ice_add_ieee_ets_common_tlv
  * @buf: Data buffer to be populated with ice_dcb_ets_cfg data
@@ -1269,45 +1172,6 @@ void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg)
 	*miblen = offset;
 }
 
-/**
- * ice_set_dcb_cfg - Set the local LLDP MIB to FW
- * @pi: port information structure
- *
- * Set DCB configuration to the Firmware
- */
-enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi)
-{
-	u8 mib_type, *lldpmib = NULL;
-	struct ice_dcbx_cfg *dcbcfg;
-	enum ice_status ret;
-	struct ice_hw *hw;
-	u16 miblen;
-
-	if (!pi)
-		return ICE_ERR_PARAM;
-
-	hw = pi->hw;
-
-	/* update the HW local config */
-	dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
-	/* Allocate the LLDPDU */
-	lldpmib = (u8 *)ice_malloc(hw, ICE_LLDPDU_SIZE);
-	if (!lldpmib)
-		return ICE_ERR_NO_MEMORY;
-
-	mib_type = SET_LOCAL_MIB_TYPE_LOCAL_MIB;
-	if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING)
-		mib_type |= SET_LOCAL_MIB_TYPE_CEE_NON_WILLING;
-
-	ice_dcb_cfg_to_lldp(lldpmib, &miblen, dcbcfg);
-	ret = ice_aq_set_lldp_mib(hw, mib_type, (void *)lldpmib, miblen,
-				  NULL);
-
-	ice_free(hw, lldpmib);
-
-	return ret;
-}
-
 /**
  * ice_aq_query_port_ets - query port ETS configuration
  * @pi: port information structure
@@ -1400,28 +1264,3 @@ ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
 	}
 	return status;
 }
-
-/**
- * ice_query_port_ets - query port ETS configuration
- * @pi: port information structure
- * @buf: pointer to buffer
- * @buf_size: buffer size in bytes
- * @cd: pointer to command details structure or NULL
- *
- * query current port ETS configuration and update the
- * SW DB with the TC changes
- */
-enum ice_status
-ice_query_port_ets(struct ice_port_info *pi,
-		   struct ice_aqc_port_ets_elem *buf, u16 buf_size,
-		   struct ice_sq_cd *cd)
-{
-	enum ice_status status;
-
-	ice_acquire_lock(&pi->sched_lock);
-	status = ice_aq_query_port_ets(pi, buf, buf_size, cd);
-	if (!status)
-		status = ice_update_port_tc_tree_cfg(pi, buf);
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
index 8f0e09d50a..157845d592 100644
--- a/drivers/net/ice/base/ice_dcb.h
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -186,14 +186,9 @@ enum ice_status
 ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 		   struct ice_dcbx_cfg *dcbcfg);
 enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi);
-enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi);
 enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change);
 void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg);
 enum ice_status
-ice_query_port_ets(struct ice_port_info *pi,
-		   struct ice_aqc_port_ets_elem *buf, u16 buf_size,
-		   struct ice_sq_cd *cmd_details);
-enum ice_status
 ice_aq_query_port_ets(struct ice_port_info *pi,
 		      struct ice_aqc_port_ets_elem *buf, u16 buf_size,
 		      struct ice_sq_cd *cd);
@@ -204,12 +199,6 @@ enum ice_status
 ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
 		 struct ice_sq_cd *cd);
 enum ice_status
-ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
-		       bool *dcbx_agent_status, struct ice_sq_cd *cd);
-enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib);
-enum ice_status
 ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
 			   struct ice_sq_cd *cd);
 #endif /* _ICE_DCB_H_ */
diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index aeff7af55d..dfc46ade5d 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -816,20 +816,6 @@ ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr)
 				  cntr_id);
 }
 
-/**
- * ice_free_fd_guar_item - Free flow director guaranteed entries
- * @hw: pointer to the hardware structure
- * @cntr_id: counter index that needs to be freed
- * @num_fltr: number of filters to be freed
- */
-enum ice_status
-ice_free_fd_guar_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr)
-{
-	return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_GUARANTEED_ENTRIES,
-				 ICE_AQC_RES_TYPE_FLAG_DEDICATED, num_fltr,
-				 cntr_id);
-}
-
 /**
  * ice_alloc_fd_shrd_item - allocate resource for flow director shared entries
  * @hw: pointer to the hardware structure
@@ -844,31 +830,6 @@ ice_alloc_fd_shrd_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr)
 				  cntr_id);
 }
 
-/**
- * ice_free_fd_shrd_item - Free flow director shared entries
- * @hw: pointer to the hardware structure
- * @cntr_id: counter index that needs to be freed
- * @num_fltr: number of filters to be freed
- */
-enum ice_status
-ice_free_fd_shrd_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr)
-{
-	return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_SHARED_ENTRIES,
-				 ICE_AQC_RES_TYPE_FLAG_DEDICATED, num_fltr,
-				 cntr_id);
-}
-
-/**
- * ice_get_fdir_cnt_all - get the number of Flow Director filters
- * @hw: hardware data structure
- *
- * Returns the number of filters available on device
- */
-int ice_get_fdir_cnt_all(struct ice_hw *hw)
-{
-	return hw->func_caps.fd_fltr_guar + hw->func_caps.fd_fltr_best_effort;
-}
-
 /**
  * ice_pkt_insert_ipv6_addr - insert a be32 IPv6 address into a memory buffer.
  * @pkt: packet buffer
@@ -1254,226 +1215,3 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
 
 	return ICE_SUCCESS;
 }
-
-/**
- * ice_fdir_get_prgm_pkt - generate a training packet
- * @input: flow director filter data structure
- * @pkt: pointer to return filter packet
- * @frag: generate a fragment packet
- */
-enum ice_status
-ice_fdir_get_prgm_pkt(struct ice_fdir_fltr *input, u8 *pkt, bool frag)
-{
-	return ice_fdir_get_gen_prgm_pkt(NULL, input, pkt, frag, false);
-}
-
-/**
- * ice_fdir_has_frag - does flow type have 2 ptypes
- * @flow: flow ptype
- *
- * returns true is there is a fragment packet for this ptype
- */
-bool ice_fdir_has_frag(enum ice_fltr_ptype flow)
-{
-	if (flow == ICE_FLTR_PTYPE_NONF_IPV4_OTHER)
-		return true;
-	else
-		return false;
-}
-
-/**
- * ice_fdir_find_by_idx - find filter with idx
- * @hw: pointer to hardware structure
- * @fltr_idx: index to find.
- *
- * Returns pointer to filter if found or null
- */
-struct ice_fdir_fltr *
-ice_fdir_find_fltr_by_idx(struct ice_hw *hw, u32 fltr_idx)
-{
-	struct ice_fdir_fltr *rule;
-
-	LIST_FOR_EACH_ENTRY(rule, &hw->fdir_list_head, ice_fdir_fltr,
-			    fltr_node) {
-		/* rule ID found in the list */
-		if (fltr_idx == rule->fltr_id)
-			return rule;
-		if (fltr_idx < rule->fltr_id)
-			break;
-	}
-	return NULL;
-}
-
-/**
- * ice_fdir_list_add_fltr - add a new node to the flow director filter list
- * @hw: hardware structure
- * @fltr: filter node to add to structure
- */
-void ice_fdir_list_add_fltr(struct ice_hw *hw, struct ice_fdir_fltr *fltr)
-{
-	struct ice_fdir_fltr *rule, *parent = NULL;
-
-	LIST_FOR_EACH_ENTRY(rule, &hw->fdir_list_head, ice_fdir_fltr,
-			    fltr_node) {
-		/* rule ID found or pass its spot in the list */
-		if (rule->fltr_id >= fltr->fltr_id)
-			break;
-		parent = rule;
-	}
-
-	if (parent)
-		LIST_ADD_AFTER(&fltr->fltr_node, &parent->fltr_node);
-	else
-		LIST_ADD(&fltr->fltr_node, &hw->fdir_list_head);
-}
-
-/**
- * ice_fdir_update_cntrs - increment / decrement filter counter
- * @hw: pointer to hardware structure
- * @flow: filter flow type
- * @acl_fltr: true indicates an ACL filter
- * @add: true implies filters added
- */
-void
-ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow,
-		      bool acl_fltr, bool add)
-{
-	int incr;
-
-	incr = add ? 1 : -1;
-	hw->fdir_active_fltr += incr;
-	if (flow == ICE_FLTR_PTYPE_NONF_NONE || flow >= ICE_FLTR_PTYPE_MAX) {
-		ice_debug(hw, ICE_DBG_SW, "Unknown filter type %d\n", flow);
-	} else {
-		if (acl_fltr)
-			hw->acl_fltr_cnt[flow] += incr;
-		else
-			hw->fdir_fltr_cnt[flow] += incr;
-	}
-}
-
-/**
- * ice_cmp_ipv6_addr - compare 2 IP v6 addresses
- * @a: IP v6 address
- * @b: IP v6 address
- *
- * Returns 0 on equal, returns non-0 if different
- */
-static int ice_cmp_ipv6_addr(__be32 *a, __be32 *b)
-{
-	return memcmp(a, b, 4 * sizeof(__be32));
-}
-
-/**
- * ice_fdir_comp_rules - compare 2 filters
- * @a: a Flow Director filter data structure
- * @b: a Flow Director filter data structure
- * @v6: bool true if v6 filter
- *
- * Returns true if the filters match
- */
-static bool
-ice_fdir_comp_rules(struct ice_fdir_fltr *a,  struct ice_fdir_fltr *b, bool v6)
-{
-	enum ice_fltr_ptype flow_type = a->flow_type;
-
-	/* The calling function already checks that the two filters have the
-	 * same flow_type.
-	 */
-	if (!v6) {
-		if (flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
-		    flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP ||
-		    flow_type == ICE_FLTR_PTYPE_NONF_IPV4_SCTP) {
-			if (a->ip.v4.dst_ip == b->ip.v4.dst_ip &&
-			    a->ip.v4.src_ip == b->ip.v4.src_ip &&
-			    a->ip.v4.dst_port == b->ip.v4.dst_port &&
-			    a->ip.v4.src_port == b->ip.v4.src_port)
-				return true;
-		} else if (flow_type == ICE_FLTR_PTYPE_NONF_IPV4_OTHER) {
-			if (a->ip.v4.dst_ip == b->ip.v4.dst_ip &&
-			    a->ip.v4.src_ip == b->ip.v4.src_ip &&
-			    a->ip.v4.l4_header == b->ip.v4.l4_header &&
-			    a->ip.v4.proto == b->ip.v4.proto &&
-			    a->ip.v4.ip_ver == b->ip.v4.ip_ver &&
-			    a->ip.v4.tos == b->ip.v4.tos)
-				return true;
-		}
-	} else {
-		if (flow_type == ICE_FLTR_PTYPE_NONF_IPV6_UDP ||
-		    flow_type == ICE_FLTR_PTYPE_NONF_IPV6_TCP ||
-		    flow_type == ICE_FLTR_PTYPE_NONF_IPV6_SCTP) {
-			if (a->ip.v6.dst_port == b->ip.v6.dst_port &&
-			    a->ip.v6.src_port == b->ip.v6.src_port &&
-			    !ice_cmp_ipv6_addr(a->ip.v6.dst_ip,
-					       b->ip.v6.dst_ip) &&
-			    !ice_cmp_ipv6_addr(a->ip.v6.src_ip,
-					       b->ip.v6.src_ip))
-				return true;
-		} else if (flow_type == ICE_FLTR_PTYPE_NONF_IPV6_OTHER) {
-			if (a->ip.v6.dst_port == b->ip.v6.dst_port &&
-			    a->ip.v6.src_port == b->ip.v6.src_port)
-				return true;
-		}
-	}
-
-	return false;
-}
-
-/**
- * ice_fdir_is_dup_fltr - test if filter is already in list for PF
- * @hw: hardware data structure
- * @input: Flow Director filter data structure
- *
- * Returns true if the filter is found in the list
- */
-bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input)
-{
-	struct ice_fdir_fltr *rule;
-	bool ret = false;
-
-	LIST_FOR_EACH_ENTRY(rule, &hw->fdir_list_head, ice_fdir_fltr,
-			    fltr_node) {
-		enum ice_fltr_ptype flow_type;
-
-		if (rule->flow_type != input->flow_type)
-			continue;
-
-		flow_type = input->flow_type;
-		if (flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
-		    flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP ||
-		    flow_type == ICE_FLTR_PTYPE_NONF_IPV4_SCTP ||
-		    flow_type == ICE_FLTR_PTYPE_NONF_IPV4_OTHER)
-			ret = ice_fdir_comp_rules(rule, input, false);
-		else
-			ret = ice_fdir_comp_rules(rule, input, true);
-		if (ret) {
-			if (rule->fltr_id == input->fltr_id &&
-			    rule->q_index != input->q_index)
-				ret = false;
-			else
-				break;
-		}
-	}
-
-	return ret;
-}
-
-/**
- * ice_clear_pf_fd_table - admin command to clear FD table for PF
- * @hw: hardware data structure
- *
- * Clears FD table entries for a PF by issuing admin command (direct, 0x0B06)
- */
-enum ice_status ice_clear_pf_fd_table(struct ice_hw *hw)
-{
-	struct ice_aqc_clear_fd_table *cmd;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.clear_fd_table;
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_fd_table);
-	cmd->clear_type = CL_FD_VM_VF_TYPE_PF_IDX;
-	/* vsi_index must be 0 to clear FD table for a PF */
-	cmd->vsi_index = CPU_TO_LE16(0);
-
-	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
-}
diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h
index d363de385d..1f0f5bda7d 100644
--- a/drivers/net/ice/base/ice_fdir.h
+++ b/drivers/net/ice/base/ice_fdir.h
@@ -234,27 +234,11 @@ enum ice_status ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id);
 enum ice_status
 ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr);
 enum ice_status
-ice_free_fd_guar_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr);
-enum ice_status
 ice_alloc_fd_shrd_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr);
-enum ice_status
-ice_free_fd_shrd_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr);
-enum ice_status ice_clear_pf_fd_table(struct ice_hw *hw);
 void
 ice_fdir_get_prgm_desc(struct ice_hw *hw, struct ice_fdir_fltr *input,
 		       struct ice_fltr_desc *fdesc, bool add);
 enum ice_status
 ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
 			  u8 *pkt, bool frag, bool tun);
-enum ice_status
-ice_fdir_get_prgm_pkt(struct ice_fdir_fltr *input, u8 *pkt, bool frag);
-int ice_get_fdir_cnt_all(struct ice_hw *hw);
-bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input);
-bool ice_fdir_has_frag(enum ice_fltr_ptype flow);
-struct ice_fdir_fltr *
-ice_fdir_find_fltr_by_idx(struct ice_hw *hw, u32 fltr_idx);
-void
-ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow,
-		      bool acl_fltr, bool add);
-void ice_fdir_list_add_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input);
 #endif /* _ICE_FDIR_H_ */
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 7594df1696..aec2c63c30 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1950,54 +1950,6 @@ static bool ice_tunnel_port_in_use_hlpr(struct ice_hw *hw, u16 port, u16 *index)
 	return false;
 }
 
-/**
- * ice_tunnel_port_in_use
- * @hw: pointer to the HW structure
- * @port: port to search for
- * @index: optionally returns index
- *
- * Returns whether a port is already in use as a tunnel, and optionally its
- * index
- */
-bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index)
-{
-	bool res;
-
-	ice_acquire_lock(&hw->tnl_lock);
-	res = ice_tunnel_port_in_use_hlpr(hw, port, index);
-	ice_release_lock(&hw->tnl_lock);
-
-	return res;
-}
-
-/**
- * ice_tunnel_get_type
- * @hw: pointer to the HW structure
- * @port: port to search for
- * @type: returns tunnel index
- *
- * For a given port number, will return the type of tunnel.
- */
-bool
-ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type)
-{
-	bool res = false;
-	u16 i;
-
-	ice_acquire_lock(&hw->tnl_lock);
-
-	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
-		if (hw->tnl.tbl[i].in_use && hw->tnl.tbl[i].port == port) {
-			*type = hw->tnl.tbl[i].type;
-			res = true;
-			break;
-		}
-
-	ice_release_lock(&hw->tnl_lock);
-
-	return res;
-}
-
 /**
  * ice_find_free_tunnel_entry
  * @hw: pointer to the HW structure
@@ -3797,61 +3749,6 @@ static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
 	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
-/**
- * ice_clear_hw_tbls - clear HW tables and flow profiles
- * @hw: pointer to the hardware structure
- */
-void ice_clear_hw_tbls(struct ice_hw *hw)
-{
-	u8 i;
-
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
-		struct ice_prof_tcam *prof = &hw->blk[i].prof;
-		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
-		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
-		struct ice_es *es = &hw->blk[i].es;
-
-		if (hw->blk[i].is_list_init) {
-			ice_free_prof_map(hw, i);
-			ice_free_flow_profs(hw, i);
-		}
-
-		ice_free_vsig_tbl(hw, (enum ice_block)i);
-
-		ice_memset(xlt1->ptypes, 0, xlt1->count * sizeof(*xlt1->ptypes),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt1->ptg_tbl, 0,
-			   ICE_MAX_PTGS * sizeof(*xlt1->ptg_tbl),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt1->t, 0, xlt1->count * sizeof(*xlt1->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(xlt2->vsis, 0, xlt2->count * sizeof(*xlt2->vsis),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt2->vsig_tbl, 0,
-			   xlt2->count * sizeof(*xlt2->vsig_tbl),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt2->t, 0, xlt2->count * sizeof(*xlt2->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(prof->t, 0, prof->count * sizeof(*prof->t),
-			   ICE_NONDMA_MEM);
-		ice_memset(prof_redir->t, 0,
-			   prof_redir->count * sizeof(*prof_redir->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(es->t, 0, es->count * sizeof(*es->t) * es->fvw,
-			   ICE_NONDMA_MEM);
-		ice_memset(es->ref_count, 0, es->count * sizeof(*es->ref_count),
-			   ICE_NONDMA_MEM);
-		ice_memset(es->written, 0, es->count * sizeof(*es->written),
-			   ICE_NONDMA_MEM);
-		ice_memset(es->mask_ena, 0, es->count * sizeof(*es->mask_ena),
-			   ICE_NONDMA_MEM);
-	}
-}
-
 /**
  * ice_init_hw_tbls - init hardware table memory
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 214c7a2837..257351adfe 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -44,9 +44,6 @@ ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type,
 enum ice_status
 ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port);
 enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all);
-bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index);
-bool
-ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type);
 
 /* XLT2/VSI group functions */
 enum ice_status
@@ -71,7 +68,6 @@ ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
 enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
 void ice_free_seg(struct ice_hw *hw);
 void ice_fill_blk_tbls(struct ice_hw *hw);
-void ice_clear_hw_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
 ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 1b36c2b897..312e9b1ba4 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1576,26 +1576,6 @@ ice_flow_find_prof_conds(struct ice_hw *hw, enum ice_block blk,
 	return prof;
 }
 
-/**
- * ice_flow_find_prof - Look up a profile matching headers and matched fields
- * @hw: pointer to the HW struct
- * @blk: classification stage
- * @dir: flow direction
- * @segs: array of one or more packet segments that describe the flow
- * @segs_cnt: number of packet segments provided
- */
-u64
-ice_flow_find_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
-		   struct ice_flow_seg_info *segs, u8 segs_cnt)
-{
-	struct ice_flow_prof *p;
-
-	p = ice_flow_find_prof_conds(hw, blk, dir, segs, segs_cnt,
-				     ICE_MAX_VSI, ICE_FLOW_FIND_PROF_CHK_FLDS);
-
-	return p ? p->id : ICE_FLOW_PROF_ID_INVAL;
-}
-
 /**
  * ice_flow_find_prof_id - Look up a profile with given profile ID
  * @hw: pointer to the HW struct
@@ -2087,34 +2067,6 @@ ice_flow_acl_set_xtrct_seq(struct ice_hw *hw, struct ice_flow_prof *prof)
 	return status;
 }
 
-/**
- * ice_flow_assoc_vsig_vsi - associate a VSI with VSIG
- * @hw: pointer to the hardware structure
- * @blk: classification stage
- * @vsi_handle: software VSI handle
- * @vsig: target VSI group
- *
- * Assumption: the caller has already verified that the VSI to
- * be added has the same characteristics as the VSIG and will
- * thereby have access to all resources added to that VSIG.
- */
-enum ice_status
-ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle,
-			u16 vsig)
-{
-	enum ice_status status;
-
-	if (!ice_is_vsi_valid(hw, vsi_handle) || blk >= ICE_BLK_COUNT)
-		return ICE_ERR_PARAM;
-
-	ice_acquire_lock(&hw->fl_profs_locks[blk]);
-	status = ice_add_vsi_flow(hw, blk, ice_get_hw_vsi_num(hw, vsi_handle),
-				  vsig);
-	ice_release_lock(&hw->fl_profs_locks[blk]);
-
-	return status;
-}
-
 /**
  * ice_flow_assoc_prof - associate a VSI with a flow profile
  * @hw: pointer to the hardware structure
@@ -2256,44 +2208,6 @@ ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
 	return status;
 }
 
-/**
- * ice_flow_find_entry - look for a flow entry using its unique ID
- * @hw: pointer to the HW struct
- * @blk: classification stage
- * @entry_id: unique ID to identify this flow entry
- *
- * This function looks for the flow entry with the specified unique ID in all
- * flow profiles of the specified classification stage. If the entry is found,
- * and it returns the handle to the flow entry. Otherwise, it returns
- * ICE_FLOW_ENTRY_ID_INVAL.
- */
-u64 ice_flow_find_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_id)
-{
-	struct ice_flow_entry *found = NULL;
-	struct ice_flow_prof *p;
-
-	ice_acquire_lock(&hw->fl_profs_locks[blk]);
-
-	LIST_FOR_EACH_ENTRY(p, &hw->fl_profs[blk], ice_flow_prof, l_entry) {
-		struct ice_flow_entry *e;
-
-		ice_acquire_lock(&p->entries_lock);
-		LIST_FOR_EACH_ENTRY(e, &p->entries, ice_flow_entry, l_entry)
-			if (e->id == entry_id) {
-				found = e;
-				break;
-			}
-		ice_release_lock(&p->entries_lock);
-
-		if (found)
-			break;
-	}
-
-	ice_release_lock(&hw->fl_profs_locks[blk]);
-
-	return found ? ICE_FLOW_ENTRY_HNDL(found) : ICE_FLOW_ENTRY_HANDLE_INVAL;
-}
-
 /**
  * ice_flow_acl_check_actions - Checks the ACL rule's actions
  * @hw: pointer to the hardware structure
@@ -3162,71 +3076,6 @@ ice_flow_set_fld(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
 	ice_flow_set_fld_ext(seg, fld, t, val_loc, mask_loc, last_loc);
 }
 
-/**
- * ice_flow_set_fld_prefix - sets locations of prefix field from entry's buf
- * @seg: packet segment the field being set belongs to
- * @fld: field to be set
- * @val_loc: if not ICE_FLOW_FLD_OFF_INVAL, location of the value to match from
- *           entry's input buffer
- * @pref_loc: location of prefix value from entry's input buffer
- * @pref_sz: size of the location holding the prefix value
- *
- * This function specifies the locations, in the form of byte offsets from the
- * start of the input buffer for a flow entry, from where the value to match
- * and the IPv4 prefix value can be extracted. These locations are then stored
- * in the flow profile. When adding flow entries to the associated flow profile,
- * these locations can be used to quickly extract the values to create the
- * content of a match entry. This function should only be used for fixed-size
- * data structures.
- */
-void
-ice_flow_set_fld_prefix(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
-			u16 val_loc, u16 pref_loc, u8 pref_sz)
-{
-	/* For this type of field, the "mask" location is for the prefix value's
-	 * location and the "last" location is for the size of the location of
-	 * the prefix value.
-	 */
-	ice_flow_set_fld_ext(seg, fld, ICE_FLOW_FLD_TYPE_PREFIX, val_loc,
-			     pref_loc, (u16)pref_sz);
-}
-
-/**
- * ice_flow_add_fld_raw - sets locations of a raw field from entry's input buf
- * @seg: packet segment the field being set belongs to
- * @off: offset of the raw field from the beginning of the segment in bytes
- * @len: length of the raw pattern to be matched
- * @val_loc: location of the value to match from entry's input buffer
- * @mask_loc: location of mask value from entry's input buffer
- *
- * This function specifies the offset of the raw field to be match from the
- * beginning of the specified packet segment, and the locations, in the form of
- * byte offsets from the start of the input buffer for a flow entry, from where
- * the value to match and the mask value to be extracted. These locations are
- * then stored in the flow profile. When adding flow entries to the associated
- * flow profile, these locations can be used to quickly extract the values to
- * create the content of a match entry. This function should only be used for
- * fixed-size data structures.
- */
-void
-ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len,
-		     u16 val_loc, u16 mask_loc)
-{
-	if (seg->raws_cnt < ICE_FLOW_SEG_RAW_FLD_MAX) {
-		seg->raws[seg->raws_cnt].off = off;
-		seg->raws[seg->raws_cnt].info.type = ICE_FLOW_FLD_TYPE_SIZE;
-		seg->raws[seg->raws_cnt].info.src.val = val_loc;
-		seg->raws[seg->raws_cnt].info.src.mask = mask_loc;
-		/* The "last" field is used to store the length of the field */
-		seg->raws[seg->raws_cnt].info.src.last = len;
-	}
-
-	/* Overflows of "raws" will be handled as an error condition later in
-	 * the flow when this information is processed.
-	 */
-	seg->raws_cnt++;
-}
-
 #define ICE_FLOW_RSS_SEG_HDR_L2_MASKS \
 (ICE_FLOW_SEG_HDR_ETH | ICE_FLOW_SEG_HDR_VLAN)
 
@@ -3293,31 +3142,6 @@ ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u8 seg_cnt,
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_rem_vsi_rss_list - remove VSI from RSS list
- * @hw: pointer to the hardware structure
- * @vsi_handle: software VSI handle
- *
- * Remove the VSI from all RSS configurations in the list.
- */
-void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle)
-{
-	struct ice_rss_cfg *r, *tmp;
-
-	if (LIST_EMPTY(&hw->rss_list_head))
-		return;
-
-	ice_acquire_lock(&hw->rss_locks);
-	LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head,
-				 ice_rss_cfg, l_entry)
-		if (ice_test_and_clear_bit(vsi_handle, r->vsis))
-			if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) {
-				LIST_DEL(&r->l_entry);
-				ice_free(hw, r);
-			}
-	ice_release_lock(&hw->rss_locks);
-}
-
 /**
  * ice_rem_vsi_rss_cfg - remove RSS configurations associated with VSI
  * @hw: pointer to the hardware structure
@@ -3880,34 +3704,3 @@ enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 
 	return status;
 }
-
-/**
- * ice_get_rss_cfg - returns hashed fields for the given header types
- * @hw: pointer to the hardware structure
- * @vsi_handle: software VSI handle
- * @hdrs: protocol header type
- *
- * This function will return the match fields of the first instance of flow
- * profile having the given header types and containing input VSI
- */
-u64 ice_get_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u32 hdrs)
-{
-	u64 rss_hash = ICE_HASH_INVALID;
-	struct ice_rss_cfg *r;
-
-	/* verify if the protocol header is non zero and VSI is valid */
-	if (hdrs == ICE_FLOW_SEG_HDR_NONE || !ice_is_vsi_valid(hw, vsi_handle))
-		return ICE_HASH_INVALID;
-
-	ice_acquire_lock(&hw->rss_locks);
-	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
-			    ice_rss_cfg, l_entry)
-		if (ice_is_bit_set(r->vsis, vsi_handle) &&
-		    r->hash.addl_hdrs == hdrs) {
-			rss_hash = r->hash.hash_flds;
-			break;
-		}
-	ice_release_lock(&hw->rss_locks);
-
-	return rss_hash;
-}
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index 2a9ae66454..2675202240 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -504,9 +504,6 @@ struct ice_flow_action {
 	} data;
 };
 
-u64
-ice_flow_find_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
-		   struct ice_flow_seg_info *segs, u8 segs_cnt);
 enum ice_status
 ice_flow_add_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
 		  u64 prof_id, struct ice_flow_seg_info *segs, u8 segs_cnt,
@@ -518,13 +515,9 @@ enum ice_status
 ice_flow_assoc_prof(struct ice_hw *hw, enum ice_block blk,
 		    struct ice_flow_prof *prof, u16 vsi_handle);
 enum ice_status
-ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle,
-			u16 vsig);
-enum ice_status
 ice_flow_get_hw_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 		     u8 *hw_prof);
 
-u64 ice_flow_find_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_id);
 enum ice_status
 ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 		   u64 entry_id, u16 vsi, enum ice_flow_priority prio,
@@ -535,13 +528,6 @@ ice_flow_rem_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_h);
 void
 ice_flow_set_fld(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
 		 u16 val_loc, u16 mask_loc, u16 last_loc, bool range);
-void
-ice_flow_set_fld_prefix(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
-			u16 val_loc, u16 prefix_loc, u8 prefix_sz);
-void
-ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len,
-		     u16 val_loc, u16 mask_loc);
-void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status
 ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds);
@@ -552,5 +538,4 @@ ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle,
 enum ice_status
 ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle,
 		const struct ice_rss_hash_cfg *cfg);
-u64 ice_get_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u32 hdrs);
 #endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index 7b76af7b6f..75ff992b9c 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -145,39 +145,6 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_read_sr_buf_aq - Reads Shadow RAM buf via AQ
- * @hw: pointer to the HW structure
- * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
- * @words: (in) number of words to read; (out) number of words actually read
- * @data: words read from the Shadow RAM
- *
- * Reads 16 bit words (data buf) from the Shadow RAM. Ownership of the NVM is
- * taken before reading the buffer and later released.
- */
-static enum ice_status
-ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
-{
-	u32 bytes = *words * 2, i;
-	enum ice_status status;
-
-	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
-
-	/* ice_read_flat_nvm takes into account the 4KB AdminQ and Shadow RAM
-	 * sector restrictions necessary when reading from the NVM.
-	 */
-	status = ice_read_flat_nvm(hw, offset * 2, &bytes, (u8 *)data, true);
-
-	/* Report the number of words successfully read */
-	*words = bytes / 2;
-
-	/* Byte swap the words up to the amount we actually read */
-	for (i = 0; i < *words; i++)
-		data[i] = LE16_TO_CPU(((_FORCE_ __le16 *)data)[i]);
-
-	return status;
-}
-
 /**
  * ice_acquire_nvm - Generic request for acquiring the NVM ownership
  * @hw: pointer to the HW structure
@@ -400,65 +367,6 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
 	return ICE_ERR_DOES_NOT_EXIST;
 }
 
-/**
- * ice_read_pba_string - Reads part number string from NVM
- * @hw: pointer to hardware structure
- * @pba_num: stores the part number string from the NVM
- * @pba_num_size: part number string buffer length
- *
- * Reads the part number string from the NVM.
- */
-enum ice_status
-ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size)
-{
-	u16 pba_tlv, pba_tlv_len;
-	enum ice_status status;
-	u16 pba_word, pba_size;
-	u16 i;
-
-	status = ice_get_pfa_module_tlv(hw, &pba_tlv, &pba_tlv_len,
-					ICE_SR_PBA_BLOCK_PTR);
-	if (status != ICE_SUCCESS) {
-		ice_debug(hw, ICE_DBG_INIT, "Failed to read PBA Block TLV.\n");
-		return status;
-	}
-
-	/* pba_size is the next word */
-	status = ice_read_sr_word(hw, (pba_tlv + 2), &pba_size);
-	if (status != ICE_SUCCESS) {
-		ice_debug(hw, ICE_DBG_INIT, "Failed to read PBA Section size.\n");
-		return status;
-	}
-
-	if (pba_tlv_len < pba_size) {
-		ice_debug(hw, ICE_DBG_INIT, "Invalid PBA Block TLV size.\n");
-		return ICE_ERR_INVAL_SIZE;
-	}
-
-	/* Subtract one to get PBA word count (PBA Size word is included in
-	 * total size)
-	 */
-	pba_size--;
-	if (pba_num_size < (((u32)pba_size * 2) + 1)) {
-		ice_debug(hw, ICE_DBG_INIT, "Buffer too small for PBA data.\n");
-		return ICE_ERR_PARAM;
-	}
-
-	for (i = 0; i < pba_size; i++) {
-		status = ice_read_sr_word(hw, (pba_tlv + 2 + 1) + i, &pba_word);
-		if (status != ICE_SUCCESS) {
-			ice_debug(hw, ICE_DBG_INIT, "Failed to read PBA Block word %d.\n", i);
-			return status;
-		}
-
-		pba_num[(i * 2)] = (pba_word >> 8) & 0xFF;
-		pba_num[(i * 2) + 1] = pba_word & 0xFF;
-	}
-	pba_num[(pba_size * 2)] = '\0';
-
-	return status;
-}
-
 /**
  * ice_get_nvm_srev - Read the security revision from the NVM CSS header
  * @hw: pointer to the HW struct
@@ -884,62 +792,6 @@ enum ice_status ice_init_nvm(struct ice_hw *hw)
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_read_sr_buf - Reads Shadow RAM buf and acquire lock if necessary
- * @hw: pointer to the HW structure
- * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
- * @words: (in) number of words to read; (out) number of words actually read
- * @data: words read from the Shadow RAM
- *
- * Reads 16 bit words (data buf) from the SR using the ice_read_nvm_buf_aq
- * method. The buf read is preceded by the NVM ownership take
- * and followed by the release.
- */
-enum ice_status
-ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
-{
-	enum ice_status status;
-
-	status = ice_acquire_nvm(hw, ICE_RES_READ);
-	if (!status) {
-		status = ice_read_sr_buf_aq(hw, offset, words, data);
-		ice_release_nvm(hw);
-	}
-
-	return status;
-}
-
-/**
- * ice_nvm_validate_checksum
- * @hw: pointer to the HW struct
- *
- * Verify NVM PFA checksum validity (0x0706)
- */
-enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw)
-{
-	struct ice_aqc_nvm_checksum *cmd;
-	struct ice_aq_desc desc;
-	enum ice_status status;
-
-	status = ice_acquire_nvm(hw, ICE_RES_READ);
-	if (status)
-		return status;
-
-	cmd = &desc.params.nvm_checksum;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_checksum);
-	cmd->flags = ICE_AQC_NVM_CHECKSUM_VERIFY;
-
-	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
-	ice_release_nvm(hw);
-
-	if (!status)
-		if (LE16_TO_CPU(cmd->checksum) != ICE_AQC_NVM_CHECKSUM_CORRECT)
-			status = ICE_ERR_NVM_CHECKSUM;
-
-	return status;
-}
-
 /**
  * ice_nvm_access_get_features - Return the NVM access features structure
  * @cmd: NVM access command to process
@@ -1129,55 +981,3 @@ ice_nvm_access_write(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd,
 
 	return ICE_SUCCESS;
 }
-
-/**
- * ice_handle_nvm_access - Handle an NVM access request
- * @hw: pointer to the HW struct
- * @cmd: NVM access command info
- * @data: pointer to read or return data
- *
- * Process an NVM access request. Read the command structure information and
- * determine if it is valid. If not, report an error indicating the command
- * was invalid.
- *
- * For valid commands, perform the necessary function, copying the data into
- * the provided data buffer.
- */
-enum ice_status
-ice_handle_nvm_access(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd,
-		      union ice_nvm_access_data *data)
-{
-	u32 module, flags, adapter_info;
-
-	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
-
-	/* Extended flags are currently reserved and must be zero */
-	if ((cmd->config & ICE_NVM_CFG_EXT_FLAGS_M) != 0)
-		return ICE_ERR_PARAM;
-
-	/* Adapter info must match the HW device ID */
-	adapter_info = ice_nvm_access_get_adapter(cmd);
-	if (adapter_info != hw->device_id)
-		return ICE_ERR_PARAM;
-
-	switch (cmd->command) {
-	case ICE_NVM_CMD_READ:
-		module = ice_nvm_access_get_module(cmd);
-		flags = ice_nvm_access_get_flags(cmd);
-
-		/* Getting the driver's NVM features structure shares the same
-		 * command type as reading a register. Read the config field
-		 * to determine if this is a request to get features.
-		 */
-		if (module == ICE_NVM_GET_FEATURES_MODULE &&
-		    flags == ICE_NVM_GET_FEATURES_FLAGS &&
-		    cmd->offset == 0)
-			return ice_nvm_access_get_features(cmd, data);
-		else
-			return ice_nvm_access_read(hw, cmd, data);
-	case ICE_NVM_CMD_WRITE:
-		return ice_nvm_access_write(hw, cmd, data);
-	default:
-		return ICE_ERR_PARAM;
-	}
-}
diff --git a/drivers/net/ice/base/ice_nvm.h b/drivers/net/ice/base/ice_nvm.h
index 8e2eb4df1b..e46562f862 100644
--- a/drivers/net/ice/base/ice_nvm.h
+++ b/drivers/net/ice/base/ice_nvm.h
@@ -82,9 +82,6 @@ enum ice_status
 ice_nvm_access_get_features(struct ice_nvm_access_cmd *cmd,
 			    union ice_nvm_access_data *data);
 enum ice_status
-ice_handle_nvm_access(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd,
-		      union ice_nvm_access_data *data);
-enum ice_status
 ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access);
 void ice_release_nvm(struct ice_hw *hw);
 enum ice_status
@@ -97,11 +94,6 @@ ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data,
 enum ice_status
 ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
 		       u16 module_type);
-enum ice_status
-ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size);
 enum ice_status ice_init_nvm(struct ice_hw *hw);
 enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
-enum ice_status
-ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
-enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
 #endif /* _ICE_NVM_H_ */
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index ac48bbe279..d7f0866dac 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -644,25 +644,6 @@ ice_aq_add_rl_profile(struct ice_hw *hw, u16 num_profiles,
 				 buf, buf_size, num_profiles_added, cd);
 }
 
-/**
- * ice_aq_query_rl_profile - query rate limiting profile(s)
- * @hw: pointer to the HW struct
- * @num_profiles: the number of profile(s) to query
- * @buf: pointer to buffer
- * @buf_size: buffer size in bytes
- * @cd: pointer to command details structure
- *
- * Query RL profile (0x0411)
- */
-enum ice_status
-ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
-			struct ice_aqc_rl_profile_elem *buf, u16 buf_size,
-			struct ice_sq_cd *cd)
-{
-	return ice_aq_rl_profile(hw, ice_aqc_opc_query_rl_profiles,
-				 num_profiles, buf, buf_size, NULL, cd);
-}
-
 /**
  * ice_aq_remove_rl_profile - removes RL profile(s)
  * @hw: pointer to the HW struct
@@ -839,32 +820,6 @@ void ice_sched_cleanup_all(struct ice_hw *hw)
 	hw->max_cgds = 0;
 }
 
-/**
- * ice_aq_cfg_l2_node_cgd - configures L2 node to CGD mapping
- * @hw: pointer to the HW struct
- * @num_l2_nodes: the number of L2 nodes whose CGDs to configure
- * @buf: pointer to buffer
- * @buf_size: buffer size in bytes
- * @cd: pointer to command details structure or NULL
- *
- * Configure L2 Node CGD (0x0414)
- */
-enum ice_status
-ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_l2_nodes,
-		       struct ice_aqc_cfg_l2_node_cgd_elem *buf,
-		       u16 buf_size, struct ice_sq_cd *cd)
-{
-	struct ice_aqc_cfg_l2_node_cgd *cmd;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.cfg_l2_node_cgd;
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_cfg_l2_node_cgd);
-	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
-
-	cmd->num_l2_nodes = CPU_TO_LE16(num_l2_nodes);
-	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
-}
-
 /**
  * ice_sched_add_elems - add nodes to HW and SW DB
  * @pi: port information structure
@@ -1959,137 +1914,6 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 	return status;
 }
 
-/**
- * ice_sched_rm_agg_vsi_entry - remove aggregator related VSI info entry
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- *
- * This function removes single aggregator VSI info entry from
- * aggregator list.
- */
-static void ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle)
-{
-	struct ice_sched_agg_info *agg_info;
-	struct ice_sched_agg_info *atmp;
-
-	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &pi->hw->agg_list,
-				 ice_sched_agg_info,
-				 list_entry) {
-		struct ice_sched_agg_vsi_info *agg_vsi_info;
-		struct ice_sched_agg_vsi_info *vtmp;
-
-		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
-					 &agg_info->agg_vsi_list,
-					 ice_sched_agg_vsi_info, list_entry)
-			if (agg_vsi_info->vsi_handle == vsi_handle) {
-				LIST_DEL(&agg_vsi_info->list_entry);
-				ice_free(pi->hw, agg_vsi_info);
-				return;
-			}
-	}
-}
-
-/**
- * ice_sched_is_leaf_node_present - check for a leaf node in the sub-tree
- * @node: pointer to the sub-tree node
- *
- * This function checks for a leaf node presence in a given sub-tree node.
- */
-static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node)
-{
-	u8 i;
-
-	for (i = 0; i < node->num_children; i++)
-		if (ice_sched_is_leaf_node_present(node->children[i]))
-			return true;
-	/* check for a leaf node */
-	return (node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF);
-}
-
-/**
- * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @owner: LAN or RDMA
- *
- * This function removes the VSI and its LAN or RDMA children nodes from the
- * scheduler tree.
- */
-static enum ice_status
-ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
-{
-	enum ice_status status = ICE_ERR_PARAM;
-	struct ice_vsi_ctx *vsi_ctx;
-	u8 i;
-
-	ice_debug(pi->hw, ICE_DBG_SCHED, "removing VSI %d\n", vsi_handle);
-	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
-		return status;
-	ice_acquire_lock(&pi->sched_lock);
-	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
-	if (!vsi_ctx)
-		goto exit_sched_rm_vsi_cfg;
-
-	ice_for_each_traffic_class(i) {
-		struct ice_sched_node *vsi_node, *tc_node;
-		u8 j = 0;
-
-		tc_node = ice_sched_get_tc_node(pi, i);
-		if (!tc_node)
-			continue;
-
-		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
-		if (!vsi_node)
-			continue;
-
-		if (ice_sched_is_leaf_node_present(vsi_node)) {
-			ice_debug(pi->hw, ICE_DBG_SCHED, "VSI has leaf nodes in TC %d\n", i);
-			status = ICE_ERR_IN_USE;
-			goto exit_sched_rm_vsi_cfg;
-		}
-		while (j < vsi_node->num_children) {
-			if (vsi_node->children[j]->owner == owner) {
-				ice_free_sched_node(pi, vsi_node->children[j]);
-
-				/* reset the counter again since the num
-				 * children will be updated after node removal
-				 */
-				j = 0;
-			} else {
-				j++;
-			}
-		}
-		/* remove the VSI if it has no children */
-		if (!vsi_node->num_children) {
-			ice_free_sched_node(pi, vsi_node);
-			vsi_ctx->sched.vsi_node[i] = NULL;
-
-			/* clean up aggregator related VSI info if any */
-			ice_sched_rm_agg_vsi_info(pi, vsi_handle);
-		}
-		if (owner == ICE_SCHED_NODE_OWNER_LAN)
-			vsi_ctx->sched.max_lanq[i] = 0;
-	}
-	status = ICE_SUCCESS;
-
-exit_sched_rm_vsi_cfg:
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
-
-/**
- * ice_rm_vsi_lan_cfg - remove VSI and its LAN children nodes
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- *
- * This function clears the VSI and its LAN children nodes from scheduler tree
- * for all TCs.
- */
-enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
-{
-	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
-}
-
 /**
  * ice_sched_is_tree_balanced - Check tree nodes are identical or not
  * @hw: pointer to the HW struct
@@ -2114,31 +1938,6 @@ bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node)
 	return ice_sched_check_node(hw, node);
 }
 
-/**
- * ice_aq_query_node_to_root - retrieve the tree topology for a given node TEID
- * @hw: pointer to the HW struct
- * @node_teid: node TEID
- * @buf: pointer to buffer
- * @buf_size: buffer size in bytes
- * @cd: pointer to command details structure or NULL
- *
- * This function retrieves the tree topology from the firmware for a given
- * node TEID to the root node.
- */
-enum ice_status
-ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
-			  struct ice_aqc_txsched_elem_data *buf, u16 buf_size,
-			  struct ice_sq_cd *cd)
-{
-	struct ice_aqc_query_node_to_root *cmd;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.query_node_to_root;
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_node_to_root);
-	cmd->teid = CPU_TO_LE32(node_teid);
-	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
-}
-
 /**
  * ice_get_agg_info - get the aggregator ID
  * @hw: pointer to the hardware structure
@@ -2526,29 +2325,6 @@ ice_rm_agg_cfg_tc(struct ice_port_info *pi, struct ice_sched_agg_info *agg_info,
 	return status;
 }
 
-/**
- * ice_save_agg_tc_bitmap - save aggregator TC bitmap
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc_bitmap: 8 bits TC bitmap
- *
- * Save aggregator TC bitmap. This function needs to be called with scheduler
- * lock held.
- */
-static enum ice_status
-ice_save_agg_tc_bitmap(struct ice_port_info *pi, u32 agg_id,
-		       ice_bitmap_t *tc_bitmap)
-{
-	struct ice_sched_agg_info *agg_info;
-
-	agg_info = ice_get_agg_info(pi->hw, agg_id);
-	if (!agg_info)
-		return ICE_ERR_PARAM;
-	ice_cp_bitmap(agg_info->replay_tc_bitmap, tc_bitmap,
-		      ICE_MAX_TRAFFIC_CLASS);
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_sched_add_agg_cfg - create an aggregator node
  * @pi: port information structure
@@ -2701,32 +2477,6 @@ ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id,
 	return status;
 }
 
-/**
- * ice_cfg_agg - config aggregator node
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @agg_type: aggregator type queue, VSI, or aggregator group
- * @tc_bitmap: bits TC bitmap
- *
- * This function configures aggregator node(s).
- */
-enum ice_status
-ice_cfg_agg(struct ice_port_info *pi, u32 agg_id, enum ice_agg_type agg_type,
-	    u8 tc_bitmap)
-{
-	ice_bitmap_t bitmap = tc_bitmap;
-	enum ice_status status;
-
-	ice_acquire_lock(&pi->sched_lock);
-	status = ice_sched_cfg_agg(pi, agg_id, agg_type,
-				   (ice_bitmap_t *)&bitmap);
-	if (!status)
-		status = ice_save_agg_tc_bitmap(pi, agg_id,
-						(ice_bitmap_t *)&bitmap);
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
-
 /**
  * ice_get_agg_vsi_info - get the aggregator ID
  * @agg_info: aggregator info
@@ -2773,35 +2523,6 @@ ice_get_vsi_agg_info(struct ice_hw *hw, u16 vsi_handle)
 	return NULL;
 }
 
-/**
- * ice_save_agg_vsi_tc_bitmap - save aggregator VSI TC bitmap
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @vsi_handle: software VSI handle
- * @tc_bitmap: TC bitmap of enabled TC(s)
- *
- * Save VSI to aggregator TC bitmap. This function needs to call with scheduler
- * lock held.
- */
-static enum ice_status
-ice_save_agg_vsi_tc_bitmap(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
-			   ice_bitmap_t *tc_bitmap)
-{
-	struct ice_sched_agg_vsi_info *agg_vsi_info;
-	struct ice_sched_agg_info *agg_info;
-
-	agg_info = ice_get_agg_info(pi->hw, agg_id);
-	if (!agg_info)
-		return ICE_ERR_PARAM;
-	/* check if entry already exist */
-	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
-	if (!agg_vsi_info)
-		return ICE_ERR_PARAM;
-	ice_cp_bitmap(agg_vsi_info->replay_tc_bitmap, tc_bitmap,
-		      ICE_MAX_TRAFFIC_CLASS);
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_sched_assoc_vsi_to_agg - associate/move VSI to new/default aggregator
  * @pi: port information structure
@@ -2959,124 +2680,75 @@ ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
 }
 
 /**
- * ice_move_vsi_to_agg - moves VSI to new or default aggregator
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @vsi_handle: software VSI handle
- * @tc_bitmap: TC bitmap of enabled TC(s)
- *
- * Move or associate VSI to a new or default aggregator node.
- */
-enum ice_status
-ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
-		    u8 tc_bitmap)
-{
-	ice_bitmap_t bitmap = tc_bitmap;
-	enum ice_status status;
-
-	ice_acquire_lock(&pi->sched_lock);
-	status = ice_sched_assoc_vsi_to_agg(pi, agg_id, vsi_handle,
-					    (ice_bitmap_t *)&bitmap);
-	if (!status)
-		status = ice_save_agg_vsi_tc_bitmap(pi, agg_id, vsi_handle,
-						    (ice_bitmap_t *)&bitmap);
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
-
-/**
- * ice_rm_agg_cfg - remove aggregator configuration
- * @pi: port information structure
- * @agg_id: aggregator ID
+ * ice_set_clear_cir_bw - set or clear CIR BW
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
  *
- * This function removes aggregator reference to VSI and delete aggregator ID
- * info. It removes the aggregator configuration completely.
+ * Save or clear CIR bandwidth (BW) in the passed param bw_t_info.
  */
-enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id)
+static void ice_set_clear_cir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
 {
-	struct ice_sched_agg_info *agg_info;
-	enum ice_status status = ICE_SUCCESS;
-	u8 tc;
-
-	ice_acquire_lock(&pi->sched_lock);
-	agg_info = ice_get_agg_info(pi->hw, agg_id);
-	if (!agg_info) {
-		status = ICE_ERR_DOES_NOT_EXIST;
-		goto exit_ice_rm_agg_cfg;
-	}
-
-	ice_for_each_traffic_class(tc) {
-		status = ice_rm_agg_cfg_tc(pi, agg_info, tc, true);
-		if (status)
-			goto exit_ice_rm_agg_cfg;
-	}
-
-	if (ice_is_any_bit_set(agg_info->tc_bitmap, ICE_MAX_TRAFFIC_CLASS)) {
-		status = ICE_ERR_IN_USE;
-		goto exit_ice_rm_agg_cfg;
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = 0;
+	} else {
+		/* Save type of BW information */
+		ice_set_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = bw;
 	}
-
-	/* Safe to delete entry now */
-	LIST_DEL(&agg_info->list_entry);
-	ice_free(pi->hw, agg_info);
-
-	/* Remove unused RL profile IDs from HW and SW DB */
-	ice_sched_rm_unused_rl_prof(pi->hw);
-
-exit_ice_rm_agg_cfg:
-	ice_release_lock(&pi->sched_lock);
-	return status;
 }
 
 /**
- * ice_set_clear_cir_bw_alloc - set or clear CIR BW alloc information
+ * ice_set_clear_eir_bw - set or clear EIR BW
  * @bw_t_info: bandwidth type information structure
- * @bw_alloc: Bandwidth allocation information
+ * @bw: bandwidth in Kbps - Kilo bits per sec
  *
- * Save or clear CIR BW alloc information (bw_alloc) in the passed param
- * bw_t_info.
+ * Save or clear EIR bandwidth (BW) in the passed param bw_t_info.
  */
-static void
-ice_set_clear_cir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+static void ice_set_clear_eir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
 {
-	bw_t_info->cir_bw.bw_alloc = bw_alloc;
-	if (bw_t_info->cir_bw.bw_alloc)
-		ice_set_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
-	else
-		ice_clear_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = 0;
+	} else {
+		/* save EIR BW information */
+		ice_set_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = bw;
+	}
 }
 
 /**
- * ice_set_clear_eir_bw_alloc - set or clear EIR BW alloc information
+ * ice_set_clear_shared_bw - set or clear shared BW
  * @bw_t_info: bandwidth type information structure
- * @bw_alloc: Bandwidth allocation information
+ * @bw: bandwidth in Kbps - Kilo bits per sec
  *
- * Save or clear EIR BW alloc information (bw_alloc) in the passed param
- * bw_t_info.
+ * Save or clear shared bandwidth (BW) in the passed param bw_t_info.
  */
-static void
-ice_set_clear_eir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+static void ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
 {
-	bw_t_info->eir_bw.bw_alloc = bw_alloc;
-	if (bw_t_info->eir_bw.bw_alloc)
-		ice_set_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
-	else
-		ice_clear_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = 0;
+	} else {
+		/* save shared BW information */
+		ice_set_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = bw;
+	}
 }
 
 /**
- * ice_sched_save_vsi_bw_alloc - save VSI node's BW alloc information
+ * ice_sched_save_vsi_bw - save VSI node's BW information
  * @pi: port information structure
  * @vsi_handle: sw VSI handle
  * @tc: traffic class
- * @rl_type: rate limit type min or max
- * @bw_alloc: Bandwidth allocation information
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
  *
- * Save BW alloc information of VSI type node for post replay use.
+ * Save BW information of VSI type node for post replay use.
  */
 static enum ice_status
-ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-			    enum ice_rl_type rl_type, u16 bw_alloc)
+ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      enum ice_rl_type rl_type, u32 bw)
 {
 	struct ice_vsi_ctx *vsi_ctx;
 
@@ -3087,100 +2759,7 @@ ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 		return ICE_ERR_PARAM;
 	switch (rl_type) {
 	case ICE_MIN_BW:
-		ice_set_clear_cir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
-					   bw_alloc);
-		break;
-	case ICE_MAX_BW:
-		ice_set_clear_eir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
-					   bw_alloc);
-		break;
-	default:
-		return ICE_ERR_PARAM;
-	}
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_set_clear_cir_bw - set or clear CIR BW
- * @bw_t_info: bandwidth type information structure
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save or clear CIR bandwidth (BW) in the passed param bw_t_info.
- */
-static void ice_set_clear_cir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
-{
-	if (bw == ICE_SCHED_DFLT_BW) {
-		ice_clear_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
-		bw_t_info->cir_bw.bw = 0;
-	} else {
-		/* Save type of BW information */
-		ice_set_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
-		bw_t_info->cir_bw.bw = bw;
-	}
-}
-
-/**
- * ice_set_clear_eir_bw - set or clear EIR BW
- * @bw_t_info: bandwidth type information structure
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save or clear EIR bandwidth (BW) in the passed param bw_t_info.
- */
-static void ice_set_clear_eir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
-{
-	if (bw == ICE_SCHED_DFLT_BW) {
-		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
-		bw_t_info->eir_bw.bw = 0;
-	} else {
-		/* save EIR BW information */
-		ice_set_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
-		bw_t_info->eir_bw.bw = bw;
-	}
-}
-
-/**
- * ice_set_clear_shared_bw - set or clear shared BW
- * @bw_t_info: bandwidth type information structure
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save or clear shared bandwidth (BW) in the passed param bw_t_info.
- */
-static void ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
-{
-	if (bw == ICE_SCHED_DFLT_BW) {
-		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
-		bw_t_info->shared_bw = 0;
-	} else {
-		/* save shared BW information */
-		ice_set_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
-		bw_t_info->shared_bw = bw;
-	}
-}
-
-/**
- * ice_sched_save_vsi_bw - save VSI node's BW information
- * @pi: port information structure
- * @vsi_handle: sw VSI handle
- * @tc: traffic class
- * @rl_type: rate limit type min, max, or shared
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save BW information of VSI type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-		      enum ice_rl_type rl_type, u32 bw)
-{
-	struct ice_vsi_ctx *vsi_ctx;
-
-	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
-		return ICE_ERR_PARAM;
-	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
-	if (!vsi_ctx)
-		return ICE_ERR_PARAM;
-	switch (rl_type) {
-	case ICE_MIN_BW:
-		ice_set_clear_cir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		ice_set_clear_cir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
 		break;
 	case ICE_MAX_BW:
 		ice_set_clear_eir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
@@ -3194,82 +2773,6 @@ ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_set_clear_prio - set or clear priority information
- * @bw_t_info: bandwidth type information structure
- * @prio: priority to save
- *
- * Save or clear priority (prio) in the passed param bw_t_info.
- */
-static void ice_set_clear_prio(struct ice_bw_type_info *bw_t_info, u8 prio)
-{
-	bw_t_info->generic = prio;
-	if (bw_t_info->generic)
-		ice_set_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
-	else
-		ice_clear_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
-}
-
-/**
- * ice_sched_save_vsi_prio - save VSI node's priority information
- * @pi: port information structure
- * @vsi_handle: Software VSI handle
- * @tc: traffic class
- * @prio: priority to save
- *
- * Save priority information of VSI type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_vsi_prio(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-			u8 prio)
-{
-	struct ice_vsi_ctx *vsi_ctx;
-
-	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
-		return ICE_ERR_PARAM;
-	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
-	if (!vsi_ctx)
-		return ICE_ERR_PARAM;
-	if (tc >= ICE_MAX_TRAFFIC_CLASS)
-		return ICE_ERR_PARAM;
-	ice_set_clear_prio(&vsi_ctx->sched.bw_t_info[tc], prio);
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_sched_save_agg_bw_alloc - save aggregator node's BW alloc information
- * @pi: port information structure
- * @agg_id: node aggregator ID
- * @tc: traffic class
- * @rl_type: rate limit type min or max
- * @bw_alloc: bandwidth alloc information
- *
- * Save BW alloc information of AGG type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 tc,
-			    enum ice_rl_type rl_type, u16 bw_alloc)
-{
-	struct ice_sched_agg_info *agg_info;
-
-	agg_info = ice_get_agg_info(pi->hw, agg_id);
-	if (!agg_info)
-		return ICE_ERR_PARAM;
-	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
-		return ICE_ERR_PARAM;
-	switch (rl_type) {
-	case ICE_MIN_BW:
-		ice_set_clear_cir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
-		break;
-	case ICE_MAX_BW:
-		ice_set_clear_eir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
-		break;
-	default:
-		return ICE_ERR_PARAM;
-	}
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_sched_save_agg_bw - save aggregator node's BW information
  * @pi: port information structure
@@ -3284,490 +2787,27 @@ static enum ice_status
 ice_sched_save_agg_bw(struct ice_port_info *pi, u32 agg_id, u8 tc,
 		      enum ice_rl_type rl_type, u32 bw)
 {
-	struct ice_sched_agg_info *agg_info;
-
-	agg_info = ice_get_agg_info(pi->hw, agg_id);
-	if (!agg_info)
-		return ICE_ERR_PARAM;
-	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
-		return ICE_ERR_PARAM;
-	switch (rl_type) {
-	case ICE_MIN_BW:
-		ice_set_clear_cir_bw(&agg_info->bw_t_info[tc], bw);
-		break;
-	case ICE_MAX_BW:
-		ice_set_clear_eir_bw(&agg_info->bw_t_info[tc], bw);
-		break;
-	case ICE_SHARED_BW:
-		ice_set_clear_shared_bw(&agg_info->bw_t_info[tc], bw);
-		break;
-	default:
-		return ICE_ERR_PARAM;
-	}
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_cfg_vsi_bw_lmt_per_tc - configure VSI BW limit per TC
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @tc: traffic class
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function configures BW limit of VSI scheduling node based on TC
- * information.
- */
-enum ice_status
-ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-			  enum ice_rl_type rl_type, u32 bw)
-{
-	enum ice_status status;
-
-	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
-						  ICE_AGG_TYPE_VSI,
-						  tc, rl_type, bw);
-	if (!status) {
-		ice_acquire_lock(&pi->sched_lock);
-		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
-		ice_release_lock(&pi->sched_lock);
-	}
-	return status;
-}
-
-/**
- * ice_cfg_dflt_vsi_bw_lmt_per_tc - configure default VSI BW limit per TC
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @tc: traffic class
- * @rl_type: min or max
- *
- * This function configures default BW limit of VSI scheduling node based on TC
- * information.
- */
-enum ice_status
-ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-			       enum ice_rl_type rl_type)
-{
-	enum ice_status status;
-
-	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
-						  ICE_AGG_TYPE_VSI,
-						  tc, rl_type,
-						  ICE_SCHED_DFLT_BW);
-	if (!status) {
-		ice_acquire_lock(&pi->sched_lock);
-		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type,
-					       ICE_SCHED_DFLT_BW);
-		ice_release_lock(&pi->sched_lock);
-	}
-	return status;
-}
-
-/**
- * ice_cfg_agg_bw_lmt_per_tc - configure aggregator BW limit per TC
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc: traffic class
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function applies BW limit to aggregator scheduling node based on TC
- * information.
- */
-enum ice_status
-ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
-			  enum ice_rl_type rl_type, u32 bw)
-{
-	enum ice_status status;
-
-	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
-						  tc, rl_type, bw);
-	if (!status) {
-		ice_acquire_lock(&pi->sched_lock);
-		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
-		ice_release_lock(&pi->sched_lock);
-	}
-	return status;
-}
-
-/**
- * ice_cfg_agg_bw_dflt_lmt_per_tc - configure aggregator BW default limit per TC
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc: traffic class
- * @rl_type: min or max
- *
- * This function applies default BW limit to aggregator scheduling node based
- * on TC information.
- */
-enum ice_status
-ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
-			       enum ice_rl_type rl_type)
-{
-	enum ice_status status;
-
-	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
-						  tc, rl_type,
-						  ICE_SCHED_DFLT_BW);
-	if (!status) {
-		ice_acquire_lock(&pi->sched_lock);
-		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type,
-					       ICE_SCHED_DFLT_BW);
-		ice_release_lock(&pi->sched_lock);
-	}
-	return status;
-}
-
-/**
- * ice_cfg_vsi_bw_shared_lmt - configure VSI BW shared limit
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @min_bw: minimum bandwidth in Kbps
- * @max_bw: maximum bandwidth in Kbps
- * @shared_bw: shared bandwidth in Kbps
- *
- * Configure shared rate limiter(SRL) of all VSI type nodes across all traffic
- * classes for VSI matching handle.
- */
-enum ice_status
-ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw,
-			  u32 max_bw, u32 shared_bw)
-{
-	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, min_bw, max_bw,
-					       shared_bw);
-}
-
-/**
- * ice_cfg_vsi_bw_no_shared_lmt - configure VSI BW for no shared limiter
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- *
- * This function removes the shared rate limiter(SRL) of all VSI type nodes
- * across all traffic classes for VSI matching handle.
- */
-enum ice_status
-ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle)
-{
-	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle,
-					       ICE_SCHED_DFLT_BW,
-					       ICE_SCHED_DFLT_BW,
-					       ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_cfg_agg_bw_shared_lmt - configure aggregator BW shared limit
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @min_bw: minimum bandwidth in Kbps
- * @max_bw: maximum bandwidth in Kbps
- * @shared_bw: shared bandwidth in Kbps
- *
- * This function configures the shared rate limiter(SRL) of all aggregator type
- * nodes across all traffic classes for aggregator matching agg_id.
- */
-enum ice_status
-ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw,
-			  u32 max_bw, u32 shared_bw)
-{
-	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, min_bw, max_bw,
-					       shared_bw);
-}
-
-/**
- * ice_cfg_agg_bw_no_shared_lmt - configure aggregator BW for no shared limiter
- * @pi: port information structure
- * @agg_id: aggregator ID
- *
- * This function removes the shared rate limiter(SRL) of all aggregator type
- * nodes across all traffic classes for aggregator matching agg_id.
- */
-enum ice_status
-ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id)
-{
-	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, ICE_SCHED_DFLT_BW,
-					       ICE_SCHED_DFLT_BW,
-					       ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_cfg_agg_bw_shared_lmt_per_tc - configure aggregator BW shared limit per tc
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc: traffic class
- * @min_bw: minimum bandwidth in Kbps
- * @max_bw: maximum bandwidth in Kbps
- * @shared_bw: shared bandwidth in Kbps
- *
- * This function configures the shared rate limiter(SRL) of all aggregator type
- * nodes across all traffic classes for aggregator matching agg_id.
- */
-enum ice_status
-ice_cfg_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
-				 u32 min_bw, u32 max_bw, u32 shared_bw)
-{
-	return ice_sched_set_agg_bw_shared_lmt_per_tc(pi, agg_id, tc, min_bw,
-						      max_bw, shared_bw);
-}
-
-/**
- * ice_cfg_agg_bw_shared_lmt_per_tc - configure aggregator BW shared limit per tc
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc: traffic class
- *
- * This function configures the shared rate limiter(SRL) of all aggregator type
- * nodes across all traffic classes for aggregator matching agg_id.
- */
-enum ice_status
-ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc)
-{
-	return ice_sched_set_agg_bw_shared_lmt_per_tc(pi, agg_id, tc,
-						      ICE_SCHED_DFLT_BW,
-						      ICE_SCHED_DFLT_BW,
-						      ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_config_vsi_queue_priority - config VSI queue priority of node
- * @pi: port information structure
- * @num_qs: number of VSI queues
- * @q_ids: queue IDs array
- * @q_prio: queue priority array
- *
- * This function configures the queue node priority (Sibling Priority) of the
- * passed in VSI's queue(s) for a given traffic class (TC).
- */
-enum ice_status
-ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
-		       u8 *q_prio)
-{
-	enum ice_status status = ICE_ERR_PARAM;
-	u16 i;
-
-	ice_acquire_lock(&pi->sched_lock);
-
-	for (i = 0; i < num_qs; i++) {
-		struct ice_sched_node *node;
-
-		node = ice_sched_find_node_by_teid(pi->root, q_ids[i]);
-		if (!node || node->info.data.elem_type !=
-		    ICE_AQC_ELEM_TYPE_LEAF) {
-			status = ICE_ERR_PARAM;
-			break;
-		}
-		/* Configure Priority */
-		status = ice_sched_cfg_sibl_node_prio(pi, node, q_prio[i]);
-		if (status)
-			break;
-	}
-
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
-
-/**
- * ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
- * @pi: port information structure
- * @agg_id: Aggregator ID
- * @num_vsis: number of VSI(s)
- * @vsi_handle_arr: array of software VSI handles
- * @node_prio: pointer to node priority
- * @tc: traffic class
- *
- * This function configures the node priority (Sibling Priority) of the
- * passed in VSI's for a given traffic class (TC) of an Aggregator ID.
- */
-enum ice_status
-ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
-				u16 num_vsis, u16 *vsi_handle_arr,
-				u8 *node_prio, u8 tc)
-{
-	struct ice_sched_agg_vsi_info *agg_vsi_info;
-	struct ice_sched_node *tc_node, *agg_node;
-	enum ice_status status = ICE_ERR_PARAM;
-	struct ice_sched_agg_info *agg_info;
-	bool agg_id_present = false;
-	struct ice_hw *hw = pi->hw;
-	u16 i;
-
-	ice_acquire_lock(&pi->sched_lock);
-	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
-			    list_entry)
-		if (agg_info->agg_id == agg_id) {
-			agg_id_present = true;
-			break;
-		}
-	if (!agg_id_present)
-		goto exit_agg_priority_per_tc;
-
-	tc_node = ice_sched_get_tc_node(pi, tc);
-	if (!tc_node)
-		goto exit_agg_priority_per_tc;
-
-	agg_node = ice_sched_get_agg_node(pi, tc_node, agg_id);
-	if (!agg_node)
-		goto exit_agg_priority_per_tc;
-
-	if (num_vsis > hw->max_children[agg_node->tx_sched_layer])
-		goto exit_agg_priority_per_tc;
-
-	for (i = 0; i < num_vsis; i++) {
-		struct ice_sched_node *vsi_node;
-		bool vsi_handle_valid = false;
-		u16 vsi_handle;
-
-		status = ICE_ERR_PARAM;
-		vsi_handle = vsi_handle_arr[i];
-		if (!ice_is_vsi_valid(hw, vsi_handle))
-			goto exit_agg_priority_per_tc;
-		/* Verify child nodes before applying settings */
-		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
-				    ice_sched_agg_vsi_info, list_entry)
-			if (agg_vsi_info->vsi_handle == vsi_handle) {
-				/* cppcheck-suppress unreadVariable */
-				vsi_handle_valid = true;
-				break;
-			}
-
-		if (!vsi_handle_valid)
-			goto exit_agg_priority_per_tc;
-
-		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
-		if (!vsi_node)
-			goto exit_agg_priority_per_tc;
-
-		if (ice_sched_find_node_in_subtree(hw, agg_node, vsi_node)) {
-			/* Configure Priority */
-			status = ice_sched_cfg_sibl_node_prio(pi, vsi_node,
-							      node_prio[i]);
-			if (status)
-				break;
-			status = ice_sched_save_vsi_prio(pi, vsi_handle, tc,
-							 node_prio[i]);
-			if (status)
-				break;
-		}
-	}
-
-exit_agg_priority_per_tc:
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
-
-/**
- * ice_cfg_vsi_bw_alloc - config VSI BW alloc per TC
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @ena_tcmap: enabled TC map
- * @rl_type: Rate limit type CIR/EIR
- * @bw_alloc: Array of BW alloc
- *
- * This function configures the BW allocation of the passed in VSI's
- * node(s) for enabled traffic class.
- */
-enum ice_status
-ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
-		     enum ice_rl_type rl_type, u8 *bw_alloc)
-{
-	enum ice_status status = ICE_SUCCESS;
-	u8 tc;
-
-	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
-		return ICE_ERR_PARAM;
-
-	ice_acquire_lock(&pi->sched_lock);
-
-	/* Return success if no nodes are present across TC */
-	ice_for_each_traffic_class(tc) {
-		struct ice_sched_node *tc_node, *vsi_node;
-
-		if (!ice_is_tc_ena(ena_tcmap, tc))
-			continue;
-
-		tc_node = ice_sched_get_tc_node(pi, tc);
-		if (!tc_node)
-			continue;
-
-		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
-		if (!vsi_node)
-			continue;
-
-		status = ice_sched_cfg_node_bw_alloc(pi->hw, vsi_node, rl_type,
-						     bw_alloc[tc]);
-		if (status)
-			break;
-		status = ice_sched_save_vsi_bw_alloc(pi, vsi_handle, tc,
-						     rl_type, bw_alloc[tc]);
-		if (status)
-			break;
-	}
-
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
-
-/**
- * ice_cfg_agg_bw_alloc - config aggregator BW alloc
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @ena_tcmap: enabled TC map
- * @rl_type: rate limit type CIR/EIR
- * @bw_alloc: array of BW alloc
- *
- * This function configures the BW allocation of passed in aggregator for
- * enabled traffic class(s).
- */
-enum ice_status
-ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
-		     enum ice_rl_type rl_type, u8 *bw_alloc)
-{
-	struct ice_sched_agg_info *agg_info;
-	bool agg_id_present = false;
-	enum ice_status status = ICE_SUCCESS;
-	struct ice_hw *hw = pi->hw;
-	u8 tc;
-
-	ice_acquire_lock(&pi->sched_lock);
-	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
-			    list_entry)
-		if (agg_info->agg_id == agg_id) {
-			agg_id_present = true;
-			break;
-		}
-	if (!agg_id_present) {
-		status = ICE_ERR_PARAM;
-		goto exit_cfg_agg_bw_alloc;
-	}
-
-	/* Return success if no nodes are present across TC */
-	ice_for_each_traffic_class(tc) {
-		struct ice_sched_node *tc_node, *agg_node;
-
-		if (!ice_is_tc_ena(ena_tcmap, tc))
-			continue;
-
-		tc_node = ice_sched_get_tc_node(pi, tc);
-		if (!tc_node)
-			continue;
-
-		agg_node = ice_sched_get_agg_node(pi, tc_node, agg_id);
-		if (!agg_node)
-			continue;
+	struct ice_sched_agg_info *agg_info;
 
-		status = ice_sched_cfg_node_bw_alloc(hw, agg_node, rl_type,
-						     bw_alloc[tc]);
-		if (status)
-			break;
-		status = ice_sched_save_agg_bw_alloc(pi, agg_id, tc, rl_type,
-						     bw_alloc[tc]);
-		if (status)
-			break;
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
 	}
-
-exit_cfg_agg_bw_alloc:
-	ice_release_lock(&pi->sched_lock);
-	return status;
+	return ICE_SUCCESS;
 }
 
 /**
@@ -4328,362 +3368,6 @@ ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer)
 	return ICE_ERR_CFG;
 }
 
-/**
- * ice_sched_save_q_bw - save queue node's BW information
- * @q_ctx: queue context structure
- * @rl_type: rate limit type min, max, or shared
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save BW information of queue type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_q_bw(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, u32 bw)
-{
-	switch (rl_type) {
-	case ICE_MIN_BW:
-		ice_set_clear_cir_bw(&q_ctx->bw_t_info, bw);
-		break;
-	case ICE_MAX_BW:
-		ice_set_clear_eir_bw(&q_ctx->bw_t_info, bw);
-		break;
-	case ICE_SHARED_BW:
-		ice_set_clear_shared_bw(&q_ctx->bw_t_info, bw);
-		break;
-	default:
-		return ICE_ERR_PARAM;
-	}
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_sched_set_q_bw_lmt - sets queue BW limit
- * @pi: port information structure
- * @vsi_handle: sw VSI handle
- * @tc: traffic class
- * @q_handle: software queue handle
- * @rl_type: min, max, or shared
- * @bw: bandwidth in Kbps
- *
- * This function sets BW limit of queue scheduling node.
- */
-static enum ice_status
-ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-		       u16 q_handle, enum ice_rl_type rl_type, u32 bw)
-{
-	enum ice_status status = ICE_ERR_PARAM;
-	struct ice_sched_node *node;
-	struct ice_q_ctx *q_ctx;
-
-	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
-		return ICE_ERR_PARAM;
-	ice_acquire_lock(&pi->sched_lock);
-	q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
-	if (!q_ctx)
-		goto exit_q_bw_lmt;
-	node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
-	if (!node) {
-		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
-		goto exit_q_bw_lmt;
-	}
-
-	/* Return error if it is not a leaf node */
-	if (node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF)
-		goto exit_q_bw_lmt;
-
-	/* SRL bandwidth layer selection */
-	if (rl_type == ICE_SHARED_BW) {
-		u8 sel_layer; /* selected layer */
-
-		sel_layer = ice_sched_get_rl_prof_layer(pi, rl_type,
-							node->tx_sched_layer);
-		if (sel_layer >= pi->hw->num_tx_sched_layers) {
-			status = ICE_ERR_PARAM;
-			goto exit_q_bw_lmt;
-		}
-		status = ice_sched_validate_srl_node(node, sel_layer);
-		if (status)
-			goto exit_q_bw_lmt;
-	}
-
-	if (bw == ICE_SCHED_DFLT_BW)
-		status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
-	else
-		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
-
-	if (!status)
-		status = ice_sched_save_q_bw(q_ctx, rl_type, bw);
-
-exit_q_bw_lmt:
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
-
-/**
- * ice_cfg_q_bw_lmt - configure queue BW limit
- * @pi: port information structure
- * @vsi_handle: sw VSI handle
- * @tc: traffic class
- * @q_handle: software queue handle
- * @rl_type: min, max, or shared
- * @bw: bandwidth in Kbps
- *
- * This function configures BW limit of queue scheduling node.
- */
-enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-		 u16 q_handle, enum ice_rl_type rl_type, u32 bw)
-{
-	return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
-				      bw);
-}
-
-/**
- * ice_cfg_q_bw_dflt_lmt - configure queue BW default limit
- * @pi: port information structure
- * @vsi_handle: sw VSI handle
- * @tc: traffic class
- * @q_handle: software queue handle
- * @rl_type: min, max, or shared
- *
- * This function configures BW default limit of queue scheduling node.
- */
-enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-		      u16 q_handle, enum ice_rl_type rl_type)
-{
-	return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
-				      ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_sched_save_tc_node_bw - save TC node BW limit
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function saves the modified values of bandwidth settings for later
- * replay purpose (restore) after reset.
- */
-static enum ice_status
-ice_sched_save_tc_node_bw(struct ice_port_info *pi, u8 tc,
-			  enum ice_rl_type rl_type, u32 bw)
-{
-	if (tc >= ICE_MAX_TRAFFIC_CLASS)
-		return ICE_ERR_PARAM;
-	switch (rl_type) {
-	case ICE_MIN_BW:
-		ice_set_clear_cir_bw(&pi->tc_node_bw_t_info[tc], bw);
-		break;
-	case ICE_MAX_BW:
-		ice_set_clear_eir_bw(&pi->tc_node_bw_t_info[tc], bw);
-		break;
-	case ICE_SHARED_BW:
-		ice_set_clear_shared_bw(&pi->tc_node_bw_t_info[tc], bw);
-		break;
-	default:
-		return ICE_ERR_PARAM;
-	}
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_sched_set_tc_node_bw_lmt - sets TC node BW limit
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function configures bandwidth limit of TC node.
- */
-static enum ice_status
-ice_sched_set_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
-			     enum ice_rl_type rl_type, u32 bw)
-{
-	enum ice_status status = ICE_ERR_PARAM;
-	struct ice_sched_node *tc_node;
-
-	if (tc >= ICE_MAX_TRAFFIC_CLASS)
-		return status;
-	ice_acquire_lock(&pi->sched_lock);
-	tc_node = ice_sched_get_tc_node(pi, tc);
-	if (!tc_node)
-		goto exit_set_tc_node_bw;
-	if (bw == ICE_SCHED_DFLT_BW)
-		status = ice_sched_set_node_bw_dflt_lmt(pi, tc_node, rl_type);
-	else
-		status = ice_sched_set_node_bw_lmt(pi, tc_node, rl_type, bw);
-	if (!status)
-		status = ice_sched_save_tc_node_bw(pi, tc, rl_type, bw);
-
-exit_set_tc_node_bw:
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
-
-/**
- * ice_cfg_tc_node_bw_lmt - configure TC node BW limit
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function configures BW limit of TC node.
- * Note: The minimum guaranteed reservation is done via DCBX.
- */
-enum ice_status
-ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
-		       enum ice_rl_type rl_type, u32 bw)
-{
-	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, bw);
-}
-
-/**
- * ice_cfg_tc_node_bw_dflt_lmt - configure TC node BW default limit
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- *
- * This function configures BW default limit of TC node.
- */
-enum ice_status
-ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
-			    enum ice_rl_type rl_type)
-{
-	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_sched_save_tc_node_bw_alloc - save TC node's BW alloc information
- * @pi: port information structure
- * @tc: traffic class
- * @rl_type: rate limit type min or max
- * @bw_alloc: Bandwidth allocation information
- *
- * Save BW alloc information of VSI type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
-				enum ice_rl_type rl_type, u16 bw_alloc)
-{
-	if (tc >= ICE_MAX_TRAFFIC_CLASS)
-		return ICE_ERR_PARAM;
-	switch (rl_type) {
-	case ICE_MIN_BW:
-		ice_set_clear_cir_bw_alloc(&pi->tc_node_bw_t_info[tc],
-					   bw_alloc);
-		break;
-	case ICE_MAX_BW:
-		ice_set_clear_eir_bw_alloc(&pi->tc_node_bw_t_info[tc],
-					   bw_alloc);
-		break;
-	default:
-		return ICE_ERR_PARAM;
-	}
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_sched_set_tc_node_bw_alloc - set TC node BW alloc
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw_alloc: bandwidth alloc
- *
- * This function configures bandwidth alloc of TC node, also saves the
- * changed settings for replay purpose, and return success if it succeeds
- * in modifying bandwidth alloc setting.
- */
-static enum ice_status
-ice_sched_set_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
-			       enum ice_rl_type rl_type, u8 bw_alloc)
-{
-	enum ice_status status = ICE_ERR_PARAM;
-	struct ice_sched_node *tc_node;
-
-	if (tc >= ICE_MAX_TRAFFIC_CLASS)
-		return status;
-	ice_acquire_lock(&pi->sched_lock);
-	tc_node = ice_sched_get_tc_node(pi, tc);
-	if (!tc_node)
-		goto exit_set_tc_node_bw_alloc;
-	status = ice_sched_cfg_node_bw_alloc(pi->hw, tc_node, rl_type,
-					     bw_alloc);
-	if (status)
-		goto exit_set_tc_node_bw_alloc;
-	status = ice_sched_save_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
-
-exit_set_tc_node_bw_alloc:
-	ice_release_lock(&pi->sched_lock);
-	return status;
-}
-
-/**
- * ice_cfg_tc_node_bw_alloc - configure TC node BW alloc
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw_alloc: bandwidth alloc
- *
- * This function configures BW limit of TC node.
- * Note: The minimum guaranteed reservation is done via DCBX.
- */
-enum ice_status
-ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
-			 enum ice_rl_type rl_type, u8 bw_alloc)
-{
-	return ice_sched_set_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
-}
-
-/**
- * ice_sched_set_agg_bw_dflt_lmt - set aggregator node's BW limit to default
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- *
- * This function retrieves the aggregator ID based on VSI ID and TC,
- * and sets node's BW limit to default. This function needs to be
- * called with the scheduler lock held.
- */
-enum ice_status
-ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle)
-{
-	struct ice_vsi_ctx *vsi_ctx;
-	enum ice_status status = ICE_SUCCESS;
-	u8 tc;
-
-	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
-		return ICE_ERR_PARAM;
-	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
-	if (!vsi_ctx)
-		return ICE_ERR_PARAM;
-
-	ice_for_each_traffic_class(tc) {
-		struct ice_sched_node *node;
-
-		node = vsi_ctx->sched.ag_node[tc];
-		if (!node)
-			continue;
-
-		/* Set min profile to default */
-		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MIN_BW);
-		if (status)
-			break;
-
-		/* Set max profile to default */
-		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MAX_BW);
-		if (status)
-			break;
-
-		/* Remove shared profile, if there is one */
-		status = ice_sched_set_node_bw_dflt_lmt(pi, node,
-							ICE_SHARED_BW);
-		if (status)
-			break;
-	}
-
-	return status;
-}
-
 /**
  * ice_sched_get_node_by_id_type - get node from ID type
  * @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 8b275637a4..cd8b0c065a 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -74,14 +74,6 @@ struct ice_sched_agg_info {
 
 /* FW AQ command calls */
 enum ice_status
-ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
-			struct ice_aqc_rl_profile_elem *buf, u16 buf_size,
-			struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_nodes,
-		       struct ice_aqc_cfg_l2_node_cgd_elem *buf, u16 buf_size,
-		       struct ice_sq_cd *cd);
-enum ice_status
 ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
 			 struct ice_aqc_txsched_elem_data *buf, u16 buf_size,
 			 u16 *elems_ret, struct ice_sq_cd *cd);
@@ -110,83 +102,16 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 enum ice_status
 ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 		  u8 owner, bool enable);
-enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
 struct ice_sched_node *
 ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		       u16 vsi_handle);
 bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node);
-enum ice_status
-ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
-			  struct ice_aqc_txsched_elem_data *buf, u16 buf_size,
-			  struct ice_sq_cd *cd);
 
 /* Tx scheduler rate limiter functions */
-enum ice_status
-ice_cfg_agg(struct ice_port_info *pi, u32 agg_id,
-	    enum ice_agg_type agg_type, u8 tc_bitmap);
-enum ice_status
-ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
-		    u8 tc_bitmap);
-enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id);
-enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-		 u16 q_handle, enum ice_rl_type rl_type, u32 bw);
-enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-		      u16 q_handle, enum ice_rl_type rl_type);
-enum ice_status
-ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
-		       enum ice_rl_type rl_type, u32 bw);
-enum ice_status
-ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
-			    enum ice_rl_type rl_type);
-enum ice_status
-ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-			  enum ice_rl_type rl_type, u32 bw);
-enum ice_status
-ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
-			       enum ice_rl_type rl_type);
-enum ice_status
-ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
-			  enum ice_rl_type rl_type, u32 bw);
-enum ice_status
-ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
-			       enum ice_rl_type rl_type);
-enum ice_status
-ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw,
-			  u32 max_bw, u32 shared_bw);
-enum ice_status
-ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle);
-enum ice_status
-ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw,
-			  u32 max_bw, u32 shared_bw);
-enum ice_status
-ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id);
-enum ice_status
-ice_cfg_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
-				 u32 min_bw, u32 max_bw, u32 shared_bw);
-enum ice_status
-ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
-				    u8 tc);
-enum ice_status
-ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
-		       u8 *q_prio);
-enum ice_status
-ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
-		     enum ice_rl_type rl_type, u8 *bw_alloc);
-enum ice_status
-ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
-				u16 num_vsis, u16 *vsi_handle_arr,
-				u8 *node_prio, u8 tc);
-enum ice_status
-ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
-		     enum ice_rl_type rl_type, u8 *bw_alloc);
 bool
 ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
 			       struct ice_sched_node *node);
 enum ice_status
-ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle);
-enum ice_status
 ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
 				 enum ice_agg_type agg_type, u8 tc,
 				 enum ice_rl_type rl_type, u32 bw);
@@ -203,9 +128,6 @@ ice_sched_set_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
 enum ice_status
 ice_sched_cfg_sibl_node_prio(struct ice_port_info *pi,
 			     struct ice_sched_node *node, u8 priority);
-enum ice_status
-ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
-			 enum ice_rl_type rl_type, u8 bw_alloc);
 enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
 void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
 void ice_sched_replay_agg(struct ice_hw *hw);
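The ice_cfg_*_bw_* prototypes dropped above were wrappers around scheduler-node
helpers that this header keeps exporting. As a rough sketch (not part of the
patch), a caller that previously used the removed ice_cfg_vsi_bw_lmt_per_tc()
could get the same effect through the retained ice_sched_set_node_bw_lmt_per_tc();
the TC index and the 100000 bandwidth value are placeholders, and
ICE_AGG_TYPE_VSI/ICE_MAX_BW are assumed to keep their existing definitions in the
base code:

	/* illustrative sketch only, not part of the patch */
	static enum ice_status
	cap_vsi_tc_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
	{
		/* limit the max bandwidth of the VSI node on the given TC */
		return ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
							ICE_AGG_TYPE_VSI, tc,
							ICE_MAX_BW, 100000);
	}
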
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index dc55d7e3ce..45ebf3c136 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -1848,219 +1848,6 @@ ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp_elem *buf,
 	return status;
 }
 
-/**
- * ice_alloc_rss_global_lut - allocate a RSS global LUT
- * @hw: pointer to the HW struct
- * @shared_res: true to allocate as a shared resource and false to allocate as a dedicated resource
- * @global_lut_id: output parameter for the RSS global LUT's ID
- */
-enum ice_status ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id)
-{
-	struct ice_aqc_alloc_free_res_elem *sw_buf;
-	enum ice_status status;
-	u16 buf_len;
-
-	buf_len = ice_struct_size(sw_buf, elem, 1);
-	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
-	if (!sw_buf)
-		return ICE_ERR_NO_MEMORY;
-
-	sw_buf->num_elems = CPU_TO_LE16(1);
-	sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_GLOBAL_RSS_HASH |
-				       (shared_res ? ICE_AQC_RES_TYPE_FLAG_SHARED :
-				       ICE_AQC_RES_TYPE_FLAG_DEDICATED));
-
-	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, ice_aqc_opc_alloc_res, NULL);
-	if (status) {
-		ice_debug(hw, ICE_DBG_RES, "Failed to allocate %s RSS global LUT, status %d\n",
-			  shared_res ? "shared" : "dedicated", status);
-		goto ice_alloc_global_lut_exit;
-	}
-
-	*global_lut_id = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp);
-
-ice_alloc_global_lut_exit:
-	ice_free(hw, sw_buf);
-	return status;
-}
-
-/**
- * ice_free_rss_global_lut - free a RSS global LUT
- * @hw: pointer to the HW struct
- * @global_lut_id: ID of the RSS global LUT to free
- */
-enum ice_status ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id)
-{
-	struct ice_aqc_alloc_free_res_elem *sw_buf;
-	u16 buf_len, num_elems = 1;
-	enum ice_status status;
-
-	buf_len = ice_struct_size(sw_buf, elem, num_elems);
-	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
-	if (!sw_buf)
-		return ICE_ERR_NO_MEMORY;
-
-	sw_buf->num_elems = CPU_TO_LE16(num_elems);
-	sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_GLOBAL_RSS_HASH);
-	sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(global_lut_id);
-
-	status = ice_aq_alloc_free_res(hw, num_elems, sw_buf, buf_len, ice_aqc_opc_free_res, NULL);
-	if (status)
-		ice_debug(hw, ICE_DBG_RES, "Failed to free RSS global LUT %d, status %d\n",
-			  global_lut_id, status);
-
-	ice_free(hw, sw_buf);
-	return status;
-}
-
-/**
- * ice_alloc_sw - allocate resources specific to switch
- * @hw: pointer to the HW struct
- * @ena_stats: true to turn on VEB stats
- * @shared_res: true for shared resource, false for dedicated resource
- * @sw_id: switch ID returned
- * @counter_id: VEB counter ID returned
- *
- * allocates switch resources (SWID and VEB counter) (0x0208)
- */
-enum ice_status
-ice_alloc_sw(struct ice_hw *hw, bool ena_stats, bool shared_res, u16 *sw_id,
-	     u16 *counter_id)
-{
-	struct ice_aqc_alloc_free_res_elem *sw_buf;
-	struct ice_aqc_res_elem *sw_ele;
-	enum ice_status status;
-	u16 buf_len;
-
-	buf_len = ice_struct_size(sw_buf, elem, 1);
-	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
-	if (!sw_buf)
-		return ICE_ERR_NO_MEMORY;
-
-	/* Prepare buffer for switch ID.
-	 * The number of resource entries in buffer is passed as 1 since only a
-	 * single switch/VEB instance is allocated, and hence a single sw_id
-	 * is requested.
-	 */
-	sw_buf->num_elems = CPU_TO_LE16(1);
-	sw_buf->res_type =
-		CPU_TO_LE16(ICE_AQC_RES_TYPE_SWID |
-			    (shared_res ? ICE_AQC_RES_TYPE_FLAG_SHARED :
-			    ICE_AQC_RES_TYPE_FLAG_DEDICATED));
-
-	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len,
-				       ice_aqc_opc_alloc_res, NULL);
-
-	if (status)
-		goto ice_alloc_sw_exit;
-
-	sw_ele = &sw_buf->elem[0];
-	*sw_id = LE16_TO_CPU(sw_ele->e.sw_resp);
-
-	if (ena_stats) {
-		/* Prepare buffer for VEB Counter */
-		enum ice_adminq_opc opc = ice_aqc_opc_alloc_res;
-		struct ice_aqc_alloc_free_res_elem *counter_buf;
-		struct ice_aqc_res_elem *counter_ele;
-
-		counter_buf = (struct ice_aqc_alloc_free_res_elem *)
-				ice_malloc(hw, buf_len);
-		if (!counter_buf) {
-			status = ICE_ERR_NO_MEMORY;
-			goto ice_alloc_sw_exit;
-		}
-
-		/* The number of resource entries in buffer is passed as 1 since
-		 * only a single switch/VEB instance is allocated, and hence a
-		 * single VEB counter is requested.
-		 */
-		counter_buf->num_elems = CPU_TO_LE16(1);
-		counter_buf->res_type =
-			CPU_TO_LE16(ICE_AQC_RES_TYPE_VEB_COUNTER |
-				    ICE_AQC_RES_TYPE_FLAG_DEDICATED);
-		status = ice_aq_alloc_free_res(hw, 1, counter_buf, buf_len,
-					       opc, NULL);
-
-		if (status) {
-			ice_free(hw, counter_buf);
-			goto ice_alloc_sw_exit;
-		}
-		counter_ele = &counter_buf->elem[0];
-		*counter_id = LE16_TO_CPU(counter_ele->e.sw_resp);
-		ice_free(hw, counter_buf);
-	}
-
-ice_alloc_sw_exit:
-	ice_free(hw, sw_buf);
-	return status;
-}
-
-/**
- * ice_free_sw - free resources specific to switch
- * @hw: pointer to the HW struct
- * @sw_id: switch ID to be freed
- * @counter_id: VEB counter ID to be freed
- *
- * free switch resources (SWID and VEB counter) (0x0209)
- *
- * NOTE: This function frees multiple resources. It continues
- * releasing other resources even after it encounters error.
- * The error code returned is the last error it encountered.
- */
-enum ice_status ice_free_sw(struct ice_hw *hw, u16 sw_id, u16 counter_id)
-{
-	struct ice_aqc_alloc_free_res_elem *sw_buf, *counter_buf;
-	enum ice_status status, ret_status;
-	u16 buf_len;
-
-	buf_len = ice_struct_size(sw_buf, elem, 1);
-	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
-	if (!sw_buf)
-		return ICE_ERR_NO_MEMORY;
-
-	/* Prepare buffer to free for switch ID res.
-	 * The number of resource entries in buffer is passed as 1 since only a
-	 * single switch/VEB instance is freed, and hence a single sw_id
-	 * is released.
-	 */
-	sw_buf->num_elems = CPU_TO_LE16(1);
-	sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_SWID);
-	sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(sw_id);
-
-	ret_status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len,
-					   ice_aqc_opc_free_res, NULL);
-
-	if (ret_status)
-		ice_debug(hw, ICE_DBG_SW, "CQ CMD Buffer:\n");
-
-	/* Prepare buffer to free for VEB Counter resource */
-	counter_buf = (struct ice_aqc_alloc_free_res_elem *)
-			ice_malloc(hw, buf_len);
-	if (!counter_buf) {
-		ice_free(hw, sw_buf);
-		return ICE_ERR_NO_MEMORY;
-	}
-
-	/* The number of resource entries in buffer is passed as 1 since only a
-	 * single switch/VEB instance is freed, and hence a single VEB counter
-	 * is released
-	 */
-	counter_buf->num_elems = CPU_TO_LE16(1);
-	counter_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_VEB_COUNTER);
-	counter_buf->elem[0].e.sw_resp = CPU_TO_LE16(counter_id);
-
-	status = ice_aq_alloc_free_res(hw, 1, counter_buf, buf_len,
-				       ice_aqc_opc_free_res, NULL);
-	if (status) {
-		ice_debug(hw, ICE_DBG_SW, "VEB counter resource could not be freed\n");
-		ret_status = status;
-	}
-
-	ice_free(hw, counter_buf);
-	ice_free(hw, sw_buf);
-	return ret_status;
-}
-
 /**
  * ice_aq_add_vsi
  * @hw: pointer to the HW struct
@@ -2366,173 +2153,6 @@ ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
 	return ice_aq_update_vsi(hw, vsi_ctx, cd);
 }
 
-/**
- * ice_aq_get_vsi_params
- * @hw: pointer to the HW struct
- * @vsi_ctx: pointer to a VSI context struct
- * @cd: pointer to command details structure or NULL
- *
- * Get VSI context info from hardware (0x0212)
- */
-enum ice_status
-ice_aq_get_vsi_params(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
-		      struct ice_sq_cd *cd)
-{
-	struct ice_aqc_add_get_update_free_vsi *cmd;
-	struct ice_aqc_get_vsi_resp *resp;
-	struct ice_aq_desc desc;
-	enum ice_status status;
-
-	cmd = &desc.params.vsi_cmd;
-	resp = &desc.params.get_vsi_resp;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_vsi_params);
-
-	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
-
-	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
-				 sizeof(vsi_ctx->info), cd);
-	if (!status) {
-		vsi_ctx->vsi_num = LE16_TO_CPU(resp->vsi_num) &
-					ICE_AQ_VSI_NUM_M;
-		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
-		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
-	}
-
-	return status;
-}
-
-/**
- * ice_aq_add_update_mir_rule - add/update a mirror rule
- * @hw: pointer to the HW struct
- * @rule_type: Rule Type
- * @dest_vsi: VSI number to which packets will be mirrored
- * @count: length of the list
- * @mr_buf: buffer for list of mirrored VSI numbers
- * @cd: pointer to command details structure or NULL
- * @rule_id: Rule ID
- *
- * Add/Update Mirror Rule (0x260).
- */
-enum ice_status
-ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi,
-			   u16 count, struct ice_mir_rule_buf *mr_buf,
-			   struct ice_sq_cd *cd, u16 *rule_id)
-{
-	struct ice_aqc_add_update_mir_rule *cmd;
-	struct ice_aq_desc desc;
-	enum ice_status status;
-	__le16 *mr_list = NULL;
-	u16 buf_size = 0;
-
-	switch (rule_type) {
-	case ICE_AQC_RULE_TYPE_VPORT_INGRESS:
-	case ICE_AQC_RULE_TYPE_VPORT_EGRESS:
-		/* Make sure count and mr_buf are set for these rule_types */
-		if (!(count && mr_buf))
-			return ICE_ERR_PARAM;
-
-		buf_size = count * sizeof(__le16);
-		mr_list = (_FORCE_ __le16 *)ice_malloc(hw, buf_size);
-		if (!mr_list)
-			return ICE_ERR_NO_MEMORY;
-		break;
-	case ICE_AQC_RULE_TYPE_PPORT_INGRESS:
-	case ICE_AQC_RULE_TYPE_PPORT_EGRESS:
-		/* Make sure count and mr_buf are not set for these
-		 * rule_types
-		 */
-		if (count || mr_buf)
-			return ICE_ERR_PARAM;
-		break;
-	default:
-		ice_debug(hw, ICE_DBG_SW, "Error due to unsupported rule_type %u\n", rule_type);
-		return ICE_ERR_OUT_OF_RANGE;
-	}
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_update_mir_rule);
-
-	/* Pre-process 'mr_buf' items for add/update of virtual port
-	 * ingress/egress mirroring (but not physical port ingress/egress
-	 * mirroring)
-	 */
-	if (mr_buf) {
-		int i;
-
-		for (i = 0; i < count; i++) {
-			u16 id;
-
-			id = mr_buf[i].vsi_idx & ICE_AQC_RULE_MIRRORED_VSI_M;
-
-			/* Validate specified VSI number, make sure it is less
-			 * than ICE_MAX_VSI, if not return with error.
-			 */
-			if (id >= ICE_MAX_VSI) {
-				ice_debug(hw, ICE_DBG_SW, "Error VSI index (%u) out-of-range\n",
-					  id);
-				ice_free(hw, mr_list);
-				return ICE_ERR_OUT_OF_RANGE;
-			}
-
-			/* add VSI to mirror rule */
-			if (mr_buf[i].add)
-				mr_list[i] =
-					CPU_TO_LE16(id | ICE_AQC_RULE_ACT_M);
-			else /* remove VSI from mirror rule */
-				mr_list[i] = CPU_TO_LE16(id);
-		}
-	}
-
-	cmd = &desc.params.add_update_rule;
-	if ((*rule_id) != ICE_INVAL_MIRROR_RULE_ID)
-		cmd->rule_id = CPU_TO_LE16(((*rule_id) & ICE_AQC_RULE_ID_M) |
-					   ICE_AQC_RULE_ID_VALID_M);
-	cmd->rule_type = CPU_TO_LE16(rule_type & ICE_AQC_RULE_TYPE_M);
-	cmd->num_entries = CPU_TO_LE16(count);
-	cmd->dest = CPU_TO_LE16(dest_vsi);
-
-	status = ice_aq_send_cmd(hw, &desc, mr_list, buf_size, cd);
-	if (!status)
-		*rule_id = LE16_TO_CPU(cmd->rule_id) & ICE_AQC_RULE_ID_M;
-
-	ice_free(hw, mr_list);
-
-	return status;
-}
-
-/**
- * ice_aq_delete_mir_rule - delete a mirror rule
- * @hw: pointer to the HW struct
- * @rule_id: Mirror rule ID (to be deleted)
- * @keep_allocd: if set, the VSI stays part of the PF allocated res,
- *		 otherwise it is returned to the shared pool
- * @cd: pointer to command details structure or NULL
- *
- * Delete Mirror Rule (0x261).
- */
-enum ice_status
-ice_aq_delete_mir_rule(struct ice_hw *hw, u16 rule_id, bool keep_allocd,
-		       struct ice_sq_cd *cd)
-{
-	struct ice_aqc_delete_mir_rule *cmd;
-	struct ice_aq_desc desc;
-
-	/* rule_id should be in the range 0...63 */
-	if (rule_id >= ICE_MAX_NUM_MIRROR_RULES)
-		return ICE_ERR_OUT_OF_RANGE;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_del_mir_rule);
-
-	cmd = &desc.params.del_rule;
-	rule_id |= ICE_AQC_RULE_ID_VALID_M;
-	cmd->rule_id = CPU_TO_LE16(rule_id);
-
-	if (keep_allocd)
-		cmd->flags = CPU_TO_LE16(ICE_AQC_FLAG_KEEP_ALLOCD_M);
-
-	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
 /**
  * ice_aq_alloc_free_vsi_list
  * @hw: pointer to the HW struct
@@ -2591,68 +2211,6 @@ ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
 	return status;
 }
 
-/**
- * ice_aq_set_storm_ctrl - Sets storm control configuration
- * @hw: pointer to the HW struct
- * @bcast_thresh: represents the upper threshold for broadcast storm control
- * @mcast_thresh: represents the upper threshold for multicast storm control
- * @ctl_bitmask: storm control knobs
- *
- * Sets the storm control configuration (0x0280)
- */
-enum ice_status
-ice_aq_set_storm_ctrl(struct ice_hw *hw, u32 bcast_thresh, u32 mcast_thresh,
-		      u32 ctl_bitmask)
-{
-	struct ice_aqc_storm_cfg *cmd;
-	struct ice_aq_desc desc;
-
-	cmd = &desc.params.storm_conf;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_storm_cfg);
-
-	cmd->bcast_thresh_size = CPU_TO_LE32(bcast_thresh & ICE_AQ_THRESHOLD_M);
-	cmd->mcast_thresh_size = CPU_TO_LE32(mcast_thresh & ICE_AQ_THRESHOLD_M);
-	cmd->storm_ctrl_ctrl = CPU_TO_LE32(ctl_bitmask);
-
-	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
-}
-
-/**
- * ice_aq_get_storm_ctrl - gets storm control configuration
- * @hw: pointer to the HW struct
- * @bcast_thresh: represents the upper threshold for broadcast storm control
- * @mcast_thresh: represents the upper threshold for multicast storm control
- * @ctl_bitmask: storm control knobs
- *
- * Gets the storm control configuration (0x0281)
- */
-enum ice_status
-ice_aq_get_storm_ctrl(struct ice_hw *hw, u32 *bcast_thresh, u32 *mcast_thresh,
-		      u32 *ctl_bitmask)
-{
-	enum ice_status status;
-	struct ice_aq_desc desc;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_storm_cfg);
-
-	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
-	if (!status) {
-		struct ice_aqc_storm_cfg *resp = &desc.params.storm_conf;
-
-		if (bcast_thresh)
-			*bcast_thresh = LE32_TO_CPU(resp->bcast_thresh_size) &
-				ICE_AQ_THRESHOLD_M;
-		if (mcast_thresh)
-			*mcast_thresh = LE32_TO_CPU(resp->mcast_thresh_size) &
-				ICE_AQ_THRESHOLD_M;
-		if (ctl_bitmask)
-			*ctl_bitmask = LE32_TO_CPU(resp->storm_ctrl_ctrl);
-	}
-
-	return status;
-}
-
 /**
  * ice_aq_sw_rules - add/update/remove switch rules
  * @hw: pointer to the HW struct
@@ -3261,119 +2819,31 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
 }
 
 /**
- * ice_add_counter_act - add/update filter rule with counter action
+ * ice_create_vsi_list_map
  * @hw: pointer to the hardware structure
- * @m_ent: the management entry for which counter needs to be added
- * @counter_id: VLAN counter ID returned as part of allocate resource
- * @l_id: large action resource ID
+ * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list ID generated as part of allocate resource
+ *
+ * Helper function to create a new entry of VSI list ID to VSI mapping
+ * using the given VSI list ID
  */
-static enum ice_status
-ice_add_counter_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
-		    u16 counter_id, u16 l_id)
+static struct ice_vsi_list_map_info *
+ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			u16 vsi_list_id)
 {
-	struct ice_aqc_sw_rules_elem *lg_act;
-	struct ice_aqc_sw_rules_elem *rx_tx;
-	enum ice_status status;
-	/* 2 actions will be added while adding a large action counter */
-	const int num_acts = 2;
-	u16 lg_act_size;
-	u16 rules_size;
-	u16 f_rule_id;
-	u32 act;
-	u16 id;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_map;
+	int i;
 
-	if (m_ent->fltr_info.lkup_type != ICE_SW_LKUP_MAC)
-		return ICE_ERR_PARAM;
+	v_map = (struct ice_vsi_list_map_info *)ice_malloc(hw, sizeof(*v_map));
+	if (!v_map)
+		return NULL;
 
-	/* Create two back-to-back switch rules and submit them to the HW using
-	 * one memory buffer:
-	 * 1. Large Action
-	 * 2. Look up Tx Rx
-	 */
-	lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_acts);
-	rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
-	lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
-	if (!lg_act)
-		return ICE_ERR_NO_MEMORY;
-
-	rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size);
-
-	/* Fill in the first switch rule i.e. large action */
-	lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
-	lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id);
-	lg_act->pdata.lg_act.size = CPU_TO_LE16(num_acts);
-
-	/* First action VSI forwarding or VSI list forwarding depending on how
-	 * many VSIs
-	 */
-	id = (m_ent->vsi_count > 1) ?  m_ent->fltr_info.fwd_id.vsi_list_id :
-		m_ent->fltr_info.fwd_id.hw_vsi_id;
-
-	act = ICE_LG_ACT_VSI_FORWARDING | ICE_LG_ACT_VALID_BIT;
-	act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) &
-		ICE_LG_ACT_VSI_LIST_ID_M;
-	if (m_ent->vsi_count > 1)
-		act |= ICE_LG_ACT_VSI_LIST;
-	lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act);
-
-	/* Second action counter ID */
-	act = ICE_LG_ACT_STAT_COUNT;
-	act |= (counter_id << ICE_LG_ACT_STAT_COUNT_S) &
-		ICE_LG_ACT_STAT_COUNT_M;
-	lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act);
-
-	/* call the fill switch rule to fill the lookup Tx Rx structure */
-	ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx,
-			 ice_aqc_opc_update_sw_rules);
-
-	act = ICE_SINGLE_ACT_PTR;
-	act |= (l_id << ICE_SINGLE_ACT_PTR_VAL_S) & ICE_SINGLE_ACT_PTR_VAL_M;
-	rx_tx->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
-
-	/* Use the filter rule ID of the previously created rule with single
-	 * act. Once the update happens, hardware will treat this as large
-	 * action
-	 */
-	f_rule_id = m_ent->fltr_info.fltr_rule_id;
-	rx_tx->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_rule_id);
-
-	status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
-				 ice_aqc_opc_update_sw_rules, NULL);
-	if (!status) {
-		m_ent->lg_act_idx = l_id;
-		m_ent->counter_index = counter_id;
-	}
-
-	ice_free(hw, lg_act);
-	return status;
-}
-
-/**
- * ice_create_vsi_list_map
- * @hw: pointer to the hardware structure
- * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
- * @num_vsi: number of VSI handles in the array
- * @vsi_list_id: VSI list ID generated as part of allocate resource
- *
- * Helper function to create a new entry of VSI list ID to VSI mapping
- * using the given VSI list ID
- */
-static struct ice_vsi_list_map_info *
-ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
-			u16 vsi_list_id)
-{
-	struct ice_switch_info *sw = hw->switch_info;
-	struct ice_vsi_list_map_info *v_map;
-	int i;
-
-	v_map = (struct ice_vsi_list_map_info *)ice_malloc(hw, sizeof(*v_map));
-	if (!v_map)
-		return NULL;
-
-	v_map->vsi_list_id = vsi_list_id;
-	v_map->ref_cnt = 1;
-	for (i = 0; i < num_vsi; i++)
-		ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
+	v_map->vsi_list_id = vsi_list_id;
+	v_map->ref_cnt = 1;
+	for (i = 0; i < num_vsi; i++)
+		ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
 
 	LIST_ADD(&v_map->list_entry, &sw->vsi_list_map_head);
 	return v_map;
@@ -3564,48 +3034,6 @@ ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info)
 	return status;
 }
 
-/**
- * ice_update_sw_rule_bridge_mode
- * @hw: pointer to the HW struct
- *
- * Updates unicast switch filter rules based on VEB/VEPA mode
- */
-enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw)
-{
-	struct ice_switch_info *sw = hw->switch_info;
-	struct ice_fltr_mgmt_list_entry *fm_entry;
-	enum ice_status status = ICE_SUCCESS;
-	struct LIST_HEAD_TYPE *rule_head;
-	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
-
-	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
-	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
-
-	ice_acquire_lock(rule_lock);
-	LIST_FOR_EACH_ENTRY(fm_entry, rule_head, ice_fltr_mgmt_list_entry,
-			    list_entry) {
-		struct ice_fltr_info *fi = &fm_entry->fltr_info;
-		u8 *addr = fi->l_data.mac.mac_addr;
-
-		/* Update unicast Tx rules to reflect the selected
-		 * VEB/VEPA mode
-		 */
-		if ((fi->flag & ICE_FLTR_TX) && IS_UNICAST_ETHER_ADDR(addr) &&
-		    (fi->fltr_act == ICE_FWD_TO_VSI ||
-		     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
-		     fi->fltr_act == ICE_FWD_TO_Q ||
-		     fi->fltr_act == ICE_FWD_TO_QGRP)) {
-			status = ice_update_pkt_fwd_rule(hw, fi);
-			if (status)
-				break;
-		}
-	}
-
-	ice_release_lock(rule_lock);
-
-	return status;
-}
-
 /**
  * ice_add_update_vsi_list
  * @hw: pointer to the hardware structure
@@ -4049,88 +3477,6 @@ ice_remove_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list,
 	return status;
 }
 
-/**
- * ice_aq_get_res_alloc - get allocated resources
- * @hw: pointer to the HW struct
- * @num_entries: pointer to u16 to store the number of resource entries returned
- * @buf: pointer to buffer
- * @buf_size: size of buf
- * @cd: pointer to command details structure or NULL
- *
- * The caller-supplied buffer must be large enough to store the resource
- * information for all resource types. Each resource type is an
- * ice_aqc_get_res_resp_elem structure.
- */
-enum ice_status
-ice_aq_get_res_alloc(struct ice_hw *hw, u16 *num_entries,
-		     struct ice_aqc_get_res_resp_elem *buf, u16 buf_size,
-		     struct ice_sq_cd *cd)
-{
-	struct ice_aqc_get_res_alloc *resp;
-	enum ice_status status;
-	struct ice_aq_desc desc;
-
-	if (!buf)
-		return ICE_ERR_BAD_PTR;
-
-	if (buf_size < ICE_AQ_GET_RES_ALLOC_BUF_LEN)
-		return ICE_ERR_INVAL_SIZE;
-
-	resp = &desc.params.get_res;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_res_alloc);
-	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
-
-	if (!status && num_entries)
-		*num_entries = LE16_TO_CPU(resp->resp_elem_num);
-
-	return status;
-}
-
-/**
- * ice_aq_get_res_descs - get allocated resource descriptors
- * @hw: pointer to the hardware structure
- * @num_entries: number of resource entries in buffer
- * @buf: structure to hold response data buffer
- * @buf_size: size of buffer
- * @res_type: resource type
- * @res_shared: is resource shared
- * @desc_id: input - first desc ID to start; output - next desc ID
- * @cd: pointer to command details structure or NULL
- */
-enum ice_status
-ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries,
-		     struct ice_aqc_res_elem *buf, u16 buf_size, u16 res_type,
-		     bool res_shared, u16 *desc_id, struct ice_sq_cd *cd)
-{
-	struct ice_aqc_get_allocd_res_desc *cmd;
-	struct ice_aq_desc desc;
-	enum ice_status status;
-
-	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
-
-	cmd = &desc.params.get_res_desc;
-
-	if (!buf)
-		return ICE_ERR_PARAM;
-
-	if (buf_size != (num_entries * sizeof(*buf)))
-		return ICE_ERR_PARAM;
-
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_allocd_res_desc);
-
-	cmd->ops.cmd.res = CPU_TO_LE16(((res_type << ICE_AQC_RES_TYPE_S) &
-					 ICE_AQC_RES_TYPE_M) | (res_shared ?
-					ICE_AQC_RES_TYPE_FLAG_SHARED : 0));
-	cmd->ops.cmd.first_desc = CPU_TO_LE16(*desc_id);
-
-	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
-	if (!status)
-		*desc_id = LE16_TO_CPU(cmd->ops.resp.next_desc);
-
-	return status;
-}
-
 /**
  * ice_add_mac_rule - Add a MAC address based filter rule
  * @hw: pointer to the hardware structure
@@ -4499,63 +3845,6 @@ enum ice_status ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
 	return ice_add_vlan_rule(hw, v_list, hw->switch_info);
 }
 
-/**
- * ice_add_mac_vlan_rule - Add MAC and VLAN pair based filter rule
- * @hw: pointer to the hardware structure
- * @mv_list: list of MAC and VLAN filters
- * @sw: pointer to the switch info struct for which the rule is added
- * @lport: logical port number on which the rule is added
- *
- * If the VSI on which the MAC-VLAN pair has to be added has Rx and Tx VLAN
- * pruning bits enabled, then it is the responsibility of the caller to make
- * sure to add a VLAN only filter on the same VSI. Packets belonging to that
- * VLAN won't be received on that VSI otherwise.
- */
-static enum ice_status
-ice_add_mac_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list,
-		      struct ice_switch_info *sw, u8 lport)
-{
-	struct ice_fltr_list_entry *mv_list_itr;
-	struct ice_sw_recipe *recp_list;
-
-	if (!mv_list || !hw)
-		return ICE_ERR_PARAM;
-
-	recp_list = &sw->recp_list[ICE_SW_LKUP_MAC_VLAN];
-	LIST_FOR_EACH_ENTRY(mv_list_itr, mv_list, ice_fltr_list_entry,
-			    list_entry) {
-		enum ice_sw_lkup_type l_type =
-			mv_list_itr->fltr_info.lkup_type;
-
-		if (l_type != ICE_SW_LKUP_MAC_VLAN)
-			return ICE_ERR_PARAM;
-		mv_list_itr->fltr_info.flag = ICE_FLTR_TX;
-		mv_list_itr->status =
-			ice_add_rule_internal(hw, recp_list, lport,
-					      mv_list_itr);
-		if (mv_list_itr->status)
-			return mv_list_itr->status;
-	}
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_add_mac_vlan - Add a MAC VLAN address based filter rule
- * @hw: pointer to the hardware structure
- * @mv_list: list of MAC VLAN addresses and forwarding information
- *
- * Adds a MAC VLAN rule for the logical port taken from the HW struct
- */
-enum ice_status
-ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
-{
-	if (!mv_list || !hw)
-		return ICE_ERR_PARAM;
-
-	return ice_add_mac_vlan_rule(hw, mv_list, hw->switch_info,
-				     hw->port_info->lport);
-}
-
 /**
  * ice_add_eth_mac_rule - Add ethertype and MAC based filter rule
  * @hw: pointer to the hardware structure
@@ -4700,118 +3989,6 @@ ice_rem_adv_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
 	}
 }
 
-/**
- * ice_rem_all_sw_rules_info
- * @hw: pointer to the hardware structure
- */
-void ice_rem_all_sw_rules_info(struct ice_hw *hw)
-{
-	struct ice_switch_info *sw = hw->switch_info;
-	u8 i;
-
-	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
-		struct LIST_HEAD_TYPE *rule_head;
-
-		rule_head = &sw->recp_list[i].filt_rules;
-		if (!sw->recp_list[i].adv_rule)
-			ice_rem_sw_rule_info(hw, rule_head);
-		else
-			ice_rem_adv_rule_info(hw, rule_head);
-		if (sw->recp_list[i].adv_rule &&
-		    LIST_EMPTY(&sw->recp_list[i].filt_rules))
-			sw->recp_list[i].adv_rule = false;
-	}
-}
-
-/**
- * ice_cfg_dflt_vsi - change state of VSI to set/clear default
- * @pi: pointer to the port_info structure
- * @vsi_handle: VSI handle to set as default
- * @set: true to add the above mentioned switch rule, false to remove it
- * @direction: ICE_FLTR_RX or ICE_FLTR_TX
- *
- * add filter rule to set/unset given VSI as default VSI for the switch
- * (represented by swid)
- */
-enum ice_status
-ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
-		 u8 direction)
-{
-	struct ice_aqc_sw_rules_elem *s_rule;
-	struct ice_fltr_info f_info;
-	struct ice_hw *hw = pi->hw;
-	enum ice_adminq_opc opcode;
-	enum ice_status status;
-	u16 s_rule_size;
-	u16 hw_vsi_id;
-
-	if (!ice_is_vsi_valid(hw, vsi_handle))
-		return ICE_ERR_PARAM;
-	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
-
-	s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE :
-		ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
-
-	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
-	if (!s_rule)
-		return ICE_ERR_NO_MEMORY;
-
-	ice_memset(&f_info, 0, sizeof(f_info), ICE_NONDMA_MEM);
-
-	f_info.lkup_type = ICE_SW_LKUP_DFLT;
-	f_info.flag = direction;
-	f_info.fltr_act = ICE_FWD_TO_VSI;
-	f_info.fwd_id.hw_vsi_id = hw_vsi_id;
-
-	if (f_info.flag & ICE_FLTR_RX) {
-		f_info.src = pi->lport;
-		f_info.src_id = ICE_SRC_ID_LPORT;
-		if (!set)
-			f_info.fltr_rule_id =
-				pi->dflt_rx_vsi_rule_id;
-	} else if (f_info.flag & ICE_FLTR_TX) {
-		f_info.src_id = ICE_SRC_ID_VSI;
-		f_info.src = hw_vsi_id;
-		if (!set)
-			f_info.fltr_rule_id =
-				pi->dflt_tx_vsi_rule_id;
-	}
-
-	if (set)
-		opcode = ice_aqc_opc_add_sw_rules;
-	else
-		opcode = ice_aqc_opc_remove_sw_rules;
-
-	ice_fill_sw_rule(hw, &f_info, s_rule, opcode);
-
-	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opcode, NULL);
-	if (status || !(f_info.flag & ICE_FLTR_TX_RX))
-		goto out;
-	if (set) {
-		u16 index = LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
-
-		if (f_info.flag & ICE_FLTR_TX) {
-			pi->dflt_tx_vsi_num = hw_vsi_id;
-			pi->dflt_tx_vsi_rule_id = index;
-		} else if (f_info.flag & ICE_FLTR_RX) {
-			pi->dflt_rx_vsi_num = hw_vsi_id;
-			pi->dflt_rx_vsi_rule_id = index;
-		}
-	} else {
-		if (f_info.flag & ICE_FLTR_TX) {
-			pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
-			pi->dflt_tx_vsi_rule_id = ICE_INVAL_ACT;
-		} else if (f_info.flag & ICE_FLTR_RX) {
-			pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
-			pi->dflt_rx_vsi_rule_id = ICE_INVAL_ACT;
-		}
-	}
-
-out:
-	ice_free(hw, s_rule);
-	return status;
-}
-
 /**
  * ice_find_ucast_rule_entry - Search for a unicast MAC filter rule entry
  * @list_head: head of rule list
@@ -5063,47 +4240,6 @@ ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_add_to_vsi_fltr_list - Add VSI filters to the list
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to remove filters from
- * @lkup_list_head: pointer to the list that has certain lookup type filters
- * @vsi_list_head: pointer to the list pertaining to VSI with vsi_handle
- *
- * Locates all filters in lkup_list_head that are used by the given VSI,
- * and adds COPIES of those entries to vsi_list_head (intended to be used
- * to remove the listed filters).
- * Note that this means all entries in vsi_list_head must be explicitly
- * deallocated by the caller when done with list.
- */
-static enum ice_status
-ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
-			 struct LIST_HEAD_TYPE *lkup_list_head,
-			 struct LIST_HEAD_TYPE *vsi_list_head)
-{
-	struct ice_fltr_mgmt_list_entry *fm_entry;
-	enum ice_status status = ICE_SUCCESS;
-
-	/* check to make sure VSI ID is valid and within boundary */
-	if (!ice_is_vsi_valid(hw, vsi_handle))
-		return ICE_ERR_PARAM;
-
-	LIST_FOR_EACH_ENTRY(fm_entry, lkup_list_head,
-			    ice_fltr_mgmt_list_entry, list_entry) {
-		struct ice_fltr_info *fi;
-
-		fi = &fm_entry->fltr_info;
-		if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
-			continue;
-
-		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
-							vsi_list_head, fi);
-		if (status)
-			return status;
-	}
-	return status;
-}
-
 /**
  * ice_determine_promisc_mask
  * @fi: filter info to parse
@@ -5137,116 +4273,6 @@ static u8 ice_determine_promisc_mask(struct ice_fltr_info *fi)
 	return promisc_mask;
 }
 
-/**
- * _ice_get_vsi_promisc - get promiscuous mode of given VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to retrieve info from
- * @promisc_mask: pointer to mask to be filled in
- * @vid: VLAN ID of promisc VLAN VSI
- * @sw: pointer to the switch info struct whose filter rules are checked
- */
-static enum ice_status
-_ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
-		     u16 *vid, struct ice_switch_info *sw)
-{
-	struct ice_fltr_mgmt_list_entry *itr;
-	struct LIST_HEAD_TYPE *rule_head;
-	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
-
-	if (!ice_is_vsi_valid(hw, vsi_handle))
-		return ICE_ERR_PARAM;
-
-	*vid = 0;
-	*promisc_mask = 0;
-	rule_head = &sw->recp_list[ICE_SW_LKUP_PROMISC].filt_rules;
-	rule_lock = &sw->recp_list[ICE_SW_LKUP_PROMISC].filt_rule_lock;
-
-	ice_acquire_lock(rule_lock);
-	LIST_FOR_EACH_ENTRY(itr, rule_head,
-			    ice_fltr_mgmt_list_entry, list_entry) {
-		/* Continue if this filter doesn't apply to this VSI or the
-		 * VSI ID is not in the VSI map for this filter
-		 */
-		if (!ice_vsi_uses_fltr(itr, vsi_handle))
-			continue;
-
-		*promisc_mask |= ice_determine_promisc_mask(&itr->fltr_info);
-	}
-	ice_release_lock(rule_lock);
-
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_get_vsi_promisc - get promiscuous mode of given VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to retrieve info from
- * @promisc_mask: pointer to mask to be filled in
- * @vid: VLAN ID of promisc VLAN VSI
- */
-enum ice_status
-ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
-		    u16 *vid)
-{
-	return _ice_get_vsi_promisc(hw, vsi_handle, promisc_mask,
-				    vid, hw->switch_info);
-}
-
-/**
- * _ice_get_vsi_vlan_promisc - get VLAN promiscuous mode of given VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to retrieve info from
- * @promisc_mask: pointer to mask to be filled in
- * @vid: VLAN ID of promisc VLAN VSI
- * @sw: pointer to the switch info struct whose filter rules are checked
- */
-static enum ice_status
-_ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
-			  u16 *vid, struct ice_switch_info *sw)
-{
-	struct ice_fltr_mgmt_list_entry *itr;
-	struct LIST_HEAD_TYPE *rule_head;
-	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
-
-	if (!ice_is_vsi_valid(hw, vsi_handle))
-		return ICE_ERR_PARAM;
-
-	*vid = 0;
-	*promisc_mask = 0;
-	rule_head = &sw->recp_list[ICE_SW_LKUP_PROMISC_VLAN].filt_rules;
-	rule_lock = &sw->recp_list[ICE_SW_LKUP_PROMISC_VLAN].filt_rule_lock;
-
-	ice_acquire_lock(rule_lock);
-	LIST_FOR_EACH_ENTRY(itr, rule_head, ice_fltr_mgmt_list_entry,
-			    list_entry) {
-		/* Continue if this filter doesn't apply to this VSI or the
-		 * VSI ID is not in the VSI map for this filter
-		 */
-		if (!ice_vsi_uses_fltr(itr, vsi_handle))
-			continue;
-
-		*promisc_mask |= ice_determine_promisc_mask(&itr->fltr_info);
-	}
-	ice_release_lock(rule_lock);
-
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_get_vsi_vlan_promisc - get VLAN promiscuous mode of given VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to retrieve info from
- * @promisc_mask: pointer to mask to be filled in
- * @vid: VLAN ID of promisc VLAN VSI
- */
-enum ice_status
-ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
-			 u16 *vid)
-{
-	return _ice_get_vsi_vlan_promisc(hw, vsi_handle, promisc_mask,
-					 vid, hw->switch_info);
-}
-
 /**
  * ice_remove_promisc - Remove promisc based filter rules
  * @hw: pointer to the hardware structure
@@ -5460,219 +4486,42 @@ _ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
 		new_fltr.flag = 0;
 		if (is_tx_fltr) {
 			new_fltr.flag |= ICE_FLTR_TX;
-			new_fltr.src = hw_vsi_id;
-		} else {
-			new_fltr.flag |= ICE_FLTR_RX;
-			new_fltr.src = lport;
-		}
-
-		new_fltr.fltr_act = ICE_FWD_TO_VSI;
-		new_fltr.vsi_handle = vsi_handle;
-		new_fltr.fwd_id.hw_vsi_id = hw_vsi_id;
-		f_list_entry.fltr_info = new_fltr;
-		recp_list = &sw->recp_list[recipe_id];
-
-		status = ice_add_rule_internal(hw, recp_list, lport,
-					       &f_list_entry);
-		if (status != ICE_SUCCESS)
-			goto set_promisc_exit;
-	}
-
-set_promisc_exit:
-	return status;
-}
-
-/**
- * ice_set_vsi_promisc - set given VSI to given promiscuous mode(s)
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to configure
- * @promisc_mask: mask of promiscuous config bits
- * @vid: VLAN ID to set VLAN promiscuous
- */
-enum ice_status
-ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
-		    u16 vid)
-{
-	return _ice_set_vsi_promisc(hw, vsi_handle, promisc_mask, vid,
-				    hw->port_info->lport,
-				    hw->switch_info);
-}
-
-/**
- * _ice_set_vlan_vsi_promisc
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to configure
- * @promisc_mask: mask of promiscuous config bits
- * @rm_vlan_promisc: Clear VLANs VSI promisc mode
- * @lport: logical port number to configure promisc mode
- * @sw: pointer to the switch info struct for which the rules are configured
- *
- * Configure VSI with all associated VLANs to given promiscuous mode(s)
- */
-static enum ice_status
-_ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
-			  bool rm_vlan_promisc, u8 lport,
-			  struct ice_switch_info *sw)
-{
-	struct ice_fltr_list_entry *list_itr, *tmp;
-	struct LIST_HEAD_TYPE vsi_list_head;
-	struct LIST_HEAD_TYPE *vlan_head;
-	struct ice_lock *vlan_lock; /* Lock to protect filter rule list */
-	enum ice_status status;
-	u16 vlan_id;
-
-	INIT_LIST_HEAD(&vsi_list_head);
-	vlan_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
-	vlan_head = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rules;
-	ice_acquire_lock(vlan_lock);
-	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, vlan_head,
-					  &vsi_list_head);
-	ice_release_lock(vlan_lock);
-	if (status)
-		goto free_fltr_list;
-
-	LIST_FOR_EACH_ENTRY(list_itr, &vsi_list_head, ice_fltr_list_entry,
-			    list_entry) {
-		vlan_id = list_itr->fltr_info.l_data.vlan.vlan_id;
-		if (rm_vlan_promisc)
-			status =  _ice_clear_vsi_promisc(hw, vsi_handle,
-							 promisc_mask,
-							 vlan_id, sw);
-		else
-			status =  _ice_set_vsi_promisc(hw, vsi_handle,
-						       promisc_mask, vlan_id,
-						       lport, sw);
-		if (status)
-			break;
-	}
-
-free_fltr_list:
-	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, &vsi_list_head,
-				 ice_fltr_list_entry, list_entry) {
-		LIST_DEL(&list_itr->list_entry);
-		ice_free(hw, list_itr);
-	}
-	return status;
-}
-
-/**
- * ice_set_vlan_vsi_promisc
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to configure
- * @promisc_mask: mask of promiscuous config bits
- * @rm_vlan_promisc: Clear VLANs VSI promisc mode
- *
- * Configure VSI with all associated VLANs to given promiscuous mode(s)
- */
-enum ice_status
-ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
-			 bool rm_vlan_promisc)
-{
-	return _ice_set_vlan_vsi_promisc(hw, vsi_handle, promisc_mask,
-					 rm_vlan_promisc, hw->port_info->lport,
-					 hw->switch_info);
-}
-
-/**
- * ice_remove_vsi_lkup_fltr - Remove lookup type filters for a VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to remove filters from
- * @recp_list: recipe list from which the filters are removed
- * @lkup: switch rule filter lookup type
- */
-static void
-ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
-			 struct ice_sw_recipe *recp_list,
-			 enum ice_sw_lkup_type lkup)
-{
-	struct ice_fltr_list_entry *fm_entry;
-	struct LIST_HEAD_TYPE remove_list_head;
-	struct LIST_HEAD_TYPE *rule_head;
-	struct ice_fltr_list_entry *tmp;
-	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
-	enum ice_status status;
-
-	INIT_LIST_HEAD(&remove_list_head);
-	rule_lock = &recp_list[lkup].filt_rule_lock;
-	rule_head = &recp_list[lkup].filt_rules;
-	ice_acquire_lock(rule_lock);
-	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, rule_head,
-					  &remove_list_head);
-	ice_release_lock(rule_lock);
-	if (status)
-		return;
+			new_fltr.src = hw_vsi_id;
+		} else {
+			new_fltr.flag |= ICE_FLTR_RX;
+			new_fltr.src = lport;
+		}
 
-	switch (lkup) {
-	case ICE_SW_LKUP_MAC:
-		ice_remove_mac_rule(hw, &remove_list_head, &recp_list[lkup]);
-		break;
-	case ICE_SW_LKUP_VLAN:
-		ice_remove_vlan_rule(hw, &remove_list_head, &recp_list[lkup]);
-		break;
-	case ICE_SW_LKUP_PROMISC:
-	case ICE_SW_LKUP_PROMISC_VLAN:
-		ice_remove_promisc(hw, lkup, &remove_list_head);
-		break;
-	case ICE_SW_LKUP_MAC_VLAN:
-		ice_remove_mac_vlan(hw, &remove_list_head);
-		break;
-	case ICE_SW_LKUP_ETHERTYPE:
-	case ICE_SW_LKUP_ETHERTYPE_MAC:
-		ice_remove_eth_mac(hw, &remove_list_head);
-		break;
-	case ICE_SW_LKUP_DFLT:
-		ice_debug(hw, ICE_DBG_SW, "Remove filters for this lookup type hasn't been implemented yet\n");
-		break;
-	case ICE_SW_LKUP_LAST:
-		ice_debug(hw, ICE_DBG_SW, "Unsupported lookup type\n");
-		break;
-	}
+		new_fltr.fltr_act = ICE_FWD_TO_VSI;
+		new_fltr.vsi_handle = vsi_handle;
+		new_fltr.fwd_id.hw_vsi_id = hw_vsi_id;
+		f_list_entry.fltr_info = new_fltr;
+		recp_list = &sw->recp_list[recipe_id];
 
-	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
-				 ice_fltr_list_entry, list_entry) {
-		LIST_DEL(&fm_entry->list_entry);
-		ice_free(hw, fm_entry);
+		status = ice_add_rule_internal(hw, recp_list, lport,
+					       &f_list_entry);
+		if (status != ICE_SUCCESS)
+			goto set_promisc_exit;
 	}
-}
-
-/**
- * ice_remove_vsi_fltr_rule - Remove all filters for a VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to remove filters from
- * @sw: pointer to switch info struct
- */
-static void
-ice_remove_vsi_fltr_rule(struct ice_hw *hw, u16 vsi_handle,
-			 struct ice_switch_info *sw)
-{
-	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
-	ice_remove_vsi_lkup_fltr(hw, vsi_handle,
-				 sw->recp_list, ICE_SW_LKUP_MAC);
-	ice_remove_vsi_lkup_fltr(hw, vsi_handle,
-				 sw->recp_list, ICE_SW_LKUP_MAC_VLAN);
-	ice_remove_vsi_lkup_fltr(hw, vsi_handle,
-				 sw->recp_list, ICE_SW_LKUP_PROMISC);
-	ice_remove_vsi_lkup_fltr(hw, vsi_handle,
-				 sw->recp_list, ICE_SW_LKUP_VLAN);
-	ice_remove_vsi_lkup_fltr(hw, vsi_handle,
-				 sw->recp_list, ICE_SW_LKUP_DFLT);
-	ice_remove_vsi_lkup_fltr(hw, vsi_handle,
-				 sw->recp_list, ICE_SW_LKUP_ETHERTYPE);
-	ice_remove_vsi_lkup_fltr(hw, vsi_handle,
-				 sw->recp_list, ICE_SW_LKUP_ETHERTYPE_MAC);
-	ice_remove_vsi_lkup_fltr(hw, vsi_handle,
-				 sw->recp_list, ICE_SW_LKUP_PROMISC_VLAN);
+set_promisc_exit:
+	return status;
 }
 
 /**
- * ice_remove_vsi_fltr - Remove all filters for a VSI
+ * ice_set_vsi_promisc - set given VSI to given promiscuous mode(s)
  * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to remove filters from
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @vid: VLAN ID to set VLAN promiscuous
  */
-void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle)
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		    u16 vid)
 {
-	ice_remove_vsi_fltr_rule(hw, vsi_handle, hw->switch_info);
+	return _ice_set_vsi_promisc(hw, vsi_handle, promisc_mask, vid,
+				    hw->port_info->lport,
+				    hw->switch_info);
 }
 
 /**
@@ -5761,260 +4610,6 @@ enum ice_status ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id)
 				  counter_id);
 }
 
-/**
- * ice_free_vlan_res_counter - Free counter resource for VLAN type
- * @hw: pointer to the hardware structure
- * @counter_id: counter index to be freed
- */
-enum ice_status ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id)
-{
-	return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_VLAN_COUNTER,
-				 ICE_AQC_RES_TYPE_FLAG_DEDICATED, 1,
-				 counter_id);
-}
-
-/**
- * ice_alloc_res_lg_act - add large action resource
- * @hw: pointer to the hardware structure
- * @l_id: large action ID to fill it in
- * @num_acts: number of actions to hold with a large action entry
- */
-static enum ice_status
-ice_alloc_res_lg_act(struct ice_hw *hw, u16 *l_id, u16 num_acts)
-{
-	struct ice_aqc_alloc_free_res_elem *sw_buf;
-	enum ice_status status;
-	u16 buf_len;
-
-	if (num_acts > ICE_MAX_LG_ACT || num_acts == 0)
-		return ICE_ERR_PARAM;
-
-	/* Allocate resource for large action */
-	buf_len = ice_struct_size(sw_buf, elem, 1);
-	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
-	if (!sw_buf)
-		return ICE_ERR_NO_MEMORY;
-
-	sw_buf->num_elems = CPU_TO_LE16(1);
-
-	/* If num_acts is 1, use ICE_AQC_RES_TYPE_WIDE_TABLE_1.
-	 * If num_acts is 2, use ICE_AQC_RES_TYPE_WIDE_TABLE_2.
-	 * If num_acts is greater than 2, then use
-	 * ICE_AQC_RES_TYPE_WIDE_TABLE_4.
-	 * The num_acts cannot exceed 4. This was ensured at the
-	 * beginning of the function.
-	 */
-	if (num_acts == 1)
-		sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_WIDE_TABLE_1);
-	else if (num_acts == 2)
-		sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_WIDE_TABLE_2);
-	else
-		sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_WIDE_TABLE_4);
-
-	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len,
-				       ice_aqc_opc_alloc_res, NULL);
-	if (!status)
-		*l_id = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp);
-
-	ice_free(hw, sw_buf);
-	return status;
-}
-
-/**
- * ice_add_mac_with_sw_marker - add filter with sw marker
- * @hw: pointer to the hardware structure
- * @f_info: filter info structure containing the MAC filter information
- * @sw_marker: sw marker to tag the Rx descriptor with
- */
-enum ice_status
-ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info,
-			   u16 sw_marker)
-{
-	struct ice_fltr_mgmt_list_entry *m_entry;
-	struct ice_fltr_list_entry fl_info;
-	struct ice_sw_recipe *recp_list;
-	struct LIST_HEAD_TYPE l_head;
-	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
-	enum ice_status ret;
-	bool entry_exists;
-	u16 lg_act_id;
-
-	if (f_info->fltr_act != ICE_FWD_TO_VSI)
-		return ICE_ERR_PARAM;
-
-	if (f_info->lkup_type != ICE_SW_LKUP_MAC)
-		return ICE_ERR_PARAM;
-
-	if (sw_marker == ICE_INVAL_SW_MARKER_ID)
-		return ICE_ERR_PARAM;
-
-	if (!ice_is_vsi_valid(hw, f_info->vsi_handle))
-		return ICE_ERR_PARAM;
-	f_info->fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, f_info->vsi_handle);
-
-	/* Add the filter if it doesn't exist so that adding the large
-	 * action always results in an update
-	 */
-
-	INIT_LIST_HEAD(&l_head);
-	fl_info.fltr_info = *f_info;
-	LIST_ADD(&fl_info.list_entry, &l_head);
-
-	entry_exists = false;
-	ret = ice_add_mac_rule(hw, &l_head, hw->switch_info,
-			       hw->port_info->lport);
-	if (ret == ICE_ERR_ALREADY_EXISTS)
-		entry_exists = true;
-	else if (ret)
-		return ret;
-
-	recp_list = &hw->switch_info->recp_list[ICE_SW_LKUP_MAC];
-	rule_lock = &recp_list->filt_rule_lock;
-	ice_acquire_lock(rule_lock);
-	/* Get the bookkeeping entry for the filter */
-	m_entry = ice_find_rule_entry(&recp_list->filt_rules, f_info);
-	if (!m_entry)
-		goto exit_error;
-
-	/* If counter action was enabled for this rule then don't enable
-	 * sw marker large action
-	 */
-	if (m_entry->counter_index != ICE_INVAL_COUNTER_ID) {
-		ret = ICE_ERR_PARAM;
-		goto exit_error;
-	}
-
-	/* if same marker was added before */
-	if (m_entry->sw_marker_id == sw_marker) {
-		ret = ICE_ERR_ALREADY_EXISTS;
-		goto exit_error;
-	}
-
-	/* Allocate a hardware table entry to hold large act. Three actions
-	 * for marker based large action
-	 */
-	ret = ice_alloc_res_lg_act(hw, &lg_act_id, 3);
-	if (ret)
-		goto exit_error;
-
-	if (lg_act_id == ICE_INVAL_LG_ACT_INDEX)
-		goto exit_error;
-
-	/* Update the switch rule to add the marker action */
-	ret = ice_add_marker_act(hw, m_entry, sw_marker, lg_act_id);
-	if (!ret) {
-		ice_release_lock(rule_lock);
-		return ret;
-	}
-
-exit_error:
-	ice_release_lock(rule_lock);
-	/* only remove entry if it did not exist previously */
-	if (!entry_exists)
-		ret = ice_remove_mac(hw, &l_head);
-
-	return ret;
-}
-
-/**
- * ice_add_mac_with_counter - add filter with counter enabled
- * @hw: pointer to the hardware structure
- * @f_info: pointer to filter info structure containing the MAC filter
- *          information
- */
-enum ice_status
-ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info)
-{
-	struct ice_fltr_mgmt_list_entry *m_entry;
-	struct ice_fltr_list_entry fl_info;
-	struct ice_sw_recipe *recp_list;
-	struct LIST_HEAD_TYPE l_head;
-	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
-	enum ice_status ret;
-	bool entry_exist;
-	u16 counter_id;
-	u16 lg_act_id;
-
-	if (f_info->fltr_act != ICE_FWD_TO_VSI)
-		return ICE_ERR_PARAM;
-
-	if (f_info->lkup_type != ICE_SW_LKUP_MAC)
-		return ICE_ERR_PARAM;
-
-	if (!ice_is_vsi_valid(hw, f_info->vsi_handle))
-		return ICE_ERR_PARAM;
-	f_info->fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, f_info->vsi_handle);
-	recp_list = &hw->switch_info->recp_list[ICE_SW_LKUP_MAC];
-
-	entry_exist = false;
-
-	rule_lock = &recp_list->filt_rule_lock;
-
-	/* Add the filter if it doesn't exist so that adding the large
-	 * action always results in an update
-	 */
-	INIT_LIST_HEAD(&l_head);
-
-	fl_info.fltr_info = *f_info;
-	LIST_ADD(&fl_info.list_entry, &l_head);
-
-	ret = ice_add_mac_rule(hw, &l_head, hw->switch_info,
-			       hw->port_info->lport);
-	if (ret == ICE_ERR_ALREADY_EXISTS)
-		entry_exist = true;
-	else if (ret)
-		return ret;
-
-	ice_acquire_lock(rule_lock);
-	m_entry = ice_find_rule_entry(&recp_list->filt_rules, f_info);
-	if (!m_entry) {
-		ret = ICE_ERR_BAD_PTR;
-		goto exit_error;
-	}
-
-	/* Don't enable counter for a filter for which sw marker was enabled */
-	if (m_entry->sw_marker_id != ICE_INVAL_SW_MARKER_ID) {
-		ret = ICE_ERR_PARAM;
-		goto exit_error;
-	}
-
-	/* If a counter was already enabled then don't need to add again */
-	if (m_entry->counter_index != ICE_INVAL_COUNTER_ID) {
-		ret = ICE_ERR_ALREADY_EXISTS;
-		goto exit_error;
-	}
-
-	/* Allocate a hardware table entry to VLAN counter */
-	ret = ice_alloc_vlan_res_counter(hw, &counter_id);
-	if (ret)
-		goto exit_error;
-
-	/* Allocate a hardware table entry to hold large act. Two actions for
-	 * counter based large action
-	 */
-	ret = ice_alloc_res_lg_act(hw, &lg_act_id, 2);
-	if (ret)
-		goto exit_error;
-
-	if (lg_act_id == ICE_INVAL_LG_ACT_INDEX)
-		goto exit_error;
-
-	/* Update the switch rule to add the counter action */
-	ret = ice_add_counter_act(hw, m_entry, counter_id, lg_act_id);
-	if (!ret) {
-		ice_release_lock(rule_lock);
-		return ret;
-	}
-
-exit_error:
-	ice_release_lock(rule_lock);
-	/* only remove entry if it did not exist previously */
-	if (!entry_exist)
-		ret = ice_remove_mac(hw, &l_head);
-
-	return ret;
-}
-
 /* This is mapping table entry that maps every word within a given protocol
  * structure to the real byte offset as per the specification of that
  * protocol header.
@@ -8374,155 +6969,6 @@ ice_rem_adv_rule_by_id(struct ice_hw *hw,
 	return ICE_ERR_DOES_NOT_EXIST;
 }
 
-/**
- * ice_rem_adv_rule_for_vsi - removes existing advanced switch rules for a
- *                       given VSI handle
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle for which we are supposed to remove all the rules.
- *
- * This function is used to remove all the rules for a given VSI and as soon
- * as removing a rule fails, it will return immediately with the error code,
- * else it will return ICE_SUCCESS
- */
-enum ice_status ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle)
-{
-	struct ice_adv_fltr_mgmt_list_entry *list_itr, *tmp_entry;
-	struct ice_vsi_list_map_info *map_info;
-	struct LIST_HEAD_TYPE *list_head;
-	struct ice_adv_rule_info rinfo;
-	struct ice_switch_info *sw;
-	enum ice_status status;
-	u8 rid;
-
-	sw = hw->switch_info;
-	for (rid = 0; rid < ICE_MAX_NUM_RECIPES; rid++) {
-		if (!sw->recp_list[rid].recp_created)
-			continue;
-		if (!sw->recp_list[rid].adv_rule)
-			continue;
-
-		list_head = &sw->recp_list[rid].filt_rules;
-		LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp_entry, list_head,
-					 ice_adv_fltr_mgmt_list_entry,
-					 list_entry) {
-			rinfo = list_itr->rule_info;
-
-			if (rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST) {
-				map_info = list_itr->vsi_list_info;
-				if (!map_info)
-					continue;
-
-				if (!ice_is_bit_set(map_info->vsi_map,
-						    vsi_handle))
-					continue;
-			} else if (rinfo.sw_act.vsi_handle != vsi_handle) {
-				continue;
-			}
-
-			rinfo.sw_act.vsi_handle = vsi_handle;
-			status = ice_rem_adv_rule(hw, list_itr->lkups,
-						  list_itr->lkups_cnt, &rinfo);
-
-			if (status)
-				return status;
-		}
-	}
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_replay_fltr - Replay all the filters stored by a specific list head
- * @hw: pointer to the hardware structure
- * @list_head: list for which filters need to be replayed
- * @recp_id: Recipe ID for which rules need to be replayed
- */
-static enum ice_status
-ice_replay_fltr(struct ice_hw *hw, u8 recp_id, struct LIST_HEAD_TYPE *list_head)
-{
-	struct ice_fltr_mgmt_list_entry *itr;
-	enum ice_status status = ICE_SUCCESS;
-	struct ice_sw_recipe *recp_list;
-	u8 lport = hw->port_info->lport;
-	struct LIST_HEAD_TYPE l_head;
-
-	if (LIST_EMPTY(list_head))
-		return status;
-
-	recp_list = &hw->switch_info->recp_list[recp_id];
-	/* Move entries from the given list_head to a temporary l_head so that
-	 * they can be replayed. Otherwise when trying to re-add the same
-	 * filter, the function will return already exists
-	 */
-	LIST_REPLACE_INIT(list_head, &l_head);
-
-	/* Mark the given list_head empty by reinitializing it so filters
-	 * could be added again by *handler
-	 */
-	LIST_FOR_EACH_ENTRY(itr, &l_head, ice_fltr_mgmt_list_entry,
-			    list_entry) {
-		struct ice_fltr_list_entry f_entry;
-		u16 vsi_handle;
-
-		f_entry.fltr_info = itr->fltr_info;
-		if (itr->vsi_count < 2 && recp_id != ICE_SW_LKUP_VLAN) {
-			status = ice_add_rule_internal(hw, recp_list, lport,
-						       &f_entry);
-			if (status != ICE_SUCCESS)
-				goto end;
-			continue;
-		}
-
-		/* Add a filter per VSI separately */
-		ice_for_each_set_bit(vsi_handle, itr->vsi_list_info->vsi_map,
-				     ICE_MAX_VSI) {
-			if (!ice_is_vsi_valid(hw, vsi_handle))
-				break;
-
-			ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
-			f_entry.fltr_info.vsi_handle = vsi_handle;
-			f_entry.fltr_info.fwd_id.hw_vsi_id =
-				ice_get_hw_vsi_num(hw, vsi_handle);
-			f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
-			if (recp_id == ICE_SW_LKUP_VLAN)
-				status = ice_add_vlan_internal(hw, recp_list,
-							       &f_entry);
-			else
-				status = ice_add_rule_internal(hw, recp_list,
-							       lport,
-							       &f_entry);
-			if (status != ICE_SUCCESS)
-				goto end;
-		}
-	}
-end:
-	/* Clear the filter management list */
-	ice_rem_sw_rule_info(hw, &l_head);
-	return status;
-}
-
-/**
- * ice_replay_all_fltr - replay all filters stored in bookkeeping lists
- * @hw: pointer to the hardware structure
- *
- * NOTE: This function does not clean up partially added filters on error.
- * It is up to caller of the function to issue a reset or fail early.
- */
-enum ice_status ice_replay_all_fltr(struct ice_hw *hw)
-{
-	struct ice_switch_info *sw = hw->switch_info;
-	enum ice_status status = ICE_SUCCESS;
-	u8 i;
-
-	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
-		struct LIST_HEAD_TYPE *head = &sw->recp_list[i].filt_rules;
-
-		status = ice_replay_fltr(hw, i, head);
-		if (status != ICE_SUCCESS)
-			return status;
-	}
-	return status;
-}
-
 /**
  * ice_replay_vsi_fltr - Replay filters for requested VSI
  * @hw: pointer to the hardware structure
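For the promiscuous-mode helpers, only the VSI-level wrappers survive (see the
ice_switch.h hunk below); the per-VLAN and query variants are removed. A minimal
sketch of the retained entry point, assuming the ICE_PROMISC_UCAST_RX and
ICE_PROMISC_MCAST_RX mask bits keep their existing definitions in ice_switch.h:

	/* illustrative sketch only, not part of the patch */
	static enum ice_status
	enable_rx_promisc(struct ice_hw *hw, u16 vsi_handle)
	{
		u8 mask = ICE_PROMISC_UCAST_RX | ICE_PROMISC_MCAST_RX;

		/* placeholder vid; only relevant when VLAN promisc bits are set */
		return ice_set_vsi_promisc(hw, vsi_handle, mask, 0);
	}
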
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index be9b74fd4c..680f8dad38 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -386,30 +386,12 @@ ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
 	       struct ice_sq_cd *cd);
 struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
 void ice_clear_all_vsi_ctx(struct ice_hw *hw);
-enum ice_status
-ice_aq_get_vsi_params(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
-		      struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi,
-			   u16 count, struct ice_mir_rule_buf *mr_buf,
-			   struct ice_sq_cd *cd, u16 *rule_id);
-enum ice_status
-ice_aq_delete_mir_rule(struct ice_hw *hw, u16 rule_id, bool keep_allocd,
-		       struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_get_storm_ctrl(struct ice_hw *hw, u32 *bcast_thresh, u32 *mcast_thresh,
-		      u32 *ctl_bitmask);
-enum ice_status
-ice_aq_set_storm_ctrl(struct ice_hw *hw, u32 bcast_thresh, u32 mcast_thresh,
-		      u32 ctl_bitmask);
 /* Switch config */
 enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw);
 
 enum ice_status
 ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id);
 enum ice_status
-ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id);
-enum ice_status
 ice_alloc_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
 		   u16 *counter_id);
 enum ice_status
@@ -417,27 +399,10 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
 		  u16 counter_id);
 
 /* Switch/bridge related commands */
-enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw);
-enum ice_status ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id);
-enum ice_status ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id);
-enum ice_status
-ice_alloc_sw(struct ice_hw *hw, bool ena_stats, bool shared_res, u16 *sw_id,
-	     u16 *counter_id);
-enum ice_status
-ice_free_sw(struct ice_hw *hw, u16 sw_id, u16 counter_id);
-enum ice_status
-ice_aq_get_res_alloc(struct ice_hw *hw, u16 *num_entries,
-		     struct ice_aqc_get_res_resp_elem *buf, u16 buf_size,
-		     struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries,
-		     struct ice_aqc_res_elem *buf, u16 buf_size, u16 res_type,
-		     bool res_shared, u16 *desc_id, struct ice_sq_cd *cd);
 enum ice_status
 ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
 enum ice_status
 ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
-void ice_rem_all_sw_rules_info(struct ice_hw *hw);
 enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
 enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
 enum ice_status
@@ -445,38 +410,15 @@ ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
 enum ice_status
 ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
 enum ice_status
-ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
-enum ice_status
 ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
 
-enum ice_status
-ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info,
-			   u16 sw_marker);
-enum ice_status
-ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info);
-void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle);
-
 /* Promisc/defport setup for VSIs */
 enum ice_status
-ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
-		 u8 direction);
-enum ice_status
 ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
 		    u16 vid);
 enum ice_status
 ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
 		      u16 vid);
-enum ice_status
-ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
-			 bool rm_vlan_promisc);
-
-/* Get VSIs Promisc/defport settings */
-enum ice_status
-ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
-		    u16 *vid);
-enum ice_status
-ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
-			 u16 *vid);
 
 enum ice_status
 ice_aq_add_recipe(struct ice_hw *hw,
@@ -501,16 +443,12 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
 		 struct ice_rule_query_data *added_entry);
 enum ice_status
-ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle);
-enum ice_status
 ice_rem_adv_rule_by_id(struct ice_hw *hw,
 		       struct ice_rule_query_data *remove_entry);
 enum ice_status
 ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo);
 
-enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
-
 enum ice_status
 ice_init_def_sw_recp(struct ice_hw *hw, struct ice_sw_recipe **recp_list);
 u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle);
diff --git a/drivers/net/igc/base/igc_api.c b/drivers/net/igc/base/igc_api.c
index 2f8c0753cb..efa7a8dd2b 100644
--- a/drivers/net/igc/base/igc_api.c
+++ b/drivers/net/igc/base/igc_api.c
@@ -317,35 +317,6 @@ static s32 igc_get_i2c_ack(struct igc_hw *hw)
 	return status;
 }
 
-/**
- *  igc_set_i2c_bb - Enable I2C bit-bang
- *  @hw: pointer to the HW structure
- *
- *  Enable I2C bit-bang interface
- *
- **/
-s32 igc_set_i2c_bb(struct igc_hw *hw)
-{
-	s32 ret_val = IGC_SUCCESS;
-	u32 ctrl_ext, i2cparams;
-
-	DEBUGFUNC("igc_set_i2c_bb");
-
-	ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
-	ctrl_ext |= IGC_CTRL_I2C_ENA;
-	IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext);
-	IGC_WRITE_FLUSH(hw);
-
-	i2cparams = IGC_READ_REG(hw, IGC_I2CPARAMS);
-	i2cparams |= IGC_I2CBB_EN;
-	i2cparams |= IGC_I2C_DATA_OE_N;
-	i2cparams |= IGC_I2C_CLK_OE_N;
-	IGC_WRITE_REG(hw, IGC_I2CPARAMS, i2cparams);
-	IGC_WRITE_FLUSH(hw);
-
-	return ret_val;
-}
-
 /**
  *  igc_read_i2c_byte_generic - Reads 8 bit word over I2C
  *  @hw: pointer to hardware structure
@@ -622,32 +593,6 @@ s32 igc_init_phy_params(struct igc_hw *hw)
 	return ret_val;
 }
 
-/**
- *  igc_init_mbx_params - Initialize mailbox function pointers
- *  @hw: pointer to the HW structure
- *
- *  This function initializes the function pointers for the PHY
- *  set of functions.  Called by drivers or by igc_setup_init_funcs.
- **/
-s32 igc_init_mbx_params(struct igc_hw *hw)
-{
-	s32 ret_val = IGC_SUCCESS;
-
-	if (hw->mbx.ops.init_params) {
-		ret_val = hw->mbx.ops.init_params(hw);
-		if (ret_val) {
-			DEBUGOUT("Mailbox Initialization Error\n");
-			goto out;
-		}
-	} else {
-		DEBUGOUT("mbx.init_mbx_params was NULL\n");
-		ret_val =  -IGC_ERR_CONFIG;
-	}
-
-out:
-	return ret_val;
-}
-
 /**
  *  igc_set_mac_type - Sets MAC type
  *  @hw: pointer to the HW structure
@@ -998,34 +943,6 @@ s32 igc_get_bus_info(struct igc_hw *hw)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_clear_vfta - Clear VLAN filter table
- *  @hw: pointer to the HW structure
- *
- *  This clears the VLAN filter table on the adapter. This is a function
- *  pointer entry point called by drivers.
- **/
-void igc_clear_vfta(struct igc_hw *hw)
-{
-	if (hw->mac.ops.clear_vfta)
-		hw->mac.ops.clear_vfta(hw);
-}
-
-/**
- *  igc_write_vfta - Write value to VLAN filter table
- *  @hw: pointer to the HW structure
- *  @offset: the 32-bit offset in which to write the value to.
- *  @value: the 32-bit value to write at location offset.
- *
- *  This writes a 32-bit value to a 32-bit offset in the VLAN filter
- *  table. This is a function pointer entry point called by drivers.
- **/
-void igc_write_vfta(struct igc_hw *hw, u32 offset, u32 value)
-{
-	if (hw->mac.ops.write_vfta)
-		hw->mac.ops.write_vfta(hw, offset, value);
-}
-
 /**
  *  igc_update_mc_addr_list - Update Multicast addresses
  *  @hw: pointer to the HW structure
@@ -1043,19 +960,6 @@ void igc_update_mc_addr_list(struct igc_hw *hw, u8 *mc_addr_list,
 						mc_addr_count);
 }
 
-/**
- *  igc_force_mac_fc - Force MAC flow control
- *  @hw: pointer to the HW structure
- *
- *  Force the MAC's flow control settings. Currently no func pointer exists
- *  and all implementations are handled in the generic version of this
- *  function.
- **/
-s32 igc_force_mac_fc(struct igc_hw *hw)
-{
-	return igc_force_mac_fc_generic(hw);
-}
-
 /**
  *  igc_check_for_link - Check/Store link connection
  *  @hw: pointer to the HW structure
@@ -1072,34 +976,6 @@ s32 igc_check_for_link(struct igc_hw *hw)
 	return -IGC_ERR_CONFIG;
 }
 
-/**
- *  igc_check_mng_mode - Check management mode
- *  @hw: pointer to the HW structure
- *
- *  This checks if the adapter has manageability enabled.
- *  This is a function pointer entry point called by drivers.
- **/
-bool igc_check_mng_mode(struct igc_hw *hw)
-{
-	if (hw->mac.ops.check_mng_mode)
-		return hw->mac.ops.check_mng_mode(hw);
-
-	return false;
-}
-
-/**
- *  igc_mng_write_dhcp_info - Writes DHCP info to host interface
- *  @hw: pointer to the HW structure
- *  @buffer: pointer to the host interface
- *  @length: size of the buffer
- *
- *  Writes the DHCP information to the host interface.
- **/
-s32 igc_mng_write_dhcp_info(struct igc_hw *hw, u8 *buffer, u16 length)
-{
-	return igc_mng_write_dhcp_info_generic(hw, buffer, length);
-}
-
 /**
  *  igc_reset_hw - Reset hardware
  *  @hw: pointer to the HW structure
@@ -1146,86 +1022,6 @@ s32 igc_setup_link(struct igc_hw *hw)
 	return -IGC_ERR_CONFIG;
 }
 
-/**
- *  igc_get_speed_and_duplex - Returns current speed and duplex
- *  @hw: pointer to the HW structure
- *  @speed: pointer to a 16-bit value to store the speed
- *  @duplex: pointer to a 16-bit value to store the duplex.
- *
- *  This returns the speed and duplex of the adapter in the two 'out'
- *  variables passed in. This is a function pointer entry point called
- *  by drivers.
- **/
-s32 igc_get_speed_and_duplex(struct igc_hw *hw, u16 *speed, u16 *duplex)
-{
-	if (hw->mac.ops.get_link_up_info)
-		return hw->mac.ops.get_link_up_info(hw, speed, duplex);
-
-	return -IGC_ERR_CONFIG;
-}
-
-/**
- *  igc_setup_led - Configures SW controllable LED
- *  @hw: pointer to the HW structure
- *
- *  This prepares the SW controllable LED for use and saves the current state
- *  of the LED so it can be later restored. This is a function pointer entry
- *  point called by drivers.
- **/
-s32 igc_setup_led(struct igc_hw *hw)
-{
-	if (hw->mac.ops.setup_led)
-		return hw->mac.ops.setup_led(hw);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_cleanup_led - Restores SW controllable LED
- *  @hw: pointer to the HW structure
- *
- *  This restores the SW controllable LED to the value saved off by
- *  igc_setup_led. This is a function pointer entry point called by drivers.
- **/
-s32 igc_cleanup_led(struct igc_hw *hw)
-{
-	if (hw->mac.ops.cleanup_led)
-		return hw->mac.ops.cleanup_led(hw);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_blink_led - Blink SW controllable LED
- *  @hw: pointer to the HW structure
- *
- *  This starts the adapter LED blinking. Request the LED to be setup first
- *  and cleaned up after. This is a function pointer entry point called by
- *  drivers.
- **/
-s32 igc_blink_led(struct igc_hw *hw)
-{
-	if (hw->mac.ops.blink_led)
-		return hw->mac.ops.blink_led(hw);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_id_led_init - store LED configurations in SW
- *  @hw: pointer to the HW structure
- *
- *  Initializes the LED config in SW. This is a function pointer entry point
- *  called by drivers.
- **/
-s32 igc_id_led_init(struct igc_hw *hw)
-{
-	if (hw->mac.ops.id_led_init)
-		return hw->mac.ops.id_led_init(hw);
-
-	return IGC_SUCCESS;
-}
-
 /**
  *  igc_led_on - Turn on SW controllable LED
  *  @hw: pointer to the HW structure
@@ -1256,43 +1052,6 @@ s32 igc_led_off(struct igc_hw *hw)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_reset_adaptive - Reset adaptive IFS
- *  @hw: pointer to the HW structure
- *
- *  Resets the adaptive IFS. Currently no func pointer exists and all
- *  implementations are handled in the generic version of this function.
- **/
-void igc_reset_adaptive(struct igc_hw *hw)
-{
-	igc_reset_adaptive_generic(hw);
-}
-
-/**
- *  igc_update_adaptive - Update adaptive IFS
- *  @hw: pointer to the HW structure
- *
- *  Updates adapter IFS. Currently no func pointer exists and all
- *  implementations are handled in the generic version of this function.
- **/
-void igc_update_adaptive(struct igc_hw *hw)
-{
-	igc_update_adaptive_generic(hw);
-}
-
-/**
- *  igc_disable_pcie_master - Disable PCI-Express master access
- *  @hw: pointer to the HW structure
- *
- *  Disables PCI-Express master access and verifies there are no pending
- *  requests. Currently no func pointer exists and all implementations are
- *  handled in the generic version of this function.
- **/
-s32 igc_disable_pcie_master(struct igc_hw *hw)
-{
-	return igc_disable_pcie_master_generic(hw);
-}
-
 /**
  *  igc_config_collision_dist - Configure collision distance
  *  @hw: pointer to the HW structure
@@ -1322,94 +1081,6 @@ int igc_rar_set(struct igc_hw *hw, u8 *addr, u32 index)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_validate_mdi_setting - Ensures valid MDI/MDIX SW state
- *  @hw: pointer to the HW structure
- *
- *  Ensures that the MDI/MDIX SW state is valid.
- **/
-s32 igc_validate_mdi_setting(struct igc_hw *hw)
-{
-	if (hw->mac.ops.validate_mdi_setting)
-		return hw->mac.ops.validate_mdi_setting(hw);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_hash_mc_addr - Determines address location in multicast table
- *  @hw: pointer to the HW structure
- *  @mc_addr: Multicast address to hash.
- *
- *  This hashes an address to determine its location in the multicast
- *  table. Currently no func pointer exists and all implementations
- *  are handled in the generic version of this function.
- **/
-u32 igc_hash_mc_addr(struct igc_hw *hw, u8 *mc_addr)
-{
-	return igc_hash_mc_addr_generic(hw, mc_addr);
-}
-
-/**
- *  igc_enable_tx_pkt_filtering - Enable packet filtering on TX
- *  @hw: pointer to the HW structure
- *
- *  Enables packet filtering on transmit packets if manageability is enabled
- *  and host interface is enabled.
- *  Currently no func pointer exists and all implementations are handled in the
- *  generic version of this function.
- **/
-bool igc_enable_tx_pkt_filtering(struct igc_hw *hw)
-{
-	return igc_enable_tx_pkt_filtering_generic(hw);
-}
-
-/**
- *  igc_mng_host_if_write - Writes to the manageability host interface
- *  @hw: pointer to the HW structure
- *  @buffer: pointer to the host interface buffer
- *  @length: size of the buffer
- *  @offset: location in the buffer to write to
- *  @sum: sum of the data (not checksum)
- *
- *  This function writes the buffer content at the offset given on the host if.
- *  It also does alignment considerations to do the writes in most efficient
- *  way.  Also fills up the sum of the buffer in *buffer parameter.
- **/
-s32 igc_mng_host_if_write(struct igc_hw *hw, u8 *buffer, u16 length,
-			    u16 offset, u8 *sum)
-{
-	return igc_mng_host_if_write_generic(hw, buffer, length, offset, sum);
-}
-
-/**
- *  igc_mng_write_cmd_header - Writes manageability command header
- *  @hw: pointer to the HW structure
- *  @hdr: pointer to the host interface command header
- *
- *  Writes the command header after does the checksum calculation.
- **/
-s32 igc_mng_write_cmd_header(struct igc_hw *hw,
-			       struct igc_host_mng_command_header *hdr)
-{
-	return igc_mng_write_cmd_header_generic(hw, hdr);
-}
-
-/**
- *  igc_mng_enable_host_if - Checks host interface is enabled
- *  @hw: pointer to the HW structure
- *
- *  Returns IGC_success upon success, else IGC_ERR_HOST_INTERFACE_COMMAND
- *
- *  This function checks whether the HOST IF is enabled for command operation
- *  and also checks whether the previous command is completed.  It busy waits
- *  in case of previous command is not completed.
- **/
-s32 igc_mng_enable_host_if(struct igc_hw *hw)
-{
-	return igc_mng_enable_host_if_generic(hw);
-}
-
 /**
  *  igc_check_reset_block - Verifies PHY can be reset
  *  @hw: pointer to the HW structure
@@ -1425,126 +1096,6 @@ s32 igc_check_reset_block(struct igc_hw *hw)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_read_phy_reg - Reads PHY register
- *  @hw: pointer to the HW structure
- *  @offset: the register to read
- *  @data: the buffer to store the 16-bit read.
- *
- *  Reads the PHY register and returns the value in data.
- *  This is a function pointer entry point called by drivers.
- **/
-s32 igc_read_phy_reg(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	if (hw->phy.ops.read_reg)
-		return hw->phy.ops.read_reg(hw, offset, data);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_write_phy_reg - Writes PHY register
- *  @hw: pointer to the HW structure
- *  @offset: the register to write
- *  @data: the value to write.
- *
- *  Writes the PHY register at offset with the value in data.
- *  This is a function pointer entry point called by drivers.
- **/
-s32 igc_write_phy_reg(struct igc_hw *hw, u32 offset, u16 data)
-{
-	if (hw->phy.ops.write_reg)
-		return hw->phy.ops.write_reg(hw, offset, data);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_release_phy - Generic release PHY
- *  @hw: pointer to the HW structure
- *
- *  Return if silicon family does not require a semaphore when accessing the
- *  PHY.
- **/
-void igc_release_phy(struct igc_hw *hw)
-{
-	if (hw->phy.ops.release)
-		hw->phy.ops.release(hw);
-}
-
-/**
- *  igc_acquire_phy - Generic acquire PHY
- *  @hw: pointer to the HW structure
- *
- *  Return success if silicon family does not require a semaphore when
- *  accessing the PHY.
- **/
-s32 igc_acquire_phy(struct igc_hw *hw)
-{
-	if (hw->phy.ops.acquire)
-		return hw->phy.ops.acquire(hw);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_cfg_on_link_up - Configure PHY upon link up
- *  @hw: pointer to the HW structure
- **/
-s32 igc_cfg_on_link_up(struct igc_hw *hw)
-{
-	if (hw->phy.ops.cfg_on_link_up)
-		return hw->phy.ops.cfg_on_link_up(hw);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_read_kmrn_reg - Reads register using Kumeran interface
- *  @hw: pointer to the HW structure
- *  @offset: the register to read
- *  @data: the location to store the 16-bit value read.
- *
- *  Reads a register out of the Kumeran interface. Currently no func pointer
- *  exists and all implementations are handled in the generic version of
- *  this function.
- **/
-s32 igc_read_kmrn_reg(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	return igc_read_kmrn_reg_generic(hw, offset, data);
-}
-
-/**
- *  igc_write_kmrn_reg - Writes register using Kumeran interface
- *  @hw: pointer to the HW structure
- *  @offset: the register to write
- *  @data: the value to write.
- *
- *  Writes a register to the Kumeran interface. Currently no func pointer
- *  exists and all implementations are handled in the generic version of
- *  this function.
- **/
-s32 igc_write_kmrn_reg(struct igc_hw *hw, u32 offset, u16 data)
-{
-	return igc_write_kmrn_reg_generic(hw, offset, data);
-}
-
-/**
- *  igc_get_cable_length - Retrieves cable length estimation
- *  @hw: pointer to the HW structure
- *
- *  This function estimates the cable length and stores them in
- *  hw->phy.min_length and hw->phy.max_length. This is a function pointer
- *  entry point called by drivers.
- **/
-s32 igc_get_cable_length(struct igc_hw *hw)
-{
-	if (hw->phy.ops.get_cable_length)
-		return hw->phy.ops.get_cable_length(hw);
-
-	return IGC_SUCCESS;
-}
-
 /**
  *  igc_get_phy_info - Retrieves PHY information from registers
  *  @hw: pointer to the HW structure
@@ -1576,65 +1127,6 @@ s32 igc_phy_hw_reset(struct igc_hw *hw)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_phy_commit - Soft PHY reset
- *  @hw: pointer to the HW structure
- *
- *  Performs a soft PHY reset on those that apply. This is a function pointer
- *  entry point called by drivers.
- **/
-s32 igc_phy_commit(struct igc_hw *hw)
-{
-	if (hw->phy.ops.commit)
-		return hw->phy.ops.commit(hw);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_set_d0_lplu_state - Sets low power link up state for D0
- *  @hw: pointer to the HW structure
- *  @active: boolean used to enable/disable lplu
- *
- *  Success returns 0, Failure returns 1
- *
- *  The low power link up (lplu) state is set to the power management level D0
- *  and SmartSpeed is disabled when active is true, else clear lplu for D0
- *  and enable Smartspeed.  LPLU and Smartspeed are mutually exclusive.  LPLU
- *  is used during Dx states where the power conservation is most important.
- *  During driver activity, SmartSpeed should be enabled so performance is
- *  maintained.  This is a function pointer entry point called by drivers.
- **/
-s32 igc_set_d0_lplu_state(struct igc_hw *hw, bool active)
-{
-	if (hw->phy.ops.set_d0_lplu_state)
-		return hw->phy.ops.set_d0_lplu_state(hw, active);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_set_d3_lplu_state - Sets low power link up state for D3
- *  @hw: pointer to the HW structure
- *  @active: boolean used to enable/disable lplu
- *
- *  Success returns 0, Failure returns 1
- *
- *  The low power link up (lplu) state is set to the power management level D3
- *  and SmartSpeed is disabled when active is true, else clear lplu for D3
- *  and enable Smartspeed.  LPLU and Smartspeed are mutually exclusive.  LPLU
- *  is used during Dx states where the power conservation is most important.
- *  During driver activity, SmartSpeed should be enabled so performance is
- *  maintained.  This is a function pointer entry point called by drivers.
- **/
-s32 igc_set_d3_lplu_state(struct igc_hw *hw, bool active)
-{
-	if (hw->phy.ops.set_d3_lplu_state)
-		return hw->phy.ops.set_d3_lplu_state(hw, active);
-
-	return IGC_SUCCESS;
-}
-
 /**
  *  igc_read_mac_addr - Reads MAC address
  *  @hw: pointer to the HW structure
@@ -1651,52 +1143,6 @@ s32 igc_read_mac_addr(struct igc_hw *hw)
 	return igc_read_mac_addr_generic(hw);
 }
 
-/**
- *  igc_read_pba_string - Read device part number string
- *  @hw: pointer to the HW structure
- *  @pba_num: pointer to device part number
- *  @pba_num_size: size of part number buffer
- *
- *  Reads the product board assembly (PBA) number from the EEPROM and stores
- *  the value in pba_num.
- *  Currently no func pointer exists and all implementations are handled in the
- *  generic version of this function.
- **/
-s32 igc_read_pba_string(struct igc_hw *hw, u8 *pba_num, u32 pba_num_size)
-{
-	return igc_read_pba_string_generic(hw, pba_num, pba_num_size);
-}
-
-/**
- *  igc_read_pba_length - Read device part number string length
- *  @hw: pointer to the HW structure
- *  @pba_num_size: size of part number buffer
- *
- *  Reads the product board assembly (PBA) number length from the EEPROM and
- *  stores the value in pba_num.
- *  Currently no func pointer exists and all implementations are handled in the
- *  generic version of this function.
- **/
-s32 igc_read_pba_length(struct igc_hw *hw, u32 *pba_num_size)
-{
-	return igc_read_pba_length_generic(hw, pba_num_size);
-}
-
-/**
- *  igc_read_pba_num - Read device part number
- *  @hw: pointer to the HW structure
- *  @pba_num: pointer to device part number
- *
- *  Reads the product board assembly (PBA) number from the EEPROM and stores
- *  the value in pba_num.
- *  Currently no func pointer exists and all implementations are handled in the
- *  generic version of this function.
- **/
-s32 igc_read_pba_num(struct igc_hw *hw, u32 *pba_num)
-{
-	return igc_read_pba_num_generic(hw, pba_num);
-}
-
 /**
  *  igc_validate_nvm_checksum - Verifies NVM (EEPROM) checksum
  *  @hw: pointer to the HW structure
@@ -1712,34 +1158,6 @@ s32 igc_validate_nvm_checksum(struct igc_hw *hw)
 	return -IGC_ERR_CONFIG;
 }
 
-/**
- *  igc_update_nvm_checksum - Updates NVM (EEPROM) checksum
- *  @hw: pointer to the HW structure
- *
- *  Updates the NVM checksum. Currently no func pointer exists and all
- *  implementations are handled in the generic version of this function.
- **/
-s32 igc_update_nvm_checksum(struct igc_hw *hw)
-{
-	if (hw->nvm.ops.update)
-		return hw->nvm.ops.update(hw);
-
-	return -IGC_ERR_CONFIG;
-}
-
-/**
- *  igc_reload_nvm - Reloads EEPROM
- *  @hw: pointer to the HW structure
- *
- *  Reloads the EEPROM by setting the "Reinitialize from EEPROM" bit in the
- *  extended control register.
- **/
-void igc_reload_nvm(struct igc_hw *hw)
-{
-	if (hw->nvm.ops.reload)
-		hw->nvm.ops.reload(hw);
-}
-
 /**
  *  igc_read_nvm - Reads NVM (EEPROM)
  *  @hw: pointer to the HW structure
@@ -1776,22 +1194,6 @@ s32 igc_write_nvm(struct igc_hw *hw, u16 offset, u16 words, u16 *data)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_write_8bit_ctrl_reg - Writes 8bit Control register
- *  @hw: pointer to the HW structure
- *  @reg: 32bit register offset
- *  @offset: the register to write
- *  @data: the value to write.
- *
- *  Writes the PHY register at offset with the value in data.
- *  This is a function pointer entry point called by drivers.
- **/
-s32 igc_write_8bit_ctrl_reg(struct igc_hw *hw, u32 reg, u32 offset,
-			      u8 data)
-{
-	return igc_write_8bit_ctrl_reg_generic(hw, reg, offset, data);
-}
-
 /**
  * igc_power_up_phy - Restores link in case of PHY power down
  * @hw: pointer to the HW structure
diff --git a/drivers/net/igc/base/igc_api.h b/drivers/net/igc/base/igc_api.h
index 00681ee4f8..6bb22912dd 100644
--- a/drivers/net/igc/base/igc_api.h
+++ b/drivers/net/igc/base/igc_api.h
@@ -19,7 +19,6 @@
 #define IGC_I2C_T_SU_STO	4
 #define IGC_I2C_T_BUF		5
 
-s32 igc_set_i2c_bb(struct igc_hw *hw);
 s32 igc_read_i2c_byte_generic(struct igc_hw *hw, u8 byte_offset,
 				u8 dev_addr, u8 *data);
 s32 igc_write_i2c_byte_generic(struct igc_hw *hw, u8 byte_offset,
@@ -46,66 +45,26 @@ s32 igc_setup_init_funcs(struct igc_hw *hw, bool init_device);
 s32 igc_init_mac_params(struct igc_hw *hw);
 s32 igc_init_nvm_params(struct igc_hw *hw);
 s32 igc_init_phy_params(struct igc_hw *hw);
-s32 igc_init_mbx_params(struct igc_hw *hw);
 s32 igc_get_bus_info(struct igc_hw *hw);
-void igc_clear_vfta(struct igc_hw *hw);
-void igc_write_vfta(struct igc_hw *hw, u32 offset, u32 value);
-s32 igc_force_mac_fc(struct igc_hw *hw);
 s32 igc_check_for_link(struct igc_hw *hw);
 s32 igc_reset_hw(struct igc_hw *hw);
 s32 igc_init_hw(struct igc_hw *hw);
 s32 igc_setup_link(struct igc_hw *hw);
-s32 igc_get_speed_and_duplex(struct igc_hw *hw, u16 *speed, u16 *duplex);
-s32 igc_disable_pcie_master(struct igc_hw *hw);
 void igc_config_collision_dist(struct igc_hw *hw);
 int igc_rar_set(struct igc_hw *hw, u8 *addr, u32 index);
-u32 igc_hash_mc_addr(struct igc_hw *hw, u8 *mc_addr);
 void igc_update_mc_addr_list(struct igc_hw *hw, u8 *mc_addr_list,
 			       u32 mc_addr_count);
-s32 igc_setup_led(struct igc_hw *hw);
-s32 igc_cleanup_led(struct igc_hw *hw);
 s32 igc_check_reset_block(struct igc_hw *hw);
-s32 igc_blink_led(struct igc_hw *hw);
 s32 igc_led_on(struct igc_hw *hw);
 s32 igc_led_off(struct igc_hw *hw);
-s32 igc_id_led_init(struct igc_hw *hw);
-void igc_reset_adaptive(struct igc_hw *hw);
-void igc_update_adaptive(struct igc_hw *hw);
-s32 igc_get_cable_length(struct igc_hw *hw);
-s32 igc_validate_mdi_setting(struct igc_hw *hw);
-s32 igc_read_phy_reg(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_write_phy_reg(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_write_8bit_ctrl_reg(struct igc_hw *hw, u32 reg, u32 offset,
-			      u8 data);
 s32 igc_get_phy_info(struct igc_hw *hw);
-void igc_release_phy(struct igc_hw *hw);
-s32 igc_acquire_phy(struct igc_hw *hw);
-s32 igc_cfg_on_link_up(struct igc_hw *hw);
 s32 igc_phy_hw_reset(struct igc_hw *hw);
-s32 igc_phy_commit(struct igc_hw *hw);
 void igc_power_up_phy(struct igc_hw *hw);
 void igc_power_down_phy(struct igc_hw *hw);
 s32 igc_read_mac_addr(struct igc_hw *hw);
-s32 igc_read_pba_num(struct igc_hw *hw, u32 *part_num);
-s32 igc_read_pba_string(struct igc_hw *hw, u8 *pba_num, u32 pba_num_size);
-s32 igc_read_pba_length(struct igc_hw *hw, u32 *pba_num_size);
-void igc_reload_nvm(struct igc_hw *hw);
-s32 igc_update_nvm_checksum(struct igc_hw *hw);
 s32 igc_validate_nvm_checksum(struct igc_hw *hw);
 s32 igc_read_nvm(struct igc_hw *hw, u16 offset, u16 words, u16 *data);
-s32 igc_read_kmrn_reg(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_write_kmrn_reg(struct igc_hw *hw, u32 offset, u16 data);
 s32 igc_write_nvm(struct igc_hw *hw, u16 offset, u16 words, u16 *data);
-s32 igc_set_d3_lplu_state(struct igc_hw *hw, bool active);
-s32 igc_set_d0_lplu_state(struct igc_hw *hw, bool active);
-bool igc_check_mng_mode(struct igc_hw *hw);
-bool igc_enable_tx_pkt_filtering(struct igc_hw *hw);
-s32 igc_mng_enable_host_if(struct igc_hw *hw);
-s32 igc_mng_host_if_write(struct igc_hw *hw, u8 *buffer, u16 length,
-			    u16 offset, u8 *sum);
-s32 igc_mng_write_cmd_header(struct igc_hw *hw,
-			       struct igc_host_mng_command_header *hdr);
-s32 igc_mng_write_dhcp_info(struct igc_hw *hw, u8 *buffer, u16 length);
 u32  igc_translate_register_82542(u32 reg);
 
 #endif /* _IGC_API_H_ */
diff --git a/drivers/net/igc/base/igc_base.c b/drivers/net/igc/base/igc_base.c
index 1e8b908902..55aca5ad63 100644
--- a/drivers/net/igc/base/igc_base.c
+++ b/drivers/net/igc/base/igc_base.c
@@ -110,81 +110,3 @@ void igc_power_down_phy_copper_base(struct igc_hw *hw)
 	if (!phy->ops.check_reset_block(hw))
 		igc_power_down_phy_copper(hw);
 }
-
-/**
- *  igc_rx_fifo_flush_base - Clean Rx FIFO after Rx enable
- *  @hw: pointer to the HW structure
- *
- *  After Rx enable, if manageability is enabled then there is likely some
- *  bad data at the start of the FIFO and possibly in the DMA FIFO.  This
- *  function clears the FIFOs and flushes any packets that came in as Rx was
- *  being enabled.
- **/
-void igc_rx_fifo_flush_base(struct igc_hw *hw)
-{
-	u32 rctl, rlpml, rxdctl[4], rfctl, temp_rctl, rx_enabled;
-	int i, ms_wait;
-
-	DEBUGFUNC("igc_rx_fifo_flush_base");
-
-	/* disable IPv6 options as per hardware errata */
-	rfctl = IGC_READ_REG(hw, IGC_RFCTL);
-	rfctl |= IGC_RFCTL_IPV6_EX_DIS;
-	IGC_WRITE_REG(hw, IGC_RFCTL, rfctl);
-
-	if (!(IGC_READ_REG(hw, IGC_MANC) & IGC_MANC_RCV_TCO_EN))
-		return;
-
-	/* Disable all Rx queues */
-	for (i = 0; i < 4; i++) {
-		rxdctl[i] = IGC_READ_REG(hw, IGC_RXDCTL(i));
-		IGC_WRITE_REG(hw, IGC_RXDCTL(i),
-				rxdctl[i] & ~IGC_RXDCTL_QUEUE_ENABLE);
-	}
-	/* Poll all queues to verify they have shut down */
-	for (ms_wait = 0; ms_wait < 10; ms_wait++) {
-		msec_delay(1);
-		rx_enabled = 0;
-		for (i = 0; i < 4; i++)
-			rx_enabled |= IGC_READ_REG(hw, IGC_RXDCTL(i));
-		if (!(rx_enabled & IGC_RXDCTL_QUEUE_ENABLE))
-			break;
-	}
-
-	if (ms_wait == 10)
-		DEBUGOUT("Queue disable timed out after 10ms\n");
-
-	/* Clear RLPML, RCTL.SBP, RFCTL.LEF, and set RCTL.LPE so that all
-	 * incoming packets are rejected.  Set enable and wait 2ms so that
-	 * any packet that was coming in as RCTL.EN was set is flushed
-	 */
-	IGC_WRITE_REG(hw, IGC_RFCTL, rfctl & ~IGC_RFCTL_LEF);
-
-	rlpml = IGC_READ_REG(hw, IGC_RLPML);
-	IGC_WRITE_REG(hw, IGC_RLPML, 0);
-
-	rctl = IGC_READ_REG(hw, IGC_RCTL);
-	temp_rctl = rctl & ~(IGC_RCTL_EN | IGC_RCTL_SBP);
-	temp_rctl |= IGC_RCTL_LPE;
-
-	IGC_WRITE_REG(hw, IGC_RCTL, temp_rctl);
-	IGC_WRITE_REG(hw, IGC_RCTL, temp_rctl | IGC_RCTL_EN);
-	IGC_WRITE_FLUSH(hw);
-	msec_delay(2);
-
-	/* Enable Rx queues that were previously enabled and restore our
-	 * previous state
-	 */
-	for (i = 0; i < 4; i++)
-		IGC_WRITE_REG(hw, IGC_RXDCTL(i), rxdctl[i]);
-	IGC_WRITE_REG(hw, IGC_RCTL, rctl);
-	IGC_WRITE_FLUSH(hw);
-
-	IGC_WRITE_REG(hw, IGC_RLPML, rlpml);
-	IGC_WRITE_REG(hw, IGC_RFCTL, rfctl);
-
-	/* Flush receive errors generated by workaround */
-	IGC_READ_REG(hw, IGC_ROC);
-	IGC_READ_REG(hw, IGC_RNBC);
-	IGC_READ_REG(hw, IGC_MPC);
-}
diff --git a/drivers/net/igc/base/igc_base.h b/drivers/net/igc/base/igc_base.h
index 5f342af7ee..19b549ae45 100644
--- a/drivers/net/igc/base/igc_base.h
+++ b/drivers/net/igc/base/igc_base.h
@@ -8,7 +8,6 @@
 /* forward declaration */
 s32 igc_init_hw_base(struct igc_hw *hw);
 void igc_power_down_phy_copper_base(struct igc_hw *hw);
-void igc_rx_fifo_flush_base(struct igc_hw *hw);
 s32 igc_acquire_phy_base(struct igc_hw *hw);
 void igc_release_phy_base(struct igc_hw *hw);
 
diff --git a/drivers/net/igc/base/igc_hw.h b/drivers/net/igc/base/igc_hw.h
index be38fafa5f..55d63b211c 100644
--- a/drivers/net/igc/base/igc_hw.h
+++ b/drivers/net/igc/base/igc_hw.h
@@ -1041,10 +1041,7 @@ struct igc_hw {
 #include "igc_base.h"
 
 /* These functions must be implemented by drivers */
-void igc_pci_clear_mwi(struct igc_hw *hw);
-void igc_pci_set_mwi(struct igc_hw *hw);
 s32  igc_read_pcie_cap_reg(struct igc_hw *hw, u32 reg, u16 *value);
-s32  igc_write_pcie_cap_reg(struct igc_hw *hw, u32 reg, u16 *value);
 void igc_read_pci_cfg(struct igc_hw *hw, u32 reg, u16 *value);
 void igc_write_pci_cfg(struct igc_hw *hw, u32 reg, u16 *value);
 
diff --git a/drivers/net/igc/base/igc_i225.c b/drivers/net/igc/base/igc_i225.c
index 060b2f8f93..01d2c7487d 100644
--- a/drivers/net/igc/base/igc_i225.c
+++ b/drivers/net/igc/base/igc_i225.c
@@ -590,102 +590,6 @@ static s32 __igc_write_nvm_srwr(struct igc_hw *hw, u16 offset, u16 words,
 	return ret_val;
 }
 
-/* igc_read_invm_version_i225 - Reads iNVM version and image type
- * @hw: pointer to the HW structure
- * @invm_ver: version structure for the version read
- *
- * Reads iNVM version and image type.
- */
-s32 igc_read_invm_version_i225(struct igc_hw *hw,
-				 struct igc_fw_version *invm_ver)
-{
-	u32 *record = NULL;
-	u32 *next_record = NULL;
-	u32 i = 0;
-	u32 invm_dword = 0;
-	u32 invm_blocks = IGC_INVM_SIZE - (IGC_INVM_ULT_BYTES_SIZE /
-					     IGC_INVM_RECORD_SIZE_IN_BYTES);
-	u32 buffer[IGC_INVM_SIZE];
-	s32 status = -IGC_ERR_INVM_VALUE_NOT_FOUND;
-	u16 version = 0;
-
-	DEBUGFUNC("igc_read_invm_version_i225");
-
-	/* Read iNVM memory */
-	for (i = 0; i < IGC_INVM_SIZE; i++) {
-		invm_dword = IGC_READ_REG(hw, IGC_INVM_DATA_REG(i));
-		buffer[i] = invm_dword;
-	}
-
-	/* Read version number */
-	for (i = 1; i < invm_blocks; i++) {
-		record = &buffer[invm_blocks - i];
-		next_record = &buffer[invm_blocks - i + 1];
-
-		/* Check if we have first version location used */
-		if (i == 1 && (*record & IGC_INVM_VER_FIELD_ONE) == 0) {
-			version = 0;
-			status = IGC_SUCCESS;
-			break;
-		}
-		/* Check if we have second version location used */
-		else if ((i == 1) &&
-			 ((*record & IGC_INVM_VER_FIELD_TWO) == 0)) {
-			version = (*record & IGC_INVM_VER_FIELD_ONE) >> 3;
-			status = IGC_SUCCESS;
-			break;
-		}
-		/* Check if we have odd version location
-		 * used and it is the last one used
-		 */
-		else if ((((*record & IGC_INVM_VER_FIELD_ONE) == 0) &&
-			  ((*record & 0x3) == 0)) || (((*record & 0x3) != 0) &&
-			   (i != 1))) {
-			version = (*next_record & IGC_INVM_VER_FIELD_TWO)
-				  >> 13;
-			status = IGC_SUCCESS;
-			break;
-		}
-		/* Check if we have even version location
-		 * used and it is the last one used
-		 */
-		else if (((*record & IGC_INVM_VER_FIELD_TWO) == 0) &&
-			 ((*record & 0x3) == 0)) {
-			version = (*record & IGC_INVM_VER_FIELD_ONE) >> 3;
-			status = IGC_SUCCESS;
-			break;
-		}
-	}
-
-	if (status == IGC_SUCCESS) {
-		invm_ver->invm_major = (version & IGC_INVM_MAJOR_MASK)
-					>> IGC_INVM_MAJOR_SHIFT;
-		invm_ver->invm_minor = version & IGC_INVM_MINOR_MASK;
-	}
-	/* Read Image Type */
-	for (i = 1; i < invm_blocks; i++) {
-		record = &buffer[invm_blocks - i];
-		next_record = &buffer[invm_blocks - i + 1];
-
-		/* Check if we have image type in first location used */
-		if (i == 1 && (*record & IGC_INVM_IMGTYPE_FIELD) == 0) {
-			invm_ver->invm_img_type = 0;
-			status = IGC_SUCCESS;
-			break;
-		}
-		/* Check if we have image type in first location used */
-		else if ((((*record & 0x3) == 0) &&
-			  ((*record & IGC_INVM_IMGTYPE_FIELD) == 0)) ||
-			    ((((*record & 0x3) != 0) && (i != 1)))) {
-			invm_ver->invm_img_type =
-				(*next_record & IGC_INVM_IMGTYPE_FIELD) >> 23;
-			status = IGC_SUCCESS;
-			break;
-		}
-	}
-	return status;
-}
-
 /* igc_validate_nvm_checksum_i225 - Validate EEPROM checksum
  * @hw: pointer to the HW structure
  *
@@ -1313,66 +1217,3 @@ s32 igc_set_d3_lplu_state_i225(struct igc_hw *hw, bool active)
 	IGC_WRITE_REG(hw, IGC_I225_PHPM, data);
 	return IGC_SUCCESS;
 }
-
-/**
- *  igc_set_eee_i225 - Enable/disable EEE support
- *  @hw: pointer to the HW structure
- *  @adv2p5G: boolean flag enabling 2.5G EEE advertisement
- *  @adv1G: boolean flag enabling 1G EEE advertisement
- *  @adv100M: boolean flag enabling 100M EEE advertisement
- *
- *  Enable/disable EEE based on setting in dev_spec structure.
- *
- **/
-s32 igc_set_eee_i225(struct igc_hw *hw, bool adv2p5G, bool adv1G,
-		       bool adv100M)
-{
-	u32 ipcnfg, eeer;
-
-	DEBUGFUNC("igc_set_eee_i225");
-
-	if (hw->mac.type != igc_i225 ||
-	    hw->phy.media_type != igc_media_type_copper)
-		goto out;
-	ipcnfg = IGC_READ_REG(hw, IGC_IPCNFG);
-	eeer = IGC_READ_REG(hw, IGC_EEER);
-
-	/* enable or disable per user setting */
-	if (!(hw->dev_spec._i225.eee_disable)) {
-		u32 eee_su = IGC_READ_REG(hw, IGC_EEE_SU);
-
-		if (adv100M)
-			ipcnfg |= IGC_IPCNFG_EEE_100M_AN;
-		else
-			ipcnfg &= ~IGC_IPCNFG_EEE_100M_AN;
-
-		if (adv1G)
-			ipcnfg |= IGC_IPCNFG_EEE_1G_AN;
-		else
-			ipcnfg &= ~IGC_IPCNFG_EEE_1G_AN;
-
-		if (adv2p5G)
-			ipcnfg |= IGC_IPCNFG_EEE_2_5G_AN;
-		else
-			ipcnfg &= ~IGC_IPCNFG_EEE_2_5G_AN;
-
-		eeer |= (IGC_EEER_TX_LPI_EN | IGC_EEER_RX_LPI_EN |
-			IGC_EEER_LPI_FC);
-
-		/* This bit should not be set in normal operation. */
-		if (eee_su & IGC_EEE_SU_LPI_CLK_STP)
-			DEBUGOUT("LPI Clock Stop Bit should not be set!\n");
-	} else {
-		ipcnfg &= ~(IGC_IPCNFG_EEE_2_5G_AN | IGC_IPCNFG_EEE_1G_AN |
-			IGC_IPCNFG_EEE_100M_AN);
-		eeer &= ~(IGC_EEER_TX_LPI_EN | IGC_EEER_RX_LPI_EN |
-			IGC_EEER_LPI_FC);
-	}
-	IGC_WRITE_REG(hw, IGC_IPCNFG, ipcnfg);
-	IGC_WRITE_REG(hw, IGC_EEER, eeer);
-	IGC_READ_REG(hw, IGC_IPCNFG);
-	IGC_READ_REG(hw, IGC_EEER);
-out:
-
-	return IGC_SUCCESS;
-}
diff --git a/drivers/net/igc/base/igc_i225.h b/drivers/net/igc/base/igc_i225.h
index c61ece0e82..ff17a2a9c9 100644
--- a/drivers/net/igc/base/igc_i225.h
+++ b/drivers/net/igc/base/igc_i225.h
@@ -13,8 +13,6 @@ s32 igc_write_nvm_srwr_i225(struct igc_hw *hw, u16 offset,
 			      u16 words, u16 *data);
 s32 igc_read_nvm_srrd_i225(struct igc_hw *hw, u16 offset,
 			     u16 words, u16 *data);
-s32 igc_read_invm_version_i225(struct igc_hw *hw,
-				 struct igc_fw_version *invm_ver);
 s32 igc_set_flsw_flash_burst_counter_i225(struct igc_hw *hw,
 					    u32 burst_counter);
 s32 igc_write_erase_flash_command_i225(struct igc_hw *hw, u32 opcode,
@@ -26,8 +24,6 @@ s32 igc_init_hw_i225(struct igc_hw *hw);
 s32 igc_setup_copper_link_i225(struct igc_hw *hw);
 s32 igc_set_d0_lplu_state_i225(struct igc_hw *hw, bool active);
 s32 igc_set_d3_lplu_state_i225(struct igc_hw *hw, bool active);
-s32 igc_set_eee_i225(struct igc_hw *hw, bool adv2p5G, bool adv1G,
-		       bool adv100M);
 
 #define ID_LED_DEFAULT_I225		((ID_LED_OFF1_ON2  << 8) | \
 					 (ID_LED_DEF1_DEF2 <<  4) | \
diff --git a/drivers/net/igc/base/igc_mac.c b/drivers/net/igc/base/igc_mac.c
index 3cd6506e5e..cef85d0b17 100644
--- a/drivers/net/igc/base/igc_mac.c
+++ b/drivers/net/igc/base/igc_mac.c
@@ -122,121 +122,6 @@ void igc_null_write_vfta(struct igc_hw IGC_UNUSEDARG * hw,
 	UNREFERENCED_3PARAMETER(hw, a, b);
 }
 
-/**
- *  igc_null_rar_set - No-op function, return 0
- *  @hw: pointer to the HW structure
- *  @h: dummy variable
- *  @a: dummy variable
- **/
-int igc_null_rar_set(struct igc_hw IGC_UNUSEDARG * hw,
-			u8 IGC_UNUSEDARG * h, u32 IGC_UNUSEDARG a)
-{
-	DEBUGFUNC("igc_null_rar_set");
-	UNREFERENCED_3PARAMETER(hw, h, a);
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_get_bus_info_pci_generic - Get PCI(x) bus information
- *  @hw: pointer to the HW structure
- *
- *  Determines and stores the system bus information for a particular
- *  network interface.  The following bus information is determined and stored:
- *  bus speed, bus width, type (PCI/PCIx), and PCI(-x) function.
- **/
-s32 igc_get_bus_info_pci_generic(struct igc_hw *hw)
-{
-	struct igc_mac_info *mac = &hw->mac;
-	struct igc_bus_info *bus = &hw->bus;
-	u32 status = IGC_READ_REG(hw, IGC_STATUS);
-	s32 ret_val = IGC_SUCCESS;
-
-	DEBUGFUNC("igc_get_bus_info_pci_generic");
-
-	/* PCI or PCI-X? */
-	bus->type = (status & IGC_STATUS_PCIX_MODE)
-			? igc_bus_type_pcix
-			: igc_bus_type_pci;
-
-	/* Bus speed */
-	if (bus->type == igc_bus_type_pci) {
-		bus->speed = (status & IGC_STATUS_PCI66)
-			     ? igc_bus_speed_66
-			     : igc_bus_speed_33;
-	} else {
-		switch (status & IGC_STATUS_PCIX_SPEED) {
-		case IGC_STATUS_PCIX_SPEED_66:
-			bus->speed = igc_bus_speed_66;
-			break;
-		case IGC_STATUS_PCIX_SPEED_100:
-			bus->speed = igc_bus_speed_100;
-			break;
-		case IGC_STATUS_PCIX_SPEED_133:
-			bus->speed = igc_bus_speed_133;
-			break;
-		default:
-			bus->speed = igc_bus_speed_reserved;
-			break;
-		}
-	}
-
-	/* Bus width */
-	bus->width = (status & IGC_STATUS_BUS64)
-		     ? igc_bus_width_64
-		     : igc_bus_width_32;
-
-	/* Which PCI(-X) function? */
-	mac->ops.set_lan_id(hw);
-
-	return ret_val;
-}
-
-/**
- *  igc_get_bus_info_pcie_generic - Get PCIe bus information
- *  @hw: pointer to the HW structure
- *
- *  Determines and stores the system bus information for a particular
- *  network interface.  The following bus information is determined and stored:
- *  bus speed, bus width, type (PCIe), and PCIe function.
- **/
-s32 igc_get_bus_info_pcie_generic(struct igc_hw *hw)
-{
-	struct igc_mac_info *mac = &hw->mac;
-	struct igc_bus_info *bus = &hw->bus;
-	s32 ret_val;
-	u16 pcie_link_status;
-
-	DEBUGFUNC("igc_get_bus_info_pcie_generic");
-
-	bus->type = igc_bus_type_pci_express;
-
-	ret_val = igc_read_pcie_cap_reg(hw, PCIE_LINK_STATUS,
-					  &pcie_link_status);
-	if (ret_val) {
-		bus->width = igc_bus_width_unknown;
-		bus->speed = igc_bus_speed_unknown;
-	} else {
-		switch (pcie_link_status & PCIE_LINK_SPEED_MASK) {
-		case PCIE_LINK_SPEED_2500:
-			bus->speed = igc_bus_speed_2500;
-			break;
-		case PCIE_LINK_SPEED_5000:
-			bus->speed = igc_bus_speed_5000;
-			break;
-		default:
-			bus->speed = igc_bus_speed_unknown;
-			break;
-		}
-
-		bus->width = (enum igc_bus_width)((pcie_link_status &
-			      PCIE_LINK_WIDTH_MASK) >> PCIE_LINK_WIDTH_SHIFT);
-	}
-
-	mac->ops.set_lan_id(hw);
-
-	return IGC_SUCCESS;
-}
-
 /**
  *  igc_set_lan_id_multi_port_pcie - Set LAN id for PCIe multiple port devices
  *
@@ -257,60 +142,6 @@ static void igc_set_lan_id_multi_port_pcie(struct igc_hw *hw)
 	bus->func = (reg & IGC_STATUS_FUNC_MASK) >> IGC_STATUS_FUNC_SHIFT;
 }
 
-/**
- *  igc_set_lan_id_multi_port_pci - Set LAN id for PCI multiple port devices
- *  @hw: pointer to the HW structure
- *
- *  Determines the LAN function id by reading PCI config space.
- **/
-void igc_set_lan_id_multi_port_pci(struct igc_hw *hw)
-{
-	struct igc_bus_info *bus = &hw->bus;
-	u16 pci_header_type;
-	u32 status;
-
-	igc_read_pci_cfg(hw, PCI_HEADER_TYPE_REGISTER, &pci_header_type);
-	if (pci_header_type & PCI_HEADER_TYPE_MULTIFUNC) {
-		status = IGC_READ_REG(hw, IGC_STATUS);
-		bus->func = (status & IGC_STATUS_FUNC_MASK)
-			    >> IGC_STATUS_FUNC_SHIFT;
-	} else {
-		bus->func = 0;
-	}
-}
-
-/**
- *  igc_set_lan_id_single_port - Set LAN id for a single port device
- *  @hw: pointer to the HW structure
- *
- *  Sets the LAN function id to zero for a single port device.
- **/
-void igc_set_lan_id_single_port(struct igc_hw *hw)
-{
-	struct igc_bus_info *bus = &hw->bus;
-
-	bus->func = 0;
-}
-
-/**
- *  igc_clear_vfta_generic - Clear VLAN filter table
- *  @hw: pointer to the HW structure
- *
- *  Clears the register array which contains the VLAN filter table by
- *  setting all the values to 0.
- **/
-void igc_clear_vfta_generic(struct igc_hw *hw)
-{
-	u32 offset;
-
-	DEBUGFUNC("igc_clear_vfta_generic");
-
-	for (offset = 0; offset < IGC_VLAN_FILTER_TBL_SIZE; offset++) {
-		IGC_WRITE_REG_ARRAY(hw, IGC_VFTA, offset, 0);
-		IGC_WRITE_FLUSH(hw);
-	}
-}
-
 /**
  *  igc_write_vfta_generic - Write value to VLAN filter table
  *  @hw: pointer to the HW structure
@@ -582,43 +413,6 @@ void igc_update_mc_addr_list_generic(struct igc_hw *hw,
 	IGC_WRITE_FLUSH(hw);
 }
 
-/**
- *  igc_pcix_mmrbc_workaround_generic - Fix incorrect MMRBC value
- *  @hw: pointer to the HW structure
- *
- *  In certain situations, a system BIOS may report that the PCIx maximum
- *  memory read byte count (MMRBC) value is higher than than the actual
- *  value. We check the PCIx command register with the current PCIx status
- *  register.
- **/
-void igc_pcix_mmrbc_workaround_generic(struct igc_hw *hw)
-{
-	u16 cmd_mmrbc;
-	u16 pcix_cmd;
-	u16 pcix_stat_hi_word;
-	u16 stat_mmrbc;
-
-	DEBUGFUNC("igc_pcix_mmrbc_workaround_generic");
-
-	/* Workaround for PCI-X issue when BIOS sets MMRBC incorrectly */
-	if (hw->bus.type != igc_bus_type_pcix)
-		return;
-
-	igc_read_pci_cfg(hw, PCIX_COMMAND_REGISTER, &pcix_cmd);
-	igc_read_pci_cfg(hw, PCIX_STATUS_REGISTER_HI, &pcix_stat_hi_word);
-	cmd_mmrbc = (pcix_cmd & PCIX_COMMAND_MMRBC_MASK) >>
-		     PCIX_COMMAND_MMRBC_SHIFT;
-	stat_mmrbc = (pcix_stat_hi_word & PCIX_STATUS_HI_MMRBC_MASK) >>
-		      PCIX_STATUS_HI_MMRBC_SHIFT;
-	if (stat_mmrbc == PCIX_STATUS_HI_MMRBC_4K)
-		stat_mmrbc = PCIX_STATUS_HI_MMRBC_2K;
-	if (cmd_mmrbc > stat_mmrbc) {
-		pcix_cmd &= ~PCIX_COMMAND_MMRBC_MASK;
-		pcix_cmd |= stat_mmrbc << PCIX_COMMAND_MMRBC_SHIFT;
-		igc_write_pci_cfg(hw, PCIX_COMMAND_REGISTER, &pcix_cmd);
-	}
-}
-
 /**
  *  igc_clear_hw_cntrs_base_generic - Clear base hardware counters
  *  @hw: pointer to the HW structure
@@ -668,296 +462,6 @@ void igc_clear_hw_cntrs_base_generic(struct igc_hw *hw)
 	IGC_READ_REG(hw, IGC_BPTC);
 }
 
-/**
- *  igc_check_for_copper_link_generic - Check for link (Copper)
- *  @hw: pointer to the HW structure
- *
- *  Checks to see of the link status of the hardware has changed.  If a
- *  change in link status has been detected, then we read the PHY registers
- *  to get the current speed/duplex if link exists.
- **/
-s32 igc_check_for_copper_link_generic(struct igc_hw *hw)
-{
-	struct igc_mac_info *mac = &hw->mac;
-	s32 ret_val;
-	bool link;
-
-	DEBUGFUNC("igc_check_for_copper_link");
-
-	/* We only want to go out to the PHY registers to see if Auto-Neg
-	 * has completed and/or if our link status has changed.  The
-	 * get_link_status flag is set upon receiving a Link Status
-	 * Change or Rx Sequence Error interrupt.
-	 */
-	if (!mac->get_link_status)
-		return IGC_SUCCESS;
-
-	/* First we want to see if the MII Status Register reports
-	 * link.  If so, then we want to get the current speed/duplex
-	 * of the PHY.
-	 */
-	ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
-	if (ret_val)
-		return ret_val;
-
-	if (!link)
-		return IGC_SUCCESS; /* No link detected */
-
-	mac->get_link_status = false;
-
-	/* Check if there was DownShift, must be checked
-	 * immediately after link-up
-	 */
-	igc_check_downshift_generic(hw);
-
-	/* If we are forcing speed/duplex, then we simply return since
-	 * we have already determined whether we have link or not.
-	 */
-	if (!mac->autoneg)
-		return -IGC_ERR_CONFIG;
-
-	/* Auto-Neg is enabled.  Auto Speed Detection takes care
-	 * of MAC speed/duplex configuration.  So we only need to
-	 * configure Collision Distance in the MAC.
-	 */
-	mac->ops.config_collision_dist(hw);
-
-	/* Configure Flow Control now that Auto-Neg has completed.
-	 * First, we need to restore the desired flow control
-	 * settings because we may have had to re-autoneg with a
-	 * different link partner.
-	 */
-	ret_val = igc_config_fc_after_link_up_generic(hw);
-	if (ret_val)
-		DEBUGOUT("Error configuring flow control\n");
-
-	return ret_val;
-}
-
-/**
- *  igc_check_for_fiber_link_generic - Check for link (Fiber)
- *  @hw: pointer to the HW structure
- *
- *  Checks for link up on the hardware.  If link is not up and we have
- *  a signal, then we need to force link up.
- **/
-s32 igc_check_for_fiber_link_generic(struct igc_hw *hw)
-{
-	struct igc_mac_info *mac = &hw->mac;
-	u32 rxcw;
-	u32 ctrl;
-	u32 status;
-	s32 ret_val;
-
-	DEBUGFUNC("igc_check_for_fiber_link_generic");
-
-	ctrl = IGC_READ_REG(hw, IGC_CTRL);
-	status = IGC_READ_REG(hw, IGC_STATUS);
-	rxcw = IGC_READ_REG(hw, IGC_RXCW);
-
-	/* If we don't have link (auto-negotiation failed or link partner
-	 * cannot auto-negotiate), the cable is plugged in (we have signal),
-	 * and our link partner is not trying to auto-negotiate with us (we
-	 * are receiving idles or data), we need to force link up. We also
-	 * need to give auto-negotiation time to complete, in case the cable
-	 * was just plugged in. The autoneg_failed flag does this.
-	 */
-	/* (ctrl & IGC_CTRL_SWDPIN1) == 1 == have signal */
-	if ((ctrl & IGC_CTRL_SWDPIN1) && !(status & IGC_STATUS_LU) &&
-	    !(rxcw & IGC_RXCW_C)) {
-		if (!mac->autoneg_failed) {
-			mac->autoneg_failed = true;
-			return IGC_SUCCESS;
-		}
-		DEBUGOUT("NOT Rx'ing /C/, disable AutoNeg and force link.\n");
-
-		/* Disable auto-negotiation in the TXCW register */
-		IGC_WRITE_REG(hw, IGC_TXCW, (mac->txcw & ~IGC_TXCW_ANE));
-
-		/* Force link-up and also force full-duplex. */
-		ctrl = IGC_READ_REG(hw, IGC_CTRL);
-		ctrl |= (IGC_CTRL_SLU | IGC_CTRL_FD);
-		IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
-
-		/* Configure Flow Control after forcing link up. */
-		ret_val = igc_config_fc_after_link_up_generic(hw);
-		if (ret_val) {
-			DEBUGOUT("Error configuring flow control\n");
-			return ret_val;
-		}
-	} else if ((ctrl & IGC_CTRL_SLU) && (rxcw & IGC_RXCW_C)) {
-		/* If we are forcing link and we are receiving /C/ ordered
-		 * sets, re-enable auto-negotiation in the TXCW register
-		 * and disable forced link in the Device Control register
-		 * in an attempt to auto-negotiate with our link partner.
-		 */
-		DEBUGOUT("Rx'ing /C/, enable AutoNeg and stop forcing link.\n");
-		IGC_WRITE_REG(hw, IGC_TXCW, mac->txcw);
-		IGC_WRITE_REG(hw, IGC_CTRL, (ctrl & ~IGC_CTRL_SLU));
-
-		mac->serdes_has_link = true;
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_check_for_serdes_link_generic - Check for link (Serdes)
- *  @hw: pointer to the HW structure
- *
- *  Checks for link up on the hardware.  If link is not up and we have
- *  a signal, then we need to force link up.
- **/
-s32 igc_check_for_serdes_link_generic(struct igc_hw *hw)
-{
-	struct igc_mac_info *mac = &hw->mac;
-	u32 rxcw;
-	u32 ctrl;
-	u32 status;
-	s32 ret_val;
-
-	DEBUGFUNC("igc_check_for_serdes_link_generic");
-
-	ctrl = IGC_READ_REG(hw, IGC_CTRL);
-	status = IGC_READ_REG(hw, IGC_STATUS);
-	rxcw = IGC_READ_REG(hw, IGC_RXCW);
-
-	/* If we don't have link (auto-negotiation failed or link partner
-	 * cannot auto-negotiate), and our link partner is not trying to
-	 * auto-negotiate with us (we are receiving idles or data),
-	 * we need to force link up. We also need to give auto-negotiation
-	 * time to complete.
-	 */
-	/* (ctrl & IGC_CTRL_SWDPIN1) == 1 == have signal */
-	if (!(status & IGC_STATUS_LU) && !(rxcw & IGC_RXCW_C)) {
-		if (!mac->autoneg_failed) {
-			mac->autoneg_failed = true;
-			return IGC_SUCCESS;
-		}
-		DEBUGOUT("NOT Rx'ing /C/, disable AutoNeg and force link.\n");
-
-		/* Disable auto-negotiation in the TXCW register */
-		IGC_WRITE_REG(hw, IGC_TXCW, (mac->txcw & ~IGC_TXCW_ANE));
-
-		/* Force link-up and also force full-duplex. */
-		ctrl = IGC_READ_REG(hw, IGC_CTRL);
-		ctrl |= (IGC_CTRL_SLU | IGC_CTRL_FD);
-		IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
-
-		/* Configure Flow Control after forcing link up. */
-		ret_val = igc_config_fc_after_link_up_generic(hw);
-		if (ret_val) {
-			DEBUGOUT("Error configuring flow control\n");
-			return ret_val;
-		}
-	} else if ((ctrl & IGC_CTRL_SLU) && (rxcw & IGC_RXCW_C)) {
-		/* If we are forcing link and we are receiving /C/ ordered
-		 * sets, re-enable auto-negotiation in the TXCW register
-		 * and disable forced link in the Device Control register
-		 * in an attempt to auto-negotiate with our link partner.
-		 */
-		DEBUGOUT("Rx'ing /C/, enable AutoNeg and stop forcing link.\n");
-		IGC_WRITE_REG(hw, IGC_TXCW, mac->txcw);
-		IGC_WRITE_REG(hw, IGC_CTRL, (ctrl & ~IGC_CTRL_SLU));
-
-		mac->serdes_has_link = true;
-	} else if (!(IGC_TXCW_ANE & IGC_READ_REG(hw, IGC_TXCW))) {
-		/* If we force link for non-auto-negotiation switch, check
-		 * link status based on MAC synchronization for internal
-		 * serdes media type.
-		 */
-		/* SYNCH bit and IV bit are sticky. */
-		usec_delay(10);
-		rxcw = IGC_READ_REG(hw, IGC_RXCW);
-		if (rxcw & IGC_RXCW_SYNCH) {
-			if (!(rxcw & IGC_RXCW_IV)) {
-				mac->serdes_has_link = true;
-				DEBUGOUT("SERDES: Link up - forced.\n");
-			}
-		} else {
-			mac->serdes_has_link = false;
-			DEBUGOUT("SERDES: Link down - force failed.\n");
-		}
-	}
-
-	if (IGC_TXCW_ANE & IGC_READ_REG(hw, IGC_TXCW)) {
-		status = IGC_READ_REG(hw, IGC_STATUS);
-		if (status & IGC_STATUS_LU) {
-			/* SYNCH bit and IV bit are sticky, so reread rxcw. */
-			usec_delay(10);
-			rxcw = IGC_READ_REG(hw, IGC_RXCW);
-			if (rxcw & IGC_RXCW_SYNCH) {
-				if (!(rxcw & IGC_RXCW_IV)) {
-					mac->serdes_has_link = true;
-					DEBUGOUT("SERDES: Link up - autoneg completed successfully.\n");
-				} else {
-					mac->serdes_has_link = false;
-					DEBUGOUT("SERDES: Link down - invalid codewords detected in autoneg.\n");
-				}
-			} else {
-				mac->serdes_has_link = false;
-				DEBUGOUT("SERDES: Link down - no sync.\n");
-			}
-		} else {
-			mac->serdes_has_link = false;
-			DEBUGOUT("SERDES: Link down - autoneg failed\n");
-		}
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_set_default_fc_generic - Set flow control default values
- *  @hw: pointer to the HW structure
- *
- *  Read the EEPROM for the default values for flow control and store the
- *  values.
- **/
-s32 igc_set_default_fc_generic(struct igc_hw *hw)
-{
-	s32 ret_val;
-	u16 nvm_data;
-	u16 nvm_offset = 0;
-
-	DEBUGFUNC("igc_set_default_fc_generic");
-
-	/* Read and store word 0x0F of the EEPROM. This word contains bits
-	 * that determine the hardware's default PAUSE (flow control) mode,
-	 * a bit that determines whether the HW defaults to enabling or
-	 * disabling auto-negotiation, and the direction of the
-	 * SW defined pins. If there is no SW over-ride of the flow
-	 * control setting, then the variable hw->fc will
-	 * be initialized based on a value in the EEPROM.
-	 */
-	if (hw->mac.type == igc_i350) {
-		nvm_offset = NVM_82580_LAN_FUNC_OFFSET(hw->bus.func);
-		ret_val = hw->nvm.ops.read(hw,
-					   NVM_INIT_CONTROL2_REG +
-					   nvm_offset,
-					   1, &nvm_data);
-	} else {
-		ret_val = hw->nvm.ops.read(hw,
-					   NVM_INIT_CONTROL2_REG,
-					   1, &nvm_data);
-	}
-
-	if (ret_val) {
-		DEBUGOUT("NVM Read Error\n");
-		return ret_val;
-	}
-
-	if (!(nvm_data & NVM_WORD0F_PAUSE_MASK))
-		hw->fc.requested_mode = igc_fc_none;
-	else if ((nvm_data & NVM_WORD0F_PAUSE_MASK) ==
-		 NVM_WORD0F_ASM_DIR)
-		hw->fc.requested_mode = igc_fc_tx_pause;
-	else
-		hw->fc.requested_mode = igc_fc_full;
-
-	return IGC_SUCCESS;
-}
-
 /**
  *  igc_setup_link_generic - Setup flow control and link settings
  *  @hw: pointer to the HW structure
@@ -1131,57 +635,6 @@ s32 igc_poll_fiber_serdes_link_generic(struct igc_hw *hw)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_setup_fiber_serdes_link_generic - Setup link for fiber/serdes
- *  @hw: pointer to the HW structure
- *
- *  Configures collision distance and flow control for fiber and serdes
- *  links.  Upon successful setup, poll for link.
- **/
-s32 igc_setup_fiber_serdes_link_generic(struct igc_hw *hw)
-{
-	u32 ctrl;
-	s32 ret_val;
-
-	DEBUGFUNC("igc_setup_fiber_serdes_link_generic");
-
-	ctrl = IGC_READ_REG(hw, IGC_CTRL);
-
-	/* Take the link out of reset */
-	ctrl &= ~IGC_CTRL_LRST;
-
-	hw->mac.ops.config_collision_dist(hw);
-
-	ret_val = igc_commit_fc_settings_generic(hw);
-	if (ret_val)
-		return ret_val;
-
-	/* Since auto-negotiation is enabled, take the link out of reset (the
-	 * link will be in reset, because we previously reset the chip). This
-	 * will restart auto-negotiation.  If auto-negotiation is successful
-	 * then the link-up status bit will be set and the flow control enable
-	 * bits (RFCE and TFCE) will be set according to their negotiated value.
-	 */
-	DEBUGOUT("Auto-negotiation enabled\n");
-
-	IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
-	IGC_WRITE_FLUSH(hw);
-	msec_delay(1);
-
-	/* For these adapters, the SW definable pin 1 is set when the optics
-	 * detect a signal.  If we have a signal, then poll for a "Link-Up"
-	 * indication.
-	 */
-	if (hw->phy.media_type == igc_media_type_internal_serdes ||
-	    (IGC_READ_REG(hw, IGC_CTRL) & IGC_CTRL_SWDPIN1)) {
-		ret_val = igc_poll_fiber_serdes_link_generic(hw);
-	} else {
-		DEBUGOUT("No signal detected\n");
-	}
-
-	return ret_val;
-}
-
 /**
  *  igc_config_collision_dist_generic - Configure collision distance
  *  @hw: pointer to the HW structure
@@ -1532,28 +985,6 @@ s32 igc_get_speed_and_duplex_copper_generic(struct igc_hw *hw, u16 *speed,
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_get_speed_and_duplex_fiber_generic - Retrieve current speed/duplex
- *  @hw: pointer to the HW structure
- *  @speed: stores the current speed
- *  @duplex: stores the current duplex
- *
- *  Sets the speed and duplex to gigabit full duplex (the only possible option)
- *  for fiber/serdes links.
- **/
-s32
-igc_get_speed_and_duplex_fiber_serdes_generic(struct igc_hw *hw,
-				u16 *speed, u16 *duplex)
-{
-	DEBUGFUNC("igc_get_speed_and_duplex_fiber_serdes_generic");
-	UNREFERENCED_1PARAMETER(hw);
-
-	*speed = SPEED_1000;
-	*duplex = FULL_DUPLEX;
-
-	return IGC_SUCCESS;
-}
-
 /**
  *  igc_get_hw_semaphore_generic - Acquire hardware semaphore
  *  @hw: pointer to the HW structure
@@ -1651,274 +1082,6 @@ s32 igc_get_auto_rd_done_generic(struct igc_hw *hw)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_valid_led_default_generic - Verify a valid default LED config
- *  @hw: pointer to the HW structure
- *  @data: pointer to the NVM (EEPROM)
- *
- *  Read the EEPROM for the current default LED configuration.  If the
- *  LED configuration is not valid, set to a valid LED configuration.
- **/
-s32 igc_valid_led_default_generic(struct igc_hw *hw, u16 *data)
-{
-	s32 ret_val;
-
-	DEBUGFUNC("igc_valid_led_default_generic");
-
-	ret_val = hw->nvm.ops.read(hw, NVM_ID_LED_SETTINGS, 1, data);
-	if (ret_val) {
-		DEBUGOUT("NVM Read Error\n");
-		return ret_val;
-	}
-
-	if (*data == ID_LED_RESERVED_0000 || *data == ID_LED_RESERVED_FFFF)
-		*data = ID_LED_DEFAULT;
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_id_led_init_generic -
- *  @hw: pointer to the HW structure
- *
- **/
-s32 igc_id_led_init_generic(struct igc_hw *hw)
-{
-	struct igc_mac_info *mac = &hw->mac;
-	s32 ret_val;
-	const u32 ledctl_mask = 0x000000FF;
-	const u32 ledctl_on = IGC_LEDCTL_MODE_LED_ON;
-	const u32 ledctl_off = IGC_LEDCTL_MODE_LED_OFF;
-	u16 data, i, temp;
-	const u16 led_mask = 0x0F;
-
-	DEBUGFUNC("igc_id_led_init_generic");
-
-	ret_val = hw->nvm.ops.valid_led_default(hw, &data);
-	if (ret_val)
-		return ret_val;
-
-	mac->ledctl_default = IGC_READ_REG(hw, IGC_LEDCTL);
-	mac->ledctl_mode1 = mac->ledctl_default;
-	mac->ledctl_mode2 = mac->ledctl_default;
-
-	for (i = 0; i < 4; i++) {
-		temp = (data >> (i << 2)) & led_mask;
-		switch (temp) {
-		case ID_LED_ON1_DEF2:
-		case ID_LED_ON1_ON2:
-		case ID_LED_ON1_OFF2:
-			mac->ledctl_mode1 &= ~(ledctl_mask << (i << 3));
-			mac->ledctl_mode1 |= ledctl_on << (i << 3);
-			break;
-		case ID_LED_OFF1_DEF2:
-		case ID_LED_OFF1_ON2:
-		case ID_LED_OFF1_OFF2:
-			mac->ledctl_mode1 &= ~(ledctl_mask << (i << 3));
-			mac->ledctl_mode1 |= ledctl_off << (i << 3);
-			break;
-		default:
-			/* Do nothing */
-			break;
-		}
-		switch (temp) {
-		case ID_LED_DEF1_ON2:
-		case ID_LED_ON1_ON2:
-		case ID_LED_OFF1_ON2:
-			mac->ledctl_mode2 &= ~(ledctl_mask << (i << 3));
-			mac->ledctl_mode2 |= ledctl_on << (i << 3);
-			break;
-		case ID_LED_DEF1_OFF2:
-		case ID_LED_ON1_OFF2:
-		case ID_LED_OFF1_OFF2:
-			mac->ledctl_mode2 &= ~(ledctl_mask << (i << 3));
-			mac->ledctl_mode2 |= ledctl_off << (i << 3);
-			break;
-		default:
-			/* Do nothing */
-			break;
-		}
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_setup_led_generic - Configures SW controllable LED
- *  @hw: pointer to the HW structure
- *
- *  This prepares the SW controllable LED for use and saves the current state
- *  of the LED so it can be later restored.
- **/
-s32 igc_setup_led_generic(struct igc_hw *hw)
-{
-	u32 ledctl;
-
-	DEBUGFUNC("igc_setup_led_generic");
-
-	if (hw->mac.ops.setup_led != igc_setup_led_generic)
-		return -IGC_ERR_CONFIG;
-
-	if (hw->phy.media_type == igc_media_type_fiber) {
-		ledctl = IGC_READ_REG(hw, IGC_LEDCTL);
-		hw->mac.ledctl_default = ledctl;
-		/* Turn off LED0 */
-		ledctl &= ~(IGC_LEDCTL_LED0_IVRT | IGC_LEDCTL_LED0_BLINK |
-			    IGC_LEDCTL_LED0_MODE_MASK);
-		ledctl |= (IGC_LEDCTL_MODE_LED_OFF <<
-			   IGC_LEDCTL_LED0_MODE_SHIFT);
-		IGC_WRITE_REG(hw, IGC_LEDCTL, ledctl);
-	} else if (hw->phy.media_type == igc_media_type_copper) {
-		IGC_WRITE_REG(hw, IGC_LEDCTL, hw->mac.ledctl_mode1);
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_cleanup_led_generic - Set LED config to default operation
- *  @hw: pointer to the HW structure
- *
- *  Remove the current LED configuration and set the LED configuration
- *  to the default value, saved from the EEPROM.
- **/
-s32 igc_cleanup_led_generic(struct igc_hw *hw)
-{
-	DEBUGFUNC("igc_cleanup_led_generic");
-
-	IGC_WRITE_REG(hw, IGC_LEDCTL, hw->mac.ledctl_default);
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_blink_led_generic - Blink LED
- *  @hw: pointer to the HW structure
- *
- *  Blink the LEDs which are set to be on.
- **/
-s32 igc_blink_led_generic(struct igc_hw *hw)
-{
-	u32 ledctl_blink = 0;
-	u32 i;
-
-	DEBUGFUNC("igc_blink_led_generic");
-
-	if (hw->phy.media_type == igc_media_type_fiber) {
-		/* always blink LED0 for PCI-E fiber */
-		ledctl_blink = IGC_LEDCTL_LED0_BLINK |
-		     (IGC_LEDCTL_MODE_LED_ON << IGC_LEDCTL_LED0_MODE_SHIFT);
-	} else {
-		/* Set the blink bit for each LED that's "on" (0x0E)
-		 * (or "off" if inverted) in ledctl_mode2.  The blink
-		 * logic in hardware only works when mode is set to "on"
-		 * so it must be changed accordingly when the mode is
-		 * "off" and inverted.
-		 */
-		ledctl_blink = hw->mac.ledctl_mode2;
-		for (i = 0; i < 32; i += 8) {
-			u32 mode = (hw->mac.ledctl_mode2 >> i) &
-			    IGC_LEDCTL_LED0_MODE_MASK;
-			u32 led_default = hw->mac.ledctl_default >> i;
-
-			if ((!(led_default & IGC_LEDCTL_LED0_IVRT) &&
-			     mode == IGC_LEDCTL_MODE_LED_ON) ||
-			    ((led_default & IGC_LEDCTL_LED0_IVRT) &&
-			     mode == IGC_LEDCTL_MODE_LED_OFF)) {
-				ledctl_blink &=
-				    ~(IGC_LEDCTL_LED0_MODE_MASK << i);
-				ledctl_blink |= (IGC_LEDCTL_LED0_BLINK |
-						 IGC_LEDCTL_MODE_LED_ON) << i;
-			}
-		}
-	}
-
-	IGC_WRITE_REG(hw, IGC_LEDCTL, ledctl_blink);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_led_on_generic - Turn LED on
- *  @hw: pointer to the HW structure
- *
- *  Turn LED on.
- **/
-s32 igc_led_on_generic(struct igc_hw *hw)
-{
-	u32 ctrl;
-
-	DEBUGFUNC("igc_led_on_generic");
-
-	switch (hw->phy.media_type) {
-	case igc_media_type_fiber:
-		ctrl = IGC_READ_REG(hw, IGC_CTRL);
-		ctrl &= ~IGC_CTRL_SWDPIN0;
-		ctrl |= IGC_CTRL_SWDPIO0;
-		IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
-		break;
-	case igc_media_type_copper:
-		IGC_WRITE_REG(hw, IGC_LEDCTL, hw->mac.ledctl_mode2);
-		break;
-	default:
-		break;
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_led_off_generic - Turn LED off
- *  @hw: pointer to the HW structure
- *
- *  Turn LED off.
- **/
-s32 igc_led_off_generic(struct igc_hw *hw)
-{
-	u32 ctrl;
-
-	DEBUGFUNC("igc_led_off_generic");
-
-	switch (hw->phy.media_type) {
-	case igc_media_type_fiber:
-		ctrl = IGC_READ_REG(hw, IGC_CTRL);
-		ctrl |= IGC_CTRL_SWDPIN0;
-		ctrl |= IGC_CTRL_SWDPIO0;
-		IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
-		break;
-	case igc_media_type_copper:
-		IGC_WRITE_REG(hw, IGC_LEDCTL, hw->mac.ledctl_mode1);
-		break;
-	default:
-		break;
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_set_pcie_no_snoop_generic - Set PCI-express capabilities
- *  @hw: pointer to the HW structure
- *  @no_snoop: bitmap of snoop events
- *
- *  Set the PCI-express register to snoop for events enabled in 'no_snoop'.
- **/
-void igc_set_pcie_no_snoop_generic(struct igc_hw *hw, u32 no_snoop)
-{
-	u32 gcr;
-
-	DEBUGFUNC("igc_set_pcie_no_snoop_generic");
-
-	if (hw->bus.type != igc_bus_type_pci_express)
-		return;
-
-	if (no_snoop) {
-		gcr = IGC_READ_REG(hw, IGC_GCR);
-		gcr &= ~(PCIE_NO_SNOOP_ALL);
-		gcr |= no_snoop;
-		IGC_WRITE_REG(hw, IGC_GCR, gcr);
-	}
-}
-
 /**
  *  igc_disable_pcie_master_generic - Disables PCI-express master access
  *  @hw: pointer to the HW structure
@@ -2046,22 +1209,6 @@ static s32 igc_validate_mdi_setting_generic(struct igc_hw *hw)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_validate_mdi_setting_crossover_generic - Verify MDI/MDIx settings
- *  @hw: pointer to the HW structure
- *
- *  Validate the MDI/MDIx setting, allowing for auto-crossover during forced
- *  operation.
- **/
-s32
-igc_validate_mdi_setting_crossover_generic(struct igc_hw IGC_UNUSEDARG * hw)
-{
-	DEBUGFUNC("igc_validate_mdi_setting_crossover_generic");
-	UNREFERENCED_1PARAMETER(hw);
-
-	return IGC_SUCCESS;
-}
-
 /**
  *  igc_write_8bit_ctrl_reg_generic - Write a 8bit CTRL register
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/igc/base/igc_mac.h b/drivers/net/igc/base/igc_mac.h
index 035a371e1e..26a88c2014 100644
--- a/drivers/net/igc/base/igc_mac.h
+++ b/drivers/net/igc/base/igc_mac.h
@@ -13,51 +13,29 @@ s32  igc_null_link_info(struct igc_hw *hw, u16 *s, u16 *d);
 bool igc_null_mng_mode(struct igc_hw *hw);
 void igc_null_update_mc(struct igc_hw *hw, u8 *h, u32 a);
 void igc_null_write_vfta(struct igc_hw *hw, u32 a, u32 b);
-int  igc_null_rar_set(struct igc_hw *hw, u8 *h, u32 a);
-s32  igc_blink_led_generic(struct igc_hw *hw);
-s32  igc_check_for_copper_link_generic(struct igc_hw *hw);
-s32  igc_check_for_fiber_link_generic(struct igc_hw *hw);
-s32  igc_check_for_serdes_link_generic(struct igc_hw *hw);
-s32  igc_cleanup_led_generic(struct igc_hw *hw);
 s32  igc_commit_fc_settings_generic(struct igc_hw *hw);
 s32  igc_poll_fiber_serdes_link_generic(struct igc_hw *hw);
 s32  igc_config_fc_after_link_up_generic(struct igc_hw *hw);
 s32  igc_disable_pcie_master_generic(struct igc_hw *hw);
 s32  igc_force_mac_fc_generic(struct igc_hw *hw);
 s32  igc_get_auto_rd_done_generic(struct igc_hw *hw);
-s32  igc_get_bus_info_pci_generic(struct igc_hw *hw);
-s32  igc_get_bus_info_pcie_generic(struct igc_hw *hw);
-void igc_set_lan_id_single_port(struct igc_hw *hw);
-void igc_set_lan_id_multi_port_pci(struct igc_hw *hw);
 s32  igc_get_hw_semaphore_generic(struct igc_hw *hw);
 s32  igc_get_speed_and_duplex_copper_generic(struct igc_hw *hw, u16 *speed,
 					       u16 *duplex);
-s32  igc_get_speed_and_duplex_fiber_serdes_generic(struct igc_hw *hw,
-						     u16 *speed, u16 *duplex);
-s32  igc_id_led_init_generic(struct igc_hw *hw);
-s32  igc_led_on_generic(struct igc_hw *hw);
-s32  igc_led_off_generic(struct igc_hw *hw);
 void igc_update_mc_addr_list_generic(struct igc_hw *hw,
 				       u8 *mc_addr_list, u32 mc_addr_count);
-s32  igc_set_default_fc_generic(struct igc_hw *hw);
 s32  igc_set_fc_watermarks_generic(struct igc_hw *hw);
-s32  igc_setup_fiber_serdes_link_generic(struct igc_hw *hw);
-s32  igc_setup_led_generic(struct igc_hw *hw);
 s32  igc_setup_link_generic(struct igc_hw *hw);
-s32  igc_validate_mdi_setting_crossover_generic(struct igc_hw *hw);
 s32  igc_write_8bit_ctrl_reg_generic(struct igc_hw *hw, u32 reg,
 				       u32 offset, u8 data);
 
 u32  igc_hash_mc_addr_generic(struct igc_hw *hw, u8 *mc_addr);
 
 void igc_clear_hw_cntrs_base_generic(struct igc_hw *hw);
-void igc_clear_vfta_generic(struct igc_hw *hw);
 void igc_init_rx_addrs_generic(struct igc_hw *hw, u16 rar_count);
-void igc_pcix_mmrbc_workaround_generic(struct igc_hw *hw);
 void igc_put_hw_semaphore_generic(struct igc_hw *hw);
 s32  igc_check_alt_mac_addr_generic(struct igc_hw *hw);
 void igc_reset_adaptive_generic(struct igc_hw *hw);
-void igc_set_pcie_no_snoop_generic(struct igc_hw *hw, u32 no_snoop);
 void igc_update_adaptive_generic(struct igc_hw *hw);
 void igc_write_vfta_generic(struct igc_hw *hw, u32 offset, u32 value);
 
diff --git a/drivers/net/igc/base/igc_manage.c b/drivers/net/igc/base/igc_manage.c
index 563ab81603..aa68174031 100644
--- a/drivers/net/igc/base/igc_manage.c
+++ b/drivers/net/igc/base/igc_manage.c
@@ -73,24 +73,6 @@ s32 igc_mng_enable_host_if_generic(struct igc_hw *hw)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_check_mng_mode_generic - Generic check management mode
- *  @hw: pointer to the HW structure
- *
- *  Reads the firmware semaphore register and returns true (>0) if
- *  manageability is enabled, else false (0).
- **/
-bool igc_check_mng_mode_generic(struct igc_hw *hw)
-{
-	u32 fwsm = IGC_READ_REG(hw, IGC_FWSM);
-
-	DEBUGFUNC("igc_check_mng_mode_generic");
-
-
-	return (fwsm & IGC_FWSM_MODE_MASK) ==
-		(IGC_MNG_IAMT_MODE << IGC_FWSM_MODE_SHIFT);
-}
-
 /**
  *  igc_enable_tx_pkt_filtering_generic - Enable packet filtering on Tx
  *  @hw: pointer to the HW structure
@@ -301,247 +283,3 @@ s32 igc_mng_write_dhcp_info_generic(struct igc_hw *hw, u8 *buffer,
 
 	return IGC_SUCCESS;
 }
-
-/**
- *  igc_enable_mng_pass_thru - Check if management passthrough is needed
- *  @hw: pointer to the HW structure
- *
- *  Verifies the hardware needs to leave interface enabled so that frames can
- *  be directed to and from the management interface.
- **/
-bool igc_enable_mng_pass_thru(struct igc_hw *hw)
-{
-	u32 manc;
-	u32 fwsm, factps;
-
-	DEBUGFUNC("igc_enable_mng_pass_thru");
-
-	if (!hw->mac.asf_firmware_present)
-		return false;
-
-	manc = IGC_READ_REG(hw, IGC_MANC);
-
-	if (!(manc & IGC_MANC_RCV_TCO_EN))
-		return false;
-
-	if (hw->mac.has_fwsm) {
-		fwsm = IGC_READ_REG(hw, IGC_FWSM);
-		factps = IGC_READ_REG(hw, IGC_FACTPS);
-
-		if (!(factps & IGC_FACTPS_MNGCG) &&
-		    ((fwsm & IGC_FWSM_MODE_MASK) ==
-		     (igc_mng_mode_pt << IGC_FWSM_MODE_SHIFT)))
-			return true;
-	} else if ((hw->mac.type == igc_82574) ||
-		   (hw->mac.type == igc_82583)) {
-		u16 data;
-		s32 ret_val;
-
-		factps = IGC_READ_REG(hw, IGC_FACTPS);
-		ret_val = igc_read_nvm(hw, NVM_INIT_CONTROL2_REG, 1, &data);
-		if (ret_val)
-			return false;
-
-		if (!(factps & IGC_FACTPS_MNGCG) &&
-		    ((data & IGC_NVM_INIT_CTRL2_MNGM) ==
-		     (igc_mng_mode_pt << 13)))
-			return true;
-	} else if ((manc & IGC_MANC_SMBUS_EN) &&
-		   !(manc & IGC_MANC_ASF_EN)) {
-		return true;
-	}
-
-	return false;
-}
-
-/**
- *  igc_host_interface_command - Writes buffer to host interface
- *  @hw: pointer to the HW structure
- *  @buffer: contains a command to write
- *  @length: the byte length of the buffer, must be multiple of 4 bytes
- *
- *  Writes a buffer to the Host Interface.  Upon success, returns IGC_SUCCESS
- *  else returns IGC_ERR_HOST_INTERFACE_COMMAND.
- **/
-s32 igc_host_interface_command(struct igc_hw *hw, u8 *buffer, u32 length)
-{
-	u32 hicr, i;
-
-	DEBUGFUNC("igc_host_interface_command");
-
-	if (!(hw->mac.arc_subsystem_valid)) {
-		DEBUGOUT("Hardware doesn't support host interface command.\n");
-		return IGC_SUCCESS;
-	}
-
-	if (!hw->mac.asf_firmware_present) {
-		DEBUGOUT("Firmware is not present.\n");
-		return IGC_SUCCESS;
-	}
-
-	if (length == 0 || length & 0x3 ||
-	    length > IGC_HI_MAX_BLOCK_BYTE_LENGTH) {
-		DEBUGOUT("Buffer length failure.\n");
-		return -IGC_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	/* Check that the host interface is enabled. */
-	hicr = IGC_READ_REG(hw, IGC_HICR);
-	if (!(hicr & IGC_HICR_EN)) {
-		DEBUGOUT("IGC_HOST_EN bit disabled.\n");
-		return -IGC_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	/* Calculate length in DWORDs */
-	length >>= 2;
-
-	/* The device driver writes the relevant command block
-	 * into the ram area.
-	 */
-	for (i = 0; i < length; i++)
-		IGC_WRITE_REG_ARRAY_DWORD(hw, IGC_HOST_IF, i,
-					    *((u32 *)buffer + i));
-
-	/* Setting this bit tells the ARC that a new command is pending. */
-	IGC_WRITE_REG(hw, IGC_HICR, hicr | IGC_HICR_C);
-
-	for (i = 0; i < IGC_HI_COMMAND_TIMEOUT; i++) {
-		hicr = IGC_READ_REG(hw, IGC_HICR);
-		if (!(hicr & IGC_HICR_C))
-			break;
-		msec_delay(1);
-	}
-
-	/* Check command successful completion. */
-	if (i == IGC_HI_COMMAND_TIMEOUT ||
-	    (!(IGC_READ_REG(hw, IGC_HICR) & IGC_HICR_SV))) {
-		DEBUGOUT("Command has failed with no status valid.\n");
-		return -IGC_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	for (i = 0; i < length; i++)
-		*((u32 *)buffer + i) = IGC_READ_REG_ARRAY_DWORD(hw,
-								  IGC_HOST_IF,
-								  i);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_load_firmware - Writes proxy FW code buffer to host interface
- *                        and executes it.
- *  @hw: pointer to the HW structure
- *  @buffer: contains a firmware to write
- *  @length: the byte length of the buffer, must be multiple of 4 bytes
- *
- *  Upon success returns IGC_SUCCESS; returns IGC_ERR_CONFIG if not enabled
- *  in HW, else returns IGC_ERR_HOST_INTERFACE_COMMAND.
- **/
-s32 igc_load_firmware(struct igc_hw *hw, u8 *buffer, u32 length)
-{
-	u32 hicr, hibba, fwsm, icr, i;
-
-	DEBUGFUNC("igc_load_firmware");
-
-	if (hw->mac.type < igc_i210) {
-		DEBUGOUT("Hardware doesn't support loading FW by the driver\n");
-		return -IGC_ERR_CONFIG;
-	}
-
-	/* Check that the host interface is enabled. */
-	hicr = IGC_READ_REG(hw, IGC_HICR);
-	if (!(hicr & IGC_HICR_EN)) {
-		DEBUGOUT("IGC_HOST_EN bit disabled.\n");
-		return -IGC_ERR_CONFIG;
-	}
-	if (!(hicr & IGC_HICR_MEMORY_BASE_EN)) {
-		DEBUGOUT("IGC_HICR_MEMORY_BASE_EN bit disabled.\n");
-		return -IGC_ERR_CONFIG;
-	}
-
-	if (length == 0 || length & 0x3 || length > IGC_HI_FW_MAX_LENGTH) {
-		DEBUGOUT("Buffer length failure.\n");
-		return -IGC_ERR_INVALID_ARGUMENT;
-	}
-
-	/* Clear notification from ROM-FW by reading ICR register */
-	icr = IGC_READ_REG(hw, IGC_ICR_V2);
-
-	/* Reset ROM-FW */
-	hicr = IGC_READ_REG(hw, IGC_HICR);
-	hicr |= IGC_HICR_FW_RESET_ENABLE;
-	IGC_WRITE_REG(hw, IGC_HICR, hicr);
-	hicr |= IGC_HICR_FW_RESET;
-	IGC_WRITE_REG(hw, IGC_HICR, hicr);
-	IGC_WRITE_FLUSH(hw);
-
-	/* Wait till MAC notifies about its readiness after ROM-FW reset */
-	for (i = 0; i < (IGC_HI_COMMAND_TIMEOUT * 2); i++) {
-		icr = IGC_READ_REG(hw, IGC_ICR_V2);
-		if (icr & IGC_ICR_MNG)
-			break;
-		msec_delay(1);
-	}
-
-	/* Check for timeout */
-	if (i == IGC_HI_COMMAND_TIMEOUT) {
-		DEBUGOUT("FW reset failed.\n");
-		return -IGC_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	/* Wait till MAC is ready to accept new FW code */
-	for (i = 0; i < IGC_HI_COMMAND_TIMEOUT; i++) {
-		fwsm = IGC_READ_REG(hw, IGC_FWSM);
-		if ((fwsm & IGC_FWSM_FW_VALID) &&
-		    ((fwsm & IGC_FWSM_MODE_MASK) >> IGC_FWSM_MODE_SHIFT ==
-		    IGC_FWSM_HI_EN_ONLY_MODE))
-			break;
-		msec_delay(1);
-	}
-
-	/* Check for timeout */
-	if (i == IGC_HI_COMMAND_TIMEOUT) {
-		DEBUGOUT("FW reset failed.\n");
-		return -IGC_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	/* Calculate length in DWORDs */
-	length >>= 2;
-
-	/* The device driver writes the relevant FW code block
-	 * into the ram area in DWORDs via 1kB ram addressing window.
-	 */
-	for (i = 0; i < length; i++) {
-		if (!(i % IGC_HI_FW_BLOCK_DWORD_LENGTH)) {
-			/* Point to correct 1kB ram window */
-			hibba = IGC_HI_FW_BASE_ADDRESS +
-				((IGC_HI_FW_BLOCK_DWORD_LENGTH << 2) *
-				(i / IGC_HI_FW_BLOCK_DWORD_LENGTH));
-
-			IGC_WRITE_REG(hw, IGC_HIBBA, hibba);
-		}
-
-		IGC_WRITE_REG_ARRAY_DWORD(hw, IGC_HOST_IF,
-					    i % IGC_HI_FW_BLOCK_DWORD_LENGTH,
-					    *((u32 *)buffer + i));
-	}
-
-	/* Setting this bit tells the ARC that a new FW is ready to execute. */
-	hicr = IGC_READ_REG(hw, IGC_HICR);
-	IGC_WRITE_REG(hw, IGC_HICR, hicr | IGC_HICR_C);
-
-	for (i = 0; i < IGC_HI_COMMAND_TIMEOUT; i++) {
-		hicr = IGC_READ_REG(hw, IGC_HICR);
-		if (!(hicr & IGC_HICR_C))
-			break;
-		msec_delay(1);
-	}
-
-	/* Check for successful FW start. */
-	if (i == IGC_HI_COMMAND_TIMEOUT) {
-		DEBUGOUT("New FW did not start within timeout period.\n");
-		return -IGC_ERR_HOST_INTERFACE_COMMAND;
-	}
-
-	return IGC_SUCCESS;
-}
diff --git a/drivers/net/igc/base/igc_manage.h b/drivers/net/igc/base/igc_manage.h
index 10cae6d7f8..7070de54df 100644
--- a/drivers/net/igc/base/igc_manage.h
+++ b/drivers/net/igc/base/igc_manage.h
@@ -5,7 +5,6 @@
 #ifndef _IGC_MANAGE_H_
 #define _IGC_MANAGE_H_
 
-bool igc_check_mng_mode_generic(struct igc_hw *hw);
 bool igc_enable_tx_pkt_filtering_generic(struct igc_hw *hw);
 s32  igc_mng_enable_host_if_generic(struct igc_hw *hw);
 s32  igc_mng_host_if_write_generic(struct igc_hw *hw, u8 *buffer,
@@ -14,10 +13,7 @@ s32  igc_mng_write_cmd_header_generic(struct igc_hw *hw,
 				     struct igc_host_mng_command_header *hdr);
 s32  igc_mng_write_dhcp_info_generic(struct igc_hw *hw,
 				       u8 *buffer, u16 length);
-bool igc_enable_mng_pass_thru(struct igc_hw *hw);
 u8 igc_calculate_checksum(u8 *buffer, u32 length);
-s32 igc_host_interface_command(struct igc_hw *hw, u8 *buffer, u32 length);
-s32 igc_load_firmware(struct igc_hw *hw, u8 *buffer, u32 length);
 
 enum igc_mng_mode {
 	igc_mng_mode_none = 0,
diff --git a/drivers/net/igc/base/igc_nvm.c b/drivers/net/igc/base/igc_nvm.c
index a7c901ab56..1583c232e7 100644
--- a/drivers/net/igc/base/igc_nvm.c
+++ b/drivers/net/igc/base/igc_nvm.c
@@ -114,91 +114,6 @@ static void igc_lower_eec_clk(struct igc_hw *hw, u32 *eecd)
 	usec_delay(hw->nvm.delay_usec);
 }
 
-/**
- *  igc_shift_out_eec_bits - Shift data bits out to the EEPROM
- *  @hw: pointer to the HW structure
- *  @data: data to send to the EEPROM
- *  @count: number of bits to shift out
- *
- *  We need to shift 'count' bits out to the EEPROM.  So, the value in the
- *  "data" parameter will be shifted out to the EEPROM one bit at a time.
- *  In order to do this, "data" must be broken down into bits.
- **/
-static void igc_shift_out_eec_bits(struct igc_hw *hw, u16 data, u16 count)
-{
-	struct igc_nvm_info *nvm = &hw->nvm;
-	u32 eecd = IGC_READ_REG(hw, IGC_EECD);
-	u32 mask;
-
-	DEBUGFUNC("igc_shift_out_eec_bits");
-
-	mask = 0x01 << (count - 1);
-	if (nvm->type == igc_nvm_eeprom_microwire)
-		eecd &= ~IGC_EECD_DO;
-	else if (nvm->type == igc_nvm_eeprom_spi)
-		eecd |= IGC_EECD_DO;
-
-	do {
-		eecd &= ~IGC_EECD_DI;
-
-		if (data & mask)
-			eecd |= IGC_EECD_DI;
-
-		IGC_WRITE_REG(hw, IGC_EECD, eecd);
-		IGC_WRITE_FLUSH(hw);
-
-		usec_delay(nvm->delay_usec);
-
-		igc_raise_eec_clk(hw, &eecd);
-		igc_lower_eec_clk(hw, &eecd);
-
-		mask >>= 1;
-	} while (mask);
-
-	eecd &= ~IGC_EECD_DI;
-	IGC_WRITE_REG(hw, IGC_EECD, eecd);
-}
-
-/**
- *  igc_shift_in_eec_bits - Shift data bits in from the EEPROM
- *  @hw: pointer to the HW structure
- *  @count: number of bits to shift in
- *
- *  In order to read a register from the EEPROM, we need to shift 'count' bits
- *  in from the EEPROM.  Bits are "shifted in" by raising the clock input to
- *  the EEPROM (setting the SK bit), and then reading the value of the data out
- *  "DO" bit.  During this "shifting in" process the data in "DI" bit should
- *  always be clear.
- **/
-static u16 igc_shift_in_eec_bits(struct igc_hw *hw, u16 count)
-{
-	u32 eecd;
-	u32 i;
-	u16 data;
-
-	DEBUGFUNC("igc_shift_in_eec_bits");
-
-	eecd = IGC_READ_REG(hw, IGC_EECD);
-
-	eecd &= ~(IGC_EECD_DO | IGC_EECD_DI);
-	data = 0;
-
-	for (i = 0; i < count; i++) {
-		data <<= 1;
-		igc_raise_eec_clk(hw, &eecd);
-
-		eecd = IGC_READ_REG(hw, IGC_EECD);
-
-		eecd &= ~IGC_EECD_DI;
-		if (eecd & IGC_EECD_DO)
-			data |= 1;
-
-		igc_lower_eec_clk(hw, &eecd);
-	}
-
-	return data;
-}
-
 /**
  *  igc_poll_eerd_eewr_done - Poll for EEPROM read/write completion
  *  @hw: pointer to the HW structure
@@ -229,83 +144,6 @@ s32 igc_poll_eerd_eewr_done(struct igc_hw *hw, int ee_reg)
 	return -IGC_ERR_NVM;
 }
 
-/**
- *  igc_acquire_nvm_generic - Generic request for access to EEPROM
- *  @hw: pointer to the HW structure
- *
- *  Set the EEPROM access request bit and wait for EEPROM access grant bit.
- *  Return successful if access grant bit set, else clear the request for
- *  EEPROM access and return -IGC_ERR_NVM (-1).
- **/
-s32 igc_acquire_nvm_generic(struct igc_hw *hw)
-{
-	u32 eecd = IGC_READ_REG(hw, IGC_EECD);
-	s32 timeout = IGC_NVM_GRANT_ATTEMPTS;
-
-	DEBUGFUNC("igc_acquire_nvm_generic");
-
-	IGC_WRITE_REG(hw, IGC_EECD, eecd | IGC_EECD_REQ);
-	eecd = IGC_READ_REG(hw, IGC_EECD);
-
-	while (timeout) {
-		if (eecd & IGC_EECD_GNT)
-			break;
-		usec_delay(5);
-		eecd = IGC_READ_REG(hw, IGC_EECD);
-		timeout--;
-	}
-
-	if (!timeout) {
-		eecd &= ~IGC_EECD_REQ;
-		IGC_WRITE_REG(hw, IGC_EECD, eecd);
-		DEBUGOUT("Could not acquire NVM grant\n");
-		return -IGC_ERR_NVM;
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_standby_nvm - Return EEPROM to standby state
- *  @hw: pointer to the HW structure
- *
- *  Return the EEPROM to a standby state.
- **/
-static void igc_standby_nvm(struct igc_hw *hw)
-{
-	struct igc_nvm_info *nvm = &hw->nvm;
-	u32 eecd = IGC_READ_REG(hw, IGC_EECD);
-
-	DEBUGFUNC("igc_standby_nvm");
-
-	if (nvm->type == igc_nvm_eeprom_microwire) {
-		eecd &= ~(IGC_EECD_CS | IGC_EECD_SK);
-		IGC_WRITE_REG(hw, IGC_EECD, eecd);
-		IGC_WRITE_FLUSH(hw);
-		usec_delay(nvm->delay_usec);
-
-		igc_raise_eec_clk(hw, &eecd);
-
-		/* Select EEPROM */
-		eecd |= IGC_EECD_CS;
-		IGC_WRITE_REG(hw, IGC_EECD, eecd);
-		IGC_WRITE_FLUSH(hw);
-		usec_delay(nvm->delay_usec);
-
-		igc_lower_eec_clk(hw, &eecd);
-	} else if (nvm->type == igc_nvm_eeprom_spi) {
-		/* Toggle CS to flush commands */
-		eecd |= IGC_EECD_CS;
-		IGC_WRITE_REG(hw, IGC_EECD, eecd);
-		IGC_WRITE_FLUSH(hw);
-		usec_delay(nvm->delay_usec);
-		eecd &= ~IGC_EECD_CS;
-		IGC_WRITE_REG(hw, IGC_EECD, eecd);
-		IGC_WRITE_FLUSH(hw);
-		usec_delay(nvm->delay_usec);
-	}
-}
-
 /**
  *  igc_stop_nvm - Terminate EEPROM command
  *  @hw: pointer to the HW structure
@@ -332,196 +170,6 @@ void igc_stop_nvm(struct igc_hw *hw)
 	}
 }
 
-/**
- *  igc_release_nvm_generic - Release exclusive access to EEPROM
- *  @hw: pointer to the HW structure
- *
- *  Stop any current commands to the EEPROM and clear the EEPROM request bit.
- **/
-void igc_release_nvm_generic(struct igc_hw *hw)
-{
-	u32 eecd;
-
-	DEBUGFUNC("igc_release_nvm_generic");
-
-	igc_stop_nvm(hw);
-
-	eecd = IGC_READ_REG(hw, IGC_EECD);
-	eecd &= ~IGC_EECD_REQ;
-	IGC_WRITE_REG(hw, IGC_EECD, eecd);
-}
-
-/**
- *  igc_ready_nvm_eeprom - Prepares EEPROM for read/write
- *  @hw: pointer to the HW structure
- *
- *  Sets up the EEPROM for reading and writing.
- **/
-static s32 igc_ready_nvm_eeprom(struct igc_hw *hw)
-{
-	struct igc_nvm_info *nvm = &hw->nvm;
-	u32 eecd = IGC_READ_REG(hw, IGC_EECD);
-	u8 spi_stat_reg;
-
-	DEBUGFUNC("igc_ready_nvm_eeprom");
-
-	if (nvm->type == igc_nvm_eeprom_microwire) {
-		/* Clear SK and DI */
-		eecd &= ~(IGC_EECD_DI | IGC_EECD_SK);
-		IGC_WRITE_REG(hw, IGC_EECD, eecd);
-		/* Set CS */
-		eecd |= IGC_EECD_CS;
-		IGC_WRITE_REG(hw, IGC_EECD, eecd);
-	} else if (nvm->type == igc_nvm_eeprom_spi) {
-		u16 timeout = NVM_MAX_RETRY_SPI;
-
-		/* Clear SK and CS */
-		eecd &= ~(IGC_EECD_CS | IGC_EECD_SK);
-		IGC_WRITE_REG(hw, IGC_EECD, eecd);
-		IGC_WRITE_FLUSH(hw);
-		usec_delay(1);
-
-		/* Read "Status Register" repeatedly until the LSB is cleared.
-		 * The EEPROM will signal that the command has been completed
-		 * by clearing bit 0 of the internal status register.  If it's
-		 * not cleared within 'timeout', then error out.
-		 */
-		while (timeout) {
-			igc_shift_out_eec_bits(hw, NVM_RDSR_OPCODE_SPI,
-						 hw->nvm.opcode_bits);
-			spi_stat_reg = (u8)igc_shift_in_eec_bits(hw, 8);
-			if (!(spi_stat_reg & NVM_STATUS_RDY_SPI))
-				break;
-
-			usec_delay(5);
-			igc_standby_nvm(hw);
-			timeout--;
-		}
-
-		if (!timeout) {
-			DEBUGOUT("SPI NVM Status error\n");
-			return -IGC_ERR_NVM;
-		}
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_read_nvm_spi - Read EEPROM using SPI
- *  @hw: pointer to the HW structure
- *  @offset: offset of word in the EEPROM to read
- *  @words: number of words to read
- *  @data: word read from the EEPROM
- *
- *  Reads a 16 bit word from the EEPROM.
- **/
-s32 igc_read_nvm_spi(struct igc_hw *hw, u16 offset, u16 words, u16 *data)
-{
-	struct igc_nvm_info *nvm = &hw->nvm;
-	u32 i = 0;
-	s32 ret_val;
-	u16 word_in;
-	u8 read_opcode = NVM_READ_OPCODE_SPI;
-
-	DEBUGFUNC("igc_read_nvm_spi");
-
-	/* A check for invalid values:  offset too large, too many words,
-	 * and not enough words.
-	 */
-	if (offset >= nvm->word_size || words > (nvm->word_size - offset) ||
-			words == 0) {
-		DEBUGOUT("nvm parameter(s) out of bounds\n");
-		return -IGC_ERR_NVM;
-	}
-
-	ret_val = nvm->ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	ret_val = igc_ready_nvm_eeprom(hw);
-	if (ret_val)
-		goto release;
-
-	igc_standby_nvm(hw);
-
-	if (nvm->address_bits == 8 && offset >= 128)
-		read_opcode |= NVM_A8_OPCODE_SPI;
-
-	/* Send the READ command (opcode + addr) */
-	igc_shift_out_eec_bits(hw, read_opcode, nvm->opcode_bits);
-	igc_shift_out_eec_bits(hw, (u16)(offset * 2), nvm->address_bits);
-
-	/* Read the data.  SPI NVMs increment the address with each byte
-	 * read and will roll over if reading beyond the end.  This allows
-	 * us to read the whole NVM from any offset
-	 */
-	for (i = 0; i < words; i++) {
-		word_in = igc_shift_in_eec_bits(hw, 16);
-		data[i] = (word_in >> 8) | (word_in << 8);
-	}
-
-release:
-	nvm->ops.release(hw);
-
-	return ret_val;
-}
-
-/**
- *  igc_read_nvm_microwire - Reads EEPROM using microwire
- *  @hw: pointer to the HW structure
- *  @offset: offset of word in the EEPROM to read
- *  @words: number of words to read
- *  @data: word read from the EEPROM
- *
- *  Reads a 16 bit word from the EEPROM.
- **/
-s32 igc_read_nvm_microwire(struct igc_hw *hw, u16 offset, u16 words,
-			     u16 *data)
-{
-	struct igc_nvm_info *nvm = &hw->nvm;
-	u32 i = 0;
-	s32 ret_val;
-	u8 read_opcode = NVM_READ_OPCODE_MICROWIRE;
-
-	DEBUGFUNC("igc_read_nvm_microwire");
-
-	/* A check for invalid values:  offset too large, too many words,
-	 * and not enough words.
-	 */
-	if (offset >= nvm->word_size || words > (nvm->word_size - offset) ||
-			words == 0) {
-		DEBUGOUT("nvm parameter(s) out of bounds\n");
-		return -IGC_ERR_NVM;
-	}
-
-	ret_val = nvm->ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	ret_val = igc_ready_nvm_eeprom(hw);
-	if (ret_val)
-		goto release;
-
-	for (i = 0; i < words; i++) {
-		/* Send the READ command (opcode + addr) */
-		igc_shift_out_eec_bits(hw, read_opcode, nvm->opcode_bits);
-		igc_shift_out_eec_bits(hw, (u16)(offset + i),
-					nvm->address_bits);
-
-		/* Read the data.  For microwire, each word requires the
-		 * overhead of setup and tear-down.
-		 */
-		data[i] = igc_shift_in_eec_bits(hw, 16);
-		igc_standby_nvm(hw);
-	}
-
-release:
-	nvm->ops.release(hw);
-
-	return ret_val;
-}
-
 /**
  *  igc_read_nvm_eerd - Reads EEPROM using EERD register
  *  @hw: pointer to the HW structure
@@ -567,173 +215,6 @@ s32 igc_read_nvm_eerd(struct igc_hw *hw, u16 offset, u16 words, u16 *data)
 	return ret_val;
 }
 
-/**
- *  igc_write_nvm_spi - Write to EEPROM using SPI
- *  @hw: pointer to the HW structure
- *  @offset: offset within the EEPROM to be written to
- *  @words: number of words to write
- *  @data: 16 bit word(s) to be written to the EEPROM
- *
- *  Writes data to EEPROM at offset using SPI interface.
- *
- *  If igc_update_nvm_checksum is not called after this function, the
- *  EEPROM will most likely contain an invalid checksum.
- **/
-s32 igc_write_nvm_spi(struct igc_hw *hw, u16 offset, u16 words, u16 *data)
-{
-	struct igc_nvm_info *nvm = &hw->nvm;
-	s32 ret_val = -IGC_ERR_NVM;
-	u16 widx = 0;
-
-	DEBUGFUNC("igc_write_nvm_spi");
-
-	/* A check for invalid values:  offset too large, too many words,
-	 * and not enough words.
-	 */
-	if (offset >= nvm->word_size || words > (nvm->word_size - offset) ||
-			words == 0) {
-		DEBUGOUT("nvm parameter(s) out of bounds\n");
-		return -IGC_ERR_NVM;
-	}
-
-	while (widx < words) {
-		u8 write_opcode = NVM_WRITE_OPCODE_SPI;
-
-		ret_val = nvm->ops.acquire(hw);
-		if (ret_val)
-			return ret_val;
-
-		ret_val = igc_ready_nvm_eeprom(hw);
-		if (ret_val) {
-			nvm->ops.release(hw);
-			return ret_val;
-		}
-
-		igc_standby_nvm(hw);
-
-		/* Send the WRITE ENABLE command (8 bit opcode) */
-		igc_shift_out_eec_bits(hw, NVM_WREN_OPCODE_SPI,
-					 nvm->opcode_bits);
-
-		igc_standby_nvm(hw);
-
-		/* Some SPI eeproms use the 8th address bit embedded in the
-		 * opcode
-		 */
-		if (nvm->address_bits == 8 && offset >= 128)
-			write_opcode |= NVM_A8_OPCODE_SPI;
-
-		/* Send the Write command (8-bit opcode + addr) */
-		igc_shift_out_eec_bits(hw, write_opcode, nvm->opcode_bits);
-		igc_shift_out_eec_bits(hw, (u16)((offset + widx) * 2),
-					 nvm->address_bits);
-
-		/* Loop to allow for up to whole page write of eeprom */
-		while (widx < words) {
-			u16 word_out = data[widx];
-			word_out = (word_out >> 8) | (word_out << 8);
-			igc_shift_out_eec_bits(hw, word_out, 16);
-			widx++;
-
-			if ((((offset + widx) * 2) % nvm->page_size) == 0) {
-				igc_standby_nvm(hw);
-				break;
-			}
-		}
-		msec_delay(10);
-		nvm->ops.release(hw);
-	}
-
-	return ret_val;
-}
-
-/**
- *  igc_write_nvm_microwire - Writes EEPROM using microwire
- *  @hw: pointer to the HW structure
- *  @offset: offset within the EEPROM to be written to
- *  @words: number of words to write
- *  @data: 16 bit word(s) to be written to the EEPROM
- *
- *  Writes data to EEPROM at offset using microwire interface.
- *
- *  If igc_update_nvm_checksum is not called after this function, the
- *  EEPROM will most likely contain an invalid checksum.
- **/
-s32 igc_write_nvm_microwire(struct igc_hw *hw, u16 offset, u16 words,
-			      u16 *data)
-{
-	struct igc_nvm_info *nvm = &hw->nvm;
-	s32  ret_val;
-	u32 eecd;
-	u16 words_written = 0;
-	u16 widx = 0;
-
-	DEBUGFUNC("igc_write_nvm_microwire");
-
-	/* A check for invalid values:  offset too large, too many words,
-	 * and not enough words.
-	 */
-	if (offset >= nvm->word_size || words > (nvm->word_size - offset) ||
-			words == 0) {
-		DEBUGOUT("nvm parameter(s) out of bounds\n");
-		return -IGC_ERR_NVM;
-	}
-
-	ret_val = nvm->ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	ret_val = igc_ready_nvm_eeprom(hw);
-	if (ret_val)
-		goto release;
-
-	igc_shift_out_eec_bits(hw, NVM_EWEN_OPCODE_MICROWIRE,
-				 (u16)(nvm->opcode_bits + 2));
-
-	igc_shift_out_eec_bits(hw, 0, (u16)(nvm->address_bits - 2));
-
-	igc_standby_nvm(hw);
-
-	while (words_written < words) {
-		igc_shift_out_eec_bits(hw, NVM_WRITE_OPCODE_MICROWIRE,
-					 nvm->opcode_bits);
-
-		igc_shift_out_eec_bits(hw, (u16)(offset + words_written),
-					 nvm->address_bits);
-
-		igc_shift_out_eec_bits(hw, data[words_written], 16);
-
-		igc_standby_nvm(hw);
-
-		for (widx = 0; widx < 200; widx++) {
-			eecd = IGC_READ_REG(hw, IGC_EECD);
-			if (eecd & IGC_EECD_DO)
-				break;
-			usec_delay(50);
-		}
-
-		if (widx == 200) {
-			DEBUGOUT("NVM Write did not complete\n");
-			ret_val = -IGC_ERR_NVM;
-			goto release;
-		}
-
-		igc_standby_nvm(hw);
-
-		words_written++;
-	}
-
-	igc_shift_out_eec_bits(hw, NVM_EWDS_OPCODE_MICROWIRE,
-				 (u16)(nvm->opcode_bits + 2));
-
-	igc_shift_out_eec_bits(hw, 0, (u16)(nvm->address_bits - 2));
-
-release:
-	nvm->ops.release(hw);
-
-	return ret_val;
-}
-
 /**
  *  igc_read_pba_string_generic - Read device part number
  *  @hw: pointer to the HW structure
@@ -939,134 +420,6 @@ s32 igc_read_pba_num_generic(struct igc_hw *hw, u32 *pba_num)
 }
 
 
-/**
- *  igc_read_pba_raw
- *  @hw: pointer to the HW structure
- *  @eeprom_buf: optional pointer to EEPROM image
- *  @eeprom_buf_size: size of EEPROM image in words
- *  @max_pba_block_size: PBA block size limit
- *  @pba: pointer to output PBA structure
- *
- *  Reads PBA from EEPROM image when eeprom_buf is not NULL.
- *  Reads PBA from physical EEPROM device when eeprom_buf is NULL.
- *
- **/
-s32 igc_read_pba_raw(struct igc_hw *hw, u16 *eeprom_buf,
-		       u32 eeprom_buf_size, u16 max_pba_block_size,
-		       struct igc_pba *pba)
-{
-	s32 ret_val;
-	u16 pba_block_size;
-
-	if (pba == NULL)
-		return -IGC_ERR_PARAM;
-
-	if (eeprom_buf == NULL) {
-		ret_val = igc_read_nvm(hw, NVM_PBA_OFFSET_0, 2,
-					 &pba->word[0]);
-		if (ret_val)
-			return ret_val;
-	} else {
-		if (eeprom_buf_size > NVM_PBA_OFFSET_1) {
-			pba->word[0] = eeprom_buf[NVM_PBA_OFFSET_0];
-			pba->word[1] = eeprom_buf[NVM_PBA_OFFSET_1];
-		} else {
-			return -IGC_ERR_PARAM;
-		}
-	}
-
-	if (pba->word[0] == NVM_PBA_PTR_GUARD) {
-		if (pba->pba_block == NULL)
-			return -IGC_ERR_PARAM;
-
-		ret_val = igc_get_pba_block_size(hw, eeprom_buf,
-						   eeprom_buf_size,
-						   &pba_block_size);
-		if (ret_val)
-			return ret_val;
-
-		if (pba_block_size > max_pba_block_size)
-			return -IGC_ERR_PARAM;
-
-		if (eeprom_buf == NULL) {
-			ret_val = igc_read_nvm(hw, pba->word[1],
-						 pba_block_size,
-						 pba->pba_block);
-			if (ret_val)
-				return ret_val;
-		} else {
-			if (eeprom_buf_size > (u32)(pba->word[1] +
-					      pba_block_size)) {
-				memcpy(pba->pba_block,
-				       &eeprom_buf[pba->word[1]],
-				       pba_block_size * sizeof(u16));
-			} else {
-				return -IGC_ERR_PARAM;
-			}
-		}
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_write_pba_raw
- *  @hw: pointer to the HW structure
- *  @eeprom_buf: optional pointer to EEPROM image
- *  @eeprom_buf_size: size of EEPROM image in words
- *  @pba: pointer to PBA structure
- *
- *  Writes PBA to EEPROM image when eeprom_buf is not NULL.
- *  Writes PBA to physical EEPROM device when eeprom_buf is NULL.
- *
- **/
-s32 igc_write_pba_raw(struct igc_hw *hw, u16 *eeprom_buf,
-			u32 eeprom_buf_size, struct igc_pba *pba)
-{
-	s32 ret_val;
-
-	if (pba == NULL)
-		return -IGC_ERR_PARAM;
-
-	if (eeprom_buf == NULL) {
-		ret_val = igc_write_nvm(hw, NVM_PBA_OFFSET_0, 2,
-					  &pba->word[0]);
-		if (ret_val)
-			return ret_val;
-	} else {
-		if (eeprom_buf_size > NVM_PBA_OFFSET_1) {
-			eeprom_buf[NVM_PBA_OFFSET_0] = pba->word[0];
-			eeprom_buf[NVM_PBA_OFFSET_1] = pba->word[1];
-		} else {
-			return -IGC_ERR_PARAM;
-		}
-	}
-
-	if (pba->word[0] == NVM_PBA_PTR_GUARD) {
-		if (pba->pba_block == NULL)
-			return -IGC_ERR_PARAM;
-
-		if (eeprom_buf == NULL) {
-			ret_val = igc_write_nvm(hw, pba->word[1],
-						  pba->pba_block[0],
-						  pba->pba_block);
-			if (ret_val)
-				return ret_val;
-		} else {
-			if (eeprom_buf_size > (u32)(pba->word[1] +
-					      pba->pba_block[0])) {
-				memcpy(&eeprom_buf[pba->word[1]],
-				       pba->pba_block,
-				       pba->pba_block[0] * sizeof(u16));
-			} else {
-				return -IGC_ERR_PARAM;
-			}
-		}
-	}
-
-	return IGC_SUCCESS;
-}
-
 /**
  *  igc_get_pba_block_size
  *  @hw: pointer to the HW structure
@@ -1188,38 +541,6 @@ s32 igc_validate_nvm_checksum_generic(struct igc_hw *hw)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_update_nvm_checksum_generic - Update EEPROM checksum
- *  @hw: pointer to the HW structure
- *
- *  Updates the EEPROM checksum by reading/adding each word of the EEPROM
- *  up to the checksum.  Then calculates the EEPROM checksum and writes the
- *  value to the EEPROM.
- **/
-s32 igc_update_nvm_checksum_generic(struct igc_hw *hw)
-{
-	s32 ret_val;
-	u16 checksum = 0;
-	u16 i, nvm_data;
-
-	DEBUGFUNC("igc_update_nvm_checksum");
-
-	for (i = 0; i < NVM_CHECKSUM_REG; i++) {
-		ret_val = hw->nvm.ops.read(hw, i, 1, &nvm_data);
-		if (ret_val) {
-			DEBUGOUT("NVM Read Error while updating checksum.\n");
-			return ret_val;
-		}
-		checksum += nvm_data;
-	}
-	checksum = (u16)NVM_SUM - checksum;
-	ret_val = hw->nvm.ops.write(hw, NVM_CHECKSUM_REG, 1, &checksum);
-	if (ret_val)
-		DEBUGOUT("NVM Write Error while updating checksum.\n");
-
-	return ret_val;
-}
-
 /**
  *  igc_reload_nvm_generic - Reloads EEPROM
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/igc/base/igc_nvm.h b/drivers/net/igc/base/igc_nvm.h
index 0eee5e4571..e4c1c15f9f 100644
--- a/drivers/net/igc/base/igc_nvm.h
+++ b/drivers/net/igc/base/igc_nvm.h
@@ -32,7 +32,6 @@ s32  igc_null_read_nvm(struct igc_hw *hw, u16 a, u16 b, u16 *c);
 void igc_null_nvm_generic(struct igc_hw *hw);
 s32  igc_null_led_default(struct igc_hw *hw, u16 *data);
 s32  igc_null_write_nvm(struct igc_hw *hw, u16 a, u16 b, u16 *c);
-s32  igc_acquire_nvm_generic(struct igc_hw *hw);
 
 s32  igc_poll_eerd_eewr_done(struct igc_hw *hw, int ee_reg);
 s32  igc_read_mac_addr_generic(struct igc_hw *hw);
@@ -40,27 +39,12 @@ s32  igc_read_pba_num_generic(struct igc_hw *hw, u32 *pba_num);
 s32  igc_read_pba_string_generic(struct igc_hw *hw, u8 *pba_num,
 				   u32 pba_num_size);
 s32  igc_read_pba_length_generic(struct igc_hw *hw, u32 *pba_num_size);
-s32 igc_read_pba_raw(struct igc_hw *hw, u16 *eeprom_buf,
-		       u32 eeprom_buf_size, u16 max_pba_block_size,
-		       struct igc_pba *pba);
-s32 igc_write_pba_raw(struct igc_hw *hw, u16 *eeprom_buf,
-			u32 eeprom_buf_size, struct igc_pba *pba);
 s32 igc_get_pba_block_size(struct igc_hw *hw, u16 *eeprom_buf,
 			     u32 eeprom_buf_size, u16 *pba_block_size);
-s32  igc_read_nvm_spi(struct igc_hw *hw, u16 offset, u16 words, u16 *data);
-s32  igc_read_nvm_microwire(struct igc_hw *hw, u16 offset,
-			      u16 words, u16 *data);
 s32  igc_read_nvm_eerd(struct igc_hw *hw, u16 offset, u16 words,
 			 u16 *data);
-s32  igc_valid_led_default_generic(struct igc_hw *hw, u16 *data);
 s32  igc_validate_nvm_checksum_generic(struct igc_hw *hw);
-s32  igc_write_nvm_microwire(struct igc_hw *hw, u16 offset,
-			       u16 words, u16 *data);
-s32  igc_write_nvm_spi(struct igc_hw *hw, u16 offset, u16 words,
-			 u16 *data);
-s32  igc_update_nvm_checksum_generic(struct igc_hw *hw);
 void igc_stop_nvm(struct igc_hw *hw);
-void igc_release_nvm_generic(struct igc_hw *hw);
 void igc_get_fw_version(struct igc_hw *hw,
 			  struct igc_fw_version *fw_vers);
 
diff --git a/drivers/net/igc/base/igc_osdep.c b/drivers/net/igc/base/igc_osdep.c
index 508f2e07ad..22e9471c79 100644
--- a/drivers/net/igc/base/igc_osdep.c
+++ b/drivers/net/igc/base/igc_osdep.c
@@ -26,18 +26,6 @@ igc_read_pci_cfg(struct igc_hw *hw, u32 reg, u16 *value)
 	*value = 0;
 }
 
-void
-igc_pci_set_mwi(struct igc_hw *hw)
-{
-	(void)hw;
-}
-
-void
-igc_pci_clear_mwi(struct igc_hw *hw)
-{
-	(void)hw;
-}
-
 /*
  * Read the PCI Express capabilities
  */
@@ -49,16 +37,3 @@ igc_read_pcie_cap_reg(struct igc_hw *hw, u32 reg, u16 *value)
 	(void)value;
 	return IGC_NOT_IMPLEMENTED;
 }
-
-/*
- * Write the PCI Express capabilities
- */
-int32_t
-igc_write_pcie_cap_reg(struct igc_hw *hw, u32 reg, u16 *value)
-{
-	(void)hw;
-	(void)reg;
-	(void)value;
-
-	return IGC_NOT_IMPLEMENTED;
-}
diff --git a/drivers/net/igc/base/igc_phy.c b/drivers/net/igc/base/igc_phy.c
index 43bbe69bca..ffcb0bb67e 100644
--- a/drivers/net/igc/base/igc_phy.c
+++ b/drivers/net/igc/base/igc_phy.c
@@ -5,31 +5,6 @@
 #include "igc_api.h"
 
 static s32 igc_wait_autoneg(struct igc_hw *hw);
-static s32 igc_access_phy_wakeup_reg_bm(struct igc_hw *hw, u32 offset,
-					  u16 *data, bool read, bool page_set);
-static u32 igc_get_phy_addr_for_hv_page(u32 page);
-static s32 igc_access_phy_debug_regs_hv(struct igc_hw *hw, u32 offset,
-					  u16 *data, bool read);
-
-/* Cable length tables */
-static const u16 igc_m88_cable_length_table[] = {
-	0, 50, 80, 110, 140, 140, IGC_CABLE_LENGTH_UNDEFINED };
-#define M88IGC_CABLE_LENGTH_TABLE_SIZE \
-		(sizeof(igc_m88_cable_length_table) / \
-		 sizeof(igc_m88_cable_length_table[0]))
-
-static const u16 igc_igp_2_cable_length_table[] = {
-	0, 0, 0, 0, 0, 0, 0, 0, 3, 5, 8, 11, 13, 16, 18, 21, 0, 0, 0, 3,
-	6, 10, 13, 16, 19, 23, 26, 29, 32, 35, 38, 41, 6, 10, 14, 18, 22,
-	26, 30, 33, 37, 41, 44, 48, 51, 54, 58, 61, 21, 26, 31, 35, 40,
-	44, 49, 53, 57, 61, 65, 68, 72, 75, 79, 82, 40, 45, 51, 56, 61,
-	66, 70, 75, 79, 83, 87, 91, 94, 98, 101, 104, 60, 66, 72, 77, 82,
-	87, 92, 96, 100, 104, 108, 111, 114, 117, 119, 121, 83, 89, 95,
-	100, 105, 109, 113, 116, 119, 122, 124, 104, 109, 114, 118, 121,
-	124};
-#define IGP02IGC_CABLE_LENGTH_TABLE_SIZE \
-		(sizeof(igc_igp_2_cable_length_table) / \
-		 sizeof(igc_igp_2_cable_length_table[0]))
 
 /**
  *  igc_init_phy_ops_generic - Initialize PHY function pointers
@@ -385,299 +360,6 @@ s32 igc_write_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 data)
 	return IGC_SUCCESS;
 }
 
-/**
- *  igc_read_phy_reg_i2c - Read PHY register using i2c
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *
- *  Reads the PHY register at offset using the i2c interface and stores the
- *  retrieved information in data.
- **/
-s32 igc_read_phy_reg_i2c(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	u32 i, i2ccmd = 0;
-
-	DEBUGFUNC("igc_read_phy_reg_i2c");
-
-	/* Set up Op-code, Phy Address, and register address in the I2CCMD
-	 * register.  The MAC will take care of interfacing with the
-	 * PHY to retrieve the desired data.
-	 */
-	i2ccmd = ((offset << IGC_I2CCMD_REG_ADDR_SHIFT) |
-		  (phy->addr << IGC_I2CCMD_PHY_ADDR_SHIFT) |
-		  (IGC_I2CCMD_OPCODE_READ));
-
-	IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
-
-	/* Poll the ready bit to see if the I2C read completed */
-	for (i = 0; i < IGC_I2CCMD_PHY_TIMEOUT; i++) {
-		usec_delay(50);
-		i2ccmd = IGC_READ_REG(hw, IGC_I2CCMD);
-		if (i2ccmd & IGC_I2CCMD_READY)
-			break;
-	}
-	if (!(i2ccmd & IGC_I2CCMD_READY)) {
-		DEBUGOUT("I2CCMD Read did not complete\n");
-		return -IGC_ERR_PHY;
-	}
-	if (i2ccmd & IGC_I2CCMD_ERROR) {
-		DEBUGOUT("I2CCMD Error bit set\n");
-		return -IGC_ERR_PHY;
-	}
-
-	/* Need to byte-swap the 16-bit value. */
-	*data = ((i2ccmd >> 8) & 0x00FF) | ((i2ccmd << 8) & 0xFF00);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_write_phy_reg_i2c - Write PHY register using i2c
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Writes the data to PHY register at the offset using the i2c interface.
- **/
-s32 igc_write_phy_reg_i2c(struct igc_hw *hw, u32 offset, u16 data)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	u32 i, i2ccmd = 0;
-	u16 phy_data_swapped;
-
-	DEBUGFUNC("igc_write_phy_reg_i2c");
-
-	/* Prevent overwriting SFP I2C EEPROM which is at A0 address. */
-	if (hw->phy.addr == 0 || hw->phy.addr > 7) {
-		DEBUGOUT1("PHY I2C Address %d is out of range.\n",
-			hw->phy.addr);
-		return -IGC_ERR_CONFIG;
-	}
-
-	/* Swap the data bytes for the I2C interface */
-	phy_data_swapped = ((data >> 8) & 0x00FF) | ((data << 8) & 0xFF00);
-
-	/* Set up Op-code, Phy Address, and register address in the I2CCMD
-	 * register.  The MAC will take care of interfacing with the
-	 * PHY to retrieve the desired data.
-	 */
-	i2ccmd = ((offset << IGC_I2CCMD_REG_ADDR_SHIFT) |
-		  (phy->addr << IGC_I2CCMD_PHY_ADDR_SHIFT) |
-		  IGC_I2CCMD_OPCODE_WRITE |
-		  phy_data_swapped);
-
-	IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
-
-	/* Poll the ready bit to see if the I2C read completed */
-	for (i = 0; i < IGC_I2CCMD_PHY_TIMEOUT; i++) {
-		usec_delay(50);
-		i2ccmd = IGC_READ_REG(hw, IGC_I2CCMD);
-		if (i2ccmd & IGC_I2CCMD_READY)
-			break;
-	}
-	if (!(i2ccmd & IGC_I2CCMD_READY)) {
-		DEBUGOUT("I2CCMD Write did not complete\n");
-		return -IGC_ERR_PHY;
-	}
-	if (i2ccmd & IGC_I2CCMD_ERROR) {
-		DEBUGOUT("I2CCMD Error bit set\n");
-		return -IGC_ERR_PHY;
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_read_sfp_data_byte - Reads SFP module data.
- *  @hw: pointer to the HW structure
- *  @offset: byte location offset to be read
- *  @data: read data buffer pointer
- *
- *  Reads one byte from SFP module data stored
- *  in SFP-resident EEPROM memory or the SFP diagnostic area.
- *  Function should be called with
- *  IGC_I2CCMD_SFP_DATA_ADDR(<byte offset>) for SFP module database access
- *  IGC_I2CCMD_SFP_DIAG_ADDR(<byte offset>) for SFP diagnostics parameters
- *  access
- **/
-s32 igc_read_sfp_data_byte(struct igc_hw *hw, u16 offset, u8 *data)
-{
-	u32 i = 0;
-	u32 i2ccmd = 0;
-	u32 data_local = 0;
-
-	DEBUGFUNC("igc_read_sfp_data_byte");
-
-	if (offset > IGC_I2CCMD_SFP_DIAG_ADDR(255)) {
-		DEBUGOUT("I2CCMD command address exceeds upper limit\n");
-		return -IGC_ERR_PHY;
-	}
-
-	/* Set up Op-code, EEPROM Address, in the I2CCMD
-	 * register. The MAC will take care of interfacing with the
-	 * EEPROM to retrieve the desired data.
-	 */
-	i2ccmd = ((offset << IGC_I2CCMD_REG_ADDR_SHIFT) |
-		  IGC_I2CCMD_OPCODE_READ);
-
-	IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
-
-	/* Poll the ready bit to see if the I2C read completed */
-	for (i = 0; i < IGC_I2CCMD_PHY_TIMEOUT; i++) {
-		usec_delay(50);
-		data_local = IGC_READ_REG(hw, IGC_I2CCMD);
-		if (data_local & IGC_I2CCMD_READY)
-			break;
-	}
-	if (!(data_local & IGC_I2CCMD_READY)) {
-		DEBUGOUT("I2CCMD Read did not complete\n");
-		return -IGC_ERR_PHY;
-	}
-	if (data_local & IGC_I2CCMD_ERROR) {
-		DEBUGOUT("I2CCMD Error bit set\n");
-		return -IGC_ERR_PHY;
-	}
-	*data = (u8)data_local & 0xFF;
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_write_sfp_data_byte - Writes SFP module data.
- *  @hw: pointer to the HW structure
- *  @offset: byte location offset to write to
- *  @data: data to write
- *
- *  Writes one byte to SFP module data stored
- *  in SFP-resident EEPROM memory or the SFP diagnostic area.
- *  Function should be called with
- *  IGC_I2CCMD_SFP_DATA_ADDR(<byte offset>) for SFP module database access
- *  IGC_I2CCMD_SFP_DIAG_ADDR(<byte offset>) for SFP diagnostics parameters
- *  access
- **/
-s32 igc_write_sfp_data_byte(struct igc_hw *hw, u16 offset, u8 data)
-{
-	u32 i = 0;
-	u32 i2ccmd = 0;
-	u32 data_local = 0;
-
-	DEBUGFUNC("igc_write_sfp_data_byte");
-
-	if (offset > IGC_I2CCMD_SFP_DIAG_ADDR(255)) {
-		DEBUGOUT("I2CCMD command address exceeds upper limit\n");
-		return -IGC_ERR_PHY;
-	}
-	/* The programming interface is 16 bits wide
-	 * so we need to read the whole word first
-	 * then update appropriate byte lane and write
-	 * the updated word back.
-	 */
-	/* Set up Op-code, EEPROM Address, in the I2CCMD
-	 * register. The MAC will take care of interfacing
-	 * with an EEPROM to write the data given.
-	 */
-	i2ccmd = ((offset << IGC_I2CCMD_REG_ADDR_SHIFT) |
-		  IGC_I2CCMD_OPCODE_READ);
-	/* Set a command to read single word */
-	IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
-	for (i = 0; i < IGC_I2CCMD_PHY_TIMEOUT; i++) {
-		usec_delay(50);
-		/* Poll the ready bit to see if lastly
-		 * launched I2C operation completed
-		 */
-		i2ccmd = IGC_READ_REG(hw, IGC_I2CCMD);
-		if (i2ccmd & IGC_I2CCMD_READY) {
-			/* Check if this is READ or WRITE phase */
-			if ((i2ccmd & IGC_I2CCMD_OPCODE_READ) ==
-			    IGC_I2CCMD_OPCODE_READ) {
-				/* Write the selected byte
-				 * lane and update whole word
-				 */
-				data_local = i2ccmd & 0xFF00;
-				data_local |= (u32)data;
-				i2ccmd = ((offset <<
-					IGC_I2CCMD_REG_ADDR_SHIFT) |
-					IGC_I2CCMD_OPCODE_WRITE | data_local);
-				IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
-			} else {
-				break;
-			}
-		}
-	}
-	if (!(i2ccmd & IGC_I2CCMD_READY)) {
-		DEBUGOUT("I2CCMD Write did not complete\n");
-		return -IGC_ERR_PHY;
-	}
-	if (i2ccmd & IGC_I2CCMD_ERROR) {
-		DEBUGOUT("I2CCMD Error bit set\n");
-		return -IGC_ERR_PHY;
-	}
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_read_phy_reg_m88 - Read m88 PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *
- *  Acquires semaphore, if necessary, then reads the PHY register at offset
- *  and stores the retrieved information in data.  Release any acquired
- *  semaphores before exiting.
- **/
-s32 igc_read_phy_reg_m88(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	s32 ret_val;
-
-	DEBUGFUNC("igc_read_phy_reg_m88");
-
-	if (!hw->phy.ops.acquire)
-		return IGC_SUCCESS;
-
-	ret_val = hw->phy.ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	ret_val = igc_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
-					  data);
-
-	hw->phy.ops.release(hw);
-
-	return ret_val;
-}
-
-/**
- *  igc_write_phy_reg_m88 - Write m88 PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Acquires semaphore, if necessary, then writes the data to PHY register
- *  at the offset.  Release any acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_m88(struct igc_hw *hw, u32 offset, u16 data)
-{
-	s32 ret_val;
-
-	DEBUGFUNC("igc_write_phy_reg_m88");
-
-	if (!hw->phy.ops.acquire)
-		return IGC_SUCCESS;
-
-	ret_val = hw->phy.ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
-					   data);
-
-	hw->phy.ops.release(hw);
-
-	return ret_val;
-}
-
 /**
  *  igc_set_page_igp - Set page as on IGP-like PHY(s)
  *  @hw: pointer to the HW structure
@@ -698,144 +380,6 @@ s32 igc_set_page_igp(struct igc_hw *hw, u16 page)
 	return igc_write_phy_reg_mdic(hw, IGP01IGC_PHY_PAGE_SELECT, page);
 }
 
-/**
- *  __igc_read_phy_reg_igp - Read igp PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *  @locked: semaphore has already been acquired or not
- *
- *  Acquires semaphore, if necessary, then reads the PHY register at offset
- *  and stores the retrieved information in data.  Release any acquired
- *  semaphores before exiting.
- **/
-static s32 __igc_read_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 *data,
-				    bool locked)
-{
-	s32 ret_val = IGC_SUCCESS;
-
-	DEBUGFUNC("__igc_read_phy_reg_igp");
-
-	if (!locked) {
-		if (!hw->phy.ops.acquire)
-			return IGC_SUCCESS;
-
-		ret_val = hw->phy.ops.acquire(hw);
-		if (ret_val)
-			return ret_val;
-	}
-
-	if (offset > MAX_PHY_MULTI_PAGE_REG)
-		ret_val = igc_write_phy_reg_mdic(hw,
-						   IGP01IGC_PHY_PAGE_SELECT,
-						   (u16)offset);
-	if (!ret_val)
-		ret_val = igc_read_phy_reg_mdic(hw,
-						  MAX_PHY_REG_ADDRESS & offset,
-						  data);
-	if (!locked)
-		hw->phy.ops.release(hw);
-
-	return ret_val;
-}
-
-/**
- *  igc_read_phy_reg_igp - Read igp PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *
- *  Acquires semaphore then reads the PHY register at offset and stores the
- *  retrieved information in data.
- *  Release the acquired semaphore before exiting.
- **/
-s32 igc_read_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	return __igc_read_phy_reg_igp(hw, offset, data, false);
-}
-
-/**
- *  igc_read_phy_reg_igp_locked - Read igp PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *
- *  Reads the PHY register at offset and stores the retrieved information
- *  in data.  Assumes semaphore already acquired.
- **/
-s32 igc_read_phy_reg_igp_locked(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	return __igc_read_phy_reg_igp(hw, offset, data, true);
-}
-
-/**
- *  igc_write_phy_reg_igp - Write igp PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *  @locked: semaphore has already been acquired or not
- *
- *  Acquires semaphore, if necessary, then writes the data to PHY register
- *  at the offset.  Release any acquired semaphores before exiting.
- **/
-static s32 __igc_write_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 data,
-				     bool locked)
-{
-	s32 ret_val = IGC_SUCCESS;
-
-	DEBUGFUNC("igc_write_phy_reg_igp");
-
-	if (!locked) {
-		if (!hw->phy.ops.acquire)
-			return IGC_SUCCESS;
-
-		ret_val = hw->phy.ops.acquire(hw);
-		if (ret_val)
-			return ret_val;
-	}
-
-	if (offset > MAX_PHY_MULTI_PAGE_REG)
-		ret_val = igc_write_phy_reg_mdic(hw,
-						   IGP01IGC_PHY_PAGE_SELECT,
-						   (u16)offset);
-	if (!ret_val)
-		ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS &
-						       offset,
-						   data);
-	if (!locked)
-		hw->phy.ops.release(hw);
-
-	return ret_val;
-}
-
-/**
- *  igc_write_phy_reg_igp - Write igp PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Acquires semaphore then writes the data to PHY register
- *  at the offset.  Release any acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 data)
-{
-	return __igc_write_phy_reg_igp(hw, offset, data, false);
-}
-
-/**
- *  igc_write_phy_reg_igp_locked - Write igp PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Writes the data to PHY register at the offset.
- *  Assumes semaphore already acquired.
- **/
-s32 igc_write_phy_reg_igp_locked(struct igc_hw *hw, u32 offset, u16 data)
-{
-	return __igc_write_phy_reg_igp(hw, offset, data, true);
-}
-
 /**
  *  __igc_read_kmrn_reg - Read kumeran register
  *  @hw: pointer to the HW structure
@@ -896,21 +440,6 @@ s32 igc_read_kmrn_reg_generic(struct igc_hw *hw, u32 offset, u16 *data)
 	return __igc_read_kmrn_reg(hw, offset, data, false);
 }
 
-/**
- *  igc_read_kmrn_reg_locked -  Read kumeran register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *
- *  Reads the PHY register at offset using the kumeran interface.  The
- *  information retrieved is stored in data.
- *  Assumes semaphore already acquired.
- **/
-s32 igc_read_kmrn_reg_locked(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	return __igc_read_kmrn_reg(hw, offset, data, true);
-}
-
 /**
  *  __igc_write_kmrn_reg - Write kumeran register
  *  @hw: pointer to the HW structure
@@ -968,490 +497,17 @@ s32 igc_write_kmrn_reg_generic(struct igc_hw *hw, u32 offset, u16 data)
 }
 
 /**
- *  igc_write_kmrn_reg_locked -  Write kumeran register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Write the data to PHY register at the offset using the kumeran interface.
- *  Assumes semaphore already acquired.
- **/
-s32 igc_write_kmrn_reg_locked(struct igc_hw *hw, u32 offset, u16 data)
-{
-	return __igc_write_kmrn_reg(hw, offset, data, true);
-}
-
-/**
- *  igc_set_master_slave_mode - Setup PHY for Master/slave mode
+ *  igc_phy_setup_autoneg - Configure PHY for auto-negotiation
  *  @hw: pointer to the HW structure
  *
- *  Sets up Master/slave mode
+ *  Reads the MII auto-neg advertisement register and/or the 1000T control
+ *  register and if the PHY is already setup for auto-negotiation, then
+ *  return successful.  Otherwise, setup advertisement and flow control to
+ *  the appropriate values for the wanted auto-negotiation.
  **/
-static s32 igc_set_master_slave_mode(struct igc_hw *hw)
+s32 igc_phy_setup_autoneg(struct igc_hw *hw)
 {
-	s32 ret_val;
-	u16 phy_data;
-
-	/* Resolve Master/Slave mode */
-	ret_val = hw->phy.ops.read_reg(hw, PHY_1000T_CTRL, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	/* load defaults for future use */
-	hw->phy.original_ms_type = (phy_data & CR_1000T_MS_ENABLE) ?
-				   ((phy_data & CR_1000T_MS_VALUE) ?
-				    igc_ms_force_master :
-				    igc_ms_force_slave) : igc_ms_auto;
-
-	switch (hw->phy.ms_type) {
-	case igc_ms_force_master:
-		phy_data |= (CR_1000T_MS_ENABLE | CR_1000T_MS_VALUE);
-		break;
-	case igc_ms_force_slave:
-		phy_data |= CR_1000T_MS_ENABLE;
-		phy_data &= ~(CR_1000T_MS_VALUE);
-		break;
-	case igc_ms_auto:
-		phy_data &= ~CR_1000T_MS_ENABLE;
-		/* fall-through */
-	default:
-		break;
-	}
-
-	return hw->phy.ops.write_reg(hw, PHY_1000T_CTRL, phy_data);
-}
-
-/**
- *  igc_copper_link_setup_82577 - Setup 82577 PHY for copper link
- *  @hw: pointer to the HW structure
- *
- *  Sets up Carrier-sense on Transmit and downshift values.
- **/
-s32 igc_copper_link_setup_82577(struct igc_hw *hw)
-{
-	s32 ret_val;
-	u16 phy_data;
-
-	DEBUGFUNC("igc_copper_link_setup_82577");
-
-	if (hw->phy.type == igc_phy_82580) {
-		ret_val = hw->phy.ops.reset(hw);
-		if (ret_val) {
-			DEBUGOUT("Error resetting the PHY.\n");
-			return ret_val;
-		}
-	}
-
-	/* Enable CRS on Tx. This must be set for half-duplex operation. */
-	ret_val = hw->phy.ops.read_reg(hw, I82577_CFG_REG, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	phy_data |= I82577_CFG_ASSERT_CRS_ON_TX;
-
-	/* Enable downshift */
-	phy_data |= I82577_CFG_ENABLE_DOWNSHIFT;
-
-	ret_val = hw->phy.ops.write_reg(hw, I82577_CFG_REG, phy_data);
-	if (ret_val)
-		return ret_val;
-
-	/* Set MDI/MDIX mode */
-	ret_val = hw->phy.ops.read_reg(hw, I82577_PHY_CTRL_2, &phy_data);
-	if (ret_val)
-		return ret_val;
-	phy_data &= ~I82577_PHY_CTRL2_MDIX_CFG_MASK;
-	/* Options:
-	 *   0 - Auto (default)
-	 *   1 - MDI mode
-	 *   2 - MDI-X mode
-	 */
-	switch (hw->phy.mdix) {
-	case 1:
-		break;
-	case 2:
-		phy_data |= I82577_PHY_CTRL2_MANUAL_MDIX;
-		break;
-	case 0:
-	default:
-		phy_data |= I82577_PHY_CTRL2_AUTO_MDI_MDIX;
-		break;
-	}
-	ret_val = hw->phy.ops.write_reg(hw, I82577_PHY_CTRL_2, phy_data);
-	if (ret_val)
-		return ret_val;
-
-	return igc_set_master_slave_mode(hw);
-}
-
-/**
- *  igc_copper_link_setup_m88 - Setup m88 PHY's for copper link
- *  @hw: pointer to the HW structure
- *
- *  Sets up MDI/MDI-X and polarity for m88 PHY's.  If necessary, transmit clock
- *  and downshift values are set also.
- **/
-s32 igc_copper_link_setup_m88(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 phy_data;
-
-	DEBUGFUNC("igc_copper_link_setup_m88");
-
-
-	/* Enable CRS on Tx. This must be set for half-duplex operation. */
-	ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	/* For BM PHY this bit is downshift enable */
-	if (phy->type != igc_phy_bm)
-		phy_data |= M88IGC_PSCR_ASSERT_CRS_ON_TX;
-
-	/* Options:
-	 *   MDI/MDI-X = 0 (default)
-	 *   0 - Auto for all speeds
-	 *   1 - MDI mode
-	 *   2 - MDI-X mode
-	 *   3 - Auto for 1000Base-T only (MDI-X for 10/100Base-T modes)
-	 */
-	phy_data &= ~M88IGC_PSCR_AUTO_X_MODE;
-
-	switch (phy->mdix) {
-	case 1:
-		phy_data |= M88IGC_PSCR_MDI_MANUAL_MODE;
-		break;
-	case 2:
-		phy_data |= M88IGC_PSCR_MDIX_MANUAL_MODE;
-		break;
-	case 3:
-		phy_data |= M88IGC_PSCR_AUTO_X_1000T;
-		break;
-	case 0:
-	default:
-		phy_data |= M88IGC_PSCR_AUTO_X_MODE;
-		break;
-	}
-
-	/* Options:
-	 *   disable_polarity_correction = 0 (default)
-	 *       Automatic Correction for Reversed Cable Polarity
-	 *   0 - Disabled
-	 *   1 - Enabled
-	 */
-	phy_data &= ~M88IGC_PSCR_POLARITY_REVERSAL;
-	if (phy->disable_polarity_correction)
-		phy_data |= M88IGC_PSCR_POLARITY_REVERSAL;
-
-	/* Enable downshift on BM (disabled by default) */
-	if (phy->type == igc_phy_bm) {
-		/* For 82574/82583, first disable then enable downshift */
-		if (phy->id == BMIGC_E_PHY_ID_R2) {
-			phy_data &= ~BMIGC_PSCR_ENABLE_DOWNSHIFT;
-			ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL,
-						     phy_data);
-			if (ret_val)
-				return ret_val;
-			/* Commit the changes. */
-			ret_val = phy->ops.commit(hw);
-			if (ret_val) {
-				DEBUGOUT("Error committing the PHY changes\n");
-				return ret_val;
-			}
-		}
-
-		phy_data |= BMIGC_PSCR_ENABLE_DOWNSHIFT;
-	}
-
-	ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL, phy_data);
-	if (ret_val)
-		return ret_val;
-
-	if (phy->type == igc_phy_m88 && phy->revision < IGC_REVISION_4 &&
-			phy->id != BMIGC_E_PHY_ID_R2) {
-		/* Force TX_CLK in the Extended PHY Specific Control Register
-		 * to 25MHz clock.
-		 */
-		ret_val = phy->ops.read_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL,
-					    &phy_data);
-		if (ret_val)
-			return ret_val;
-
-		phy_data |= M88IGC_EPSCR_TX_CLK_25;
-
-		if (phy->revision == IGC_REVISION_2 &&
-				phy->id == M88E1111_I_PHY_ID) {
-			/* 82573L PHY - set the downshift counter to 5x. */
-			phy_data &= ~M88EC018_EPSCR_DOWNSHIFT_COUNTER_MASK;
-			phy_data |= M88EC018_EPSCR_DOWNSHIFT_COUNTER_5X;
-		} else {
-			/* Configure Master and Slave downshift values */
-			phy_data &= ~(M88IGC_EPSCR_MASTER_DOWNSHIFT_MASK |
-				     M88IGC_EPSCR_SLAVE_DOWNSHIFT_MASK);
-			phy_data |= (M88IGC_EPSCR_MASTER_DOWNSHIFT_1X |
-				     M88IGC_EPSCR_SLAVE_DOWNSHIFT_1X);
-		}
-		ret_val = phy->ops.write_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL,
-					     phy_data);
-		if (ret_val)
-			return ret_val;
-	}
-
-	if (phy->type == igc_phy_bm && phy->id == BMIGC_E_PHY_ID_R2) {
-		/* Set PHY page 0, register 29 to 0x0003 */
-		ret_val = phy->ops.write_reg(hw, 29, 0x0003);
-		if (ret_val)
-			return ret_val;
-
-		/* Set PHY page 0, register 30 to 0x0000 */
-		ret_val = phy->ops.write_reg(hw, 30, 0x0000);
-		if (ret_val)
-			return ret_val;
-	}
-
-	/* Commit the changes. */
-	ret_val = phy->ops.commit(hw);
-	if (ret_val) {
-		DEBUGOUT("Error committing the PHY changes\n");
-		return ret_val;
-	}
-
-	if (phy->type == igc_phy_82578) {
-		ret_val = phy->ops.read_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL,
-					    &phy_data);
-		if (ret_val)
-			return ret_val;
-
-		/* 82578 PHY - set the downshift count to 1x. */
-		phy_data |= I82578_EPSCR_DOWNSHIFT_ENABLE;
-		phy_data &= ~I82578_EPSCR_DOWNSHIFT_COUNTER_MASK;
-		ret_val = phy->ops.write_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL,
-					     phy_data);
-		if (ret_val)
-			return ret_val;
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_copper_link_setup_m88_gen2 - Setup m88 PHY's for copper link
- *  @hw: pointer to the HW structure
- *
- *  Sets up MDI/MDI-X and polarity for i347-AT4, m88e1322 and m88e1112 PHY's.
- *  Also enables and sets the downshift parameters.
- **/
-s32 igc_copper_link_setup_m88_gen2(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 phy_data;
-
-	DEBUGFUNC("igc_copper_link_setup_m88_gen2");
-
-
-	/* Enable CRS on Tx. This must be set for half-duplex operation. */
-	ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	/* Options:
-	 *   MDI/MDI-X = 0 (default)
-	 *   0 - Auto for all speeds
-	 *   1 - MDI mode
-	 *   2 - MDI-X mode
-	 *   3 - Auto for 1000Base-T only (MDI-X for 10/100Base-T modes)
-	 */
-	phy_data &= ~M88IGC_PSCR_AUTO_X_MODE;
-
-	switch (phy->mdix) {
-	case 1:
-		phy_data |= M88IGC_PSCR_MDI_MANUAL_MODE;
-		break;
-	case 2:
-		phy_data |= M88IGC_PSCR_MDIX_MANUAL_MODE;
-		break;
-	case 3:
-		/* M88E1112 does not support this mode) */
-		if (phy->id != M88E1112_E_PHY_ID) {
-			phy_data |= M88IGC_PSCR_AUTO_X_1000T;
-			break;
-		}
-		/* Fall through */
-	case 0:
-	default:
-		phy_data |= M88IGC_PSCR_AUTO_X_MODE;
-		break;
-	}
-
-	/* Options:
-	 *   disable_polarity_correction = 0 (default)
-	 *       Automatic Correction for Reversed Cable Polarity
-	 *   0 - Disabled
-	 *   1 - Enabled
-	 */
-	phy_data &= ~M88IGC_PSCR_POLARITY_REVERSAL;
-	if (phy->disable_polarity_correction)
-		phy_data |= M88IGC_PSCR_POLARITY_REVERSAL;
-
-	/* Enable downshift and setting it to X6 */
-	if (phy->id == M88E1543_E_PHY_ID) {
-		phy_data &= ~I347AT4_PSCR_DOWNSHIFT_ENABLE;
-		ret_val =
-		    phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL, phy_data);
-		if (ret_val)
-			return ret_val;
-
-		ret_val = phy->ops.commit(hw);
-		if (ret_val) {
-			DEBUGOUT("Error committing the PHY changes\n");
-			return ret_val;
-		}
-	}
-
-	phy_data &= ~I347AT4_PSCR_DOWNSHIFT_MASK;
-	phy_data |= I347AT4_PSCR_DOWNSHIFT_6X;
-	phy_data |= I347AT4_PSCR_DOWNSHIFT_ENABLE;
-
-	ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL, phy_data);
-	if (ret_val)
-		return ret_val;
-
-	/* Commit the changes. */
-	ret_val = phy->ops.commit(hw);
-	if (ret_val) {
-		DEBUGOUT("Error committing the PHY changes\n");
-		return ret_val;
-	}
-
-	ret_val = igc_set_master_slave_mode(hw);
-	if (ret_val)
-		return ret_val;
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_copper_link_setup_igp - Setup igp PHY's for copper link
- *  @hw: pointer to the HW structure
- *
- *  Sets up LPLU, MDI/MDI-X, polarity, Smartspeed and Master/Slave config for
- *  igp PHY's.
- **/
-s32 igc_copper_link_setup_igp(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 data;
-
-	DEBUGFUNC("igc_copper_link_setup_igp");
-
-
-	ret_val = hw->phy.ops.reset(hw);
-	if (ret_val) {
-		DEBUGOUT("Error resetting the PHY.\n");
-		return ret_val;
-	}
-
-	/* Wait 100ms for MAC to configure PHY from NVM settings, to avoid
-	 * timeout issues when LFS is enabled.
-	 */
-	msec_delay(100);
-
-	/* The NVM settings will configure LPLU in D3 for
-	 * non-IGP1 PHYs.
-	 */
-	if (phy->type == igc_phy_igp) {
-		/* disable lplu d3 during driver init */
-		ret_val = hw->phy.ops.set_d3_lplu_state(hw, false);
-		if (ret_val) {
-			DEBUGOUT("Error Disabling LPLU D3\n");
-			return ret_val;
-		}
-	}
-
-	/* disable lplu d0 during driver init */
-	if (hw->phy.ops.set_d0_lplu_state) {
-		ret_val = hw->phy.ops.set_d0_lplu_state(hw, false);
-		if (ret_val) {
-			DEBUGOUT("Error Disabling LPLU D0\n");
-			return ret_val;
-		}
-	}
-	/* Configure mdi-mdix settings */
-	ret_val = phy->ops.read_reg(hw, IGP01IGC_PHY_PORT_CTRL, &data);
-	if (ret_val)
-		return ret_val;
-
-	data &= ~IGP01IGC_PSCR_AUTO_MDIX;
-
-	switch (phy->mdix) {
-	case 1:
-		data &= ~IGP01IGC_PSCR_FORCE_MDI_MDIX;
-		break;
-	case 2:
-		data |= IGP01IGC_PSCR_FORCE_MDI_MDIX;
-		break;
-	case 0:
-	default:
-		data |= IGP01IGC_PSCR_AUTO_MDIX;
-		break;
-	}
-	ret_val = phy->ops.write_reg(hw, IGP01IGC_PHY_PORT_CTRL, data);
-	if (ret_val)
-		return ret_val;
-
-	/* set auto-master slave resolution settings */
-	if (hw->mac.autoneg) {
-		/* when autonegotiation advertisement is only 1000Mbps then we
-		 * should disable SmartSpeed and enable Auto MasterSlave
-		 * resolution as hardware default.
-		 */
-		if (phy->autoneg_advertised == ADVERTISE_1000_FULL) {
-			/* Disable SmartSpeed */
-			ret_val = phy->ops.read_reg(hw,
-						    IGP01IGC_PHY_PORT_CONFIG,
-						    &data);
-			if (ret_val)
-				return ret_val;
-
-			data &= ~IGP01IGC_PSCFR_SMART_SPEED;
-			ret_val = phy->ops.write_reg(hw,
-						     IGP01IGC_PHY_PORT_CONFIG,
-						     data);
-			if (ret_val)
-				return ret_val;
-
-			/* Set auto Master/Slave resolution process */
-			ret_val = phy->ops.read_reg(hw, PHY_1000T_CTRL, &data);
-			if (ret_val)
-				return ret_val;
-
-			data &= ~CR_1000T_MS_ENABLE;
-			ret_val = phy->ops.write_reg(hw, PHY_1000T_CTRL, data);
-			if (ret_val)
-				return ret_val;
-		}
-
-		ret_val = igc_set_master_slave_mode(hw);
-	}
-
-	return ret_val;
-}
-
-/**
- *  igc_phy_setup_autoneg - Configure PHY for auto-negotiation
- *  @hw: pointer to the HW structure
- *
- *  Reads the MII auto-neg advertisement register and/or the 1000T control
- *  register and if the PHY is already setup for auto-negotiation, then
- *  return successful.  Otherwise, setup advertisement and flow control to
- *  the appropriate values for the wanted auto-negotiation.
- **/
-s32 igc_phy_setup_autoneg(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
+	struct igc_phy_info *phy = &hw->phy;
 	s32 ret_val;
 	u16 mii_autoneg_adv_reg;
 	u16 mii_1000t_ctrl_reg = 0;
@@ -1745,321 +801,48 @@ s32 igc_setup_copper_link_generic(struct igc_hw *hw)
 }
 
 /**
- *  igc_phy_force_speed_duplex_igp - Force speed/duplex for igp PHY
+ *  igc_phy_force_speed_duplex_setup - Configure forced PHY speed/duplex
  *  @hw: pointer to the HW structure
+ *  @phy_ctrl: pointer to current value of PHY_CONTROL
  *
- *  Calls the PHY setup function to force speed and duplex.  Clears the
- *  auto-crossover to force MDI manually.  Waits for link and returns
- *  successful if link up is successful, else -IGC_ERR_PHY (-2).
+ *  Forces speed and duplex on the PHY by doing the following: disable flow
+ *  control, force speed/duplex on the MAC, disable auto speed detection,
+ *  disable auto-negotiation, configure duplex, configure speed, configure
+ *  the collision distance, write configuration to CTRL register.  The
+ *  caller must write to the PHY_CONTROL register for these settings to
+ *  take effect.
  **/
-s32 igc_phy_force_speed_duplex_igp(struct igc_hw *hw)
+void igc_phy_force_speed_duplex_setup(struct igc_hw *hw, u16 *phy_ctrl)
 {
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 phy_data;
-	bool link;
+	struct igc_mac_info *mac = &hw->mac;
+	u32 ctrl;
 
-	DEBUGFUNC("igc_phy_force_speed_duplex_igp");
+	DEBUGFUNC("igc_phy_force_speed_duplex_setup");
 
-	ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &phy_data);
-	if (ret_val)
-		return ret_val;
+	/* Turn off flow control when forcing speed/duplex */
+	hw->fc.current_mode = igc_fc_none;
 
-	igc_phy_force_speed_duplex_setup(hw, &phy_data);
+	/* Force speed/duplex on the mac */
+	ctrl = IGC_READ_REG(hw, IGC_CTRL);
+	ctrl |= (IGC_CTRL_FRCSPD | IGC_CTRL_FRCDPX);
+	ctrl &= ~IGC_CTRL_SPD_SEL;
 
-	ret_val = phy->ops.write_reg(hw, PHY_CONTROL, phy_data);
-	if (ret_val)
-		return ret_val;
+	/* Disable Auto Speed Detection */
+	ctrl &= ~IGC_CTRL_ASDE;
 
-	/* Clear Auto-Crossover to force MDI manually.  IGP requires MDI
-	 * forced whenever speed and duplex are forced.
-	 */
-	ret_val = phy->ops.read_reg(hw, IGP01IGC_PHY_PORT_CTRL, &phy_data);
-	if (ret_val)
-		return ret_val;
+	/* Disable autoneg on the phy */
+	*phy_ctrl &= ~MII_CR_AUTO_NEG_EN;
 
-	phy_data &= ~IGP01IGC_PSCR_AUTO_MDIX;
-	phy_data &= ~IGP01IGC_PSCR_FORCE_MDI_MDIX;
-
-	ret_val = phy->ops.write_reg(hw, IGP01IGC_PHY_PORT_CTRL, phy_data);
-	if (ret_val)
-		return ret_val;
-
-	DEBUGOUT1("IGP PSCR: %X\n", phy_data);
-
-	usec_delay(1);
-
-	if (phy->autoneg_wait_to_complete) {
-		DEBUGOUT("Waiting for forced speed/duplex link on IGP phy.\n");
-
-		ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
-						     100000, &link);
-		if (ret_val)
-			return ret_val;
-
-		if (!link)
-			DEBUGOUT("Link taking longer than expected.\n");
-
-		/* Try once more */
-		ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
-						     100000, &link);
-	}
-
-	return ret_val;
-}
-
-/**
- *  igc_phy_force_speed_duplex_m88 - Force speed/duplex for m88 PHY
- *  @hw: pointer to the HW structure
- *
- *  Calls the PHY setup function to force speed and duplex.  Clears the
- *  auto-crossover to force MDI manually.  Resets the PHY to commit the
- *  changes.  If time expires while waiting for link up, we reset the DSP.
- *  After reset, TX_CLK and CRS on Tx must be set.  Return successful upon
- *  successful completion, else return corresponding error code.
- **/
-s32 igc_phy_force_speed_duplex_m88(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 phy_data;
-	bool link;
-
-	DEBUGFUNC("igc_phy_force_speed_duplex_m88");
-
-	/* I210 and I211 devices support Auto-Crossover in forced operation. */
-	if (phy->type != igc_phy_i210) {
-		/* Clear Auto-Crossover to force MDI manually.  M88E1000
-		 * requires MDI forced whenever speed and duplex are forced.
-		 */
-		ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL,
-					    &phy_data);
-		if (ret_val)
-			return ret_val;
-
-		phy_data &= ~M88IGC_PSCR_AUTO_X_MODE;
-		ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL,
-					     phy_data);
-		if (ret_val)
-			return ret_val;
-
-		DEBUGOUT1("M88E1000 PSCR: %X\n", phy_data);
-	}
-
-	ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	igc_phy_force_speed_duplex_setup(hw, &phy_data);
-
-	ret_val = phy->ops.write_reg(hw, PHY_CONTROL, phy_data);
-	if (ret_val)
-		return ret_val;
-
-	/* Reset the phy to commit changes. */
-	ret_val = hw->phy.ops.commit(hw);
-	if (ret_val)
-		return ret_val;
-
-	if (phy->autoneg_wait_to_complete) {
-		DEBUGOUT("Waiting for forced speed/duplex link on M88 phy.\n");
-
-		ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
-						     100000, &link);
-		if (ret_val)
-			return ret_val;
-
-		if (!link) {
-			bool reset_dsp = true;
-
-			switch (hw->phy.id) {
-			case I347AT4_E_PHY_ID:
-			case M88E1340M_E_PHY_ID:
-			case M88E1112_E_PHY_ID:
-			case M88E1543_E_PHY_ID:
-			case M88E1512_E_PHY_ID:
-			case I210_I_PHY_ID:
-			/* fall-through */
-			case I225_I_PHY_ID:
-			/* fall-through */
-				reset_dsp = false;
-				break;
-			default:
-				if (hw->phy.type != igc_phy_m88)
-					reset_dsp = false;
-				break;
-			}
-
-			if (!reset_dsp) {
-				DEBUGOUT("Link taking longer than expected.\n");
-			} else {
-				/* We didn't get link.
-				 * Reset the DSP and cross our fingers.
-				 */
-				ret_val = phy->ops.write_reg(hw,
-						M88IGC_PHY_PAGE_SELECT,
-						0x001d);
-				if (ret_val)
-					return ret_val;
-				ret_val = igc_phy_reset_dsp_generic(hw);
-				if (ret_val)
-					return ret_val;
-			}
-		}
-
-		/* Try once more */
-		ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
-						     100000, &link);
-		if (ret_val)
-			return ret_val;
-	}
-
-	if (hw->phy.type != igc_phy_m88)
-		return IGC_SUCCESS;
-
-	if (hw->phy.id == I347AT4_E_PHY_ID ||
-		hw->phy.id == M88E1340M_E_PHY_ID ||
-		hw->phy.id == M88E1112_E_PHY_ID)
-		return IGC_SUCCESS;
-	if (hw->phy.id == I210_I_PHY_ID)
-		return IGC_SUCCESS;
-	if (hw->phy.id == I225_I_PHY_ID)
-		return IGC_SUCCESS;
-	if (hw->phy.id == M88E1543_E_PHY_ID || hw->phy.id == M88E1512_E_PHY_ID)
-		return IGC_SUCCESS;
-	ret_val = phy->ops.read_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	/* Resetting the phy means we need to re-force TX_CLK in the
-	 * Extended PHY Specific Control Register to 25MHz clock from
-	 * the reset value of 2.5MHz.
-	 */
-	phy_data |= M88IGC_EPSCR_TX_CLK_25;
-	ret_val = phy->ops.write_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL, phy_data);
-	if (ret_val)
-		return ret_val;
-
-	/* In addition, we must re-enable CRS on Tx for both half and full
-	 * duplex.
-	 */
-	ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	phy_data |= M88IGC_PSCR_ASSERT_CRS_ON_TX;
-	ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL, phy_data);
-
-	return ret_val;
-}
-
-/**
- *  igc_phy_force_speed_duplex_ife - Force PHY speed & duplex
- *  @hw: pointer to the HW structure
- *
- *  Forces the speed and duplex settings of the PHY.
- *  This is a function pointer entry point only called by
- *  PHY setup routines.
- **/
-s32 igc_phy_force_speed_duplex_ife(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 data;
-	bool link;
-
-	DEBUGFUNC("igc_phy_force_speed_duplex_ife");
-
-	ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &data);
-	if (ret_val)
-		return ret_val;
-
-	igc_phy_force_speed_duplex_setup(hw, &data);
-
-	ret_val = phy->ops.write_reg(hw, PHY_CONTROL, data);
-	if (ret_val)
-		return ret_val;
-
-	/* Disable MDI-X support for 10/100 */
-	ret_val = phy->ops.read_reg(hw, IFE_PHY_MDIX_CONTROL, &data);
-	if (ret_val)
-		return ret_val;
-
-	data &= ~IFE_PMC_AUTO_MDIX;
-	data &= ~IFE_PMC_FORCE_MDIX;
-
-	ret_val = phy->ops.write_reg(hw, IFE_PHY_MDIX_CONTROL, data);
-	if (ret_val)
-		return ret_val;
-
-	DEBUGOUT1("IFE PMC: %X\n", data);
-
-	usec_delay(1);
-
-	if (phy->autoneg_wait_to_complete) {
-		DEBUGOUT("Waiting for forced speed/duplex link on IFE phy.\n");
-
-		ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
-						     100000, &link);
-		if (ret_val)
-			return ret_val;
-
-		if (!link)
-			DEBUGOUT("Link taking longer than expected.\n");
-
-		/* Try once more */
-		ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
-						     100000, &link);
-		if (ret_val)
-			return ret_val;
-	}
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_phy_force_speed_duplex_setup - Configure forced PHY speed/duplex
- *  @hw: pointer to the HW structure
- *  @phy_ctrl: pointer to current value of PHY_CONTROL
- *
- *  Forces speed and duplex on the PHY by doing the following: disable flow
- *  control, force speed/duplex on the MAC, disable auto speed detection,
- *  disable auto-negotiation, configure duplex, configure speed, configure
- *  the collision distance, write configuration to CTRL register.  The
- *  caller must write to the PHY_CONTROL register for these settings to
- *  take effect.
- **/
-void igc_phy_force_speed_duplex_setup(struct igc_hw *hw, u16 *phy_ctrl)
-{
-	struct igc_mac_info *mac = &hw->mac;
-	u32 ctrl;
-
-	DEBUGFUNC("igc_phy_force_speed_duplex_setup");
-
-	/* Turn off flow control when forcing speed/duplex */
-	hw->fc.current_mode = igc_fc_none;
-
-	/* Force speed/duplex on the mac */
-	ctrl = IGC_READ_REG(hw, IGC_CTRL);
-	ctrl |= (IGC_CTRL_FRCSPD | IGC_CTRL_FRCDPX);
-	ctrl &= ~IGC_CTRL_SPD_SEL;
-
-	/* Disable Auto Speed Detection */
-	ctrl &= ~IGC_CTRL_ASDE;
-
-	/* Disable autoneg on the phy */
-	*phy_ctrl &= ~MII_CR_AUTO_NEG_EN;
-
-	/* Forcing Full or Half Duplex? */
-	if (mac->forced_speed_duplex & IGC_ALL_HALF_DUPLEX) {
-		ctrl &= ~IGC_CTRL_FD;
-		*phy_ctrl &= ~MII_CR_FULL_DUPLEX;
-		DEBUGOUT("Half Duplex\n");
-	} else {
-		ctrl |= IGC_CTRL_FD;
-		*phy_ctrl |= MII_CR_FULL_DUPLEX;
-		DEBUGOUT("Full Duplex\n");
-	}
+	/* Forcing Full or Half Duplex? */
+	if (mac->forced_speed_duplex & IGC_ALL_HALF_DUPLEX) {
+		ctrl &= ~IGC_CTRL_FD;
+		*phy_ctrl &= ~MII_CR_FULL_DUPLEX;
+		DEBUGOUT("Half Duplex\n");
+	} else {
+		ctrl |= IGC_CTRL_FD;
+		*phy_ctrl |= MII_CR_FULL_DUPLEX;
+		DEBUGOUT("Full Duplex\n");
+	}
 
 	/* Forcing 10mb or 100mb? */
 	if (mac->forced_speed_duplex & IGC_ALL_100_SPEED) {
@@ -2078,96 +861,6 @@ void igc_phy_force_speed_duplex_setup(struct igc_hw *hw, u16 *phy_ctrl)
 	IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
 }
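As the comment above notes, igc_phy_force_speed_duplex_setup() only prepares the values; the caller still has to write PHY_CONTROL itself. The removed igp/m88/ife force functions all follow roughly this pattern (sketch; declarations and error paths trimmed):

	ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &phy_data);
	if (ret_val)
		return ret_val;

	igc_phy_force_speed_duplex_setup(hw, &phy_data);

	ret_val = phy->ops.write_reg(hw, PHY_CONTROL, phy_data);
	if (ret_val)
		return ret_val;

	/* Optionally poll for the forced link to come up. */
	if (phy->autoneg_wait_to_complete)
		ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
						   100000, &link);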
 
-/**
- *  igc_set_d3_lplu_state_generic - Sets low power link up state for D3
- *  @hw: pointer to the HW structure
- *  @active: boolean used to enable/disable lplu
- *
- *  Success returns 0, Failure returns 1
- *
- *  The low power link up (lplu) state is set to the power management level D3
- *  and SmartSpeed is disabled when active is true, else clear lplu for D3
- *  and enable Smartspeed.  LPLU and Smartspeed are mutually exclusive.  LPLU
- *  is used during Dx states where the power conservation is most important.
- *  During driver activity, SmartSpeed should be enabled so performance is
- *  maintained.
- **/
-s32 igc_set_d3_lplu_state_generic(struct igc_hw *hw, bool active)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 data;
-
-	DEBUGFUNC("igc_set_d3_lplu_state_generic");
-
-	if (!hw->phy.ops.read_reg)
-		return IGC_SUCCESS;
-
-	ret_val = phy->ops.read_reg(hw, IGP02IGC_PHY_POWER_MGMT, &data);
-	if (ret_val)
-		return ret_val;
-
-	if (!active) {
-		data &= ~IGP02IGC_PM_D3_LPLU;
-		ret_val = phy->ops.write_reg(hw, IGP02IGC_PHY_POWER_MGMT,
-					     data);
-		if (ret_val)
-			return ret_val;
-		/* LPLU and SmartSpeed are mutually exclusive.  LPLU is used
-		 * during Dx states where the power conservation is most
-		 * important.  During driver activity we should enable
-		 * SmartSpeed, so performance is maintained.
-		 */
-		if (phy->smart_speed == igc_smart_speed_on) {
-			ret_val = phy->ops.read_reg(hw,
-						    IGP01IGC_PHY_PORT_CONFIG,
-						    &data);
-			if (ret_val)
-				return ret_val;
-
-			data |= IGP01IGC_PSCFR_SMART_SPEED;
-			ret_val = phy->ops.write_reg(hw,
-						     IGP01IGC_PHY_PORT_CONFIG,
-						     data);
-			if (ret_val)
-				return ret_val;
-		} else if (phy->smart_speed == igc_smart_speed_off) {
-			ret_val = phy->ops.read_reg(hw,
-						    IGP01IGC_PHY_PORT_CONFIG,
-						    &data);
-			if (ret_val)
-				return ret_val;
-
-			data &= ~IGP01IGC_PSCFR_SMART_SPEED;
-			ret_val = phy->ops.write_reg(hw,
-						     IGP01IGC_PHY_PORT_CONFIG,
-						     data);
-			if (ret_val)
-				return ret_val;
-		}
-	} else if ((phy->autoneg_advertised == IGC_ALL_SPEED_DUPLEX) ||
-		   (phy->autoneg_advertised == IGC_ALL_NOT_GIG) ||
-		   (phy->autoneg_advertised == IGC_ALL_10_SPEED)) {
-		data |= IGP02IGC_PM_D3_LPLU;
-		ret_val = phy->ops.write_reg(hw, IGP02IGC_PHY_POWER_MGMT,
-					     data);
-		if (ret_val)
-			return ret_val;
-
-		/* When LPLU is enabled, we should disable SmartSpeed */
-		ret_val = phy->ops.read_reg(hw, IGP01IGC_PHY_PORT_CONFIG,
-					    &data);
-		if (ret_val)
-			return ret_val;
-
-		data &= ~IGP01IGC_PSCFR_SMART_SPEED;
-		ret_val = phy->ops.write_reg(hw, IGP01IGC_PHY_PORT_CONFIG,
-					     data);
-	}
-
-	return ret_val;
-}
-
 /**
  *  igc_check_downshift_generic - Checks whether a downshift in speed occurred
  *  @hw: pointer to the HW structure
@@ -2408,624 +1101,57 @@ s32 igc_phy_has_link_generic(struct igc_hw *hw, u32 iterations,
 }
 
 /**
- *  igc_get_cable_length_m88 - Determine cable length for m88 PHY
+ *  igc_phy_sw_reset_generic - PHY software reset
  *  @hw: pointer to the HW structure
  *
- *  Reads the PHY specific status register to retrieve the cable length
- *  information.  The cable length is determined by averaging the minimum and
- *  maximum values to get the "average" cable length.  The m88 PHY has four
- *  possible cable length values, which are:
- *	Register Value		Cable Length
- *	0			< 50 meters
- *	1			50 - 80 meters
- *	2			80 - 110 meters
- *	3			110 - 140 meters
- *	4			> 140 meters
+ *  Does a software reset of the PHY by reading the PHY control register and
+ *  setting/writing the control register reset bit to the PHY.
  **/
-s32 igc_get_cable_length_m88(struct igc_hw *hw)
+s32 igc_phy_sw_reset_generic(struct igc_hw *hw)
 {
-	struct igc_phy_info *phy = &hw->phy;
 	s32 ret_val;
-	u16 phy_data, index;
+	u16 phy_ctrl;
 
-	DEBUGFUNC("igc_get_cable_length_m88");
+	DEBUGFUNC("igc_phy_sw_reset_generic");
+
+	if (!hw->phy.ops.read_reg)
+		return IGC_SUCCESS;
 
-	ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_STATUS, &phy_data);
+	ret_val = hw->phy.ops.read_reg(hw, PHY_CONTROL, &phy_ctrl);
 	if (ret_val)
 		return ret_val;
 
-	index = ((phy_data & M88IGC_PSSR_CABLE_LENGTH) >>
-		 M88IGC_PSSR_CABLE_LENGTH_SHIFT);
-
-	if (index >= M88IGC_CABLE_LENGTH_TABLE_SIZE - 1)
-		return -IGC_ERR_PHY;
-
-	phy->min_cable_length = igc_m88_cable_length_table[index];
-	phy->max_cable_length = igc_m88_cable_length_table[index + 1];
+	phy_ctrl |= MII_CR_RESET;
+	ret_val = hw->phy.ops.write_reg(hw, PHY_CONTROL, phy_ctrl);
+	if (ret_val)
+		return ret_val;
 
-	phy->cable_length = (phy->min_cable_length + phy->max_cable_length) / 2;
+	usec_delay(1);
 
-	return IGC_SUCCESS;
+	return ret_val;
 }
 
-s32 igc_get_cable_length_m88_gen2(struct igc_hw *hw)
+/**
+ *  igc_get_phy_type_from_id - Get PHY type from id
+ *  @phy_id: phy_id read from the phy
+ *
+ *  Returns the phy type from the id.
+ **/
+enum igc_phy_type igc_get_phy_type_from_id(u32 phy_id)
 {
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val  = 0;
-	u16 phy_data, phy_data2, is_cm;
-	u16 index, default_page;
-
-	DEBUGFUNC("igc_get_cable_length_m88_gen2");
-
-	switch (hw->phy.id) {
-	case I210_I_PHY_ID:
-		/* Get cable length from PHY Cable Diagnostics Control Reg */
-		ret_val = phy->ops.read_reg(hw, (0x7 << GS40G_PAGE_SHIFT) +
-					    (I347AT4_PCDL + phy->addr),
-					    &phy_data);
-		if (ret_val)
-			return ret_val;
-
-		/* Check if the unit of cable length is meters or cm */
-		ret_val = phy->ops.read_reg(hw, (0x7 << GS40G_PAGE_SHIFT) +
-					    I347AT4_PCDC, &phy_data2);
-		if (ret_val)
-			return ret_val;
-
-		is_cm = !(phy_data2 & I347AT4_PCDC_CABLE_LENGTH_UNIT);
+	enum igc_phy_type phy_type = igc_phy_unknown;
 
-		/* Populate the phy structure with cable length in meters */
-		phy->min_cable_length = phy_data / (is_cm ? 100 : 1);
-		phy->max_cable_length = phy_data / (is_cm ? 100 : 1);
-		phy->cable_length = phy_data / (is_cm ? 100 : 1);
-		break;
-	case I225_I_PHY_ID:
-		if (ret_val)
-			return ret_val;
-		/* TODO - complete with Foxville data */
-		break;
+	switch (phy_id) {
+	case M88IGC_I_PHY_ID:
+	case M88IGC_E_PHY_ID:
+	case M88E1111_I_PHY_ID:
+	case M88E1011_I_PHY_ID:
 	case M88E1543_E_PHY_ID:
 	case M88E1512_E_PHY_ID:
-	case M88E1340M_E_PHY_ID:
 	case I347AT4_E_PHY_ID:
-		/* Remember the original page select and set it to 7 */
-		ret_val = phy->ops.read_reg(hw, I347AT4_PAGE_SELECT,
-					    &default_page);
-		if (ret_val)
-			return ret_val;
-
-		ret_val = phy->ops.write_reg(hw, I347AT4_PAGE_SELECT, 0x07);
-		if (ret_val)
-			return ret_val;
-
-		/* Get cable length from PHY Cable Diagnostics Control Reg */
-		ret_val = phy->ops.read_reg(hw, (I347AT4_PCDL + phy->addr),
-					    &phy_data);
-		if (ret_val)
-			return ret_val;
-
-		/* Check if the unit of cable length is meters or cm */
-		ret_val = phy->ops.read_reg(hw, I347AT4_PCDC, &phy_data2);
-		if (ret_val)
-			return ret_val;
-
-		is_cm = !(phy_data2 & I347AT4_PCDC_CABLE_LENGTH_UNIT);
-
-		/* Populate the phy structure with cable length in meters */
-		phy->min_cable_length = phy_data / (is_cm ? 100 : 1);
-		phy->max_cable_length = phy_data / (is_cm ? 100 : 1);
-		phy->cable_length = phy_data / (is_cm ? 100 : 1);
-
-		/* Reset the page select to its original value */
-		ret_val = phy->ops.write_reg(hw, I347AT4_PAGE_SELECT,
-					     default_page);
-		if (ret_val)
-			return ret_val;
-		break;
-
 	case M88E1112_E_PHY_ID:
-		/* Remember the original page select and set it to 5 */
-		ret_val = phy->ops.read_reg(hw, I347AT4_PAGE_SELECT,
-					    &default_page);
-		if (ret_val)
-			return ret_val;
-
-		ret_val = phy->ops.write_reg(hw, I347AT4_PAGE_SELECT, 0x05);
-		if (ret_val)
-			return ret_val;
-
-		ret_val = phy->ops.read_reg(hw, M88E1112_VCT_DSP_DISTANCE,
-					    &phy_data);
-		if (ret_val)
-			return ret_val;
-
-		index = (phy_data & M88IGC_PSSR_CABLE_LENGTH) >>
-			M88IGC_PSSR_CABLE_LENGTH_SHIFT;
-
-		if (index >= M88IGC_CABLE_LENGTH_TABLE_SIZE - 1)
-			return -IGC_ERR_PHY;
-
-		phy->min_cable_length = igc_m88_cable_length_table[index];
-		phy->max_cable_length = igc_m88_cable_length_table[index + 1];
-
-		phy->cable_length = (phy->min_cable_length +
-				     phy->max_cable_length) / 2;
-
-		/* Reset the page select to its original value */
-		ret_val = phy->ops.write_reg(hw, I347AT4_PAGE_SELECT,
-					     default_page);
-		if (ret_val)
-			return ret_val;
-
-		break;
-	default:
-		return -IGC_ERR_PHY;
-	}
-
-	return ret_val;
-}
-
-/**
- *  igc_get_cable_length_igp_2 - Determine cable length for igp2 PHY
- *  @hw: pointer to the HW structure
- *
- *  The automatic gain control (agc) normalizes the amplitude of the
- *  received signal, adjusting for the attenuation produced by the
- *  cable.  By reading the AGC registers, which represent the
- *  combination of coarse and fine gain value, the value can be put
- *  into a lookup table to obtain the approximate cable length
- *  for each channel.
- **/
-s32 igc_get_cable_length_igp_2(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 phy_data, i, agc_value = 0;
-	u16 cur_agc_index, max_agc_index = 0;
-	u16 min_agc_index = IGP02IGC_CABLE_LENGTH_TABLE_SIZE - 1;
-	static const u16 agc_reg_array[IGP02IGC_PHY_CHANNEL_NUM] = {
-		IGP02IGC_PHY_AGC_A,
-		IGP02IGC_PHY_AGC_B,
-		IGP02IGC_PHY_AGC_C,
-		IGP02IGC_PHY_AGC_D
-	};
-
-	DEBUGFUNC("igc_get_cable_length_igp_2");
-
-	/* Read the AGC registers for all channels */
-	for (i = 0; i < IGP02IGC_PHY_CHANNEL_NUM; i++) {
-		ret_val = phy->ops.read_reg(hw, agc_reg_array[i], &phy_data);
-		if (ret_val)
-			return ret_val;
-
-		/* Getting bits 15:9, which represent the combination of
-		 * coarse and fine gain values.  The result is a number
-		 * that can be put into the lookup table to obtain the
-		 * approximate cable length.
-		 */
-		cur_agc_index = ((phy_data >> IGP02IGC_AGC_LENGTH_SHIFT) &
-				 IGP02IGC_AGC_LENGTH_MASK);
-
-		/* Array index bound check. */
-		if (cur_agc_index >= IGP02IGC_CABLE_LENGTH_TABLE_SIZE ||
-				cur_agc_index == 0)
-			return -IGC_ERR_PHY;
-
-		/* Remove min & max AGC values from calculation. */
-		if (igc_igp_2_cable_length_table[min_agc_index] >
-		    igc_igp_2_cable_length_table[cur_agc_index])
-			min_agc_index = cur_agc_index;
-		if (igc_igp_2_cable_length_table[max_agc_index] <
-		    igc_igp_2_cable_length_table[cur_agc_index])
-			max_agc_index = cur_agc_index;
-
-		agc_value += igc_igp_2_cable_length_table[cur_agc_index];
-	}
-
-	agc_value -= (igc_igp_2_cable_length_table[min_agc_index] +
-		      igc_igp_2_cable_length_table[max_agc_index]);
-	agc_value /= (IGP02IGC_PHY_CHANNEL_NUM - 2);
-
-	/* Calculate cable length with the error range of +/- 10 meters. */
-	phy->min_cable_length = (((agc_value - IGP02IGC_AGC_RANGE) > 0) ?
-				 (agc_value - IGP02IGC_AGC_RANGE) : 0);
-	phy->max_cable_length = agc_value + IGP02IGC_AGC_RANGE;
-
-	phy->cable_length = (phy->min_cable_length + phy->max_cable_length) / 2;
-
-	return IGC_SUCCESS;
-}
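To make the averaging concrete: if the four AGC lookups were, say, 40, 55, 60 and 90 meters (illustrative values only), the minimum (40) and maximum (90) are dropped, leaving (55 + 60) / 2 = 57; min_cable_length and max_cable_length then become that value minus and plus IGP02IGC_AGC_RANGE, and the reported cable_length is their midpoint, i.e. 57 meters again.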
-
-/**
- *  igc_get_phy_info_m88 - Retrieve PHY information
- *  @hw: pointer to the HW structure
- *
- *  Valid for only copper links.  Read the PHY status register (sticky read)
- *  to verify that link is up.  Read the PHY special control register to
- *  determine the polarity and 10base-T extended distance.  Read the PHY
- *  special status register to determine MDI/MDIx and current speed.  If
- *  speed is 1000, then determine cable length, local and remote receiver.
- **/
-s32 igc_get_phy_info_m88(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32  ret_val;
-	u16 phy_data;
-	bool link;
-
-	DEBUGFUNC("igc_get_phy_info_m88");
-
-	if (phy->media_type != igc_media_type_copper) {
-		DEBUGOUT("Phy info is only valid for copper media\n");
-		return -IGC_ERR_CONFIG;
-	}
-
-	ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
-	if (ret_val)
-		return ret_val;
-
-	if (!link) {
-		DEBUGOUT("Phy info is only valid if link is up\n");
-		return -IGC_ERR_CONFIG;
-	}
-
-	ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	phy->polarity_correction = !!(phy_data &
-				      M88IGC_PSCR_POLARITY_REVERSAL);
-
-	ret_val = igc_check_polarity_m88(hw);
-	if (ret_val)
-		return ret_val;
-
-	ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_STATUS, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	phy->is_mdix = !!(phy_data & M88IGC_PSSR_MDIX);
-
-	if ((phy_data & M88IGC_PSSR_SPEED) == M88IGC_PSSR_1000MBS) {
-		ret_val = hw->phy.ops.get_cable_length(hw);
-		if (ret_val)
-			return ret_val;
-
-		ret_val = phy->ops.read_reg(hw, PHY_1000T_STATUS, &phy_data);
-		if (ret_val)
-			return ret_val;
-
-		phy->local_rx = (phy_data & SR_1000T_LOCAL_RX_STATUS)
-				? igc_1000t_rx_status_ok
-				: igc_1000t_rx_status_not_ok;
-
-		phy->remote_rx = (phy_data & SR_1000T_REMOTE_RX_STATUS)
-				 ? igc_1000t_rx_status_ok
-				 : igc_1000t_rx_status_not_ok;
-	} else {
-		/* Set values to "undefined" */
-		phy->cable_length = IGC_CABLE_LENGTH_UNDEFINED;
-		phy->local_rx = igc_1000t_rx_status_undefined;
-		phy->remote_rx = igc_1000t_rx_status_undefined;
-	}
-
-	return ret_val;
-}
-
-/**
- *  igc_get_phy_info_igp - Retrieve igp PHY information
- *  @hw: pointer to the HW structure
- *
- *  Read PHY status to determine if link is up.  If link is up, then
- *  set/determine 10base-T extended distance and polarity correction.  Read
- *  PHY port status to determine MDI/MDIx and speed.  Based on the speed,
- *  determine on the cable length, local and remote receiver.
- **/
-s32 igc_get_phy_info_igp(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 data;
-	bool link;
-
-	DEBUGFUNC("igc_get_phy_info_igp");
-
-	ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
-	if (ret_val)
-		return ret_val;
-
-	if (!link) {
-		DEBUGOUT("Phy info is only valid if link is up\n");
-		return -IGC_ERR_CONFIG;
-	}
-
-	phy->polarity_correction = true;
-
-	ret_val = igc_check_polarity_igp(hw);
-	if (ret_val)
-		return ret_val;
-
-	ret_val = phy->ops.read_reg(hw, IGP01IGC_PHY_PORT_STATUS, &data);
-	if (ret_val)
-		return ret_val;
-
-	phy->is_mdix = !!(data & IGP01IGC_PSSR_MDIX);
-
-	if ((data & IGP01IGC_PSSR_SPEED_MASK) ==
-	    IGP01IGC_PSSR_SPEED_1000MBPS) {
-		ret_val = phy->ops.get_cable_length(hw);
-		if (ret_val)
-			return ret_val;
-
-		ret_val = phy->ops.read_reg(hw, PHY_1000T_STATUS, &data);
-		if (ret_val)
-			return ret_val;
-
-		phy->local_rx = (data & SR_1000T_LOCAL_RX_STATUS)
-				? igc_1000t_rx_status_ok
-				: igc_1000t_rx_status_not_ok;
-
-		phy->remote_rx = (data & SR_1000T_REMOTE_RX_STATUS)
-				 ? igc_1000t_rx_status_ok
-				 : igc_1000t_rx_status_not_ok;
-	} else {
-		phy->cable_length = IGC_CABLE_LENGTH_UNDEFINED;
-		phy->local_rx = igc_1000t_rx_status_undefined;
-		phy->remote_rx = igc_1000t_rx_status_undefined;
-	}
-
-	return ret_val;
-}
-
-/**
- *  igc_get_phy_info_ife - Retrieves various IFE PHY states
- *  @hw: pointer to the HW structure
- *
- *  Populates "phy" structure with various feature states.
- **/
-s32 igc_get_phy_info_ife(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 data;
-	bool link;
-
-	DEBUGFUNC("igc_get_phy_info_ife");
-
-	ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
-	if (ret_val)
-		return ret_val;
-
-	if (!link) {
-		DEBUGOUT("Phy info is only valid if link is up\n");
-		return -IGC_ERR_CONFIG;
-	}
-
-	ret_val = phy->ops.read_reg(hw, IFE_PHY_SPECIAL_CONTROL, &data);
-	if (ret_val)
-		return ret_val;
-	phy->polarity_correction = !(data & IFE_PSC_AUTO_POLARITY_DISABLE);
-
-	if (phy->polarity_correction) {
-		ret_val = igc_check_polarity_ife(hw);
-		if (ret_val)
-			return ret_val;
-	} else {
-		/* Polarity is forced */
-		phy->cable_polarity = ((data & IFE_PSC_FORCE_POLARITY)
-				       ? igc_rev_polarity_reversed
-				       : igc_rev_polarity_normal);
-	}
-
-	ret_val = phy->ops.read_reg(hw, IFE_PHY_MDIX_CONTROL, &data);
-	if (ret_val)
-		return ret_val;
-
-	phy->is_mdix = !!(data & IFE_PMC_MDIX_STATUS);
-
-	/* The following parameters are undefined for 10/100 operation. */
-	phy->cable_length = IGC_CABLE_LENGTH_UNDEFINED;
-	phy->local_rx = igc_1000t_rx_status_undefined;
-	phy->remote_rx = igc_1000t_rx_status_undefined;
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_phy_sw_reset_generic - PHY software reset
- *  @hw: pointer to the HW structure
- *
- *  Does a software reset of the PHY by reading the PHY control register and
- *  setting/write the control register reset bit to the PHY.
- **/
-s32 igc_phy_sw_reset_generic(struct igc_hw *hw)
-{
-	s32 ret_val;
-	u16 phy_ctrl;
-
-	DEBUGFUNC("igc_phy_sw_reset_generic");
-
-	if (!hw->phy.ops.read_reg)
-		return IGC_SUCCESS;
-
-	ret_val = hw->phy.ops.read_reg(hw, PHY_CONTROL, &phy_ctrl);
-	if (ret_val)
-		return ret_val;
-
-	phy_ctrl |= MII_CR_RESET;
-	ret_val = hw->phy.ops.write_reg(hw, PHY_CONTROL, phy_ctrl);
-	if (ret_val)
-		return ret_val;
-
-	usec_delay(1);
-
-	return ret_val;
-}
-
-/**
- *  igc_phy_hw_reset_generic - PHY hardware reset
- *  @hw: pointer to the HW structure
- *
- *  Verify the reset block is not blocking us from resetting.  Acquire
- *  semaphore (if necessary) and read/set/write the device control reset
- *  bit in the PHY.  Wait the appropriate delay time for the device to
- *  reset and release the semaphore (if necessary).
- **/
-s32 igc_phy_hw_reset_generic(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u32 ctrl;
-
-	DEBUGFUNC("igc_phy_hw_reset_generic");
-
-	if (phy->ops.check_reset_block) {
-		ret_val = phy->ops.check_reset_block(hw);
-		if (ret_val)
-			return IGC_SUCCESS;
-	}
-
-	ret_val = phy->ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	ctrl = IGC_READ_REG(hw, IGC_CTRL);
-	IGC_WRITE_REG(hw, IGC_CTRL, ctrl | IGC_CTRL_PHY_RST);
-	IGC_WRITE_FLUSH(hw);
-
-	usec_delay(phy->reset_delay_us);
-
-	IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
-	IGC_WRITE_FLUSH(hw);
-
-	usec_delay(150);
-
-	phy->ops.release(hw);
-
-	return ret_val;
-}
-
-/**
- *  igc_get_cfg_done_generic - Generic configuration done
- *  @hw: pointer to the HW structure
- *
- *  Generic function to wait 10 milli-seconds for configuration to complete
- *  and return success.
- **/
-s32 igc_get_cfg_done_generic(struct igc_hw IGC_UNUSEDARG * hw)
-{
-	DEBUGFUNC("igc_get_cfg_done_generic");
-	UNREFERENCED_1PARAMETER(hw);
-
-	msec_delay_irq(10);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_phy_init_script_igp3 - Inits the IGP3 PHY
- *  @hw: pointer to the HW structure
- *
- *  Initializes an Intel Gigabit PHY3 when an EEPROM is not present.
- **/
-s32 igc_phy_init_script_igp3(struct igc_hw *hw)
-{
-	DEBUGOUT("Running IGP 3 PHY init script\n");
-
-	/* PHY init IGP 3 */
-	/* Enable rise/fall, 10-mode work in class-A */
-	hw->phy.ops.write_reg(hw, 0x2F5B, 0x9018);
-	/* Remove all caps from Replica path filter */
-	hw->phy.ops.write_reg(hw, 0x2F52, 0x0000);
-	/* Bias trimming for ADC, AFE and Driver (Default) */
-	hw->phy.ops.write_reg(hw, 0x2FB1, 0x8B24);
-	/* Increase Hybrid poly bias */
-	hw->phy.ops.write_reg(hw, 0x2FB2, 0xF8F0);
-	/* Add 4% to Tx amplitude in Gig mode */
-	hw->phy.ops.write_reg(hw, 0x2010, 0x10B0);
-	/* Disable trimming (TTT) */
-	hw->phy.ops.write_reg(hw, 0x2011, 0x0000);
-	/* Poly DC correction to 94.6% + 2% for all channels */
-	hw->phy.ops.write_reg(hw, 0x20DD, 0x249A);
-	/* ABS DC correction to 95.9% */
-	hw->phy.ops.write_reg(hw, 0x20DE, 0x00D3);
-	/* BG temp curve trim */
-	hw->phy.ops.write_reg(hw, 0x28B4, 0x04CE);
-	/* Increasing ADC OPAMP stage 1 currents to max */
-	hw->phy.ops.write_reg(hw, 0x2F70, 0x29E4);
-	/* Force 1000 ( required for enabling PHY regs configuration) */
-	hw->phy.ops.write_reg(hw, 0x0000, 0x0140);
-	/* Set upd_freq to 6 */
-	hw->phy.ops.write_reg(hw, 0x1F30, 0x1606);
-	/* Disable NPDFE */
-	hw->phy.ops.write_reg(hw, 0x1F31, 0xB814);
-	/* Disable adaptive fixed FFE (Default) */
-	hw->phy.ops.write_reg(hw, 0x1F35, 0x002A);
-	/* Enable FFE hysteresis */
-	hw->phy.ops.write_reg(hw, 0x1F3E, 0x0067);
-	/* Fixed FFE for short cable lengths */
-	hw->phy.ops.write_reg(hw, 0x1F54, 0x0065);
-	/* Fixed FFE for medium cable lengths */
-	hw->phy.ops.write_reg(hw, 0x1F55, 0x002A);
-	/* Fixed FFE for long cable lengths */
-	hw->phy.ops.write_reg(hw, 0x1F56, 0x002A);
-	/* Enable Adaptive Clip Threshold */
-	hw->phy.ops.write_reg(hw, 0x1F72, 0x3FB0);
-	/* AHT reset limit to 1 */
-	hw->phy.ops.write_reg(hw, 0x1F76, 0xC0FF);
-	/* Set AHT master delay to 127 msec */
-	hw->phy.ops.write_reg(hw, 0x1F77, 0x1DEC);
-	/* Set scan bits for AHT */
-	hw->phy.ops.write_reg(hw, 0x1F78, 0xF9EF);
-	/* Set AHT Preset bits */
-	hw->phy.ops.write_reg(hw, 0x1F79, 0x0210);
-	/* Change integ_factor of channel A to 3 */
-	hw->phy.ops.write_reg(hw, 0x1895, 0x0003);
-	/* Change prop_factor of channels BCD to 8 */
-	hw->phy.ops.write_reg(hw, 0x1796, 0x0008);
-	/* Change cg_icount + enable integbp for channels BCD */
-	hw->phy.ops.write_reg(hw, 0x1798, 0xD008);
-	/* Change cg_icount + enable integbp + change prop_factor_master
-	 * to 8 for channel A
-	 */
-	hw->phy.ops.write_reg(hw, 0x1898, 0xD918);
-	/* Disable AHT in Slave mode on channel A */
-	hw->phy.ops.write_reg(hw, 0x187A, 0x0800);
-	/* Enable LPLU and disable AN to 1000 in non-D0a states,
-	 * Enable SPD+B2B
-	 */
-	hw->phy.ops.write_reg(hw, 0x0019, 0x008D);
-	/* Enable restart AN on an1000_dis change */
-	hw->phy.ops.write_reg(hw, 0x001B, 0x2080);
-	/* Enable wh_fifo read clock in 10/100 modes */
-	hw->phy.ops.write_reg(hw, 0x0014, 0x0045);
-	/* Restart AN, Speed selection is 1000 */
-	hw->phy.ops.write_reg(hw, 0x0000, 0x1340);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_get_phy_type_from_id - Get PHY type from id
- *  @phy_id: phy_id read from the phy
- *
- *  Returns the phy type from the id.
- **/
-enum igc_phy_type igc_get_phy_type_from_id(u32 phy_id)
-{
-	enum igc_phy_type phy_type = igc_phy_unknown;
-
-	switch (phy_id) {
-	case M88IGC_I_PHY_ID:
-	case M88IGC_E_PHY_ID:
-	case M88E1111_I_PHY_ID:
-	case M88E1011_I_PHY_ID:
-	case M88E1543_E_PHY_ID:
-	case M88E1512_E_PHY_ID:
-	case I347AT4_E_PHY_ID:
-	case M88E1112_E_PHY_ID:
-	case M88E1340M_E_PHY_ID:
-		phy_type = igc_phy_m88;
+	case M88E1340M_E_PHY_ID:
+		phy_type = igc_phy_m88;
 		break;
 	case IGP01IGC_I_PHY_ID: /* IGP 1 & 2 share this */
 		phy_type = igc_phy_igp_2;
@@ -3056,1074 +1182,174 @@ enum igc_phy_type igc_get_phy_type_from_id(u32 phy_id)
 		break;
 	case I217_E_PHY_ID:
 		phy_type = igc_phy_i217;
-		break;
-	case I82580_I_PHY_ID:
-		phy_type = igc_phy_82580;
-		break;
-	case I210_I_PHY_ID:
-		phy_type = igc_phy_i210;
-		break;
-	case I225_I_PHY_ID:
-		phy_type = igc_phy_i225;
-		break;
-	default:
-		phy_type = igc_phy_unknown;
-		break;
-	}
-	return phy_type;
-}
-
-/**
- *  igc_determine_phy_address - Determines PHY address.
- *  @hw: pointer to the HW structure
- *
- *  This uses a trial and error method to loop through possible PHY
- *  addresses. It tests each by reading the PHY ID registers and
- *  checking for a match.
- **/
-s32 igc_determine_phy_address(struct igc_hw *hw)
-{
-	u32 phy_addr = 0;
-	u32 i;
-	enum igc_phy_type phy_type = igc_phy_unknown;
-
-	hw->phy.id = phy_type;
-
-	for (phy_addr = 0; phy_addr < IGC_MAX_PHY_ADDR; phy_addr++) {
-		hw->phy.addr = phy_addr;
-		i = 0;
-
-		do {
-			igc_get_phy_id(hw);
-			phy_type = igc_get_phy_type_from_id(hw->phy.id);
-
-			/* If phy_type is valid, break - we found our
-			 * PHY address
-			 */
-			if (phy_type != igc_phy_unknown)
-				return IGC_SUCCESS;
-
-			msec_delay(1);
-			i++;
-		} while (i < 10);
-	}
-
-	return -IGC_ERR_PHY_TYPE;
-}
-
-/**
- *  igc_get_phy_addr_for_bm_page - Retrieve PHY page address
- *  @page: page to access
- *  @reg: register to access
- *
- *  Returns the phy address for the page requested.
- **/
-static u32 igc_get_phy_addr_for_bm_page(u32 page, u32 reg)
-{
-	u32 phy_addr = 2;
-
-	if (page >= 768 || (page == 0 && reg == 25) || reg == 31)
-		phy_addr = 1;
-
-	return phy_addr;
-}
-
-/**
- *  igc_write_phy_reg_bm - Write BM PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Acquires semaphore, if necessary, then writes the data to PHY register
- *  at the offset.  Release any acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_bm(struct igc_hw *hw, u32 offset, u16 data)
-{
-	s32 ret_val;
-	u32 page = offset >> IGP_PAGE_SHIFT;
-
-	DEBUGFUNC("igc_write_phy_reg_bm");
-
-	ret_val = hw->phy.ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	/* Page 800 works differently than the rest so it has its own func */
-	if (page == BM_WUC_PAGE) {
-		ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, &data,
-							 false, false);
-		goto release;
-	}
-
-	hw->phy.addr = igc_get_phy_addr_for_bm_page(page, offset);
-
-	if (offset > MAX_PHY_MULTI_PAGE_REG) {
-		u32 page_shift, page_select;
-
-		/* Page select is register 31 for phy address 1 and 22 for
-		 * phy address 2 and 3. Page select is shifted only for
-		 * phy address 1.
-		 */
-		if (hw->phy.addr == 1) {
-			page_shift = IGP_PAGE_SHIFT;
-			page_select = IGP01IGC_PHY_PAGE_SELECT;
-		} else {
-			page_shift = 0;
-			page_select = BM_PHY_PAGE_SELECT;
-		}
-
-		/* Page is shifted left, PHY expects (page x 32) */
-		ret_val = igc_write_phy_reg_mdic(hw, page_select,
-						   (page << page_shift));
-		if (ret_val)
-			goto release;
-	}
-
-	ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
-					   data);
-
-release:
-	hw->phy.ops.release(hw);
-	return ret_val;
-}
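The BM accessors above carry the page in the upper bits of the offset argument: everything above IGP_PAGE_SHIFT selects the page (written out through the page-select register as page x 32), the low MAX_PHY_REG_ADDRESS bits are the register number, and page 800 (BM_WUC_PAGE) is diverted to the wakeup helpers. A sketch of building such an offset (the page/register combination is purely illustrative):

	u32 offset = ((u32)BM_PORT_CTRL_PAGE << IGP_PAGE_SHIFT) | 17;
	u16 value;
	s32 ret_val;

	/* igc_read_phy_reg_bm() splits the page back out of the offset. */
	ret_val = igc_read_phy_reg_bm(hw, offset, &value);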
-
-/**
- *  igc_read_phy_reg_bm - Read BM PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *
- *  Acquires semaphore, if necessary, then reads the PHY register at offset
- *  and storing the retrieved information in data.  Release any acquired
- *  semaphores before exiting.
- **/
-s32 igc_read_phy_reg_bm(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	s32 ret_val;
-	u32 page = offset >> IGP_PAGE_SHIFT;
-
-	DEBUGFUNC("igc_read_phy_reg_bm");
-
-	ret_val = hw->phy.ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	/* Page 800 works differently than the rest so it has its own func */
-	if (page == BM_WUC_PAGE) {
-		ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, data,
-							 true, false);
-		goto release;
-	}
-
-	hw->phy.addr = igc_get_phy_addr_for_bm_page(page, offset);
-
-	if (offset > MAX_PHY_MULTI_PAGE_REG) {
-		u32 page_shift, page_select;
-
-		/* Page select is register 31 for phy address 1 and 22 for
-		 * phy address 2 and 3. Page select is shifted only for
-		 * phy address 1.
-		 */
-		if (hw->phy.addr == 1) {
-			page_shift = IGP_PAGE_SHIFT;
-			page_select = IGP01IGC_PHY_PAGE_SELECT;
-		} else {
-			page_shift = 0;
-			page_select = BM_PHY_PAGE_SELECT;
-		}
-
-		/* Page is shifted left, PHY expects (page x 32) */
-		ret_val = igc_write_phy_reg_mdic(hw, page_select,
-						   (page << page_shift));
-		if (ret_val)
-			goto release;
-	}
-
-	ret_val = igc_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
-					  data);
-release:
-	hw->phy.ops.release(hw);
-	return ret_val;
-}
-
-/**
- *  igc_read_phy_reg_bm2 - Read BM PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *
- *  Acquires semaphore, if necessary, then reads the PHY register at offset
- *  and storing the retrieved information in data.  Release any acquired
- *  semaphores before exiting.
- **/
-s32 igc_read_phy_reg_bm2(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	s32 ret_val;
-	u16 page = (u16)(offset >> IGP_PAGE_SHIFT);
-
-	DEBUGFUNC("igc_read_phy_reg_bm2");
-
-	ret_val = hw->phy.ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	/* Page 800 works differently than the rest so it has its own func */
-	if (page == BM_WUC_PAGE) {
-		ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, data,
-							 true, false);
-		goto release;
-	}
-
-	hw->phy.addr = 1;
-
-	if (offset > MAX_PHY_MULTI_PAGE_REG) {
-		/* Page is shifted left, PHY expects (page x 32) */
-		ret_val = igc_write_phy_reg_mdic(hw, BM_PHY_PAGE_SELECT,
-						   page);
-
-		if (ret_val)
-			goto release;
-	}
-
-	ret_val = igc_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
-					  data);
-release:
-	hw->phy.ops.release(hw);
-	return ret_val;
-}
-
-/**
- *  igc_write_phy_reg_bm2 - Write BM PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Acquires semaphore, if necessary, then writes the data to PHY register
- *  at the offset.  Release any acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_bm2(struct igc_hw *hw, u32 offset, u16 data)
-{
-	s32 ret_val;
-	u16 page = (u16)(offset >> IGP_PAGE_SHIFT);
-
-	DEBUGFUNC("igc_write_phy_reg_bm2");
-
-	ret_val = hw->phy.ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	/* Page 800 works differently than the rest so it has its own func */
-	if (page == BM_WUC_PAGE) {
-		ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, &data,
-							 false, false);
-		goto release;
-	}
-
-	hw->phy.addr = 1;
-
-	if (offset > MAX_PHY_MULTI_PAGE_REG) {
-		/* Page is shifted left, PHY expects (page x 32) */
-		ret_val = igc_write_phy_reg_mdic(hw, BM_PHY_PAGE_SELECT,
-						   page);
-
-		if (ret_val)
-			goto release;
-	}
-
-	ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
-					   data);
-
-release:
-	hw->phy.ops.release(hw);
-	return ret_val;
-}
-
-/**
- *  igc_enable_phy_wakeup_reg_access_bm - enable access to BM wakeup registers
- *  @hw: pointer to the HW structure
- *  @phy_reg: pointer to store original contents of BM_WUC_ENABLE_REG
- *
- *  Assumes semaphore already acquired and phy_reg points to a valid memory
- *  address to store contents of the BM_WUC_ENABLE_REG register.
- **/
-s32 igc_enable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg)
-{
-	s32 ret_val;
-	u16 temp;
-
-	DEBUGFUNC("igc_enable_phy_wakeup_reg_access_bm");
-
-	if (!phy_reg)
-		return -IGC_ERR_PARAM;
-
-	/* All page select, port ctrl and wakeup registers use phy address 1 */
-	hw->phy.addr = 1;
-
-	/* Select Port Control Registers page */
-	ret_val = igc_set_page_igp(hw, (BM_PORT_CTRL_PAGE << IGP_PAGE_SHIFT));
-	if (ret_val) {
-		DEBUGOUT("Could not set Port Control page\n");
-		return ret_val;
-	}
-
-	ret_val = igc_read_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, phy_reg);
-	if (ret_val) {
-		DEBUGOUT2("Could not read PHY register %d.%d\n",
-			  BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
-		return ret_val;
-	}
-
-	/* Enable both PHY wakeup mode and Wakeup register page writes.
-	 * Prevent a power state change by disabling ME and Host PHY wakeup.
-	 */
-	temp = *phy_reg;
-	temp |= BM_WUC_ENABLE_BIT;
-	temp &= ~(BM_WUC_ME_WU_BIT | BM_WUC_HOST_WU_BIT);
-
-	ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, temp);
-	if (ret_val) {
-		DEBUGOUT2("Could not write PHY register %d.%d\n",
-			  BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
-		return ret_val;
-	}
-
-	/* Select Host Wakeup Registers page - caller now able to write
-	 * registers on the Wakeup registers page
-	 */
-	return igc_set_page_igp(hw, (BM_WUC_PAGE << IGP_PAGE_SHIFT));
-}
-
-/**
- *  igc_disable_phy_wakeup_reg_access_bm - disable access to BM wakeup regs
- *  @hw: pointer to the HW structure
- *  @phy_reg: pointer to original contents of BM_WUC_ENABLE_REG
- *
- *  Restore BM_WUC_ENABLE_REG to its original value.
- *
- *  Assumes semaphore already acquired and *phy_reg is the contents of the
- *  BM_WUC_ENABLE_REG before register(s) on BM_WUC_PAGE were accessed by
- *  caller.
- **/
-s32 igc_disable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg)
-{
-	s32 ret_val;
-
-	DEBUGFUNC("igc_disable_phy_wakeup_reg_access_bm");
-
-	if (!phy_reg)
-		return -IGC_ERR_PARAM;
-
-	/* Select Port Control Registers page */
-	ret_val = igc_set_page_igp(hw, (BM_PORT_CTRL_PAGE << IGP_PAGE_SHIFT));
-	if (ret_val) {
-		DEBUGOUT("Could not set Port Control page\n");
-		return ret_val;
-	}
-
-	/* Restore 769.17 to its original value */
-	ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, *phy_reg);
-	if (ret_val)
-		DEBUGOUT2("Could not restore PHY register %d.%d\n",
-			  BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
-
-	return ret_val;
-}
-
-/**
- *  igc_access_phy_wakeup_reg_bm - Read/write BM PHY wakeup register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read or written
- *  @data: pointer to the data to read or write
- *  @read: determines if operation is read or write
- *  @page_set: BM_WUC_PAGE already set and access enabled
- *
- *  Read the PHY register at offset and store the retrieved information in
- *  data, or write data to PHY register at offset.  Note the procedure to
- *  access the PHY wakeup registers is different than reading the other PHY
- *  registers. It works as such:
- *  1) Set 769.17.2 (page 769, register 17, bit 2) = 1
- *  2) Set page to 800 for host (801 if we were manageability)
- *  3) Write the address using the address opcode (0x11)
- *  4) Read or write the data using the data opcode (0x12)
- *  5) Restore 769.17.2 to its original value
- *
- *  Steps 1 and 2 are done by igc_enable_phy_wakeup_reg_access_bm() and
- *  step 5 is done by igc_disable_phy_wakeup_reg_access_bm().
- *
- *  Assumes semaphore is already acquired.  When page_set==true, assumes
- *  the PHY page is set to BM_WUC_PAGE (i.e. a function in the call stack
- *  is responsible for calls to igc_[enable|disable]_phy_wakeup_reg_bm()).
- **/
-static s32 igc_access_phy_wakeup_reg_bm(struct igc_hw *hw, u32 offset,
-					  u16 *data, bool read, bool page_set)
-{
-	s32 ret_val;
-	u16 reg = BM_PHY_REG_NUM(offset);
-	u16 page = BM_PHY_REG_PAGE(offset);
-	u16 phy_reg = 0;
-
-	DEBUGFUNC("igc_access_phy_wakeup_reg_bm");
-
-	/* Gig must be disabled for MDIO accesses to Host Wakeup reg page */
-	if (hw->mac.type == igc_pchlan &&
-		!(IGC_READ_REG(hw, IGC_PHY_CTRL) & IGC_PHY_CTRL_GBE_DISABLE))
-		DEBUGOUT1("Attempting to access page %d while gig enabled.\n",
-			  page);
-
-	if (!page_set) {
-		/* Enable access to PHY wakeup registers */
-		ret_val = igc_enable_phy_wakeup_reg_access_bm(hw, &phy_reg);
-		if (ret_val) {
-			DEBUGOUT("Could not enable PHY wakeup reg access\n");
-			return ret_val;
-		}
-	}
-
-	DEBUGOUT2("Accessing PHY page %d reg 0x%x\n", page, reg);
-
-	/* Write the Wakeup register page offset value using opcode 0x11 */
-	ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ADDRESS_OPCODE, reg);
-	if (ret_val) {
-		DEBUGOUT1("Could not write address opcode to page %d\n", page);
-		return ret_val;
-	}
-
-	if (read) {
-		/* Read the Wakeup register page value using opcode 0x12 */
-		ret_val = igc_read_phy_reg_mdic(hw, BM_WUC_DATA_OPCODE,
-						  data);
-	} else {
-		/* Write the Wakeup register page value using opcode 0x12 */
-		ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_DATA_OPCODE,
-						   *data);
-	}
-
-	if (ret_val) {
-		DEBUGOUT2("Could not access PHY reg %d.%d\n", page, reg);
-		return ret_val;
-	}
-
-	if (!page_set)
-		ret_val = igc_disable_phy_wakeup_reg_access_bm(hw, &phy_reg);
-
-	return ret_val;
-}
-
-/**
- * igc_power_up_phy_copper - Restore copper link in case of PHY power down
- * @hw: pointer to the HW structure
- *
- * In the case of a PHY power down to save power, or to turn off link during a
- * driver unload, or wake on lan is not enabled, restore the link to previous
- * settings.
- **/
-void igc_power_up_phy_copper(struct igc_hw *hw)
-{
-	u16 mii_reg = 0;
-
-	/* The PHY will retain its settings across a power down/up cycle */
-	hw->phy.ops.read_reg(hw, PHY_CONTROL, &mii_reg);
-	mii_reg &= ~MII_CR_POWER_DOWN;
-	hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);
-}
-
-/**
- * igc_power_down_phy_copper - Restore copper link in case of PHY power down
- * @hw: pointer to the HW structure
- *
- * In the case of a PHY power down to save power, or to turn off link during a
- * driver unload, or wake on lan is not enabled, restore the link to previous
- * settings.
- **/
-void igc_power_down_phy_copper(struct igc_hw *hw)
-{
-	u16 mii_reg = 0;
-
-	/* The PHY will retain its settings across a power down/up cycle */
-	hw->phy.ops.read_reg(hw, PHY_CONTROL, &mii_reg);
-	mii_reg |= MII_CR_POWER_DOWN;
-	hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);
-	msec_delay(1);
-}
-
-/**
- *  __igc_read_phy_reg_hv -  Read HV PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *  @locked: semaphore has already been acquired or not
- *  @page_set: BM_WUC_PAGE already set and access enabled
- *
- *  Acquires semaphore, if necessary, then reads the PHY register at offset
- *  and stores the retrieved information in data.  Release any acquired
- *  semaphore before exiting.
- **/
-static s32 __igc_read_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 *data,
-				   bool locked, bool page_set)
-{
-	s32 ret_val;
-	u16 page = BM_PHY_REG_PAGE(offset);
-	u16 reg = BM_PHY_REG_NUM(offset);
-	u32 phy_addr = hw->phy.addr = igc_get_phy_addr_for_hv_page(page);
-
-	DEBUGFUNC("__igc_read_phy_reg_hv");
-
-	if (!locked) {
-		ret_val = hw->phy.ops.acquire(hw);
-		if (ret_val)
-			return ret_val;
-	}
-	/* Page 800 works differently than the rest so it has its own func */
-	if (page == BM_WUC_PAGE) {
-		ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, data,
-							 true, page_set);
-		goto out;
-	}
-
-	if (page > 0 && page < HV_INTC_FC_PAGE_START) {
-		ret_val = igc_access_phy_debug_regs_hv(hw, offset,
-							 data, true);
-		goto out;
-	}
-
-	if (!page_set) {
-		if (page == HV_INTC_FC_PAGE_START)
-			page = 0;
-
-		if (reg > MAX_PHY_MULTI_PAGE_REG) {
-			/* Page is shifted left, PHY expects (page x 32) */
-			ret_val = igc_set_page_igp(hw,
-						     (page << IGP_PAGE_SHIFT));
-
-			hw->phy.addr = phy_addr;
-
-			if (ret_val)
-				goto out;
-		}
-	}
-
-	DEBUGOUT3("reading PHY page %d (or 0x%x shifted) reg 0x%x\n", page,
-		  page << IGP_PAGE_SHIFT, reg);
-
-	ret_val = igc_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & reg,
-					  data);
-out:
-	if (!locked)
-		hw->phy.ops.release(hw);
-
-	return ret_val;
-}
-
-/**
- *  igc_read_phy_reg_hv -  Read HV PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *
- *  Acquires semaphore then reads the PHY register at offset and stores
- *  the retrieved information in data.  Release the acquired semaphore
- *  before exiting.
- **/
-s32 igc_read_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	return __igc_read_phy_reg_hv(hw, offset, data, false, false);
-}
-
-/**
- *  igc_read_phy_reg_hv_locked -  Read HV PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to be read
- *  @data: pointer to the read data
- *
- *  Reads the PHY register at offset and stores the retrieved information
- *  in data.  Assumes semaphore already acquired.
- **/
-s32 igc_read_phy_reg_hv_locked(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	return __igc_read_phy_reg_hv(hw, offset, data, true, false);
-}
-
-/**
- *  igc_read_phy_reg_page_hv - Read HV PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Reads the PHY register at offset and stores the retrieved information
- *  in data.  Assumes semaphore already acquired and page already set.
- **/
-s32 igc_read_phy_reg_page_hv(struct igc_hw *hw, u32 offset, u16 *data)
-{
-	return __igc_read_phy_reg_hv(hw, offset, data, true, true);
-}
-
-/**
- *  __igc_write_phy_reg_hv - Write HV PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *  @locked: semaphore has already been acquired or not
- *  @page_set: BM_WUC_PAGE already set and access enabled
- *
- *  Acquires semaphore, if necessary, then writes the data to PHY register
- *  at the offset.  Release any acquired semaphores before exiting.
- **/
-static s32 __igc_write_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 data,
-				    bool locked, bool page_set)
-{
-	s32 ret_val;
-	u16 page = BM_PHY_REG_PAGE(offset);
-	u16 reg = BM_PHY_REG_NUM(offset);
-	u32 phy_addr = hw->phy.addr = igc_get_phy_addr_for_hv_page(page);
-
-	DEBUGFUNC("__igc_write_phy_reg_hv");
-
-	if (!locked) {
-		ret_val = hw->phy.ops.acquire(hw);
-		if (ret_val)
-			return ret_val;
-	}
-	/* Page 800 works differently than the rest so it has its own func */
-	if (page == BM_WUC_PAGE) {
-		ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, &data,
-							 false, page_set);
-		goto out;
-	}
-
-	if (page > 0 && page < HV_INTC_FC_PAGE_START) {
-		ret_val = igc_access_phy_debug_regs_hv(hw, offset,
-							 &data, false);
-		goto out;
-	}
-
-	if (!page_set) {
-		if (page == HV_INTC_FC_PAGE_START)
-			page = 0;
-
-		/*
-		 * Workaround MDIO accesses being disabled after entering IEEE
-		 * Power Down (when bit 11 of the PHY Control register is set)
-		 */
-		if (hw->phy.type == igc_phy_82578 &&
-				hw->phy.revision >= 1 &&
-				hw->phy.addr == 2 &&
-				!(MAX_PHY_REG_ADDRESS & reg) &&
-				(data & (1 << 11))) {
-			u16 data2 = 0x7EFF;
-			ret_val = igc_access_phy_debug_regs_hv(hw,
-								(1 << 6) | 0x3,
-								&data2, false);
-			if (ret_val)
-				goto out;
-		}
-
-		if (reg > MAX_PHY_MULTI_PAGE_REG) {
-			/* Page is shifted left, PHY expects (page x 32) */
-			ret_val = igc_set_page_igp(hw,
-						     (page << IGP_PAGE_SHIFT));
-
-			hw->phy.addr = phy_addr;
-
-			if (ret_val)
-				goto out;
-		}
-	}
-
-	DEBUGOUT3("writing PHY page %d (or 0x%x shifted) reg 0x%x\n", page,
-		  page << IGP_PAGE_SHIFT, reg);
-
-	ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & reg,
-					   data);
-
-out:
-	if (!locked)
-		hw->phy.ops.release(hw);
-
-	return ret_val;
-}
-
-/**
- *  igc_write_phy_reg_hv - Write HV PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Acquires semaphore then writes the data to PHY register at the offset.
- *  Release the acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 data)
-{
-	return __igc_write_phy_reg_hv(hw, offset, data, false, false);
-}
-
-/**
- *  igc_write_phy_reg_hv_locked - Write HV PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Writes the data to PHY register at the offset.  Assumes semaphore
- *  already acquired.
- **/
-s32 igc_write_phy_reg_hv_locked(struct igc_hw *hw, u32 offset, u16 data)
-{
-	return __igc_write_phy_reg_hv(hw, offset, data, true, false);
-}
-
-/**
- *  igc_write_phy_reg_page_hv - Write HV PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
- *
- *  Writes the data to PHY register at the offset.  Assumes semaphore
- *  already acquired and page already set.
- **/
-s32 igc_write_phy_reg_page_hv(struct igc_hw *hw, u32 offset, u16 data)
-{
-	return __igc_write_phy_reg_hv(hw, offset, data, true, true);
-}
-
-/**
- *  igc_get_phy_addr_for_hv_page - Get PHY address based on page
- *  @page: page to be accessed
- **/
-static u32 igc_get_phy_addr_for_hv_page(u32 page)
-{
-	u32 phy_addr = 2;
-
-	if (page >= HV_INTC_FC_PAGE_START)
-		phy_addr = 1;
-
-	return phy_addr;
-}
-
-/**
- * igc_access_phy_debug_regs_hv - Read HV PHY vendor specific high registers
- * @hw: pointer to the HW structure
- * @offset: register offset to be read or written
- * @data: pointer to the data to be read or written
- * @read: determines if operation is read or write
- *
- * Reads the PHY register at offset and stores the retrieved information
- * in data.  Assumes semaphore already acquired.  Note that the procedure
- * to access these regs uses the address port and data port to read/write.
- * These accesses done with PHY address 2 and without using pages.
- **/
-static s32 igc_access_phy_debug_regs_hv(struct igc_hw *hw, u32 offset,
-					  u16 *data, bool read)
-{
-	s32 ret_val;
-	u32 addr_reg;
-	u32 data_reg;
-
-	DEBUGFUNC("igc_access_phy_debug_regs_hv");
-
-	/* This takes care of the difference with desktop vs mobile phy */
-	addr_reg = ((hw->phy.type == igc_phy_82578) ?
-		    I82578_ADDR_REG : I82577_ADDR_REG);
-	data_reg = addr_reg + 1;
-
-	/* All operations in this function are phy address 2 */
-	hw->phy.addr = 2;
-
-	/* masking with 0x3F to remove the page from offset */
-	ret_val = igc_write_phy_reg_mdic(hw, addr_reg, (u16)offset & 0x3F);
-	if (ret_val) {
-		DEBUGOUT("Could not write the Address Offset port register\n");
-		return ret_val;
-	}
-
-	/* Read or write the data value next */
-	if (read)
-		ret_val = igc_read_phy_reg_mdic(hw, data_reg, data);
-	else
-		ret_val = igc_write_phy_reg_mdic(hw, data_reg, *data);
-
-	if (ret_val)
-		DEBUGOUT("Could not access the Data port register\n");
-
-	return ret_val;
-}
-
-/**
- *  igc_link_stall_workaround_hv - Si workaround
- *  @hw: pointer to the HW structure
- *
- *  This function works around a Si bug where the link partner can get
- *  a link up indication before the PHY does.  If small packets are sent
- *  by the link partner they can be placed in the packet buffer without
- *  being properly accounted for by the PHY and will stall preventing
- *  further packets from being received.  The workaround is to clear the
- *  packet buffer after the PHY detects link up.
- **/
-s32 igc_link_stall_workaround_hv(struct igc_hw *hw)
-{
-	s32 ret_val = IGC_SUCCESS;
-	u16 data;
-
-	DEBUGFUNC("igc_link_stall_workaround_hv");
-
-	if (hw->phy.type != igc_phy_82578)
-		return IGC_SUCCESS;
-
-	/* Do not apply workaround if in PHY loopback bit 14 set */
-	hw->phy.ops.read_reg(hw, PHY_CONTROL, &data);
-	if (data & PHY_CONTROL_LB)
-		return IGC_SUCCESS;
-
-	/* check if link is up and at 1Gbps */
-	ret_val = hw->phy.ops.read_reg(hw, BM_CS_STATUS, &data);
-	if (ret_val)
-		return ret_val;
-
-	data &= (BM_CS_STATUS_LINK_UP | BM_CS_STATUS_RESOLVED |
-		 BM_CS_STATUS_SPEED_MASK);
-
-	if (data != (BM_CS_STATUS_LINK_UP | BM_CS_STATUS_RESOLVED |
-		     BM_CS_STATUS_SPEED_1000))
-		return IGC_SUCCESS;
-
-	msec_delay(200);
-
-	/* flush the packets in the fifo buffer */
-	ret_val = hw->phy.ops.write_reg(hw, HV_MUX_DATA_CTRL,
-					(HV_MUX_DATA_CTRL_GEN_TO_MAC |
-					 HV_MUX_DATA_CTRL_FORCE_SPEED));
-	if (ret_val)
-		return ret_val;
-
-	return hw->phy.ops.write_reg(hw, HV_MUX_DATA_CTRL,
-				     HV_MUX_DATA_CTRL_GEN_TO_MAC);
+		break;
+	case I82580_I_PHY_ID:
+		phy_type = igc_phy_82580;
+		break;
+	case I210_I_PHY_ID:
+		phy_type = igc_phy_i210;
+		break;
+	case I225_I_PHY_ID:
+		phy_type = igc_phy_i225;
+		break;
+	default:
+		phy_type = igc_phy_unknown;
+		break;
+	}
+	return phy_type;
 }
 
 /**
- *  igc_check_polarity_82577 - Checks the polarity.
+ *  igc_enable_phy_wakeup_reg_access_bm - enable access to BM wakeup registers
  *  @hw: pointer to the HW structure
+ *  @phy_reg: pointer to store original contents of BM_WUC_ENABLE_REG
  *
- *  Success returns 0, Failure returns -IGC_ERR_PHY (-2)
- *
- *  Polarity is determined based on the PHY specific status register.
+ *  Assumes semaphore already acquired and phy_reg points to a valid memory
+ *  address to store contents of the BM_WUC_ENABLE_REG register.
  **/
-s32 igc_check_polarity_82577(struct igc_hw *hw)
+s32 igc_enable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg)
 {
-	struct igc_phy_info *phy = &hw->phy;
 	s32 ret_val;
-	u16 data;
-
-	DEBUGFUNC("igc_check_polarity_82577");
-
-	ret_val = phy->ops.read_reg(hw, I82577_PHY_STATUS_2, &data);
-
-	if (!ret_val)
-		phy->cable_polarity = ((data & I82577_PHY_STATUS2_REV_POLARITY)
-				       ? igc_rev_polarity_reversed
-				       : igc_rev_polarity_normal);
+	u16 temp;
 
-	return ret_val;
-}
+	DEBUGFUNC("igc_enable_phy_wakeup_reg_access_bm");
 
-/**
- *  igc_phy_force_speed_duplex_82577 - Force speed/duplex for I82577 PHY
- *  @hw: pointer to the HW structure
- *
- *  Calls the PHY setup function to force speed and duplex.
- **/
-s32 igc_phy_force_speed_duplex_82577(struct igc_hw *hw)
-{
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 phy_data;
-	bool link = false;
+	if (!phy_reg)
+		return -IGC_ERR_PARAM;
 
-	DEBUGFUNC("igc_phy_force_speed_duplex_82577");
+	/* All page select, port ctrl and wakeup registers use phy address 1 */
+	hw->phy.addr = 1;
 
-	ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &phy_data);
-	if (ret_val)
+	/* Select Port Control Registers page */
+	ret_val = igc_set_page_igp(hw, (BM_PORT_CTRL_PAGE << IGP_PAGE_SHIFT));
+	if (ret_val) {
+		DEBUGOUT("Could not set Port Control page\n");
 		return ret_val;
+	}
 
-	igc_phy_force_speed_duplex_setup(hw, &phy_data);
-
-	ret_val = phy->ops.write_reg(hw, PHY_CONTROL, phy_data);
-	if (ret_val)
+	ret_val = igc_read_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, phy_reg);
+	if (ret_val) {
+		DEBUGOUT2("Could not read PHY register %d.%d\n",
+			  BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
 		return ret_val;
+	}
 
-	usec_delay(1);
-
-	if (phy->autoneg_wait_to_complete) {
-		DEBUGOUT("Waiting for forced speed/duplex link on 82577 phy\n");
-
-		ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
-						     100000, &link);
-		if (ret_val)
-			return ret_val;
-
-		if (!link)
-			DEBUGOUT("Link taking longer than expected.\n");
+	/* Enable both PHY wakeup mode and Wakeup register page writes.
+	 * Prevent a power state change by disabling ME and Host PHY wakeup.
+	 */
+	temp = *phy_reg;
+	temp |= BM_WUC_ENABLE_BIT;
+	temp &= ~(BM_WUC_ME_WU_BIT | BM_WUC_HOST_WU_BIT);
 
-		/* Try once more */
-		ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
-						     100000, &link);
+	ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, temp);
+	if (ret_val) {
+		DEBUGOUT2("Could not write PHY register %d.%d\n",
+			  BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
+		return ret_val;
 	}
 
-	return ret_val;
+	/* Select Host Wakeup Registers page - caller now able to write
+	 * registers on the Wakeup registers page
+	 */
+	return igc_set_page_igp(hw, (BM_WUC_PAGE << IGP_PAGE_SHIFT));
 }
 
 /**
- *  igc_get_phy_info_82577 - Retrieve I82577 PHY information
+ *  igc_disable_phy_wakeup_reg_access_bm - disable access to BM wakeup regs
  *  @hw: pointer to the HW structure
+ *  @phy_reg: pointer to original contents of BM_WUC_ENABLE_REG
+ *
+ *  Restore BM_WUC_ENABLE_REG to its original value.
  *
- *  Read PHY status to determine if link is up.  If link is up, then
- *  set/determine 10base-T extended distance and polarity correction.  Read
- *  PHY port status to determine MDI/MDIx and speed.  Based on the speed,
- *  determine on the cable length, local and remote receiver.
+ *  Assumes semaphore already acquired and *phy_reg is the contents of the
+ *  BM_WUC_ENABLE_REG before register(s) on BM_WUC_PAGE were accessed by
+ *  caller.
  **/
-s32 igc_get_phy_info_82577(struct igc_hw *hw)
+s32 igc_disable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg)
 {
-	struct igc_phy_info *phy = &hw->phy;
 	s32 ret_val;
-	u16 data;
-	bool link;
-
-	DEBUGFUNC("igc_get_phy_info_82577");
-
-	ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
-	if (ret_val)
-		return ret_val;
 
-	if (!link) {
-		DEBUGOUT("Phy info is only valid if link is up\n");
-		return -IGC_ERR_CONFIG;
-	}
+	DEBUGFUNC("igc_disable_phy_wakeup_reg_access_bm");
 
-	phy->polarity_correction = true;
+	if (!phy_reg)
+		return -IGC_ERR_PARAM;
 
-	ret_val = igc_check_polarity_82577(hw);
-	if (ret_val)
+	/* Select Port Control Registers page */
+	ret_val = igc_set_page_igp(hw, (BM_PORT_CTRL_PAGE << IGP_PAGE_SHIFT));
+	if (ret_val) {
+		DEBUGOUT("Could not set Port Control page\n");
 		return ret_val;
+	}
 
-	ret_val = phy->ops.read_reg(hw, I82577_PHY_STATUS_2, &data);
+	/* Restore 769.17 to its original value */
+	ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, *phy_reg);
 	if (ret_val)
-		return ret_val;
-
-	phy->is_mdix = !!(data & I82577_PHY_STATUS2_MDIX);
-
-	if ((data & I82577_PHY_STATUS2_SPEED_MASK) ==
-	    I82577_PHY_STATUS2_SPEED_1000MBPS) {
-		ret_val = hw->phy.ops.get_cable_length(hw);
-		if (ret_val)
-			return ret_val;
-
-		ret_val = phy->ops.read_reg(hw, PHY_1000T_STATUS, &data);
-		if (ret_val)
-			return ret_val;
-
-		phy->local_rx = (data & SR_1000T_LOCAL_RX_STATUS)
-				? igc_1000t_rx_status_ok
-				: igc_1000t_rx_status_not_ok;
-
-		phy->remote_rx = (data & SR_1000T_REMOTE_RX_STATUS)
-				 ? igc_1000t_rx_status_ok
-				 : igc_1000t_rx_status_not_ok;
-	} else {
-		phy->cable_length = IGC_CABLE_LENGTH_UNDEFINED;
-		phy->local_rx = igc_1000t_rx_status_undefined;
-		phy->remote_rx = igc_1000t_rx_status_undefined;
-	}
+		DEBUGOUT2("Could not restore PHY register %d.%d\n",
+			  BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
 
-	return IGC_SUCCESS;
+	return ret_val;
 }
 
 /**
- *  igc_get_cable_length_82577 - Determine cable length for 82577 PHY
- *  @hw: pointer to the HW structure
+ * igc_power_up_phy_copper - Restore copper link in case of PHY power down
+ * @hw: pointer to the HW structure
  *
- * Reads the diagnostic status register and verifies result is valid before
- * placing it in the phy_cable_length field.
+ * In the case of a PHY power down to save power, or to turn off link during a
+ * driver unload, or wake on lan is not enabled, restore the link to previous
+ * settings.
  **/
-s32 igc_get_cable_length_82577(struct igc_hw *hw)
+void igc_power_up_phy_copper(struct igc_hw *hw)
 {
-	struct igc_phy_info *phy = &hw->phy;
-	s32 ret_val;
-	u16 phy_data, length;
-
-	DEBUGFUNC("igc_get_cable_length_82577");
-
-	ret_val = phy->ops.read_reg(hw, I82577_PHY_DIAG_STATUS, &phy_data);
-	if (ret_val)
-		return ret_val;
-
-	length = ((phy_data & I82577_DSTATUS_CABLE_LENGTH) >>
-		  I82577_DSTATUS_CABLE_LENGTH_SHIFT);
-
-	if (length == IGC_CABLE_LENGTH_UNDEFINED)
-		return -IGC_ERR_PHY;
-
-	phy->cable_length = length;
+	u16 mii_reg = 0;
 
-	return IGC_SUCCESS;
+	/* The PHY will retain its settings across a power down/up cycle */
+	hw->phy.ops.read_reg(hw, PHY_CONTROL, &mii_reg);
+	mii_reg &= ~MII_CR_POWER_DOWN;
+	hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);
 }
 
 /**
- *  igc_write_phy_reg_gs40g - Write GS40G  PHY register
- *  @hw: pointer to the HW structure
- *  @offset: register offset to write to
- *  @data: data to write at register offset
+ * igc_power_down_phy_copper - Restore copper link in case of PHY power down
+ * @hw: pointer to the HW structure
  *
- *  Acquires semaphore, if necessary, then writes the data to PHY register
- *  at the offset.  Release any acquired semaphores before exiting.
+ * In the case of a PHY power down to save power, or to turn off link during a
+ * driver unload, or wake on lan is not enabled, restore the link to previous
+ * settings.
  **/
-s32 igc_write_phy_reg_gs40g(struct igc_hw *hw, u32 offset, u16 data)
+void igc_power_down_phy_copper(struct igc_hw *hw)
 {
-	s32 ret_val;
-	u16 page = offset >> GS40G_PAGE_SHIFT;
-
-	DEBUGFUNC("igc_write_phy_reg_gs40g");
-
-	offset = offset & GS40G_OFFSET_MASK;
-	ret_val = hw->phy.ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
-
-	ret_val = igc_write_phy_reg_mdic(hw, GS40G_PAGE_SELECT, page);
-	if (ret_val)
-		goto release;
-	ret_val = igc_write_phy_reg_mdic(hw, offset, data);
+	u16 mii_reg = 0;
 
-release:
-	hw->phy.ops.release(hw);
-	return ret_val;
+	/* The PHY will retain its settings across a power down/up cycle */
+	hw->phy.ops.read_reg(hw, PHY_CONTROL, &mii_reg);
+	mii_reg |= MII_CR_POWER_DOWN;
+	hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);
+	msec_delay(1);
 }
 
 /**
- *  igc_read_phy_reg_gs40g - Read GS40G  PHY register
+ *  igc_check_polarity_82577 - Checks the polarity.
  *  @hw: pointer to the HW structure
- *  @offset: lower half is register offset to read to
- *     upper half is page to use.
- *  @data: data to read at register offset
  *
- *  Acquires semaphore, if necessary, then reads the data in the PHY register
- *  at the offset.  Release any acquired semaphores before exiting.
+ *  Success returns 0, Failure returns -IGC_ERR_PHY (-2)
+ *
+ *  Polarity is determined based on the PHY specific status register.
  **/
-s32 igc_read_phy_reg_gs40g(struct igc_hw *hw, u32 offset, u16 *data)
+s32 igc_check_polarity_82577(struct igc_hw *hw)
 {
+	struct igc_phy_info *phy = &hw->phy;
 	s32 ret_val;
-	u16 page = offset >> GS40G_PAGE_SHIFT;
+	u16 data;
 
-	DEBUGFUNC("igc_read_phy_reg_gs40g");
+	DEBUGFUNC("igc_check_polarity_82577");
 
-	offset = offset & GS40G_OFFSET_MASK;
-	ret_val = hw->phy.ops.acquire(hw);
-	if (ret_val)
-		return ret_val;
+	ret_val = phy->ops.read_reg(hw, I82577_PHY_STATUS_2, &data);
 
-	ret_val = igc_write_phy_reg_mdic(hw, GS40G_PAGE_SELECT, page);
-	if (ret_val)
-		goto release;
-	ret_val = igc_read_phy_reg_mdic(hw, offset, data);
+	if (!ret_val)
+		phy->cable_polarity = ((data & I82577_PHY_STATUS2_REV_POLARITY)
+				       ? igc_rev_polarity_reversed
+				       : igc_rev_polarity_normal);
 
-release:
-	hw->phy.ops.release(hw);
 	return ret_val;
 }
 
@@ -4194,132 +1420,6 @@ s32 igc_read_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 *data)
 	return ret_val;
 }
 
-/**
- *  igc_read_phy_reg_mphy - Read mPHY control register
- *  @hw: pointer to the HW structure
- *  @address: address to be read
- *  @data: pointer to the read data
- *
- *  Reads the mPHY control register in the PHY at offset and stores the
- *  information read to data.
- **/
-s32 igc_read_phy_reg_mphy(struct igc_hw *hw, u32 address, u32 *data)
-{
-	u32 mphy_ctrl = 0;
-	bool locked = false;
-	bool ready;
-
-	DEBUGFUNC("igc_read_phy_reg_mphy");
-
-	/* Check if mPHY is ready to read/write operations */
-	ready = igc_is_mphy_ready(hw);
-	if (!ready)
-		return -IGC_ERR_PHY;
-
-	/* Check if mPHY access is disabled and enable it if so */
-	mphy_ctrl = IGC_READ_REG(hw, IGC_MPHY_ADDR_CTRL);
-	if (mphy_ctrl & IGC_MPHY_DIS_ACCESS) {
-		locked = true;
-		ready = igc_is_mphy_ready(hw);
-		if (!ready)
-			return -IGC_ERR_PHY;
-		mphy_ctrl |= IGC_MPHY_ENA_ACCESS;
-		IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL, mphy_ctrl);
-	}
-
-	/* Set the address that we want to read */
-	ready = igc_is_mphy_ready(hw);
-	if (!ready)
-		return -IGC_ERR_PHY;
-
-	/* We mask address, because we want to use only current lane */
-	mphy_ctrl = (mphy_ctrl & ~IGC_MPHY_ADDRESS_MASK &
-		~IGC_MPHY_ADDRESS_FNC_OVERRIDE) |
-		(address & IGC_MPHY_ADDRESS_MASK);
-	IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL, mphy_ctrl);
-
-	/* Read data from the address */
-	ready = igc_is_mphy_ready(hw);
-	if (!ready)
-		return -IGC_ERR_PHY;
-	*data = IGC_READ_REG(hw, IGC_MPHY_DATA);
-
-	/* Disable access to mPHY if it was originally disabled */
-	if (locked)
-		ready = igc_is_mphy_ready(hw);
-	if (!ready)
-		return -IGC_ERR_PHY;
-	IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL,
-			IGC_MPHY_DIS_ACCESS);
-
-	return IGC_SUCCESS;
-}
-
-/**
- *  igc_write_phy_reg_mphy - Write mPHY control register
- *  @hw: pointer to the HW structure
- *  @address: address to write to
- *  @data: data to write to register at offset
- *  @line_override: used when we want to use different line than default one
- *
- *  Writes data to mPHY control register.
- **/
-s32 igc_write_phy_reg_mphy(struct igc_hw *hw, u32 address, u32 data,
-			     bool line_override)
-{
-	u32 mphy_ctrl = 0;
-	bool locked = false;
-	bool ready;
-
-	DEBUGFUNC("igc_write_phy_reg_mphy");
-
-	/* Check if mPHY is ready to read/write operations */
-	ready = igc_is_mphy_ready(hw);
-	if (!ready)
-		return -IGC_ERR_PHY;
-
-	/* Check if mPHY access is disabled and enable it if so */
-	mphy_ctrl = IGC_READ_REG(hw, IGC_MPHY_ADDR_CTRL);
-	if (mphy_ctrl & IGC_MPHY_DIS_ACCESS) {
-		locked = true;
-		ready = igc_is_mphy_ready(hw);
-		if (!ready)
-			return -IGC_ERR_PHY;
-		mphy_ctrl |= IGC_MPHY_ENA_ACCESS;
-		IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL, mphy_ctrl);
-	}
-
-	/* Set the address that we want to read */
-	ready = igc_is_mphy_ready(hw);
-	if (!ready)
-		return -IGC_ERR_PHY;
-
-	/* We mask address, because we want to use only current lane */
-	if (line_override)
-		mphy_ctrl |= IGC_MPHY_ADDRESS_FNC_OVERRIDE;
-	else
-		mphy_ctrl &= ~IGC_MPHY_ADDRESS_FNC_OVERRIDE;
-	mphy_ctrl = (mphy_ctrl & ~IGC_MPHY_ADDRESS_MASK) |
-		(address & IGC_MPHY_ADDRESS_MASK);
-	IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL, mphy_ctrl);
-
-	/* Read data from the address */
-	ready = igc_is_mphy_ready(hw);
-	if (!ready)
-		return -IGC_ERR_PHY;
-	IGC_WRITE_REG(hw, IGC_MPHY_DATA, data);
-
-	/* Disable access to mPHY if it was originally disabled */
-	if (locked)
-		ready = igc_is_mphy_ready(hw);
-	if (!ready)
-		return -IGC_ERR_PHY;
-	IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL,
-			IGC_MPHY_DIS_ACCESS);
-
-	return IGC_SUCCESS;
-}
-
 /**
  *  igc_is_mphy_ready - Check if mPHY control register is not busy
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/igc/base/igc_phy.h b/drivers/net/igc/base/igc_phy.h
index fbc0e7cbc9..25f6f9e165 100644
--- a/drivers/net/igc/base/igc_phy.h
+++ b/drivers/net/igc/base/igc_phy.h
@@ -22,75 +22,26 @@ s32  igc_check_polarity_ife(struct igc_hw *hw);
 s32  igc_check_reset_block_generic(struct igc_hw *hw);
 s32  igc_phy_setup_autoneg(struct igc_hw *hw);
 s32  igc_copper_link_autoneg(struct igc_hw *hw);
-s32  igc_copper_link_setup_igp(struct igc_hw *hw);
-s32  igc_copper_link_setup_m88(struct igc_hw *hw);
-s32  igc_copper_link_setup_m88_gen2(struct igc_hw *hw);
-s32  igc_phy_force_speed_duplex_igp(struct igc_hw *hw);
-s32  igc_phy_force_speed_duplex_m88(struct igc_hw *hw);
-s32  igc_phy_force_speed_duplex_ife(struct igc_hw *hw);
-s32  igc_get_cable_length_m88(struct igc_hw *hw);
-s32  igc_get_cable_length_m88_gen2(struct igc_hw *hw);
-s32  igc_get_cable_length_igp_2(struct igc_hw *hw);
-s32  igc_get_cfg_done_generic(struct igc_hw *hw);
 s32  igc_get_phy_id(struct igc_hw *hw);
-s32  igc_get_phy_info_igp(struct igc_hw *hw);
-s32  igc_get_phy_info_m88(struct igc_hw *hw);
-s32  igc_get_phy_info_ife(struct igc_hw *hw);
 s32  igc_phy_sw_reset_generic(struct igc_hw *hw);
 void igc_phy_force_speed_duplex_setup(struct igc_hw *hw, u16 *phy_ctrl);
-s32  igc_phy_hw_reset_generic(struct igc_hw *hw);
 s32  igc_phy_reset_dsp_generic(struct igc_hw *hw);
 s32  igc_read_kmrn_reg_generic(struct igc_hw *hw, u32 offset, u16 *data);
-s32  igc_read_kmrn_reg_locked(struct igc_hw *hw, u32 offset, u16 *data);
 s32  igc_set_page_igp(struct igc_hw *hw, u16 page);
-s32  igc_read_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 *data);
-s32  igc_read_phy_reg_igp_locked(struct igc_hw *hw, u32 offset, u16 *data);
-s32  igc_read_phy_reg_m88(struct igc_hw *hw, u32 offset, u16 *data);
-s32  igc_set_d3_lplu_state_generic(struct igc_hw *hw, bool active);
 s32  igc_setup_copper_link_generic(struct igc_hw *hw);
 s32  igc_write_kmrn_reg_generic(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_write_kmrn_reg_locked(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_write_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_write_phy_reg_igp_locked(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_write_phy_reg_m88(struct igc_hw *hw, u32 offset, u16 data);
 s32  igc_phy_has_link_generic(struct igc_hw *hw, u32 iterations,
 				u32 usec_interval, bool *success);
-s32  igc_phy_init_script_igp3(struct igc_hw *hw);
 enum igc_phy_type igc_get_phy_type_from_id(u32 phy_id);
-s32  igc_determine_phy_address(struct igc_hw *hw);
-s32  igc_write_phy_reg_bm(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_read_phy_reg_bm(struct igc_hw *hw, u32 offset, u16 *data);
 s32  igc_enable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg);
 s32  igc_disable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg);
-s32  igc_read_phy_reg_bm2(struct igc_hw *hw, u32 offset, u16 *data);
-s32  igc_write_phy_reg_bm2(struct igc_hw *hw, u32 offset, u16 data);
 void igc_power_up_phy_copper(struct igc_hw *hw);
 void igc_power_down_phy_copper(struct igc_hw *hw);
 s32  igc_read_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 *data);
 s32  igc_write_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_read_phy_reg_i2c(struct igc_hw *hw, u32 offset, u16 *data);
-s32  igc_write_phy_reg_i2c(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_read_sfp_data_byte(struct igc_hw *hw, u16 offset, u8 *data);
-s32  igc_write_sfp_data_byte(struct igc_hw *hw, u16 offset, u8 data);
-s32  igc_read_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 *data);
-s32  igc_read_phy_reg_hv_locked(struct igc_hw *hw, u32 offset, u16 *data);
-s32  igc_read_phy_reg_page_hv(struct igc_hw *hw, u32 offset, u16 *data);
-s32  igc_write_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_write_phy_reg_hv_locked(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_write_phy_reg_page_hv(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_link_stall_workaround_hv(struct igc_hw *hw);
-s32  igc_copper_link_setup_82577(struct igc_hw *hw);
 s32  igc_check_polarity_82577(struct igc_hw *hw);
-s32  igc_get_phy_info_82577(struct igc_hw *hw);
-s32  igc_phy_force_speed_duplex_82577(struct igc_hw *hw);
-s32  igc_get_cable_length_82577(struct igc_hw *hw);
-s32  igc_write_phy_reg_gs40g(struct igc_hw *hw, u32 offset, u16 data);
-s32  igc_read_phy_reg_gs40g(struct igc_hw *hw, u32 offset, u16 *data);
 s32  igc_write_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 data);
 s32  igc_read_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_read_phy_reg_mphy(struct igc_hw *hw, u32 address, u32 *data);
-s32 igc_write_phy_reg_mphy(struct igc_hw *hw, u32 address, u32 data,
-			     bool line_override);
 bool igc_is_mphy_ready(struct igc_hw *hw);
 
 s32 igc_read_xmdio_reg(struct igc_hw *hw, u16 addr, u8 dev_addr,
diff --git a/drivers/net/ionic/ionic.h b/drivers/net/ionic/ionic.h
index 1538df3092..3536de39e9 100644
--- a/drivers/net/ionic/ionic.h
+++ b/drivers/net/ionic/ionic.h
@@ -73,10 +73,8 @@ int ionic_setup(struct ionic_adapter *adapter);
 
 int ionic_identify(struct ionic_adapter *adapter);
 int ionic_init(struct ionic_adapter *adapter);
-int ionic_reset(struct ionic_adapter *adapter);
 
 int ionic_port_identify(struct ionic_adapter *adapter);
 int ionic_port_init(struct ionic_adapter *adapter);
-int ionic_port_reset(struct ionic_adapter *adapter);
 
 #endif /* _IONIC_H_ */
diff --git a/drivers/net/ionic/ionic_dev.c b/drivers/net/ionic/ionic_dev.c
index 5c2820b7a1..3700769aab 100644
--- a/drivers/net/ionic/ionic_dev.c
+++ b/drivers/net/ionic/ionic_dev.c
@@ -206,19 +206,6 @@ ionic_dev_cmd_port_speed(struct ionic_dev *idev, uint32_t speed)
 	ionic_dev_cmd_go(idev, &cmd);
 }
 
-void
-ionic_dev_cmd_port_mtu(struct ionic_dev *idev, uint32_t mtu)
-{
-	union ionic_dev_cmd cmd = {
-		.port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
-		.port_setattr.index = 0,
-		.port_setattr.attr = IONIC_PORT_ATTR_MTU,
-		.port_setattr.mtu = mtu,
-	};
-
-	ionic_dev_cmd_go(idev, &cmd);
-}
-
 void
 ionic_dev_cmd_port_autoneg(struct ionic_dev *idev, uint8_t an_enable)
 {
@@ -232,19 +219,6 @@ ionic_dev_cmd_port_autoneg(struct ionic_dev *idev, uint8_t an_enable)
 	ionic_dev_cmd_go(idev, &cmd);
 }
 
-void
-ionic_dev_cmd_port_fec(struct ionic_dev *idev, uint8_t fec_type)
-{
-	union ionic_dev_cmd cmd = {
-		.port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
-		.port_setattr.index = 0,
-		.port_setattr.attr = IONIC_PORT_ATTR_FEC,
-		.port_setattr.fec_type = fec_type,
-	};
-
-	ionic_dev_cmd_go(idev, &cmd);
-}
-
 void
 ionic_dev_cmd_port_pause(struct ionic_dev *idev, uint8_t pause_type)
 {
@@ -258,19 +232,6 @@ ionic_dev_cmd_port_pause(struct ionic_dev *idev, uint8_t pause_type)
 	ionic_dev_cmd_go(idev, &cmd);
 }
 
-void
-ionic_dev_cmd_port_loopback(struct ionic_dev *idev, uint8_t loopback_mode)
-{
-	union ionic_dev_cmd cmd = {
-		.port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
-		.port_setattr.index = 0,
-		.port_setattr.attr = IONIC_PORT_ATTR_LOOPBACK,
-		.port_setattr.loopback_mode = loopback_mode,
-	};
-
-	ionic_dev_cmd_go(idev, &cmd);
-}
-
 /* LIF commands */
 
 void
diff --git a/drivers/net/ionic/ionic_dev.h b/drivers/net/ionic/ionic_dev.h
index 532255a603..dc47f0166a 100644
--- a/drivers/net/ionic/ionic_dev.h
+++ b/drivers/net/ionic/ionic_dev.h
@@ -224,12 +224,8 @@ void ionic_dev_cmd_port_init(struct ionic_dev *idev);
 void ionic_dev_cmd_port_reset(struct ionic_dev *idev);
 void ionic_dev_cmd_port_state(struct ionic_dev *idev, uint8_t state);
 void ionic_dev_cmd_port_speed(struct ionic_dev *idev, uint32_t speed);
-void ionic_dev_cmd_port_mtu(struct ionic_dev *idev, uint32_t mtu);
 void ionic_dev_cmd_port_autoneg(struct ionic_dev *idev, uint8_t an_enable);
-void ionic_dev_cmd_port_fec(struct ionic_dev *idev, uint8_t fec_type);
 void ionic_dev_cmd_port_pause(struct ionic_dev *idev, uint8_t pause_type);
-void ionic_dev_cmd_port_loopback(struct ionic_dev *idev,
-	uint8_t loopback_mode);
 
 void ionic_dev_cmd_lif_identify(struct ionic_dev *idev, uint8_t type,
 	uint8_t ver);
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index 60a5f3d537..9c36090a94 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -73,17 +73,6 @@ ionic_lif_stop(struct ionic_lif *lif __rte_unused)
 	return 0;
 }
 
-void
-ionic_lif_reset(struct ionic_lif *lif)
-{
-	struct ionic_dev *idev = &lif->adapter->idev;
-
-	IONIC_PRINT_CALL();
-
-	ionic_dev_cmd_lif_reset(idev, lif->index);
-	ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
-}
-
 static void
 ionic_lif_get_abs_stats(const struct ionic_lif *lif, struct rte_eth_stats *stats)
 {
diff --git a/drivers/net/ionic/ionic_lif.h b/drivers/net/ionic/ionic_lif.h
index 425762d652..d66da559f1 100644
--- a/drivers/net/ionic/ionic_lif.h
+++ b/drivers/net/ionic/ionic_lif.h
@@ -131,7 +131,6 @@ int ionic_lif_start(struct ionic_lif *lif);
 int ionic_lif_stop(struct ionic_lif *lif);
 
 int ionic_lif_configure(struct ionic_lif *lif);
-void ionic_lif_reset(struct ionic_lif *lif);
 
 int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr);
 void ionic_intr_free(struct ionic_lif *lif, struct ionic_intr_info *intr);
diff --git a/drivers/net/ionic/ionic_main.c b/drivers/net/ionic/ionic_main.c
index 2ade213d2d..2853601f9d 100644
--- a/drivers/net/ionic/ionic_main.c
+++ b/drivers/net/ionic/ionic_main.c
@@ -306,17 +306,6 @@ ionic_init(struct ionic_adapter *adapter)
 	return err;
 }
 
-int
-ionic_reset(struct ionic_adapter *adapter)
-{
-	struct ionic_dev *idev = &adapter->idev;
-	int err;
-
-	ionic_dev_cmd_reset(idev);
-	err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
-	return err;
-}
-
 int
 ionic_port_identify(struct ionic_adapter *adapter)
 {
@@ -419,25 +408,3 @@ ionic_port_init(struct ionic_adapter *adapter)
 
 	return 0;
 }
-
-int
-ionic_port_reset(struct ionic_adapter *adapter)
-{
-	struct ionic_dev *idev = &adapter->idev;
-	int err;
-
-	if (!idev->port_info)
-		return 0;
-
-	ionic_dev_cmd_port_reset(idev);
-	err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
-	if (err) {
-		IONIC_PRINT(ERR, "Failed to reset port");
-		return err;
-	}
-
-	idev->port_info = NULL;
-	idev->port_info_pa = 0;
-
-	return 0;
-}
diff --git a/drivers/net/ionic/ionic_rx_filter.c b/drivers/net/ionic/ionic_rx_filter.c
index fe624538df..0c2c937a17 100644
--- a/drivers/net/ionic/ionic_rx_filter.c
+++ b/drivers/net/ionic/ionic_rx_filter.c
@@ -18,20 +18,6 @@ ionic_rx_filter_free(struct ionic_rx_filter *f)
 	rte_free(f);
 }
 
-int
-ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f)
-{
-	struct ionic_admin_ctx ctx = {
-		.pending_work = true,
-		.cmd.rx_filter_del = {
-			.opcode = IONIC_CMD_RX_FILTER_DEL,
-			.filter_id = f->filter_id,
-		},
-	};
-
-	return ionic_adminq_post(lif, &ctx);
-}
-
 int
 ionic_rx_filters_init(struct ionic_lif *lif)
 {
diff --git a/drivers/net/ionic/ionic_rx_filter.h b/drivers/net/ionic/ionic_rx_filter.h
index 6204a7b535..851a56073b 100644
--- a/drivers/net/ionic/ionic_rx_filter.h
+++ b/drivers/net/ionic/ionic_rx_filter.h
@@ -34,7 +34,6 @@ struct ionic_admin_ctx;
 struct ionic_lif;
 
 void ionic_rx_filter_free(struct ionic_rx_filter *f);
-int ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f);
 int ionic_rx_filters_init(struct ionic_lif *lif);
 void ionic_rx_filters_deinit(struct ionic_lif *lif);
 int ionic_rx_filter_save(struct ionic_lif *lif, uint32_t flow_id,
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b0c3a2286d..836798a40c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1041,7 +1041,6 @@ void mlx5_set_min_inline(struct mlx5_dev_spawn_data *spawn,
 void mlx5_set_metadata_mask(struct rte_eth_dev *dev);
 int mlx5_dev_check_sibling_config(struct mlx5_priv *priv,
 				  struct mlx5_dev_config *config);
-int mlx5_dev_configure(struct rte_eth_dev *dev);
 int mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info);
 int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size);
 int mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 9889437c56..d607cc4b96 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -287,12 +287,6 @@ cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse)
 	return entry;
 }
 
-struct mlx5_cache_entry *
-mlx5_cache_lookup(struct mlx5_cache_list *list, void *ctx)
-{
-	return cache_lookup(list, ctx, false);
-}
-
 struct mlx5_cache_entry *
 mlx5_cache_register(struct mlx5_cache_list *list, void *ctx)
 {
@@ -734,21 +728,6 @@ mlx5_ipool_destroy(struct mlx5_indexed_pool *pool)
 	return 0;
 }
 
-void
-mlx5_ipool_dump(struct mlx5_indexed_pool *pool)
-{
-	printf("Pool %s entry size %u, trunks %u, %d entry per trunk, "
-	       "total: %d\n",
-	       pool->cfg.type, pool->cfg.size, pool->n_trunk_valid,
-	       pool->cfg.trunk_size, pool->n_trunk_valid);
-#ifdef POOL_DEBUG
-	printf("Pool %s entry %u, trunk alloc %u, empty: %u, "
-	       "available %u free %u\n",
-	       pool->cfg.type, pool->n_entry, pool->trunk_new,
-	       pool->trunk_empty, pool->trunk_avail, pool->trunk_free);
-#endif
-}
-
 struct mlx5_l3t_tbl *
 mlx5_l3t_create(enum mlx5_l3t_type type)
 {
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index be6e5f67aa..e6cf37c96f 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -562,23 +562,6 @@ int mlx5_cache_list_init(struct mlx5_cache_list *list,
 			 mlx5_cache_match_cb cb_match,
 			 mlx5_cache_remove_cb cb_remove);
 
-/**
- * Search an entry matching the key.
- *
- * Result returned might be destroyed by other thread, must use
- * this function only in main thread.
- *
- * @param list
- *   Pointer to the cache list.
- * @param ctx
- *   Common context parameter used by entry callback function.
- *
- * @return
- *   Pointer of the cache entry if found, NULL otherwise.
- */
-struct mlx5_cache_entry *mlx5_cache_lookup(struct mlx5_cache_list *list,
-					   void *ctx);
-
 /**
  * Reuse or create an entry to the cache list.
  *
@@ -717,14 +700,6 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg);
  */
 int mlx5_ipool_destroy(struct mlx5_indexed_pool *pool);
 
-/**
- * This function dumps debug info of pool.
- *
- * @param pool
- *   Pointer to indexed memory pool.
- */
-void mlx5_ipool_dump(struct mlx5_indexed_pool *pool);
-
 /**
  * This function allocates new empty Three-level table.
  *
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 2cd73919ce..afda8f2a50 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -862,24 +862,6 @@ mvneta_eth_dev_destroy(struct rte_eth_dev *eth_dev)
 	rte_eth_dev_release_port(eth_dev);
 }
 
-/**
- * Cleanup previously created device representing Ethernet port.
- *
- * @param name
- *   Pointer to the port name.
- */
-static void
-mvneta_eth_dev_destroy_name(const char *name)
-{
-	struct rte_eth_dev *eth_dev;
-
-	eth_dev = rte_eth_dev_allocated(name);
-	if (!eth_dev)
-		return;
-
-	mvneta_eth_dev_destroy(eth_dev);
-}
-
 /**
  * DPDK callback to register the virtual device.
  *
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 1ce260c89b..beb716f3c9 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -946,37 +946,6 @@ int hn_rndis_get_offload(struct hn_data *hv,
 	return 0;
 }
 
-uint32_t
-hn_rndis_get_ptypes(struct hn_data *hv)
-{
-	struct ndis_offload hwcaps;
-	uint32_t ptypes;
-	int error;
-
-	memset(&hwcaps, 0, sizeof(hwcaps));
-
-	error = hn_rndis_query_hwcaps(hv, &hwcaps);
-	if (error) {
-		PMD_DRV_LOG(ERR, "hwcaps query failed: %d", error);
-		return RTE_PTYPE_L2_ETHER;
-	}
-
-	ptypes = RTE_PTYPE_L2_ETHER;
-
-	if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
-		ptypes |= RTE_PTYPE_L3_IPV4;
-
-	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) ||
-	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
-		ptypes |= RTE_PTYPE_L4_TCP;
-
-	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) ||
-	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
-		ptypes |= RTE_PTYPE_L4_UDP;
-
-	return ptypes;
-}
-
 int
 hn_rndis_set_rxfilter(struct hn_data *hv, uint32_t filter)
 {
diff --git a/drivers/net/netvsc/hn_rndis.h b/drivers/net/netvsc/hn_rndis.h
index 9a8251fc2f..11b89042dd 100644
--- a/drivers/net/netvsc/hn_rndis.h
+++ b/drivers/net/netvsc/hn_rndis.h
@@ -25,7 +25,6 @@ int	hn_rndis_query_rsscaps(struct hn_data *hv,
 int	hn_rndis_query_rss(struct hn_data *hv,
 			   struct rte_eth_rss_conf *rss_conf);
 int	hn_rndis_conf_rss(struct hn_data *hv, uint32_t flags);
-uint32_t hn_rndis_get_ptypes(struct hn_data *hv);
 
 #ifdef RTE_LIBRTE_NETVSC_DEBUG_DUMP
 void hn_rndis_dump(const void *buf);
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index bd874c6b4d..1fa8a50c1b 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -225,7 +225,6 @@ int	hn_vf_configure(struct rte_eth_dev *dev,
 			const struct rte_eth_conf *dev_conf);
 const uint32_t *hn_vf_supported_ptypes(struct rte_eth_dev *dev);
 int	hn_vf_start(struct rte_eth_dev *dev);
-void	hn_vf_reset(struct rte_eth_dev *dev);
 int	hn_vf_close(struct rte_eth_dev *dev);
 int	hn_vf_stop(struct rte_eth_dev *dev);
 
@@ -241,7 +240,6 @@ int	hn_vf_tx_queue_setup(struct rte_eth_dev *dev,
 			     uint16_t queue_idx, uint16_t nb_desc,
 			     unsigned int socket_id,
 			     const struct rte_eth_txconf *tx_conf);
-void	hn_vf_tx_queue_release(struct hn_data *hv, uint16_t queue_id);
 int	hn_vf_tx_queue_status(struct hn_data *hv, uint16_t queue_id, uint16_t offset);
 
 int	hn_vf_rx_queue_setup(struct rte_eth_dev *dev,
@@ -252,7 +250,6 @@ int	hn_vf_rx_queue_setup(struct rte_eth_dev *dev,
 void	hn_vf_rx_queue_release(struct hn_data *hv, uint16_t queue_id);
 
 int	hn_vf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
-int	hn_vf_stats_reset(struct rte_eth_dev *dev);
 int	hn_vf_xstats_get_names(struct rte_eth_dev *dev,
 			       struct rte_eth_xstat_name *xstats_names,
 			       unsigned int size);
diff --git a/drivers/net/netvsc/hn_vf.c b/drivers/net/netvsc/hn_vf.c
index d43ebaa69f..996324282b 100644
--- a/drivers/net/netvsc/hn_vf.c
+++ b/drivers/net/netvsc/hn_vf.c
@@ -318,11 +318,6 @@ int hn_vf_stop(struct rte_eth_dev *dev)
 		return ret;					\
 	}
 
-void hn_vf_reset(struct rte_eth_dev *dev)
-{
-	VF_ETHDEV_FUNC(dev, rte_eth_dev_reset);
-}
-
 int hn_vf_close(struct rte_eth_dev *dev)
 {
 	struct hn_data *hv = dev->data->dev_private;
@@ -340,11 +335,6 @@ int hn_vf_close(struct rte_eth_dev *dev)
 	return ret;
 }
 
-int hn_vf_stats_reset(struct rte_eth_dev *dev)
-{
-	VF_ETHDEV_FUNC_RET_STATUS(dev, rte_eth_stats_reset);
-}
-
 int hn_vf_allmulticast_enable(struct rte_eth_dev *dev)
 {
 	VF_ETHDEV_FUNC_RET_STATUS(dev, rte_eth_allmulticast_enable);
@@ -401,21 +391,6 @@ int hn_vf_tx_queue_setup(struct rte_eth_dev *dev,
 	return ret;
 }
 
-void hn_vf_tx_queue_release(struct hn_data *hv, uint16_t queue_id)
-{
-	struct rte_eth_dev *vf_dev;
-
-	rte_rwlock_read_lock(&hv->vf_lock);
-	vf_dev = hn_get_vf_dev(hv);
-	if (vf_dev && vf_dev->dev_ops->tx_queue_release) {
-		void *subq = vf_dev->data->tx_queues[queue_id];
-
-		(*vf_dev->dev_ops->tx_queue_release)(subq);
-	}
-
-	rte_rwlock_read_unlock(&hv->vf_lock);
-}
-
 int hn_vf_rx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t queue_idx, uint16_t nb_desc,
 			 unsigned int socket_id,
diff --git a/drivers/net/nfp/nfpcore/nfp_cpp.h b/drivers/net/nfp/nfpcore/nfp_cpp.h
index 1427954c17..8fe97a37b1 100644
--- a/drivers/net/nfp/nfpcore/nfp_cpp.h
+++ b/drivers/net/nfp/nfpcore/nfp_cpp.h
@@ -283,15 +283,6 @@ uint32_t nfp_cpp_model(struct nfp_cpp *cpp);
  */
 uint16_t nfp_cpp_interface(struct nfp_cpp *cpp);
 
-/*
- * Retrieve the NFP Serial Number (unique per NFP)
- * @param[in]	cpp	NFP CPP handle
- * @param[out]	serial	Pointer to reference the serial number array
- *
- * @return	size of the NFP6000 serial number, in bytes
- */
-int nfp_cpp_serial(struct nfp_cpp *cpp, const uint8_t **serial);
-
 /*
  * Allocate a NFP CPP area handle, as an offset into a CPP ID
  * @param[in]	cpp	NFP CPP handle
@@ -366,16 +357,6 @@ void nfp_cpp_area_release_free(struct nfp_cpp_area *area);
 uint8_t *nfp_cpp_map_area(struct nfp_cpp *cpp, int domain, int target,
 			   uint64_t addr, unsigned long size,
 			   struct nfp_cpp_area **area);
-/*
- * Return an IO pointer to the beginning of the NFP CPP area handle. The area
- * must be acquired with 'nfp_cpp_area_acquire()' before calling this operation.
- *
- * @param[in]	area	NFP CPP area handle
- *
- * @return Pointer to IO memory, or NULL on failure (and set errno accordingly).
- */
-void *nfp_cpp_area_mapped(struct nfp_cpp_area *area);
-
 /*
  * Read from a NFP CPP area handle into a buffer. The area must be acquired with
  * 'nfp_cpp_area_acquire()' before calling this operation.
@@ -417,18 +398,6 @@ int nfp_cpp_area_write(struct nfp_cpp_area *area, unsigned long offset,
  */
 void *nfp_cpp_area_iomem(struct nfp_cpp_area *area);
 
-/*
- * Verify that IO can be performed on an offset in an area
- *
- * @param[in]	area	NFP CPP area handle
- * @param[in]	offset	Offset into the area
- * @param[in]	size	Size of region to validate
- *
- * @return 0 on success, -1 on failure (and set errno accordingly).
- */
-int nfp_cpp_area_check_range(struct nfp_cpp_area *area,
-			     unsigned long long offset, unsigned long size);
-
 /*
  * Get the NFP CPP handle that is the parent of a NFP CPP area handle
  *
@@ -437,14 +406,6 @@ int nfp_cpp_area_check_range(struct nfp_cpp_area *area,
  */
 struct nfp_cpp *nfp_cpp_area_cpp(struct nfp_cpp_area *cpp_area);
 
-/*
- * Get the name passed during allocation of the NFP CPP area handle
- *
- * @param	cpp_area	NFP CPP area handle
- * @return			Pointer to the area's name
- */
-const char *nfp_cpp_area_name(struct nfp_cpp_area *cpp_area);
-
 /*
  * Read a block of data from a NFP CPP ID
  *
@@ -474,89 +435,6 @@ int nfp_cpp_write(struct nfp_cpp *cpp, uint32_t cpp_id,
 		  unsigned long long address, const void *kernel_vaddr,
 		  size_t length);
 
-
-
-/*
- * Fill a NFP CPP area handle and offset with a value
- *
- * @param[in]	area	NFP CPP area handle
- * @param[in]	offset	Offset into the NFP CPP ID address space
- * @param[in]	value	32-bit value to fill area with
- * @param[in]	length	Size of the area to reserve
- *
- * @return bytes written on success, -1 on failure (and set errno accordingly).
- */
-int nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
-		      uint32_t value, size_t length);
-
-/*
- * Read a single 32-bit value from a NFP CPP area handle
- *
- * @param area		NFP CPP area handle
- * @param offset	offset into NFP CPP area handle
- * @param value		output value
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 32-bit aligned.
- *
- * @return 0 on success, or -1 on error (and set errno accordingly).
- */
-int nfp_cpp_area_readl(struct nfp_cpp_area *area, unsigned long offset,
-		       uint32_t *value);
-
-/*
- * Write a single 32-bit value to a NFP CPP area handle
- *
- * @param area		NFP CPP area handle
- * @param offset	offset into NFP CPP area handle
- * @param value		value to write
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 32-bit aligned.
- *
- * @return 0 on success, or -1 on error (and set errno accordingly).
- */
-int nfp_cpp_area_writel(struct nfp_cpp_area *area, unsigned long offset,
-			uint32_t value);
-
-/*
- * Read a single 64-bit value from a NFP CPP area handle
- *
- * @param area		NFP CPP area handle
- * @param offset	offset into NFP CPP area handle
- * @param value		output value
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 64-bit aligned.
- *
- * @return 0 on success, or -1 on error (and set errno accordingly).
- */
-int nfp_cpp_area_readq(struct nfp_cpp_area *area, unsigned long offset,
-		       uint64_t *value);
-
-/*
- * Write a single 64-bit value to a NFP CPP area handle
- *
- * @param area		NFP CPP area handle
- * @param offset	offset into NFP CPP area handle
- * @param value		value to write
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 64-bit aligned.
- *
- * @return 0 on success, or -1 on error (and set errno accordingly).
- */
-int nfp_cpp_area_writeq(struct nfp_cpp_area *area, unsigned long offset,
-			uint64_t value);
-
 /*
  * Write a single 32-bit value on the XPB bus
  *
@@ -579,33 +457,6 @@ int nfp_xpb_writel(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t value);
  */
 int nfp_xpb_readl(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t *value);
 
-/*
- * Modify bits of a 32-bit value from the XPB bus
- *
- * @param cpp           NFP CPP device handle
- * @param xpb_tgt       XPB target and address
- * @param mask          mask of bits to alter
- * @param value         value to modify
- *
- * @return 0 on success, or -1 on failure (and set errno accordingly).
- */
-int nfp_xpb_writelm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
-		    uint32_t value);
-
-/*
- * Modify bits of a 32-bit value from the XPB bus
- *
- * @param cpp           NFP CPP device handle
- * @param xpb_tgt       XPB target and address
- * @param mask          mask of bits to alter
- * @param value         value to monitor for
- * @param timeout_us    maximum number of us to wait (-1 for forever)
- *
- * @return >= 0 on success, or -1 on failure (and set errno accordingly).
- */
-int nfp_xpb_waitlm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
-		   uint32_t value, int timeout_us);
-
 /*
  * Read a 32-bit word from a NFP CPP ID
  *
@@ -659,27 +510,6 @@ int nfp_cpp_readq(struct nfp_cpp *cpp, uint32_t cpp_id,
 int nfp_cpp_writeq(struct nfp_cpp *cpp, uint32_t cpp_id,
 		   unsigned long long address, uint64_t value);
 
-/*
- * Initialize a mutex location
-
- * The CPP target:address must point to a 64-bit aligned location, and will
- * initialize 64 bits of data at the location.
- *
- * This creates the initial mutex state, as locked by this nfp_cpp_interface().
- *
- * This function should only be called when setting up the initial lock state
- * upon boot-up of the system.
- *
- * @param cpp		NFP CPP handle
- * @param target	NFP CPP target ID
- * @param address	Offset into the address space of the NFP CPP target ID
- * @param key_id	Unique 32-bit value for this mutex
- *
- * @return 0 on success, or -1 on failure (and set errno accordingly).
- */
-int nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target,
-		       unsigned long long address, uint32_t key_id);
-
 /*
  * Create a mutex handle from an address controlled by a MU Atomic engine
  *
@@ -701,49 +531,6 @@ struct nfp_cpp_mutex *nfp_cpp_mutex_alloc(struct nfp_cpp *cpp, int target,
 					  unsigned long long address,
 					  uint32_t key_id);
 
-/*
- * Get the NFP CPP handle the mutex was created with
- *
- * @param   mutex   NFP mutex handle
- * @return          NFP CPP handle
- */
-struct nfp_cpp *nfp_cpp_mutex_cpp(struct nfp_cpp_mutex *mutex);
-
-/*
- * Get the mutex key
- *
- * @param   mutex   NFP mutex handle
- * @return          Mutex key
- */
-uint32_t nfp_cpp_mutex_key(struct nfp_cpp_mutex *mutex);
-
-/*
- * Get the mutex owner
- *
- * @param   mutex   NFP mutex handle
- * @return          Interface ID of the mutex owner
- *
- * NOTE: This is for debug purposes ONLY - the owner may change at any time,
- * unless it has been locked by this NFP CPP handle.
- */
-uint16_t nfp_cpp_mutex_owner(struct nfp_cpp_mutex *mutex);
-
-/*
- * Get the mutex target
- *
- * @param   mutex   NFP mutex handle
- * @return          Mutex CPP target (ie NFP_CPP_TARGET_MU)
- */
-int nfp_cpp_mutex_target(struct nfp_cpp_mutex *mutex);
-
-/*
- * Get the mutex address
- *
- * @param   mutex   NFP mutex handle
- * @return          Mutex CPP address
- */
-uint64_t nfp_cpp_mutex_address(struct nfp_cpp_mutex *mutex);
-
 /*
  * Free a mutex handle - does not alter the lock state
  *
diff --git a/drivers/net/nfp/nfpcore/nfp_cppcore.c b/drivers/net/nfp/nfpcore/nfp_cppcore.c
index dec4a8b6d1..10b7f059a7 100644
--- a/drivers/net/nfp/nfpcore/nfp_cppcore.c
+++ b/drivers/net/nfp/nfpcore/nfp_cppcore.c
@@ -61,13 +61,6 @@ nfp_cpp_interface_set(struct nfp_cpp *cpp, uint32_t interface)
 	cpp->interface = interface;
 }
 
-int
-nfp_cpp_serial(struct nfp_cpp *cpp, const uint8_t **serial)
-{
-	*serial = cpp->serial;
-	return cpp->serial_len;
-}
-
 int
 nfp_cpp_serial_set(struct nfp_cpp *cpp, const uint8_t *serial,
 		   size_t serial_len)
@@ -106,12 +99,6 @@ nfp_cpp_area_cpp(struct nfp_cpp_area *cpp_area)
 	return cpp_area->cpp;
 }
 
-const char *
-nfp_cpp_area_name(struct nfp_cpp_area *cpp_area)
-{
-	return cpp_area->name;
-}
-
 /*
  * nfp_cpp_area_alloc - allocate a new CPP area
  * @cpp:    CPP handle
@@ -351,34 +338,6 @@ nfp_cpp_area_write(struct nfp_cpp_area *area, unsigned long offset,
 	return area->cpp->op->area_write(area, kernel_vaddr, offset, length);
 }
 
-void *
-nfp_cpp_area_mapped(struct nfp_cpp_area *area)
-{
-	if (area->cpp->op->area_mapped)
-		return area->cpp->op->area_mapped(area);
-	return NULL;
-}
-
-/*
- * nfp_cpp_area_check_range - check if address range fits in CPP area
- *
- * @area:   CPP area handle
- * @offset: offset into CPP area
- * @length: size of address range in bytes
- *
- * Check if address range fits within CPP area.  Return 0 if area fits
- * or -1 on error.
- */
-int
-nfp_cpp_area_check_range(struct nfp_cpp_area *area, unsigned long long offset,
-			 unsigned long length)
-{
-	if (((offset + length) > area->size))
-		return NFP_ERRNO(EFAULT);
-
-	return 0;
-}
-
 /*
  * Return the correct CPP address, and fixup xpb_addr as needed,
  * based upon NFP model.
@@ -423,55 +382,6 @@ nfp_xpb_to_cpp(struct nfp_cpp *cpp, uint32_t *xpb_addr)
 	return xpb;
 }
 
-int
-nfp_cpp_area_readl(struct nfp_cpp_area *area, unsigned long offset,
-		   uint32_t *value)
-{
-	int sz;
-	uint32_t tmp = 0;
-
-	sz = nfp_cpp_area_read(area, offset, &tmp, sizeof(tmp));
-	*value = rte_le_to_cpu_32(tmp);
-
-	return (sz == sizeof(*value)) ? 0 : -1;
-}
-
-int
-nfp_cpp_area_writel(struct nfp_cpp_area *area, unsigned long offset,
-		    uint32_t value)
-{
-	int sz;
-
-	value = rte_cpu_to_le_32(value);
-	sz = nfp_cpp_area_write(area, offset, &value, sizeof(value));
-	return (sz == sizeof(value)) ? 0 : -1;
-}
-
-int
-nfp_cpp_area_readq(struct nfp_cpp_area *area, unsigned long offset,
-		   uint64_t *value)
-{
-	int sz;
-	uint64_t tmp = 0;
-
-	sz = nfp_cpp_area_read(area, offset, &tmp, sizeof(tmp));
-	*value = rte_le_to_cpu_64(tmp);
-
-	return (sz == sizeof(*value)) ? 0 : -1;
-}
-
-int
-nfp_cpp_area_writeq(struct nfp_cpp_area *area, unsigned long offset,
-		    uint64_t value)
-{
-	int sz;
-
-	value = rte_cpu_to_le_64(value);
-	sz = nfp_cpp_area_write(area, offset, &value, sizeof(value));
-
-	return (sz == sizeof(value)) ? 0 : -1;
-}
-
 int
 nfp_cpp_readl(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
 	      uint32_t *value)
@@ -610,77 +520,6 @@ nfp_cpp_from_device_name(struct rte_pci_device *dev, int driver_lock_needed)
 	return nfp_cpp_alloc(dev, driver_lock_needed);
 }
 
-/*
- * Modify bits of a 32-bit value from the XPB bus
- *
- * @param cpp           NFP CPP device handle
- * @param xpb_tgt       XPB target and address
- * @param mask          mask of bits to alter
- * @param value         value to modify
- *
- * @return 0 on success, or -1 on failure (and set errno accordingly).
- */
-int
-nfp_xpb_writelm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
-		uint32_t value)
-{
-	int err;
-	uint32_t tmp;
-
-	err = nfp_xpb_readl(cpp, xpb_tgt, &tmp);
-	if (err < 0)
-		return err;
-
-	tmp &= ~mask;
-	tmp |= (mask & value);
-	return nfp_xpb_writel(cpp, xpb_tgt, tmp);
-}
-
-/*
- * Modify bits of a 32-bit value from the XPB bus
- *
- * @param cpp           NFP CPP device handle
- * @param xpb_tgt       XPB target and address
- * @param mask          mask of bits to alter
- * @param value         value to monitor for
- * @param timeout_us    maximum number of us to wait (-1 for forever)
- *
- * @return >= 0 on success, or -1 on failure (and set errno accordingly).
- */
-int
-nfp_xpb_waitlm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
-	       uint32_t value, int timeout_us)
-{
-	uint32_t tmp;
-	int err;
-
-	do {
-		err = nfp_xpb_readl(cpp, xpb_tgt, &tmp);
-		if (err < 0)
-			goto exit;
-
-		if ((tmp & mask) == (value & mask)) {
-			if (timeout_us < 0)
-				timeout_us = 0;
-			break;
-		}
-
-		if (timeout_us < 0)
-			continue;
-
-		timeout_us -= 100;
-		usleep(100);
-	} while (timeout_us >= 0);
-
-	if (timeout_us < 0)
-		err = NFP_ERRNO(ETIMEDOUT);
-	else
-		err = timeout_us;
-
-exit:
-	return err;
-}
-
 /*
  * nfp_cpp_read - read from CPP target
  * @cpp:        CPP handle
@@ -734,63 +573,6 @@ nfp_cpp_write(struct nfp_cpp *cpp, uint32_t destination,
 	return err;
 }
 
-/*
- * nfp_cpp_area_fill - fill a CPP area with a value
- * @area:       CPP area
- * @offset:     offset into CPP area
- * @value:      value to fill with
- * @length:     length of area to fill
- */
-int
-nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
-		  uint32_t value, size_t length)
-{
-	int err;
-	size_t i;
-	uint64_t value64;
-
-	value = rte_cpu_to_le_32(value);
-	value64 = ((uint64_t)value << 32) | value;
-
-	if ((offset + length) > area->size)
-		return NFP_ERRNO(EINVAL);
-
-	if ((area->offset + offset) & 3)
-		return NFP_ERRNO(EINVAL);
-
-	if (((area->offset + offset) & 7) == 4 && length >= 4) {
-		err = nfp_cpp_area_write(area, offset, &value, sizeof(value));
-		if (err < 0)
-			return err;
-		if (err != sizeof(value))
-			return NFP_ERRNO(ENOSPC);
-		offset += sizeof(value);
-		length -= sizeof(value);
-	}
-
-	for (i = 0; (i + sizeof(value)) < length; i += sizeof(value64)) {
-		err =
-		    nfp_cpp_area_write(area, offset + i, &value64,
-				       sizeof(value64));
-		if (err < 0)
-			return err;
-		if (err != sizeof(value64))
-			return NFP_ERRNO(ENOSPC);
-	}
-
-	if ((i + sizeof(value)) <= length) {
-		err =
-		    nfp_cpp_area_write(area, offset + i, &value, sizeof(value));
-		if (err < 0)
-			return err;
-		if (err != sizeof(value))
-			return NFP_ERRNO(ENOSPC);
-		i += sizeof(value);
-	}
-
-	return (int)i;
-}
-
 /*
  * NOTE: This code should not use nfp_xpb_* functions,
  * as those are model-specific
diff --git a/drivers/net/nfp/nfpcore/nfp_mip.c b/drivers/net/nfp/nfpcore/nfp_mip.c
index c86966df8b..d67ff220eb 100644
--- a/drivers/net/nfp/nfpcore/nfp_mip.c
+++ b/drivers/net/nfp/nfpcore/nfp_mip.c
@@ -121,12 +121,6 @@ nfp_mip_close(struct nfp_mip *mip)
 	free(mip);
 }
 
-const char *
-nfp_mip_name(const struct nfp_mip *mip)
-{
-	return mip->name;
-}
-
 /*
  * nfp_mip_symtab() - Get the address and size of the MIP symbol table
  * @mip:	MIP handle
diff --git a/drivers/net/nfp/nfpcore/nfp_mip.h b/drivers/net/nfp/nfpcore/nfp_mip.h
index d0919b58fe..27300ba9cd 100644
--- a/drivers/net/nfp/nfpcore/nfp_mip.h
+++ b/drivers/net/nfp/nfpcore/nfp_mip.h
@@ -13,7 +13,6 @@ struct nfp_mip;
 struct nfp_mip *nfp_mip_open(struct nfp_cpp *cpp);
 void nfp_mip_close(struct nfp_mip *mip);
 
-const char *nfp_mip_name(const struct nfp_mip *mip);
 void nfp_mip_symtab(const struct nfp_mip *mip, uint32_t *addr, uint32_t *size);
 void nfp_mip_strtab(const struct nfp_mip *mip, uint32_t *addr, uint32_t *size);
 int nfp_nffw_info_mip_first(struct nfp_nffw_info *state, uint32_t *cpp_id,
diff --git a/drivers/net/nfp/nfpcore/nfp_mutex.c b/drivers/net/nfp/nfpcore/nfp_mutex.c
index 318c5800d7..9a49635e2b 100644
--- a/drivers/net/nfp/nfpcore/nfp_mutex.c
+++ b/drivers/net/nfp/nfpcore/nfp_mutex.c
@@ -52,51 +52,6 @@ _nfp_cpp_mutex_validate(uint32_t model, int *target, unsigned long long address)
 	return 0;
 }
 
-/*
- * Initialize a mutex location
- *
- * The CPP target:address must point to a 64-bit aligned location, and
- * will initialize 64 bits of data at the location.
- *
- * This creates the initial mutex state, as locked by this
- * nfp_cpp_interface().
- *
- * This function should only be called when setting up
- * the initial lock state upon boot-up of the system.
- *
- * @param mutex     NFP CPP Mutex handle
- * @param target    NFP CPP target ID (ie NFP_CPP_TARGET_CLS or
- *		    NFP_CPP_TARGET_MU)
- * @param address   Offset into the address space of the NFP CPP target ID
- * @param key       Unique 32-bit value for this mutex
- *
- * @return 0 on success, or -1 on failure (and set errno accordingly).
- */
-int
-nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target, unsigned long long address,
-		   uint32_t key)
-{
-	uint32_t model = nfp_cpp_model(cpp);
-	uint32_t muw = NFP_CPP_ID(target, 4, 0);	/* atomic_write */
-	int err;
-
-	err = _nfp_cpp_mutex_validate(model, &target, address);
-	if (err < 0)
-		return err;
-
-	err = nfp_cpp_writel(cpp, muw, address + 4, key);
-	if (err < 0)
-		return err;
-
-	err =
-	    nfp_cpp_writel(cpp, muw, address + 0,
-			   MUTEX_LOCKED(nfp_cpp_interface(cpp)));
-	if (err < 0)
-		return err;
-
-	return 0;
-}
-
 /*
  * Create a mutex handle from an address controlled by a MU Atomic engine
  *
@@ -174,54 +129,6 @@ nfp_cpp_mutex_alloc(struct nfp_cpp *cpp, int target,
 	return mutex;
 }
 
-struct nfp_cpp *
-nfp_cpp_mutex_cpp(struct nfp_cpp_mutex *mutex)
-{
-	return mutex->cpp;
-}
-
-uint32_t
-nfp_cpp_mutex_key(struct nfp_cpp_mutex *mutex)
-{
-	return mutex->key;
-}
-
-uint16_t
-nfp_cpp_mutex_owner(struct nfp_cpp_mutex *mutex)
-{
-	uint32_t mur = NFP_CPP_ID(mutex->target, 3, 0);	/* atomic_read */
-	uint32_t value, key;
-	int err;
-
-	err = nfp_cpp_readl(mutex->cpp, mur, mutex->address, &value);
-	if (err < 0)
-		return err;
-
-	err = nfp_cpp_readl(mutex->cpp, mur, mutex->address + 4, &key);
-	if (err < 0)
-		return err;
-
-	if (key != mutex->key)
-		return NFP_ERRNO(EPERM);
-
-	if (!MUTEX_IS_LOCKED(value))
-		return 0;
-
-	return MUTEX_INTERFACE(value);
-}
-
-int
-nfp_cpp_mutex_target(struct nfp_cpp_mutex *mutex)
-{
-	return mutex->target;
-}
-
-uint64_t
-nfp_cpp_mutex_address(struct nfp_cpp_mutex *mutex)
-{
-	return mutex->address;
-}
-
 /*
  * Free a mutex handle - does not alter the lock state
  *
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.c b/drivers/net/nfp/nfpcore/nfp_nsp.c
index 876a4017c9..63689f2cf7 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.c
@@ -146,12 +146,6 @@ nfp_nsp_close(struct nfp_nsp *state)
 	free(state);
 }
 
-uint16_t
-nfp_nsp_get_abi_ver_major(struct nfp_nsp *state)
-{
-	return state->ver.major;
-}
-
 uint16_t
 nfp_nsp_get_abi_ver_minor(struct nfp_nsp *state)
 {
@@ -348,47 +342,12 @@ nfp_nsp_command_buf(struct nfp_nsp *nsp, uint16_t code, uint32_t option,
 	return ret;
 }
 
-int
-nfp_nsp_wait(struct nfp_nsp *state)
-{
-	struct timespec wait;
-	int count;
-	int err;
-
-	wait.tv_sec = 0;
-	wait.tv_nsec = 25000000;
-	count = 0;
-
-	for (;;) {
-		err = nfp_nsp_command(state, SPCODE_NOOP, 0, 0, 0);
-		if (err != -EAGAIN)
-			break;
-
-		nanosleep(&wait, 0);
-
-		if (count++ > 1000) {
-			err = -ETIMEDOUT;
-			break;
-		}
-	}
-	if (err)
-		printf("NSP failed to respond %d\n", err);
-
-	return err;
-}
-
 int
 nfp_nsp_device_soft_reset(struct nfp_nsp *state)
 {
 	return nfp_nsp_command(state, SPCODE_SOFT_RESET, 0, 0, 0);
 }
 
-int
-nfp_nsp_mac_reinit(struct nfp_nsp *state)
-{
-	return nfp_nsp_command(state, SPCODE_MAC_INIT, 0, 0, 0);
-}
-
 int
 nfp_nsp_load_fw(struct nfp_nsp *state, void *buf, unsigned int size)
 {
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.h b/drivers/net/nfp/nfpcore/nfp_nsp.h
index c9c7b0d0fb..66cad416da 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.h
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.h
@@ -106,12 +106,9 @@ struct nfp_nsp {
 
 struct nfp_nsp *nfp_nsp_open(struct nfp_cpp *cpp);
 void nfp_nsp_close(struct nfp_nsp *state);
-uint16_t nfp_nsp_get_abi_ver_major(struct nfp_nsp *state);
 uint16_t nfp_nsp_get_abi_ver_minor(struct nfp_nsp *state);
-int nfp_nsp_wait(struct nfp_nsp *state);
 int nfp_nsp_device_soft_reset(struct nfp_nsp *state);
 int nfp_nsp_load_fw(struct nfp_nsp *state, void *buf, unsigned int size);
-int nfp_nsp_mac_reinit(struct nfp_nsp *state);
 int nfp_nsp_read_identify(struct nfp_nsp *state, void *buf, unsigned int size);
 int nfp_nsp_read_sensors(struct nfp_nsp *state, unsigned int sensor_mask,
 			 void *buf, unsigned int size);
@@ -229,12 +226,8 @@ struct nfp_eth_table {
 
 struct nfp_eth_table *nfp_eth_read_ports(struct nfp_cpp *cpp);
 
-int nfp_eth_set_mod_enable(struct nfp_cpp *cpp, unsigned int idx, int enable);
 int nfp_eth_set_configured(struct nfp_cpp *cpp, unsigned int idx,
 			   int configed);
-int
-nfp_eth_set_fec(struct nfp_cpp *cpp, unsigned int idx, enum nfp_eth_fec mode);
-
 int nfp_nsp_read_eth_table(struct nfp_nsp *state, void *buf, unsigned int size);
 int nfp_nsp_write_eth_table(struct nfp_nsp *state, const void *buf,
 			    unsigned int size);
@@ -261,10 +254,6 @@ struct nfp_nsp *nfp_eth_config_start(struct nfp_cpp *cpp, unsigned int idx);
 int nfp_eth_config_commit_end(struct nfp_nsp *nsp);
 void nfp_eth_config_cleanup_end(struct nfp_nsp *nsp);
 
-int __nfp_eth_set_aneg(struct nfp_nsp *nsp, enum nfp_eth_aneg mode);
-int __nfp_eth_set_speed(struct nfp_nsp *nsp, unsigned int speed);
-int __nfp_eth_set_split(struct nfp_nsp *nsp, unsigned int lanes);
-
 /**
  * struct nfp_nsp_identify - NSP static information
  * @version:      opaque version string
@@ -289,8 +278,6 @@ struct nfp_nsp_identify {
 	uint64_t sensor_mask;
 };
 
-struct nfp_nsp_identify *__nfp_nsp_identify(struct nfp_nsp *nsp);
-
 enum nfp_nsp_sensor_id {
 	NFP_SENSOR_CHIP_TEMPERATURE,
 	NFP_SENSOR_ASSEMBLY_POWER,
@@ -298,7 +285,4 @@ enum nfp_nsp_sensor_id {
 	NFP_SENSOR_ASSEMBLY_3V3_POWER,
 };
 
-int nfp_hwmon_read_sensor(struct nfp_cpp *cpp, enum nfp_nsp_sensor_id id,
-			  long *val);
-
 #endif
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
index bfd1eddb3e..276e14bbeb 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
@@ -22,88 +22,9 @@ struct nsp_identify {
 	uint64_t sensor_mask;
 };
 
-struct nfp_nsp_identify *
-__nfp_nsp_identify(struct nfp_nsp *nsp)
-{
-	struct nfp_nsp_identify *nspi = NULL;
-	struct nsp_identify *ni;
-	int ret;
-
-	if (nfp_nsp_get_abi_ver_minor(nsp) < 15)
-		return NULL;
-
-	ni = malloc(sizeof(*ni));
-	if (!ni)
-		return NULL;
-
-	memset(ni, 0, sizeof(*ni));
-	ret = nfp_nsp_read_identify(nsp, ni, sizeof(*ni));
-	if (ret < 0) {
-		printf("reading bsp version failed %d\n",
-			ret);
-		goto exit_free;
-	}
-
-	nspi = malloc(sizeof(*nspi));
-	if (!nspi)
-		goto exit_free;
-
-	memset(nspi, 0, sizeof(*nspi));
-	memcpy(nspi->version, ni->version, sizeof(nspi->version));
-	nspi->version[sizeof(nspi->version) - 1] = '\0';
-	nspi->flags = ni->flags;
-	nspi->br_primary = ni->br_primary;
-	nspi->br_secondary = ni->br_secondary;
-	nspi->br_nsp = ni->br_nsp;
-	nspi->primary = rte_le_to_cpu_16(ni->primary);
-	nspi->secondary = rte_le_to_cpu_16(ni->secondary);
-	nspi->nsp = rte_le_to_cpu_16(ni->nsp);
-	nspi->sensor_mask = rte_le_to_cpu_64(ni->sensor_mask);
-
-exit_free:
-	free(ni);
-	return nspi;
-}
-
 struct nfp_sensors {
 	uint32_t chip_temp;
 	uint32_t assembly_power;
 	uint32_t assembly_12v_power;
 	uint32_t assembly_3v3_power;
 };
-
-int
-nfp_hwmon_read_sensor(struct nfp_cpp *cpp, enum nfp_nsp_sensor_id id, long *val)
-{
-	struct nfp_sensors s;
-	struct nfp_nsp *nsp;
-	int ret;
-
-	nsp = nfp_nsp_open(cpp);
-	if (!nsp)
-		return -EIO;
-
-	ret = nfp_nsp_read_sensors(nsp, BIT(id), &s, sizeof(s));
-	nfp_nsp_close(nsp);
-
-	if (ret < 0)
-		return ret;
-
-	switch (id) {
-	case NFP_SENSOR_CHIP_TEMPERATURE:
-		*val = rte_le_to_cpu_32(s.chip_temp);
-		break;
-	case NFP_SENSOR_ASSEMBLY_POWER:
-		*val = rte_le_to_cpu_32(s.assembly_power);
-		break;
-	case NFP_SENSOR_ASSEMBLY_12V_POWER:
-		*val = rte_le_to_cpu_32(s.assembly_12v_power);
-		break;
-	case NFP_SENSOR_ASSEMBLY_3V3_POWER:
-		*val = rte_le_to_cpu_32(s.assembly_3v3_power);
-		break;
-	default:
-		return -EINVAL;
-	}
-	return 0;
-}
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
index 67946891ab..2d0fd1c5cc 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
@@ -145,18 +145,6 @@ nfp_eth_rate2speed(enum nfp_eth_rate rate)
 	return 0;
 }
 
-static unsigned int
-nfp_eth_speed2rate(unsigned int speed)
-{
-	int i;
-
-	for (i = 0; i < (int)ARRAY_SIZE(nsp_eth_rate_tbl); i++)
-		if (nsp_eth_rate_tbl[i].speed == speed)
-			return nsp_eth_rate_tbl[i].rate;
-
-	return RATE_INVALID;
-}
-
 static void
 nfp_eth_copy_mac_reverse(uint8_t *dst, const uint8_t *src)
 {
@@ -421,47 +409,6 @@ nfp_eth_config_commit_end(struct nfp_nsp *nsp)
 	return ret;
 }
 
-/*
- * nfp_eth_set_mod_enable() - set PHY module enable control bit
- * @cpp:	NFP CPP handle
- * @idx:	NFP chip-wide port index
- * @enable:	Desired state
- *
- * Enable or disable PHY module (this usually means setting the TX lanes
- * disable bits).
- *
- * Return:
- * 0 - configuration successful;
- * 1 - no changes were needed;
- * -ERRNO - configuration failed.
- */
-int
-nfp_eth_set_mod_enable(struct nfp_cpp *cpp, unsigned int idx, int enable)
-{
-	union eth_table_entry *entries;
-	struct nfp_nsp *nsp;
-	uint64_t reg;
-
-	nsp = nfp_eth_config_start(cpp, idx);
-	if (!nsp)
-		return -1;
-
-	entries = nfp_nsp_config_entries(nsp);
-
-	/* Check if we are already in requested state */
-	reg = rte_le_to_cpu_64(entries[idx].state);
-	if (enable != (int)FIELD_GET(NSP_ETH_CTRL_ENABLED, reg)) {
-		reg = rte_le_to_cpu_64(entries[idx].control);
-		reg &= ~NSP_ETH_CTRL_ENABLED;
-		reg |= FIELD_PREP(NSP_ETH_CTRL_ENABLED, enable);
-		entries[idx].control = rte_cpu_to_le_64(reg);
-
-		nfp_nsp_config_set_modified(nsp, 1);
-	}
-
-	return nfp_eth_config_commit_end(nsp);
-}
-
 /*
  * nfp_eth_set_configured() - set PHY module configured control bit
  * @cpp:	NFP CPP handle
@@ -510,156 +457,3 @@ nfp_eth_set_configured(struct nfp_cpp *cpp, unsigned int idx, int configed)
 
 	return nfp_eth_config_commit_end(nsp);
 }
-
-static int
-nfp_eth_set_bit_config(struct nfp_nsp *nsp, unsigned int raw_idx,
-		       const uint64_t mask, const unsigned int shift,
-		       unsigned int val, const uint64_t ctrl_bit)
-{
-	union eth_table_entry *entries = nfp_nsp_config_entries(nsp);
-	unsigned int idx = nfp_nsp_config_idx(nsp);
-	uint64_t reg;
-
-	/*
-	 * Note: set features were added in ABI 0.14 but the error
-	 *	 codes were initially not populated correctly.
-	 */
-	if (nfp_nsp_get_abi_ver_minor(nsp) < 17) {
-		printf("set operations not supported, please update flash\n");
-		return -EOPNOTSUPP;
-	}
-
-	/* Check if we are already in requested state */
-	reg = rte_le_to_cpu_64(entries[idx].raw[raw_idx]);
-	if (val == (reg & mask) >> shift)
-		return 0;
-
-	reg &= ~mask;
-	reg |= (val << shift) & mask;
-	entries[idx].raw[raw_idx] = rte_cpu_to_le_64(reg);
-
-	entries[idx].control |= rte_cpu_to_le_64(ctrl_bit);
-
-	nfp_nsp_config_set_modified(nsp, 1);
-
-	return 0;
-}
-
-#define NFP_ETH_SET_BIT_CONFIG(nsp, raw_idx, mask, val, ctrl_bit)	\
-	(__extension__ ({ \
-		typeof(mask) _x = (mask); \
-		nfp_eth_set_bit_config(nsp, raw_idx, _x, __bf_shf(_x), \
-				       val, ctrl_bit);			\
-	}))
-
-/*
- * __nfp_eth_set_aneg() - set PHY autonegotiation control bit
- * @nsp:	NFP NSP handle returned from nfp_eth_config_start()
- * @mode:	Desired autonegotiation mode
- *
- * Allow/disallow PHY module to advertise/perform autonegotiation.
- * Will write to hwinfo overrides in the flash (persistent config).
- *
- * Return: 0 or -ERRNO.
- */
-int
-__nfp_eth_set_aneg(struct nfp_nsp *nsp, enum nfp_eth_aneg mode)
-{
-	return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_STATE,
-				      NSP_ETH_STATE_ANEG, mode,
-				      NSP_ETH_CTRL_SET_ANEG);
-}
-
-/*
- * __nfp_eth_set_fec() - set PHY forward error correction control bit
- * @nsp:	NFP NSP handle returned from nfp_eth_config_start()
- * @mode:	Desired fec mode
- *
- * Set the PHY module forward error correction mode.
- * Will write to hwinfo overrides in the flash (persistent config).
- *
- * Return: 0 or -ERRNO.
- */
-static int
-__nfp_eth_set_fec(struct nfp_nsp *nsp, enum nfp_eth_fec mode)
-{
-	return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_STATE,
-				      NSP_ETH_STATE_FEC, mode,
-				      NSP_ETH_CTRL_SET_FEC);
-}
-
-/*
- * nfp_eth_set_fec() - set PHY forward error correction control mode
- * @cpp:	NFP CPP handle
- * @idx:	NFP chip-wide port index
- * @mode:	Desired fec mode
- *
- * Return:
- * 0 - configuration successful;
- * 1 - no changes were needed;
- * -ERRNO - configuration failed.
- */
-int
-nfp_eth_set_fec(struct nfp_cpp *cpp, unsigned int idx, enum nfp_eth_fec mode)
-{
-	struct nfp_nsp *nsp;
-	int err;
-
-	nsp = nfp_eth_config_start(cpp, idx);
-	if (!nsp)
-		return -EIO;
-
-	err = __nfp_eth_set_fec(nsp, mode);
-	if (err) {
-		nfp_eth_config_cleanup_end(nsp);
-		return err;
-	}
-
-	return nfp_eth_config_commit_end(nsp);
-}
-
-/*
- * __nfp_eth_set_speed() - set interface speed/rate
- * @nsp:	NFP NSP handle returned from nfp_eth_config_start()
- * @speed:	Desired speed (per lane)
- *
- * Set lane speed.  Provided @speed value should be subport speed divided
- * by number of lanes this subport is spanning (i.e. 10000 for 40G, 25000 for
- * 50G, etc.)
- * Will write to hwinfo overrides in the flash (persistent config).
- *
- * Return: 0 or -ERRNO.
- */
-int
-__nfp_eth_set_speed(struct nfp_nsp *nsp, unsigned int speed)
-{
-	enum nfp_eth_rate rate;
-
-	rate = nfp_eth_speed2rate(speed);
-	if (rate == RATE_INVALID) {
-		printf("could not find matching lane rate for speed %u\n",
-			 speed);
-		return -EINVAL;
-	}
-
-	return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_STATE,
-				      NSP_ETH_STATE_RATE, rate,
-				      NSP_ETH_CTRL_SET_RATE);
-}
-
-/*
- * __nfp_eth_set_split() - set interface lane split
- * @nsp:	NFP NSP handle returned from nfp_eth_config_start()
- * @lanes:	Desired lanes per port
- *
- * Set number of lanes in the port.
- * Will write to hwinfo overrides in the flash (persistent config).
- *
- * Return: 0 or -ERRNO.
- */
-int
-__nfp_eth_set_split(struct nfp_nsp *nsp, unsigned int lanes)
-{
-	return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_PORT, NSP_ETH_PORT_LANES,
-				      lanes, NSP_ETH_CTRL_SET_LANES);
-}
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.c b/drivers/net/nfp/nfpcore/nfp_resource.c
index dd41fa4de4..2a07a8e411 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.c
+++ b/drivers/net/nfp/nfpcore/nfp_resource.c
@@ -229,18 +229,6 @@ nfp_resource_cpp_id(const struct nfp_resource *res)
 	return res->cpp_id;
 }
 
-/*
- * nfp_resource_name() - Return the name of a resource handle
- * @res:        NFP Resource handle
- *
- * Return: const char pointer to the name of the resource
- */
-const char
-*nfp_resource_name(const struct nfp_resource *res)
-{
-	return res->name;
-}
-
 /*
  * nfp_resource_address() - Return the address of a resource handle
  * @res:        NFP Resource handle
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.h b/drivers/net/nfp/nfpcore/nfp_resource.h
index 06cc6f74f4..d846402aac 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.h
+++ b/drivers/net/nfp/nfpcore/nfp_resource.h
@@ -33,13 +33,6 @@ void nfp_resource_release(struct nfp_resource *res);
  */
 uint32_t nfp_resource_cpp_id(const struct nfp_resource *res);
 
-/**
- * Return the name of a NFP Resource
- * @param[in]   res     NFP Resource handle
- * @return      Name of the NFP Resource
- */
-const char *nfp_resource_name(const struct nfp_resource *res);
-
 /**
  * Return the target address of a NFP Resource
  * @param[in]   res     NFP Resource handle
diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.c b/drivers/net/nfp/nfpcore/nfp_rtsym.c
index cb7d83db51..b02063f3b9 100644
--- a/drivers/net/nfp/nfpcore/nfp_rtsym.c
+++ b/drivers/net/nfp/nfpcore/nfp_rtsym.c
@@ -176,40 +176,6 @@ __nfp_rtsym_table_read(struct nfp_cpp *cpp, const struct nfp_mip *mip)
 	return NULL;
 }
 
-/*
- * nfp_rtsym_count() - Get the number of RTSYM descriptors
- * @rtbl:	NFP RTsym table
- *
- * Return: Number of RTSYM descriptors
- */
-int
-nfp_rtsym_count(struct nfp_rtsym_table *rtbl)
-{
-	if (!rtbl)
-		return -EINVAL;
-
-	return rtbl->num;
-}
-
-/*
- * nfp_rtsym_get() - Get the Nth RTSYM descriptor
- * @rtbl:	NFP RTsym table
- * @idx:	Index (0-based) of the RTSYM descriptor
- *
- * Return: const pointer to a struct nfp_rtsym descriptor, or NULL
- */
-const struct nfp_rtsym *
-nfp_rtsym_get(struct nfp_rtsym_table *rtbl, int idx)
-{
-	if (!rtbl)
-		return NULL;
-
-	if (idx >= rtbl->num)
-		return NULL;
-
-	return &rtbl->symtab[idx];
-}
-
 /*
  * nfp_rtsym_lookup() - Return the RTSYM descriptor for a symbol name
  * @rtbl:	NFP RTsym table
diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.h b/drivers/net/nfp/nfpcore/nfp_rtsym.h
index 8b494211bc..c63bc05fff 100644
--- a/drivers/net/nfp/nfpcore/nfp_rtsym.h
+++ b/drivers/net/nfp/nfpcore/nfp_rtsym.h
@@ -46,10 +46,6 @@ struct nfp_rtsym_table *nfp_rtsym_table_read(struct nfp_cpp *cpp);
 struct nfp_rtsym_table *
 __nfp_rtsym_table_read(struct nfp_cpp *cpp, const struct nfp_mip *mip);
 
-int nfp_rtsym_count(struct nfp_rtsym_table *rtbl);
-
-const struct nfp_rtsym *nfp_rtsym_get(struct nfp_rtsym_table *rtbl, int idx);
-
 const struct nfp_rtsym *
 nfp_rtsym_lookup(struct nfp_rtsym_table *rtbl, const char *name);
 
diff --git a/drivers/net/octeontx/base/octeontx_bgx.c b/drivers/net/octeontx/base/octeontx_bgx.c
index ac856ff86d..59249dcced 100644
--- a/drivers/net/octeontx/base/octeontx_bgx.c
+++ b/drivers/net/octeontx/base/octeontx_bgx.c
@@ -90,60 +90,6 @@ octeontx_bgx_port_stop(int port)
 	return res;
 }
 
-int
-octeontx_bgx_port_get_config(int port, octeontx_mbox_bgx_port_conf_t *conf)
-{
-	struct octeontx_mbox_hdr hdr;
-	octeontx_mbox_bgx_port_conf_t bgx_conf;
-	int len = sizeof(octeontx_mbox_bgx_port_conf_t);
-	int res;
-
-	hdr.coproc = OCTEONTX_BGX_COPROC;
-	hdr.msg = MBOX_BGX_PORT_GET_CONFIG;
-	hdr.vfid = port;
-
-	memset(&bgx_conf, 0, sizeof(octeontx_mbox_bgx_port_conf_t));
-	res = octeontx_mbox_send(&hdr, NULL, 0, &bgx_conf, len);
-	if (res < 0)
-		return -EACCES;
-
-	conf->enable = bgx_conf.enable;
-	conf->promisc = bgx_conf.promisc;
-	conf->bpen = bgx_conf.bpen;
-	conf->node = bgx_conf.node;
-	conf->base_chan = bgx_conf.base_chan;
-	conf->num_chans = bgx_conf.num_chans;
-	conf->mtu = bgx_conf.mtu;
-	conf->bgx = bgx_conf.bgx;
-	conf->lmac = bgx_conf.lmac;
-	conf->mode = bgx_conf.mode;
-	conf->pkind = bgx_conf.pkind;
-	memcpy(conf->macaddr, bgx_conf.macaddr, 6);
-
-	return res;
-}
-
-int
-octeontx_bgx_port_status(int port, octeontx_mbox_bgx_port_status_t *stat)
-{
-	struct octeontx_mbox_hdr hdr;
-	octeontx_mbox_bgx_port_status_t bgx_stat;
-	int len = sizeof(octeontx_mbox_bgx_port_status_t);
-	int res;
-
-	hdr.coproc = OCTEONTX_BGX_COPROC;
-	hdr.msg = MBOX_BGX_PORT_GET_STATUS;
-	hdr.vfid = port;
-
-	res = octeontx_mbox_send(&hdr, NULL, 0, &bgx_stat, len);
-	if (res < 0)
-		return -EACCES;
-
-	stat->link_up = bgx_stat.link_up;
-
-	return res;
-}
-
 int
 octeontx_bgx_port_stats(int port, octeontx_mbox_bgx_port_stats_t *stats)
 {
diff --git a/drivers/net/octeontx/base/octeontx_bgx.h b/drivers/net/octeontx/base/octeontx_bgx.h
index d126a0b7fc..fc61168b62 100644
--- a/drivers/net/octeontx/base/octeontx_bgx.h
+++ b/drivers/net/octeontx/base/octeontx_bgx.h
@@ -147,8 +147,6 @@ int octeontx_bgx_port_open(int port, octeontx_mbox_bgx_port_conf_t *conf);
 int octeontx_bgx_port_close(int port);
 int octeontx_bgx_port_start(int port);
 int octeontx_bgx_port_stop(int port);
-int octeontx_bgx_port_get_config(int port, octeontx_mbox_bgx_port_conf_t *conf);
-int octeontx_bgx_port_status(int port, octeontx_mbox_bgx_port_status_t *stat);
 int octeontx_bgx_port_stats(int port, octeontx_mbox_bgx_port_stats_t *stats);
 int octeontx_bgx_port_stats_clr(int port);
 int octeontx_bgx_port_link_status(int port);
diff --git a/drivers/net/octeontx/base/octeontx_pkivf.c b/drivers/net/octeontx/base/octeontx_pkivf.c
index 0ddff54886..30528c269e 100644
--- a/drivers/net/octeontx/base/octeontx_pkivf.c
+++ b/drivers/net/octeontx/base/octeontx_pkivf.c
@@ -114,28 +114,6 @@ octeontx_pki_port_create_qos(int port, pki_qos_cfg_t *qos_cfg)
 	return res;
 }
 
-
-int
-octeontx_pki_port_errchk_config(int port, pki_errchk_cfg_t *cfg)
-{
-	struct octeontx_mbox_hdr hdr;
-	int res;
-
-	pki_errchk_cfg_t e_cfg;
-	e_cfg = *((pki_errchk_cfg_t *)(cfg));
-	int len = sizeof(pki_errchk_cfg_t);
-
-	hdr.coproc = OCTEONTX_PKI_COPROC;
-	hdr.msg = MBOX_PKI_PORT_ERRCHK_CONFIG;
-	hdr.vfid = port;
-
-	res = octeontx_mbox_send(&hdr, &e_cfg, len, NULL, 0);
-	if (res < 0)
-		return -EACCES;
-
-	return res;
-}
-
 int
 octeontx_pki_port_vlan_fltr_config(int port,
 				   pki_port_vlan_filter_config_t *fltr_cfg)
diff --git a/drivers/net/octeontx/base/octeontx_pkivf.h b/drivers/net/octeontx/base/octeontx_pkivf.h
index d41eaa57ed..06c409225f 100644
--- a/drivers/net/octeontx/base/octeontx_pkivf.h
+++ b/drivers/net/octeontx/base/octeontx_pkivf.h
@@ -363,7 +363,6 @@ int octeontx_pki_port_hash_config(int port, pki_hash_cfg_t *hash_cfg);
 int octeontx_pki_port_pktbuf_config(int port, pki_pktbuf_cfg_t *buf_cfg);
 int octeontx_pki_port_create_qos(int port, pki_qos_cfg_t *qos_cfg);
 int octeontx_pki_port_close(int port);
-int octeontx_pki_port_errchk_config(int port, pki_errchk_cfg_t *cfg);
 int octeontx_pki_port_vlan_fltr_config(int port,
 				pki_port_vlan_filter_config_t *fltr_cfg);
 int octeontx_pki_port_vlan_fltr_entry_config(int port,
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6cebbe677d..b8f9eb188f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -160,32 +160,6 @@ nix_lf_free(struct otx2_eth_dev *dev)
 	return otx2_mbox_process(mbox);
 }
 
-int
-otx2_cgx_rxtx_start(struct otx2_eth_dev *dev)
-{
-	struct otx2_mbox *mbox = dev->mbox;
-
-	if (otx2_dev_is_vf_or_sdp(dev))
-		return 0;
-
-	otx2_mbox_alloc_msg_cgx_start_rxtx(mbox);
-
-	return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
-{
-	struct otx2_mbox *mbox = dev->mbox;
-
-	if (otx2_dev_is_vf_or_sdp(dev))
-		return 0;
-
-	otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox);
-
-	return otx2_mbox_process(mbox);
-}
-
 static int
 npc_rx_enable(struct otx2_eth_dev *dev)
 {
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 3b9871f4dc..f0ed59d89a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -471,7 +471,6 @@ int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
 int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
 			 struct rte_dev_reg_info *regs);
 int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
-void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
 void otx2_nix_tm_dump(struct otx2_eth_dev *dev);
 
 /* Stats */
@@ -521,8 +520,6 @@ int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 			       struct rte_eth_rss_conf *rss_conf);
 
 /* CGX */
-int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
-int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
 int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
 			  struct rte_ether_addr *addr);
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
index 6d951bc7e2..dab0b8e3cd 100644
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -480,61 +480,6 @@ otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
 	return rc;
 }
 
-/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */
-void
-otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
-{
-	const struct nix_rx_parse_s *rx =
-		 (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
-
-	nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
-		 cq->tag, cq->q, cq->node, cq->cqe_type);
-
-	nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d",
-		 rx->chan, rx->desc_sizem1);
-	nix_dump("W0: imm_copy \t%d\t\texpress \t%d",
-		 rx->imm_copy, rx->express);
-	nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d",
-		 rx->wqwd, rx->errlev, rx->errcode);
-	nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
-		 rx->latype, rx->lbtype, rx->lctype);
-	nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
-		 rx->ldtype, rx->letype, rx->lftype);
-	nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d",
-		 rx->lgtype, rx->lhtype);
-
-	nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
-	nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
-		 rx->l2m, rx->l2b, rx->l3m, rx->l3b);
-	nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d",
-		 rx->vtag0_valid, rx->vtag0_gone);
-	nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d",
-		 rx->vtag1_valid, rx->vtag1_gone);
-	nix_dump("W1: pkind \t%d", rx->pkind);
-	nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d",
-		 rx->vtag0_tci, rx->vtag1_tci);
-
-	nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
-		 rx->laflags, rx->lbflags, rx->lcflags);
-	nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
-		 rx->ldflags, rx->leflags, rx->lfflags);
-	nix_dump("W2: lgflags \t%d\t\tlhflags \t%d",
-		 rx->lgflags, rx->lhflags);
-
-	nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
-		 rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
-	nix_dump("W3: match_id \t%d", rx->match_id);
-
-	nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d",
-		 rx->laptr, rx->lbptr, rx->lcptr);
-	nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d",
-		 rx->ldptr, rx->leptr, rx->lfptr);
-	nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
-
-	nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
-		 rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
-}
-
 static uint8_t
 prepare_nix_tm_reg_dump(uint16_t hw_lvl, uint16_t schq, uint16_t link,
 			uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index 30a823c8a7..e390629b2f 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -360,8 +360,6 @@ int otx2_flow_parse_item_basic(const struct rte_flow_item *item,
 			       struct otx2_flow_item_info *info,
 			       struct rte_flow_error *error);
 
-void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask);
-
 int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow,
 				   struct otx2_mbox *mbox,
 				   struct otx2_parse_state *pst,
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
index 9a0a5f9fb4..79541c86c0 100644
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -432,24 +432,6 @@ otx2_flow_parse_item_basic(const struct rte_flow_item *item,
 	return 0;
 }
 
-void
-otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
-{
-	uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
-	int i, j = 0;
-
-	for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) {
-		if (nibble_mask & (1 << i)) {
-			nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
-			cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
-			j += 1;
-		}
-	}
-
-	data[0] = cdata[0];
-	data[1] = cdata[1];
-}
-
 static int
 flow_first_set_bit(uint64_t slab)
 {
diff --git a/drivers/net/pfe/base/pfe.h b/drivers/net/pfe/base/pfe.h
index 0a88e98c1b..884694985d 100644
--- a/drivers/net/pfe/base/pfe.h
+++ b/drivers/net/pfe/base/pfe.h
@@ -312,20 +312,16 @@ enum mac_loop {LB_NONE, LB_EXT, LB_LOCAL};
 #endif
 
 void gemac_init(void *base, void *config);
-void gemac_disable_rx_checksum_offload(void *base);
 void gemac_enable_rx_checksum_offload(void *base);
 void gemac_set_mdc_div(void *base, int mdc_div);
 void gemac_set_speed(void *base, enum mac_speed gem_speed);
 void gemac_set_duplex(void *base, int duplex);
 void gemac_set_mode(void *base, int mode);
 void gemac_enable(void *base);
-void gemac_tx_disable(void *base);
-void gemac_tx_enable(void *base);
 void gemac_disable(void *base);
 void gemac_reset(void *base);
 void gemac_set_address(void *base, struct spec_addr *addr);
 struct spec_addr gemac_get_address(void *base);
-void gemac_set_loop(void *base, enum mac_loop gem_loop);
 void gemac_set_laddr1(void *base, struct pfe_mac_addr *address);
 void gemac_set_laddr2(void *base, struct pfe_mac_addr *address);
 void gemac_set_laddr3(void *base, struct pfe_mac_addr *address);
@@ -336,7 +332,6 @@ void gemac_clear_laddr1(void *base);
 void gemac_clear_laddr2(void *base);
 void gemac_clear_laddr3(void *base);
 void gemac_clear_laddr4(void *base);
-void gemac_clear_laddrN(void *base, unsigned int entry_index);
 struct pfe_mac_addr gemac_get_hash(void *base);
 void gemac_set_hash(void *base, struct pfe_mac_addr *hash);
 struct pfe_mac_addr gem_get_laddr1(void *base);
@@ -346,24 +341,17 @@ struct pfe_mac_addr gem_get_laddr4(void *base);
 struct pfe_mac_addr gem_get_laddrN(void *base, unsigned int entry_index);
 void gemac_set_config(void *base, struct gemac_cfg *cfg);
 void gemac_allow_broadcast(void *base);
-void gemac_no_broadcast(void *base);
 void gemac_enable_1536_rx(void *base);
 void gemac_disable_1536_rx(void *base);
 int gemac_set_rx(void *base, int mtu);
-void gemac_enable_rx_jmb(void *base);
 void gemac_disable_rx_jmb(void *base);
 void gemac_enable_stacked_vlan(void *base);
 void gemac_disable_stacked_vlan(void *base);
 void gemac_enable_pause_rx(void *base);
-void gemac_disable_pause_rx(void *base);
-void gemac_enable_pause_tx(void *base);
-void gemac_disable_pause_tx(void *base);
 void gemac_enable_copy_all(void *base);
 void gemac_disable_copy_all(void *base);
 void gemac_set_bus_width(void *base, int width);
-void gemac_set_wol(void *base, u32 wol_conf);
 
-void gpi_init(void *base, struct gpi_cfg *cfg);
 void gpi_reset(void *base);
 void gpi_enable(void *base);
 void gpi_disable(void *base);
diff --git a/drivers/net/pfe/pfe_hal.c b/drivers/net/pfe/pfe_hal.c
index 0d25ec0523..303308c35b 100644
--- a/drivers/net/pfe/pfe_hal.c
+++ b/drivers/net/pfe/pfe_hal.c
@@ -118,16 +118,6 @@ gemac_enable_rx_checksum_offload(__rte_unused void *base)
 	/*Do not find configuration to do this */
 }
 
-/* Disable Rx Checksum Engine.
- *
- * @param[in] base	GEMAC base address.
- */
-void
-gemac_disable_rx_checksum_offload(__rte_unused void *base)
-{
-	/*Do not find configuration to do this */
-}
-
 /* GEMAC set speed.
  * @param[in] base	GEMAC base address
  * @param[in] speed	GEMAC speed (10, 100 or 1000 Mbps)
@@ -214,23 +204,6 @@ gemac_disable(void *base)
 		EMAC_ECNTRL_REG);
 }
 
-/* GEMAC TX disable function.
- * @param[in] base	GEMAC base address
- */
-void
-gemac_tx_disable(void *base)
-{
-	writel(readl(base + EMAC_TCNTRL_REG) | EMAC_TCNTRL_GTS, base +
-		EMAC_TCNTRL_REG);
-}
-
-void
-gemac_tx_enable(void *base)
-{
-	writel(readl(base + EMAC_TCNTRL_REG) & ~EMAC_TCNTRL_GTS, base +
-			EMAC_TCNTRL_REG);
-}
-
 /* Sets the hash register of the MAC.
  * This register is used for matching unicast and multicast frames.
  *
@@ -264,40 +237,6 @@ gemac_set_laddrN(void *base, struct pfe_mac_addr *address,
 	}
 }
 
-void
-gemac_clear_laddrN(void *base, unsigned int entry_index)
-{
-	if (entry_index < 1 || entry_index > EMAC_SPEC_ADDR_MAX)
-		return;
-
-	entry_index = entry_index - 1;
-	if (entry_index < 1) {
-		writel(0, base + EMAC_PHY_ADDR_LOW);
-		writel(0, base + EMAC_PHY_ADDR_HIGH);
-	} else {
-		writel(0,  base + ((entry_index - 1) * 8) + EMAC_SMAC_0_0);
-		writel(0, base + ((entry_index - 1) * 8) + EMAC_SMAC_0_1);
-	}
-}
-
-/* Set the loopback mode of the MAC.  This can be either no loopback for
- * normal operation, local loopback through MAC internal loopback module or PHY
- *   loopback for external loopback through a PHY.  This asserts the external
- * loop pin.
- *
- * @param[in] base	GEMAC base address.
- * @param[in] gem_loop	Loopback mode to be enabled. LB_LOCAL - MAC
- * Loopback,
- *			LB_EXT - PHY Loopback.
- */
-void
-gemac_set_loop(void *base, __rte_unused enum mac_loop gem_loop)
-{
-	pr_info("%s()\n", __func__);
-	writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_LOOP, (base +
-		EMAC_RCNTRL_REG));
-}
-
 /* GEMAC allow frames
  * @param[in] base	GEMAC base address
  */
@@ -328,16 +267,6 @@ gemac_allow_broadcast(void *base)
 		EMAC_RCNTRL_REG);
 }
 
-/* GEMAC no broadcast function.
- * @param[in] base	GEMAC base address
- */
-void
-gemac_no_broadcast(void *base)
-{
-	writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_BC_REJ, base +
-		EMAC_RCNTRL_REG);
-}
-
 /* GEMAC enable 1536 rx function.
  * @param[in]	base	GEMAC base address
  */
@@ -373,21 +302,6 @@ gemac_set_rx(void *base, int mtu)
 	return 0;
 }
 
-/* GEMAC enable jumbo function.
- * @param[in]	base	GEMAC base address
- */
-void
-gemac_enable_rx_jmb(void *base)
-{
-	if (pfe_svr == SVR_LS1012A_REV1) {
-		PFE_PMD_ERR("Jumbo not supported on Rev1");
-		return;
-	}
-
-	writel((readl(base + EMAC_RCNTRL_REG) & PFE_MTU_RESET_MASK) |
-			(JUMBO_FRAME_SIZE << 16), base + EMAC_RCNTRL_REG);
-}
-
 /* GEMAC enable stacked vlan function.
  * @param[in]	base	GEMAC base address
  */
@@ -407,50 +321,6 @@ gemac_enable_pause_rx(void *base)
 	       base + EMAC_RCNTRL_REG);
 }
 
-/* GEMAC disable pause rx function.
- * @param[in] base	GEMAC base address
- */
-void
-gemac_disable_pause_rx(void *base)
-{
-	writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_FCE,
-	       base + EMAC_RCNTRL_REG);
-}
-
-/* GEMAC enable pause tx function.
- * @param[in] base GEMAC base address
- */
-void
-gemac_enable_pause_tx(void *base)
-{
-	writel(EMAC_RX_SECTION_EMPTY_V, base + EMAC_RX_SECTION_EMPTY);
-}
-
-/* GEMAC disable pause tx function.
- * @param[in] base GEMAC base address
- */
-void
-gemac_disable_pause_tx(void *base)
-{
-	writel(0x0, base + EMAC_RX_SECTION_EMPTY);
-}
-
-/* GEMAC wol configuration
- * @param[in] base	GEMAC base address
- * @param[in] wol_conf	WoL register configuration
- */
-void
-gemac_set_wol(void *base, u32 wol_conf)
-{
-	u32  val = readl(base + EMAC_ECNTRL_REG);
-
-	if (wol_conf)
-		val |= (EMAC_ECNTRL_MAGIC_ENA | EMAC_ECNTRL_SLEEP);
-	else
-		val &= ~(EMAC_ECNTRL_MAGIC_ENA | EMAC_ECNTRL_SLEEP);
-	writel(val, base + EMAC_ECNTRL_REG);
-}
-
 /* Sets Gemac bus width to 64bit
  * @param[in] base       GEMAC base address
  * @param[in] width     gemac bus width to be set possible values are 32/64/128
@@ -488,20 +358,6 @@ gemac_set_config(void *base, struct gemac_cfg *cfg)
 
 /**************************** GPI ***************************/
 
-/* Initializes a GPI block.
- * @param[in] base	GPI base address
- * @param[in] cfg	GPI configuration
- */
-void
-gpi_init(void *base, struct gpi_cfg *cfg)
-{
-	gpi_reset(base);
-
-	gpi_disable(base);
-
-	gpi_set_config(base, cfg);
-}
-
 /* Resets a GPI block.
  * @param[in] base	GPI base address
  */
diff --git a/drivers/net/pfe/pfe_hif_lib.c b/drivers/net/pfe/pfe_hif_lib.c
index 799050dce3..83edbd64fc 100644
--- a/drivers/net/pfe/pfe_hif_lib.c
+++ b/drivers/net/pfe/pfe_hif_lib.c
@@ -318,26 +318,6 @@ hif_lib_client_register(struct hif_client_s *client)
 	return err;
 }
 
-int
-hif_lib_client_unregister(struct hif_client_s *client)
-{
-	struct pfe *pfe = client->pfe;
-	u32 client_id = client->id;
-
-	PFE_PMD_INFO("client: %p, client_id: %d, txQ_depth: %d, rxQ_depth: %d",
-		     client, client->id, client->tx_qsize, client->rx_qsize);
-
-	rte_spinlock_lock(&pfe->hif.lock);
-	hif_lib_indicate_hif(&pfe->hif, REQUEST_CL_UNREGISTER, client->id, 0);
-
-	hif_lib_client_release_tx_buffers(client);
-	hif_lib_client_release_rx_buffers(client);
-	pfe->hif_client[client_id] = NULL;
-	rte_spinlock_unlock(&pfe->hif.lock);
-
-	return 0;
-}
-
 int
 hif_lib_event_handler_start(struct hif_client_s *client, int event,
 				int qno)
diff --git a/drivers/net/pfe/pfe_hif_lib.h b/drivers/net/pfe/pfe_hif_lib.h
index d7c0606943..c89c8fed74 100644
--- a/drivers/net/pfe/pfe_hif_lib.h
+++ b/drivers/net/pfe/pfe_hif_lib.h
@@ -161,7 +161,6 @@ extern unsigned int emac_txq_cnt;
 int pfe_hif_lib_init(struct pfe *pfe);
 void pfe_hif_lib_exit(struct pfe *pfe);
 int hif_lib_client_register(struct hif_client_s *client);
-int hif_lib_client_unregister(struct  hif_client_s *client);
 void hif_lib_xmit_pkt(struct hif_client_s *client, unsigned int qno,
 			void *data, void *data1, unsigned int len,
 			u32 client_ctrl, unsigned int flags, void *client_data);
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 6c8e6d4072..b86674fdff 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -1027,7 +1027,6 @@ void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
 
 int ecore_configure_pf_max_bandwidth(struct ecore_dev *p_dev, u8 max_bw);
 int ecore_configure_pf_min_bandwidth(struct ecore_dev *p_dev, u8 min_bw);
-void ecore_clean_wfq_db(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 int ecore_device_num_engines(struct ecore_dev *p_dev);
 int ecore_device_num_ports(struct ecore_dev *p_dev);
 void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
@@ -1055,7 +1054,6 @@ u16 ecore_get_qm_vport_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl);
 const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
 
 /* doorbell recovery mechanism */
-void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn);
 void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn,
 			       enum ecore_db_rec_exec);
 
@@ -1091,7 +1089,6 @@ enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
 
 /* Utility functions for dumping the content of the NIG LLH filters */
 enum _ecore_status_t ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid);
-enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev);
 
 /**
  * @brief ecore_set_platform_str - Set the debug dump platform string.
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index d3025724b6..2fe607d1fb 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -242,13 +242,6 @@ static struct ecore_tid_seg *ecore_cxt_tid_seg_info(struct ecore_hwfn *p_hwfn,
 	return OSAL_NULL;
 }
 
-static void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
-{
-	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
-
-	p_mgr->srq_count = num_srqs;
-}
-
 u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
@@ -283,31 +276,6 @@ u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
 	return p_hwfn->p_cxt_mngr->acquired[type].start_cid;
 }
 
-u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn,
-					 enum protocol_type type)
-{
-	u32 cnt = 0;
-	int i;
-
-	for (i = 0; i < TASK_SEGMENTS; i++)
-		cnt += p_hwfn->p_cxt_mngr->conn_cfg[type].tid_seg[i].count;
-
-	return cnt;
-}
-
-static OSAL_INLINE void
-ecore_cxt_set_proto_tid_count(struct ecore_hwfn *p_hwfn,
-			      enum protocol_type proto,
-			      u8 seg, u8 seg_type, u32 count, bool has_fl)
-{
-	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	struct ecore_tid_seg *p_seg = &p_mngr->conn_cfg[proto].tid_seg[seg];
-
-	p_seg->count = count;
-	p_seg->has_fl_mem = has_fl;
-	p_seg->type = seg_type;
-}
-
 /* the *p_line parameter must be either 0 for the first invocation or the
  * value returned in the previous invocation.
  */
@@ -1905,11 +1873,6 @@ void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid, u8 vfid)
 		   cid, rel_cid, vfid, type);
 }
 
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
-{
-	_ecore_cxt_release_cid(p_hwfn, cid, ECORE_CXT_PF_CID);
-}
-
 enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 					    struct ecore_cxt_info *p_info)
 {
@@ -1987,198 +1950,6 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 	return ECORE_SUCCESS;
 }
 
-/* This function is very RoCE oriented, if another protocol in the future
- * will want this feature we'll need to modify the function to be more generic
- */
-enum _ecore_status_t
-ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
-			    enum ecore_cxt_elem_type elem_type,
-			    u32 iid)
-{
-	u32 reg_offset, shadow_line, elem_size, hw_p_size, elems_per_p, line;
-	struct ecore_ilt_client_cfg *p_cli;
-	struct ecore_ilt_cli_blk *p_blk;
-	struct ecore_ptt *p_ptt;
-	dma_addr_t p_phys;
-	u64 ilt_hw_entry;
-	void *p_virt;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	switch (elem_type) {
-	case ECORE_ELEM_CXT:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
-		elem_size = CONN_CXT_SIZE(p_hwfn);
-		p_blk = &p_cli->pf_blks[CDUC_BLK];
-		break;
-	case ECORE_ELEM_SRQ:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM];
-		elem_size = SRQ_CXT_SIZE;
-		p_blk = &p_cli->pf_blks[SRQ_BLK];
-		break;
-	case ECORE_ELEM_TASK:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
-		elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
-		p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)];
-		break;
-	default:
-		DP_NOTICE(p_hwfn, false,
-			  "ECORE_INVALID elem type = %d", elem_type);
-		return ECORE_INVAL;
-	}
-
-	/* Calculate line in ilt */
-	hw_p_size = p_cli->p_size.val;
-	elems_per_p = ILT_PAGE_IN_BYTES(hw_p_size) / elem_size;
-	line = p_blk->start_line + (iid / elems_per_p);
-	shadow_line = line - p_hwfn->p_cxt_mngr->pf_start_line;
-
-	/* If line is already allocated, do nothing, otherwise allocate it and
-	 * write it to the PSWRQ2 registers.
-	 * This section can be run in parallel from different contexts and thus
-	 * a mutex protection is needed.
-	 */
-
-	OSAL_MUTEX_ACQUIRE(&p_hwfn->p_cxt_mngr->mutex);
-
-	if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr)
-		goto out0;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt) {
-		DP_NOTICE(p_hwfn, false,
-			  "ECORE_TIME_OUT on ptt acquire - dynamic allocation");
-		rc = ECORE_TIMEOUT;
-		goto out0;
-	}
-
-	p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
-					 &p_phys,
-					 p_blk->real_size_in_page);
-	if (!p_virt) {
-		rc = ECORE_NOMEM;
-		goto out1;
-	}
-	OSAL_MEM_ZERO(p_virt, p_blk->real_size_in_page);
-
-	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr = p_virt;
-	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr = p_phys;
-	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].size =
-		p_blk->real_size_in_page;
-
-	/* compute absolute offset */
-	reg_offset = PSWRQ2_REG_ILT_MEMORY +
-		     (line * ILT_REG_SIZE_IN_BYTES * ILT_ENTRY_IN_REGS);
-
-	ilt_hw_entry = 0;
-	SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL);
-	SET_FIELD(ilt_hw_entry,
-		  ILT_ENTRY_PHY_ADDR,
-		 (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr >> 12));
-
-/* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */
-
-	ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&ilt_hw_entry,
-			    reg_offset, sizeof(ilt_hw_entry) / sizeof(u32),
-			    OSAL_NULL /* default parameters */);
-
-out1:
-	ecore_ptt_release(p_hwfn, p_ptt);
-out0:
-	OSAL_MUTEX_RELEASE(&p_hwfn->p_cxt_mngr->mutex);
-
-	return rc;
-}
-
-/* This function is very RoCE oriented, if another protocol in the future
- * will want this feature we'll need to modify the function to be more generic
- */
-static enum _ecore_status_t
-ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
-			 enum ecore_cxt_elem_type elem_type,
-			 u32 start_iid, u32 count)
-{
-	u32 start_line, end_line, shadow_start_line, shadow_end_line;
-	u32 reg_offset, elem_size, hw_p_size, elems_per_p;
-	struct ecore_ilt_client_cfg *p_cli;
-	struct ecore_ilt_cli_blk *p_blk;
-	u32 end_iid = start_iid + count;
-	struct ecore_ptt *p_ptt;
-	u64 ilt_hw_entry = 0;
-	u32 i;
-
-	switch (elem_type) {
-	case ECORE_ELEM_CXT:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
-		elem_size = CONN_CXT_SIZE(p_hwfn);
-		p_blk = &p_cli->pf_blks[CDUC_BLK];
-		break;
-	case ECORE_ELEM_SRQ:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM];
-		elem_size = SRQ_CXT_SIZE;
-		p_blk = &p_cli->pf_blks[SRQ_BLK];
-		break;
-	case ECORE_ELEM_TASK:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
-		elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
-		p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)];
-		break;
-	default:
-		DP_NOTICE(p_hwfn, false,
-			  "ECORE_INVALID elem type = %d", elem_type);
-		return ECORE_INVAL;
-	}
-
-	/* Calculate line in ilt */
-	hw_p_size = p_cli->p_size.val;
-	elems_per_p = ILT_PAGE_IN_BYTES(hw_p_size) / elem_size;
-	start_line = p_blk->start_line + (start_iid / elems_per_p);
-	end_line = p_blk->start_line + (end_iid / elems_per_p);
-	if (((end_iid + 1) / elems_per_p) != (end_iid / elems_per_p))
-		end_line--;
-
-	shadow_start_line = start_line - p_hwfn->p_cxt_mngr->pf_start_line;
-	shadow_end_line = end_line - p_hwfn->p_cxt_mngr->pf_start_line;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt) {
-		DP_NOTICE(p_hwfn, false,
-			  "ECORE_TIME_OUT on ptt acquire - dynamic allocation");
-		return ECORE_TIMEOUT;
-	}
-
-	for (i = shadow_start_line; i < shadow_end_line; i++) {
-		if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr)
-			continue;
-
-		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
-				    p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr,
-				    p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr,
-				    p_hwfn->p_cxt_mngr->ilt_shadow[i].size);
-
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr = OSAL_NULL;
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr = 0;
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].size = 0;
-
-		/* compute absolute offset */
-		reg_offset = PSWRQ2_REG_ILT_MEMORY +
-		    ((start_line++) * ILT_REG_SIZE_IN_BYTES *
-		     ILT_ENTRY_IN_REGS);
-
-		/* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a
-		 * wide-bus.
-		 */
-		ecore_dmae_host2grc(p_hwfn, p_ptt,
-				    (u64)(osal_uintptr_t)&ilt_hw_entry,
-				    reg_offset,
-				    sizeof(ilt_hw_entry) / sizeof(u32),
-				    OSAL_NULL /* default parameters */);
-	}
-
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return ECORE_SUCCESS;
-}
-
 static u16 ecore_blk_calculate_pages(struct ecore_ilt_cli_blk *p_blk)
 {
 	if (p_blk->real_size_in_page == 0)
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 1a539bbc71..dc5f49ef57 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -38,9 +38,6 @@ u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type,
 				  u32 *vf_cid);
 
-u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn,
-				  enum protocol_type type);
-
 u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type);
 u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
@@ -135,14 +132,6 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 
 #define ECORE_CXT_PF_CID (0xff)
 
-/**
- * @brief ecore_cxt_release - Release a cid
- *
- * @param p_hwfn
- * @param cid
- */
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid);
-
 /**
  * @brief ecore_cxt_release - Release a cid belonging to a vf-queue
  *
@@ -181,22 +170,6 @@ enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
 					    enum protocol_type type,
 					    u32 *p_cid, u8 vfid);
 
-/**
- * @brief ecore_cxt_get_tid_mem_info - function checks if the
- *        page containing the iid in the ilt is already
- *        allocated, if it is not it allocates the page.
- *
- * @param p_hwfn
- * @param elem_type
- * @param iid
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
-			    enum ecore_cxt_elem_type elem_type,
-			    u32 iid);
-
 /**
  * @brief ecore_cxt_free_proto_ilt - function frees ilt pages
  *        associated with the protocol passed.
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 31234f18cf..024aad3f2c 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -70,23 +70,6 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 	return !!(ethtype && (proto_id == ECORE_ETH_TYPE_DEFAULT));
 }
 
-static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
-				 u16 proto_id, bool ieee)
-{
-	bool port;
-
-	if (!p_hwfn->p_dcbx_info->iwarp_port)
-		return false;
-
-	if (ieee)
-		port = ecore_dcbx_ieee_app_port(app_info_bitmap,
-						DCBX_APP_SF_IEEE_TCP_PORT);
-	else
-		port = ecore_dcbx_app_port(app_info_bitmap);
-
-	return !!(port && (proto_id == p_hwfn->p_dcbx_info->iwarp_port));
-}
-
 static void
 ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 		       struct ecore_dcbx_results *p_data)
@@ -1323,40 +1306,6 @@ enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_lldp_register_tlv(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     enum ecore_lldp_agent agent,
-					     u8 tlv_type)
-{
-	u32 mb_param = 0, mcp_resp = 0, mcp_param = 0, val = 0;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	switch (agent) {
-	case ECORE_LLDP_NEAREST_BRIDGE:
-		val = LLDP_NEAREST_BRIDGE;
-		break;
-	case ECORE_LLDP_NEAREST_NON_TPMR_BRIDGE:
-		val = LLDP_NEAREST_NON_TPMR_BRIDGE;
-		break;
-	case ECORE_LLDP_NEAREST_CUSTOMER_BRIDGE:
-		val = LLDP_NEAREST_CUSTOMER_BRIDGE;
-		break;
-	default:
-		DP_ERR(p_hwfn, "Invalid agent type %d\n", agent);
-		return ECORE_INVAL;
-	}
-
-	SET_MFW_FIELD(mb_param, DRV_MB_PARAM_LLDP_AGENT, val);
-	SET_MFW_FIELD(mb_param, DRV_MB_PARAM_LLDP_TLV_RX_TYPE, tlv_type);
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_REGISTER_LLDP_TLVS_RX,
-			   mb_param, &mcp_resp, &mcp_param);
-	if (rc != ECORE_SUCCESS)
-		DP_NOTICE(p_hwfn, false, "Failed to register TLV\n");
-
-	return rc;
-}
-
 enum _ecore_status_t
 ecore_lldp_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 {
@@ -1390,218 +1339,3 @@ ecore_lldp_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 
 	return rc;
 }
-
-enum _ecore_status_t
-ecore_lldp_get_params(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		      struct ecore_lldp_config_params *p_params)
-{
-	struct lldp_config_params_s lldp_params;
-	u32 addr, val;
-	int i;
-
-	switch (p_params->agent) {
-	case ECORE_LLDP_NEAREST_BRIDGE:
-		val = LLDP_NEAREST_BRIDGE;
-		break;
-	case ECORE_LLDP_NEAREST_NON_TPMR_BRIDGE:
-		val = LLDP_NEAREST_NON_TPMR_BRIDGE;
-		break;
-	case ECORE_LLDP_NEAREST_CUSTOMER_BRIDGE:
-		val = LLDP_NEAREST_CUSTOMER_BRIDGE;
-		break;
-	default:
-		DP_ERR(p_hwfn, "Invalid agent type %d\n", p_params->agent);
-		return ECORE_INVAL;
-	}
-
-	addr = p_hwfn->mcp_info->port_addr +
-			offsetof(struct public_port, lldp_config_params[val]);
-
-	ecore_memcpy_from(p_hwfn, p_ptt, &lldp_params, addr,
-			  sizeof(lldp_params));
-
-	p_params->tx_interval = GET_MFW_FIELD(lldp_params.config,
-					      LLDP_CONFIG_TX_INTERVAL);
-	p_params->tx_hold = GET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_HOLD);
-	p_params->tx_credit = GET_MFW_FIELD(lldp_params.config,
-					    LLDP_CONFIG_MAX_CREDIT);
-	p_params->rx_enable = GET_MFW_FIELD(lldp_params.config,
-					    LLDP_CONFIG_ENABLE_RX);
-	p_params->tx_enable = GET_MFW_FIELD(lldp_params.config,
-					    LLDP_CONFIG_ENABLE_TX);
-
-	OSAL_MEMCPY(p_params->chassis_id_tlv, lldp_params.local_chassis_id,
-		    sizeof(p_params->chassis_id_tlv));
-	for (i = 0; i < ECORE_LLDP_CHASSIS_ID_STAT_LEN; i++)
-		p_params->chassis_id_tlv[i] =
-				OSAL_BE32_TO_CPU(p_params->chassis_id_tlv[i]);
-
-	OSAL_MEMCPY(p_params->port_id_tlv, lldp_params.local_port_id,
-		    sizeof(p_params->port_id_tlv));
-	for (i = 0; i < ECORE_LLDP_PORT_ID_STAT_LEN; i++)
-		p_params->port_id_tlv[i] =
-				OSAL_BE32_TO_CPU(p_params->port_id_tlv[i]);
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t
-ecore_lldp_set_params(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		      struct ecore_lldp_config_params *p_params)
-{
-	u32 mb_param = 0, mcp_resp = 0, mcp_param = 0;
-	struct lldp_config_params_s lldp_params;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 addr, val;
-	int i;
-
-	switch (p_params->agent) {
-	case ECORE_LLDP_NEAREST_BRIDGE:
-		val = LLDP_NEAREST_BRIDGE;
-		break;
-	case ECORE_LLDP_NEAREST_NON_TPMR_BRIDGE:
-		val = LLDP_NEAREST_NON_TPMR_BRIDGE;
-		break;
-	case ECORE_LLDP_NEAREST_CUSTOMER_BRIDGE:
-		val = LLDP_NEAREST_CUSTOMER_BRIDGE;
-		break;
-	default:
-		DP_ERR(p_hwfn, "Invalid agent type %d\n", p_params->agent);
-		return ECORE_INVAL;
-	}
-
-	SET_MFW_FIELD(mb_param, DRV_MB_PARAM_LLDP_AGENT, val);
-	addr = p_hwfn->mcp_info->port_addr +
-			offsetof(struct public_port, lldp_config_params[val]);
-
-	OSAL_MEMSET(&lldp_params, 0, sizeof(lldp_params));
-	SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_TX_INTERVAL,
-		      p_params->tx_interval);
-	SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_HOLD, p_params->tx_hold);
-	SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_MAX_CREDIT,
-		      p_params->tx_credit);
-	SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_ENABLE_RX,
-		      !!p_params->rx_enable);
-	SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_ENABLE_TX,
-		      !!p_params->tx_enable);
-
-	for (i = 0; i < ECORE_LLDP_CHASSIS_ID_STAT_LEN; i++)
-		p_params->chassis_id_tlv[i] =
-				OSAL_CPU_TO_BE32(p_params->chassis_id_tlv[i]);
-	OSAL_MEMCPY(lldp_params.local_chassis_id, p_params->chassis_id_tlv,
-		    sizeof(lldp_params.local_chassis_id));
-
-	for (i = 0; i < ECORE_LLDP_PORT_ID_STAT_LEN; i++)
-		p_params->port_id_tlv[i] =
-				OSAL_CPU_TO_BE32(p_params->port_id_tlv[i]);
-	OSAL_MEMCPY(lldp_params.local_port_id, p_params->port_id_tlv,
-		    sizeof(lldp_params.local_port_id));
-
-	ecore_memcpy_to(p_hwfn, p_ptt, addr, &lldp_params, sizeof(lldp_params));
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_LLDP,
-			   mb_param, &mcp_resp, &mcp_param);
-	if (rc != ECORE_SUCCESS)
-		DP_NOTICE(p_hwfn, false, "SET_LLDP failed, error = %d\n", rc);
-
-	return rc;
-}
-
-enum _ecore_status_t
-ecore_lldp_set_system_tlvs(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			   struct ecore_lldp_sys_tlvs *p_params)
-{
-	u32 mb_param = 0, mcp_resp = 0, mcp_param = 0;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	struct lldp_system_tlvs_buffer_s lld_tlv_buf;
-	u32 addr, *p_val;
-	u8 len;
-	int i;
-
-	p_val = (u32 *)p_params->buf;
-	for (i = 0; i < ECORE_LLDP_SYS_TLV_SIZE / 4; i++)
-		p_val[i] = OSAL_CPU_TO_BE32(p_val[i]);
-
-	OSAL_MEMSET(&lld_tlv_buf, 0, sizeof(lld_tlv_buf));
-	SET_MFW_FIELD(lld_tlv_buf.flags, LLDP_SYSTEM_TLV_VALID, 1);
-	SET_MFW_FIELD(lld_tlv_buf.flags, LLDP_SYSTEM_TLV_MANDATORY,
-		      !!p_params->discard_mandatory_tlv);
-	SET_MFW_FIELD(lld_tlv_buf.flags, LLDP_SYSTEM_TLV_LENGTH,
-		      p_params->buf_size);
-	len = ECORE_LLDP_SYS_TLV_SIZE / 2;
-	OSAL_MEMCPY(lld_tlv_buf.data, p_params->buf, len);
-
-	addr = p_hwfn->mcp_info->port_addr +
-		offsetof(struct public_port, system_lldp_tlvs_buf);
-	ecore_memcpy_to(p_hwfn, p_ptt, addr, &lld_tlv_buf, sizeof(lld_tlv_buf));
-
-	if  (p_params->buf_size > len) {
-		addr = p_hwfn->mcp_info->port_addr +
-			offsetof(struct public_port, system_lldp_tlvs_buf2);
-		ecore_memcpy_to(p_hwfn, p_ptt, addr, &p_params->buf[len],
-				ECORE_LLDP_SYS_TLV_SIZE / 2);
-	}
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_LLDP,
-			   mb_param, &mcp_resp, &mcp_param);
-	if (rc != ECORE_SUCCESS)
-		DP_NOTICE(p_hwfn, false, "SET_LLDP failed, error = %d\n", rc);
-
-	return rc;
-}
-
-enum _ecore_status_t
-ecore_dcbx_get_dscp_priority(struct ecore_hwfn *p_hwfn,
-			     u8 dscp_index, u8 *p_dscp_pri)
-{
-	struct ecore_dcbx_get *p_dcbx_info;
-	enum _ecore_status_t rc;
-
-	if (dscp_index >= ECORE_DCBX_DSCP_SIZE) {
-		DP_ERR(p_hwfn, "Invalid dscp index %d\n", dscp_index);
-		return ECORE_INVAL;
-	}
-
-	p_dcbx_info = OSAL_ALLOC(p_hwfn->p_dev, GFP_KERNEL,
-				 sizeof(*p_dcbx_info));
-	if (!p_dcbx_info)
-		return ECORE_NOMEM;
-
-	OSAL_MEMSET(p_dcbx_info, 0, sizeof(*p_dcbx_info));
-	rc = ecore_dcbx_query_params(p_hwfn, p_dcbx_info,
-				     ECORE_DCBX_OPERATIONAL_MIB);
-	if (rc) {
-		OSAL_FREE(p_hwfn->p_dev, p_dcbx_info);
-		return rc;
-	}
-
-	*p_dscp_pri = p_dcbx_info->dscp.dscp_pri_map[dscp_index];
-	OSAL_FREE(p_hwfn->p_dev, p_dcbx_info);
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t
-ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			     u8 dscp_index, u8 pri_val)
-{
-	struct ecore_dcbx_set dcbx_set;
-	enum _ecore_status_t rc;
-
-	if (dscp_index >= ECORE_DCBX_DSCP_SIZE ||
-	    pri_val >= ECORE_MAX_PFC_PRIORITIES) {
-		DP_ERR(p_hwfn, "Invalid dscp params: index = %d pri = %d\n",
-		       dscp_index, pri_val);
-		return ECORE_INVAL;
-	}
-
-	OSAL_MEMSET(&dcbx_set, 0, sizeof(dcbx_set));
-	rc = ecore_dcbx_get_config_params(p_hwfn, &dcbx_set);
-	if (rc)
-		return rc;
-
-	dcbx_set.override_flags = ECORE_DCBX_OVERRIDE_DSCP_CFG;
-	dcbx_set.dscp.dscp_pri_map[dscp_index] = pri_val;
-
-	return ecore_dcbx_config_params(p_hwfn, p_ptt, &dcbx_set, 1);
-}
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 6fad2ecc2e..5d7cd1b48b 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -211,33 +211,6 @@ enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *,
 					      struct ecore_dcbx_set *,
 					      bool);
 
-enum _ecore_status_t ecore_lldp_register_tlv(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     enum ecore_lldp_agent agent,
-					     u8 tlv_type);
-
-enum _ecore_status_t
-ecore_lldp_get_params(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		      struct ecore_lldp_config_params *p_params);
-
-enum _ecore_status_t
-ecore_lldp_set_params(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		      struct ecore_lldp_config_params *p_params);
-
-enum _ecore_status_t
-ecore_lldp_set_system_tlvs(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			   struct ecore_lldp_sys_tlvs *p_params);
-
-/* Returns priority value for a given dscp index */
-enum _ecore_status_t
-ecore_dcbx_get_dscp_priority(struct ecore_hwfn *p_hwfn,
-			     u8 dscp_index, u8 *p_dscp_pri);
-
-/* Sets priority value for a given dscp index */
-enum _ecore_status_t
-ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			     u8 dscp_index, u8 pri_val);
-
 static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
 	{DCBX_PROTOCOL_ISCSI, "ISCSI", ECORE_PCI_ISCSI},
 	{DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e895dee405..96676055b7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -263,27 +263,6 @@ void ecore_db_recovery_teardown(struct ecore_hwfn *p_hwfn)
 	p_hwfn->db_recovery_info.db_recovery_counter = 0;
 }
 
-/* print the content of the doorbell recovery mechanism */
-void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn)
-{
-	struct ecore_db_recovery_entry *db_entry = OSAL_NULL;
-
-	DP_NOTICE(p_hwfn, false,
-		  "Displaying doorbell recovery database. Counter was %d\n",
-		  p_hwfn->db_recovery_info.db_recovery_counter);
-
-	/* protect the list */
-	OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock);
-	OSAL_LIST_FOR_EACH_ENTRY(db_entry,
-				 &p_hwfn->db_recovery_info.list,
-				 list_entry,
-				 struct ecore_db_recovery_entry) {
-		ecore_db_recovery_dp_entry(p_hwfn, db_entry, "Printing");
-	}
-
-	OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock);
-}
-
 /* ring the doorbell of a single doorbell recovery entry */
 void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn,
 			    struct ecore_db_recovery_entry *db_entry,
@@ -823,16 +802,6 @@ static enum _ecore_status_t ecore_llh_hw_init_pf(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-u8 ecore_llh_get_num_ppfid(struct ecore_dev *p_dev)
-{
-	return p_dev->p_llh_info->num_ppfid;
-}
-
-enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev)
-{
-	return p_dev->l2_affin_hint ? ECORE_ENG1 : ECORE_ENG0;
-}
-
 /* TBD - should be removed when these definitions are available in reg_addr.h */
 #define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_MASK		0x3
 #define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_SHIFT		0
@@ -1204,76 +1173,6 @@ ecore_llh_protocol_filter_to_hilo(struct ecore_dev *p_dev,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t
-ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
-			      enum ecore_llh_prot_filter_type_t type,
-			      u16 source_port_or_eth_type, u16 dest_port)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
-	u8 filter_idx, abs_ppfid, type_bitmap;
-	char str[32];
-	union ecore_llh_filter filter;
-	u32 high, low, ref_cnt;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	if (p_ptt == OSAL_NULL)
-		return ECORE_AGAIN;
-
-	if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
-		goto out;
-
-	rc = ecore_llh_protocol_filter_stringify(p_dev, type,
-						 source_port_or_eth_type,
-						 dest_port, str, sizeof(str));
-	if (rc != ECORE_SUCCESS)
-		goto err;
-
-	OSAL_MEM_ZERO(&filter, sizeof(filter));
-	filter.protocol.type = type;
-	filter.protocol.source_port_or_eth_type = source_port_or_eth_type;
-	filter.protocol.dest_port = dest_port;
-	rc = ecore_llh_shadow_add_filter(p_dev, ppfid,
-					 ECORE_LLH_FILTER_TYPE_PROTOCOL,
-					 &filter, &filter_idx, &ref_cnt);
-	if (rc != ECORE_SUCCESS)
-		goto err;
-
-	rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
-	if (rc != ECORE_SUCCESS)
-		goto err;
-
-	/* Configure the LLH only in case of a new filter */
-	if (ref_cnt == 1) {
-		rc = ecore_llh_protocol_filter_to_hilo(p_dev, type,
-						       source_port_or_eth_type,
-						       dest_port, &high, &low);
-		if (rc != ECORE_SUCCESS)
-			goto err;
-
-		type_bitmap = 0x1 << type;
-		rc = ecore_llh_add_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
-					  type_bitmap, high, low);
-		if (rc != ECORE_SUCCESS)
-			goto err;
-	}
-
-	DP_VERBOSE(p_dev, ECORE_MSG_SP,
-		   "LLH: Added protocol filter [%s] to ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
-		   str, ppfid, abs_ppfid, filter_idx, ref_cnt);
-
-	goto out;
-
-err:
-	DP_NOTICE(p_hwfn, false,
-		  "LLH: Failed to add protocol filter [%s] to ppfid %hhd\n",
-		  str, ppfid);
-out:
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
 void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
 				 u8 mac_addr[ETH_ALEN])
 {
@@ -1326,66 +1225,6 @@ void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
 	ecore_ptt_release(p_hwfn, p_ptt);
 }
 
-void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
-				      enum ecore_llh_prot_filter_type_t type,
-				      u16 source_port_or_eth_type,
-				      u16 dest_port)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
-	u8 filter_idx, abs_ppfid;
-	char str[32];
-	union ecore_llh_filter filter;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 ref_cnt;
-
-	if (p_ptt == OSAL_NULL)
-		return;
-
-	if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
-		goto out;
-
-	rc = ecore_llh_protocol_filter_stringify(p_dev, type,
-						 source_port_or_eth_type,
-						 dest_port, str, sizeof(str));
-	if (rc != ECORE_SUCCESS)
-		goto err;
-
-	OSAL_MEM_ZERO(&filter, sizeof(filter));
-	filter.protocol.type = type;
-	filter.protocol.source_port_or_eth_type = source_port_or_eth_type;
-	filter.protocol.dest_port = dest_port;
-	rc = ecore_llh_shadow_remove_filter(p_dev, ppfid, &filter, &filter_idx,
-					    &ref_cnt);
-	if (rc != ECORE_SUCCESS)
-		goto err;
-
-	rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
-	if (rc != ECORE_SUCCESS)
-		goto err;
-
-	/* Remove from the LLH in case the filter is not in use */
-	if (!ref_cnt) {
-		rc = ecore_llh_remove_filter(p_hwfn, p_ptt, abs_ppfid,
-					     filter_idx);
-		if (rc != ECORE_SUCCESS)
-			goto err;
-	}
-
-	DP_VERBOSE(p_dev, ECORE_MSG_SP,
-		   "LLH: Removed protocol filter [%s] from ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
-		   str, ppfid, abs_ppfid, filter_idx, ref_cnt);
-
-	goto out;
-
-err:
-	DP_NOTICE(p_dev, false,
-		  "LLH: Failed to remove protocol filter [%s] from ppfid %hhd\n",
-		  str, ppfid);
-out:
-	ecore_ptt_release(p_hwfn, p_ptt);
-}
-
 void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
@@ -1419,18 +1258,6 @@ void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid)
 	ecore_ptt_release(p_hwfn, p_ptt);
 }
 
-void ecore_llh_clear_all_filters(struct ecore_dev *p_dev)
-{
-	u8 ppfid;
-
-	if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) &&
-	    !OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
-		return;
-
-	for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++)
-		ecore_llh_clear_ppfid_filters(p_dev, ppfid);
-}
-
 enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt, u32 addr,
 					 u32 val)
@@ -1497,20 +1324,6 @@ ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
 	return rc;
 }
 
-enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev)
-{
-	u8 ppfid;
-	enum _ecore_status_t rc;
-
-	for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) {
-		rc = ecore_llh_dump_ppfid(p_dev, ppfid);
-		if (rc != ECORE_SUCCESS)
-			return rc;
-	}
-
-	return ECORE_SUCCESS;
-}
-
 /******************************* NIG LLH - End ********************************/
 
 /* Configurable */
@@ -4000,18 +3813,6 @@ static void ecore_hw_timers_stop(struct ecore_dev *p_dev,
 		  (u8)ecore_rd(p_hwfn, p_ptt, TM_REG_PF_SCAN_ACTIVE_TASK));
 }
 
-void ecore_hw_timers_stop_all(struct ecore_dev *p_dev)
-{
-	int j;
-
-	for_each_hwfn(p_dev, j) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
-		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
-
-		ecore_hw_timers_stop(p_dev, p_hwfn, p_ptt);
-	}
-}
-
 static enum _ecore_status_t ecore_verify_reg_val(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u32 addr, u32 expected_val)
@@ -5481,16 +5282,6 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 #define ECORE_MAX_DEVICE_NAME_LEN (8)
 
-void ecore_get_dev_name(struct ecore_dev *p_dev, u8 *name, u8 max_chars)
-{
-	u8 n;
-
-	n = OSAL_MIN_T(u8, max_chars, ECORE_MAX_DEVICE_NAME_LEN);
-	OSAL_SNPRINTF((char *)name, n, "%s %c%d",
-		      ECORE_IS_BB(p_dev) ? "BB" : "AH",
-		      'A' + p_dev->chip_rev, (int)p_dev->chip_metal);
-}
-
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
@@ -5585,27 +5376,6 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-#ifndef LINUX_REMOVE
-void ecore_prepare_hibernate(struct ecore_dev *p_dev)
-{
-	int j;
-
-	if (IS_VF(p_dev))
-		return;
-
-	for_each_hwfn(p_dev, j) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
-
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
-			   "Mark hw/fw uninitialized\n");
-
-		p_hwfn->hw_init_done = false;
-
-		ecore_ptt_invalidate(p_hwfn);
-	}
-}
-#endif
-
 static enum _ecore_status_t
 ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 			void OSAL_IOMEM *p_doorbells, u64 db_phys_addr,
@@ -6219,23 +5989,6 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t
-ecore_llh_set_function_as_default(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt)
-{
-	if (OSAL_GET_BIT(ECORE_MF_NEED_DEF_PF, &p_hwfn->p_dev->mf_bits)) {
-		ecore_wr(p_hwfn, p_ptt,
-			 NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR,
-			 1 << p_hwfn->abs_pf_id / 2);
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_MSG_INFO, 0);
-		return ECORE_SUCCESS;
-	}
-
-	DP_NOTICE(p_hwfn, false,
-		  "This function can't be set as default\n");
-	return ECORE_INVAL;
-}
-
 static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt,
 					       u32 hw_addr, void *p_eth_qzone,
@@ -6259,46 +6012,6 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn,
-					      u16 rx_coal, u16 tx_coal,
-					      void *p_handle)
-{
-	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	struct ecore_ptt *p_ptt;
-
-	/* TODO - Configuring a single queue's coalescing but
-	 * claiming all queues are abiding same configuration
-	 * for PF and VF both.
-	 */
-
-	if (IS_VF(p_hwfn->p_dev))
-		return ecore_vf_pf_set_coalesce(p_hwfn, rx_coal,
-						tx_coal, p_cid);
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_AGAIN;
-
-	if (rx_coal) {
-		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
-		if (rc)
-			goto out;
-		p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
-	}
-
-	if (tx_coal) {
-		rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
-		if (rc)
-			goto out;
-		p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
-	}
-out:
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
 enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    u16 coalesce,
@@ -6761,20 +6474,6 @@ int ecore_configure_pf_min_bandwidth(struct ecore_dev *p_dev, u8 min_bw)
 	return rc;
 }
 
-void ecore_clean_wfq_db(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
-{
-	struct ecore_mcp_link_state *p_link;
-
-	p_link = &p_hwfn->mcp_info->link_output;
-
-	if (p_link->min_pf_rate)
-		ecore_disable_wfq_for_all_vports(p_hwfn, p_ptt);
-
-	OSAL_MEMSET(p_hwfn->qm_info.wfq_data, 0,
-		    sizeof(*p_hwfn->qm_info.wfq_data) *
-		    p_hwfn->qm_info.num_vports);
-}
-
 int ecore_device_num_engines(struct ecore_dev *p_dev)
 {
 	return ECORE_IS_BB(p_dev) ? 2 : 1;
@@ -6810,8 +6509,3 @@ void ecore_set_platform_str(struct ecore_hwfn *p_hwfn,
 	len = OSAL_STRLEN(buf_str);
 	OSAL_SET_PLATFORM_STR(p_hwfn, &buf_str[len], buf_size - len);
 }
-
-bool ecore_is_mf_fip_special(struct ecore_dev *p_dev)
-{
-	return !!OSAL_GET_BIT(ECORE_MF_FIP_SPECIAL, &p_dev->mf_bits);
-}
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 9ddf502eb9..37a8d99712 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -132,15 +132,6 @@ struct ecore_hw_init_params {
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params);
 
-/**
- * @brief ecore_hw_timers_stop_all -
- *
- * @param p_dev
- *
- * @return void
- */
-void ecore_hw_timers_stop_all(struct ecore_dev *p_dev);
-
 /**
  * @brief ecore_hw_stop -
  *
@@ -162,15 +153,6 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev);
 enum _ecore_status_t ecore_hw_stop_fastpath(struct ecore_dev *p_dev);
 
 #ifndef LINUX_REMOVE
-/**
- * @brief ecore_prepare_hibernate -should be called when
- *        the system is going into the hibernate state
- *
- * @param p_dev
- *
- */
-void ecore_prepare_hibernate(struct ecore_dev *p_dev);
-
 enum ecore_db_rec_width {
 	DB_REC_WIDTH_32B,
 	DB_REC_WIDTH_64B,
@@ -488,31 +470,12 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
 				      u8 src_id,
 				      u8 *dst_id);
 
-/**
- * @brief ecore_llh_get_num_ppfid - Return the allocated number of LLH filter
- *	banks that are allocated to the PF.
- *
- * @param p_dev
- *
- * @return u8 - Number of LLH filter banks
- */
-u8 ecore_llh_get_num_ppfid(struct ecore_dev *p_dev);
-
 enum ecore_eng {
 	ECORE_ENG0,
 	ECORE_ENG1,
 	ECORE_BOTH_ENG,
 };
 
-/**
- * @brief ecore_llh_get_l2_affinity_hint - Return the hint for the L2 affinity
- *
- * @param p_dev
- *
- * @return enum ecore_eng - L2 affinity hint
- */
-enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev);
-
 /**
  * @brief ecore_llh_set_ppfid_affinity - Set the engine affinity for the given
  *	LLH filter bank.
@@ -571,38 +534,6 @@ enum ecore_llh_prot_filter_type_t {
 	ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT
 };
 
-/**
- * @brief ecore_llh_add_protocol_filter - Add a LLH protocol filter into the
- *	given filter bank.
- *
- * @param p_dev
- * @param ppfid - relative within the allocated ppfids ('0' is the default one).
- * @param type - type of filters and comparing
- * @param source_port_or_eth_type - source port or ethertype to add
- * @param dest_port - destination port to add
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
-			      enum ecore_llh_prot_filter_type_t type,
-			      u16 source_port_or_eth_type, u16 dest_port);
-
-/**
- * @brief ecore_llh_remove_protocol_filter - Remove a LLH protocol filter from
- *	the given filter bank.
- *
- * @param p_dev
- * @param ppfid - relative within the allocated ppfids ('0' is the default one).
- * @param type - type of filters and comparing
- * @param source_port_or_eth_type - source port or ethertype to add
- * @param dest_port - destination port to add
- */
-void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
-				      enum ecore_llh_prot_filter_type_t type,
-				      u16 source_port_or_eth_type,
-				      u16 dest_port);
-
 /**
  * @brief ecore_llh_clear_ppfid_filters - Remove all LLH filters from the given
  *	filter bank.
@@ -612,23 +543,6 @@ void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
  */
 void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid);
 
-/**
- * @brief ecore_llh_clear_all_filters - Remove all LLH filters
- *
- * @param p_dev
- */
-void ecore_llh_clear_all_filters(struct ecore_dev *p_dev);
-
-/**
- * @brief ecore_llh_set_function_as_default - set function as default per port
- *
- * @param p_hwfn
- * @param p_ptt
- */
-enum _ecore_status_t
-ecore_llh_set_function_as_default(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt);
-
 /**
  *@brief Cleanup of previous driver remains prior to load
  *
@@ -644,39 +558,6 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn	*p_hwfn,
 					 u16			id,
 					 bool			is_vf);
 
-/**
- * @brief ecore_get_queue_coalesce - Retrieve coalesce value for a given queue.
- *
- * @param p_hwfn
- * @param p_coal - store coalesce value read from the hardware.
- * @param p_handle
- *
- * @return enum _ecore_status_t
- **/
-enum _ecore_status_t
-ecore_get_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 *coal,
-			 void *handle);
-
-/**
- * @brief ecore_set_queue_coalesce - Configure coalesce parameters for Rx and
- *    Tx queue. Coalescing can be configured up to 511, but with varying
- *    accuracy [the bigger the value the less accurate], up to an error of
- *    3usec for the highest values.
- *    While the API allows setting coalescing per-qid, all queues sharing a SB
- *    should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
- *    otherwise configuration would break.
- *
- * @param p_hwfn
- * @param rx_coal - Rx Coalesce value in micro seconds.
- * @param tx_coal - TX Coalesce value in micro seconds.
- * @param p_handle
- *
- * @return enum _ecore_status_t
- **/
-enum _ecore_status_t
-ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal,
-			 u16 tx_coal, void *p_handle);
-
 /**
  * @brief ecore_pglueb_set_pfid_enable - Enable or disable PCI BUS MASTER
  *
@@ -690,12 +571,4 @@ enum _ecore_status_t ecore_pglueb_set_pfid_enable(struct ecore_hwfn *p_hwfn,
 						  struct ecore_ptt *p_ptt,
 						  bool b_enable);
 
-/**
- * @brief Whether FIP discovery fallback special mode is enabled or not.
- *
- * @param cdev
- *
- * @return true if device is in FIP special mode, false otherwise.
- */
-bool ecore_is_mf_fip_special(struct ecore_dev *p_dev);
 #endif
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 1db39d6a36..881682df25 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -407,22 +407,6 @@ void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
 			*(u32 *)&p_ptt->pxp.pretend);
 }
 
-void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
-{
-	u16 control = 0;
-
-	SET_FIELD(control, PXP_PRETEND_CMD_PORT, 0);
-	SET_FIELD(control, PXP_PRETEND_CMD_USE_PORT, 0);
-	SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_PORT, 1);
-
-	p_ptt->pxp.pretend.control = OSAL_CPU_TO_LE16(control);
-
-	REG_WR(p_hwfn,
-	       ecore_ptt_config_addr(p_ptt) +
-	       OFFSETOF(struct pxp_ptt_entry, pretend),
-			*(u32 *)&p_ptt->pxp.pretend);
-}
-
 void ecore_port_fid_pretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			    u8 port_id, u16 fid)
 {
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 238bdb9dbc..e1042eefec 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -191,16 +191,6 @@ void ecore_port_pretend(struct ecore_hwfn	*p_hwfn,
 			struct ecore_ptt	*p_ptt,
 			u8			port_id);
 
-/**
- * @brief ecore_port_unpretend - cancel any previously set port
- *        pretend
- *
- * @param p_hwfn
- * @param p_ptt
- */
-void ecore_port_unpretend(struct ecore_hwfn	*p_hwfn,
-			  struct ecore_ptt	*p_ptt);
-
 /**
  * @brief ecore_port_fid_pretend - pretend to another port and another function
  *        when accessing the ptt window
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 6a52f32cc9..6a0c7935e6 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -936,33 +936,6 @@ int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
 	return 0;
 }
 
-int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, u8 vport_id,
-						u32 vport_rl,
-						u32 link_speed)
-{
-	u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS;
-
-	if (vport_id >= max_qm_global_rls) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration\n");
-		return -1;
-	}
-
-	inc_val = QM_RL_INC_VAL(vport_rl ? vport_rl : link_speed);
-	if (inc_val > QM_VP_RL_MAX_INC_VAL(link_speed)) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT rate-limit configuration\n");
-		return -1;
-	}
-
-	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
-		 (u32)QM_RL_CRD_REG_SIGN_BIT);
-	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
-
-	return 0;
-}
-
 bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
 			    bool is_release_cmd,
@@ -1032,385 +1005,11 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 /* NIG: packet prioritry configuration constants */
 #define NIG_PRIORITY_MAP_TC_BITS	4
 
-
-void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt,
-			struct init_ets_req *req, bool is_lb)
-{
-	u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff;
-	u32 tc_bound_base_addr, tc_bound_addr_diff;
-	u8 sp_tc_map = 0, wfq_tc_map = 0;
-	u8 tc, num_tc, tc_client_offset;
-
-	num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
-	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET :
-				   NIG_TX_ETS_CLIENT_OFFSET;
-	min_weight = 0xffffffff;
-	tc_weight_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
-				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	tc_weight_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
-				      NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
-				      NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
-				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
-	tc_bound_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
-	tc_bound_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
-				     NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
-				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
-				     NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
-
-	for (tc = 0; tc < num_tc; tc++) {
-		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-
-		/* Update SP map */
-		if (tc_req->use_sp)
-			sp_tc_map |= (1 << tc);
-
-		if (!tc_req->use_wfq)
-			continue;
-
-		/* Update WFQ map */
-		wfq_tc_map |= (1 << tc);
-
-		/* Find minimal weight */
-		if (tc_req->weight < min_weight)
-			min_weight = tc_req->weight;
-	}
-
-	/* Write SP map */
-	ecore_wr(p_hwfn, p_ptt,
-		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT :
-		 NIG_REG_TX_ARB_CLIENT_IS_STRICT,
-		 (sp_tc_map << tc_client_offset));
-
-	/* Write WFQ map */
-	ecore_wr(p_hwfn, p_ptt,
-		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ :
-		 NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
-		 (wfq_tc_map << tc_client_offset));
-	/* write WFQ weights */
-	for (tc = 0; tc < num_tc; tc++, tc_client_offset++) {
-		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		u32 byte_weight;
-
-		if (!tc_req->use_wfq)
-			continue;
-
-		/* Translate weight to bytes */
-		byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			      min_weight;
-
-		/* Write WFQ weight */
-		ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr +
-			 tc_weight_addr_diff * tc_client_offset, byte_weight);
-
-		/* Write WFQ upper bound */
-		ecore_wr(p_hwfn, p_ptt, tc_bound_base_addr +
-			 tc_bound_addr_diff * tc_client_offset,
-			 NIG_ETS_UP_BOUND(byte_weight, req->mtu));
-	}
-}
-
-void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
-			  struct ecore_ptt *p_ptt,
-			  struct init_nig_lb_rl_req *req)
-{
-	u32 ctrl, inc_val, reg_offset;
-	u8 tc;
-
-	/* Disable global MAC+LB RL */
-	ctrl =
-	    NIG_RL_BASE_TYPE <<
-	    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT;
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
-
-	/* Configure and enable global MAC+LB RL */
-	if (req->lb_mac_rate) {
-		/* Configure  */
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD,
-			 NIG_RL_PERIOD_CLK_25M);
-		inc_val = NIG_RL_INC_VAL(req->lb_mac_rate);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_VALUE,
-			 inc_val);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE,
-			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-
-		/* Enable */
-		ctrl |=
-		    1 <<
-		    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT;
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
-	}
-
-	/* Disable global LB-only RL */
-	ctrl =
-	    NIG_RL_BASE_TYPE <<
-	    NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT;
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
-
-	/* Configure and enable global LB-only RL */
-	if (req->lb_rate) {
-		/* Configure  */
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD,
-			 NIG_RL_PERIOD_CLK_25M);
-		inc_val = NIG_RL_INC_VAL(req->lb_rate);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_VALUE,
-			 inc_val);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE,
-			 NIG_RL_MAX_VAL(inc_val, req->mtu));
-
-		/* Enable */
-		ctrl |=
-		    1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT;
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
-	}
-
-	/* Per-TC RLs */
-	for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS;
-	     tc++, reg_offset += 4) {
-		/* Disable TC RL */
-		ctrl =
-		    NIG_RL_BASE_TYPE <<
-		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT;
-		ecore_wr(p_hwfn, p_ptt,
-			 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
-
-		/* Configure and enable TC RL */
-		if (!req->tc_rate[tc])
-			continue;
-
-		/* Configure */
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
-			 reg_offset, NIG_RL_PERIOD_CLK_25M);
-		inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
-			 reg_offset, inc_val);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
-			 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
-
-		/* Enable */
-		ctrl |= 1 <<
-			NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 +
-			 reg_offset, ctrl);
-	}
-}
-
-void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
-			       struct ecore_ptt *p_ptt,
-			       struct init_nig_pri_tc_map_req *req)
-{
-	u8 tc_pri_mask[NUM_OF_PHYS_TCS] = { 0 };
-	u32 pri_tc_mask = 0;
-	u8 pri, tc;
-
-	for (pri = 0; pri < NUM_OF_VLAN_PRIORITIES; pri++) {
-		if (!req->pri[pri].valid)
-			continue;
-
-		pri_tc_mask |= (req->pri[pri].tc_id <<
-				(pri * NIG_PRIORITY_MAP_TC_BITS));
-		tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
-	}
-
-	/* Write priority -> TC mask */
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_PKT_PRIORITY_TO_TC, pri_tc_mask);
-
-	/* Write TC -> priority mask */
-	for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4,
-			 tc_pri_mask[tc]);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_TC0_PRIORITY_MASK + tc * 4,
-			 tc_pri_mask[tc]);
-	}
-}
-
-#endif /* UNUSED_HSI_FUNC */
-
-#ifndef UNUSED_HSI_FUNC
-
 /* PRS: ETS configuration constants */
 #define PRS_ETS_MIN_WFQ_BYTES		1600
 #define PRS_ETS_UP_BOUND(weight, mtu) \
 	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
 
-
-void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, struct init_ets_req *req)
-{
-	u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff;
-	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-
-	tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 -
-			      PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
-	tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
-			     PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
-
-	for (tc = 0; tc < NUM_OF_TCS; tc++) {
-		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-
-		/* Update SP map */
-		if (tc_req->use_sp)
-			sp_tc_map |= (1 << tc);
-
-		if (!tc_req->use_wfq)
-			continue;
-
-		/* Update WFQ map */
-		wfq_tc_map |= (1 << tc);
-
-		/* Find minimal weight */
-		if (tc_req->weight < min_weight)
-			min_weight = tc_req->weight;
-	}
-
-	/* write SP map */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map);
-
-	/* write WFQ map */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ,
-		 wfq_tc_map);
-
-	/* write WFQ weights */
-	for (tc = 0; tc < NUM_OF_TCS; tc++) {
-		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-		u32 byte_weight;
-
-		if (!tc_req->use_wfq)
-			continue;
-
-		/* Translate weight to bytes */
-		byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			      min_weight;
-
-		/* Write WFQ weight */
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc *
-			 tc_weight_addr_diff, byte_weight);
-
-		/* Write WFQ upper bound */
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
-			 tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight,
-								   req->mtu));
-	}
-}
-
-#endif /* UNUSED_HSI_FUNC */
-#ifndef UNUSED_HSI_FUNC
-
-/* BRB: RAM configuration constants */
-#define BRB_TOTAL_RAM_BLOCKS_BB	4800
-#define BRB_TOTAL_RAM_BLOCKS_K2	5632
-#define BRB_BLOCK_SIZE		128
-#define BRB_MIN_BLOCKS_PER_TC	9
-#define BRB_HYST_BYTES		10240
-#define BRB_HYST_BLOCKS		(BRB_HYST_BYTES / BRB_BLOCK_SIZE)
-
-/* Temporary big RAM allocation - should be updated */
-void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, struct init_brb_ram_req *req)
-{
-	u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks;
-	u32 active_port_blocks, reg_offset = 0;
-	u8 port, active_ports = 0;
-
-	tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc,
-					       BRB_BLOCK_SIZE);
-	min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size,
-						BRB_BLOCK_SIZE);
-	total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
-						    BRB_TOTAL_RAM_BLOCKS_BB;
-
-	/* Find number of active ports */
-	for (port = 0; port < MAX_NUM_PORTS; port++)
-		if (req->num_active_tcs[port])
-			active_ports++;
-
-	active_port_blocks = (u32)(total_blocks / active_ports);
-
-	for (port = 0; port < req->max_ports_per_engine; port++) {
-		u32 port_blocks, port_shared_blocks, port_guaranteed_blocks;
-		u32 full_xoff_th, full_xon_th, pause_xoff_th, pause_xon_th;
-		u32 tc_guaranteed_blocks;
-		u8 tc;
-
-		/* Calculate per-port sizes */
-		tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc,
-							 BRB_BLOCK_SIZE);
-		port_blocks = req->num_active_tcs[port] ? active_port_blocks :
-							  0;
-		port_guaranteed_blocks = req->num_active_tcs[port] *
-					 tc_guaranteed_blocks;
-		port_shared_blocks = port_blocks - port_guaranteed_blocks;
-		full_xoff_th = req->num_active_tcs[port] *
-			       BRB_MIN_BLOCKS_PER_TC;
-		full_xon_th = full_xoff_th + min_pkt_size_blocks;
-		pause_xoff_th = tc_headroom_blocks;
-		pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
-
-		/* Init total size per port */
-		ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4,
-			 port_blocks);
-
-		/* Init shared size per port */
-		ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4,
-			 port_shared_blocks);
-
-		for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) {
-			/* Clear init values for non-active TCs */
-			if (tc == req->num_active_tcs[port]) {
-				tc_guaranteed_blocks = 0;
-				full_xoff_th = 0;
-				full_xon_th = 0;
-				pause_xoff_th = 0;
-				pause_xon_th = 0;
-			}
-
-			/* Init guaranteed size per TC */
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_TC_GUARANTIED_0 + reg_offset,
-				 tc_guaranteed_blocks);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset,
-				 BRB_HYST_BLOCKS);
-
-			/* Init pause/full thresholds per physical TC - for
-			 * loopback traffic.
-			 */
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
-				 reg_offset, full_xoff_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_LB_TC_FULL_XON_THRESHOLD_0 +
-				 reg_offset, full_xon_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_LB_TC_PAUSE_XOFF_THRESHOLD_0 +
-				 reg_offset, pause_xoff_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
-				 reg_offset, pause_xon_th);
-
-			/* Init pause/full thresholds per physical TC - for
-			 * main traffic.
-			 */
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
-				 reg_offset, full_xoff_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_FULL_XON_THRESHOLD_0 +
-				 reg_offset, full_xon_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_PAUSE_XOFF_THRESHOLD_0 +
-				 reg_offset, pause_xoff_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_PAUSE_XON_THRESHOLD_0 +
-				 reg_offset, pause_xon_th);
-		}
-	}
-}
-
-#endif /* UNUSED_HSI_FUNC */
-#ifndef UNUSED_HSI_FUNC
-
 #define ARR_REG_WR(dev, ptt, addr, arr, arr_size)		\
 	do {							\
 		u32 i;						\
@@ -1423,7 +1022,6 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 #define DWORDS_TO_BYTES(dwords)		((dwords) * REG_SIZE)
 #endif
 
-
 /**
  * @brief ecore_dmae_to_grc - is an internal function - writes from host to
  * wide-bus registers (split registers are not supported yet)
@@ -1467,13 +1065,6 @@ static int ecore_dmae_to_grc(struct ecore_hwfn *p_hwfn,
 	return len_in_dwords;
 }
 
-/* In MF, should be called once per port to set EtherType of OuterTag */
-void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, u32 ethType)
-{
-	/* Update DORQ register */
-	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
-}
-
 #endif /* UNUSED_HSI_FUNC */
 
 #define SET_TUNNEL_TYPE_ENABLE_BIT(var, offset, enable) \
@@ -1627,33 +1218,6 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 #define PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET      3
 #define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT   -925189872
 
-void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  bool enable)
-{
-	u32 reg_val, cfg_mask;
-
-	/* read PRS config register */
-	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_MSG_INFO);
-
-	/* set VXLAN_NO_L2_ENABLE mask */
-	cfg_mask = (1 << PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET);
-
-	if (enable) {
-		/* set VXLAN_NO_L2_ENABLE flag */
-		reg_val |= cfg_mask;
-
-		/* update PRS FIC Format register */
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
-		 (u32)PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT);
-		/* clear VXLAN_NO_L2_ENABLE flag */
-		reg_val &= ~cfg_mask;
-	}
-
-	/* write PRS config register */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_MSG_INFO, reg_val);
-}
-
 #ifndef UNUSED_HSI_FUNC
 
 #define T_ETH_PACKET_ACTION_GFT_EVENTID  23
@@ -1686,21 +1250,6 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
 
 }
 
-
-void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
-				   struct ecore_ptt *p_ptt)
-{
-	u32 rfs_cm_hdr_event_id;
-
-	/* Set RFS event ID to be awakened in Tstorm by PRS */
-	rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
-	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
-	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
-	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
-	    PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
-}
-
 void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       u16 pf_id,
@@ -1825,76 +1374,6 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 
 #endif /* UNUSED_HSI_FUNC */
 
-/* Configure VF zone size mode */
-void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
-				    struct ecore_ptt *p_ptt, u16 mode,
-				    bool runtime_init)
-{
-	u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
-	u32 msdm_vf_offset_mask;
-
-	if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
-		msdm_vf_size_log += 1;
-	else if (mode == VF_ZONE_SIZE_MODE_QUAD)
-		msdm_vf_size_log += 2;
-
-	msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;
-
-	if (runtime_init) {
-		STORE_RT_REG(p_hwfn,
-			     PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
-			     msdm_vf_size_log);
-		STORE_RT_REG(p_hwfn,
-			     PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET,
-			     msdm_vf_offset_mask);
-	} else {
-		ecore_wr(p_hwfn, p_ptt,
-			 PGLUE_B_REG_MSDM_VF_SHIFT_B, msdm_vf_size_log);
-		ecore_wr(p_hwfn, p_ptt,
-			 PGLUE_B_REG_MSDM_OFFSET_MASK_B, msdm_vf_offset_mask);
-	}
-}
-
-/* Get mstorm statistics offset by VF zone size mode */
-u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
-				       u16 stat_cnt_id,
-				       u16 vf_zone_size_mode)
-{
-	u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id);
-
-	if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) &&
-	    (stat_cnt_id > MAX_NUM_PFS)) {
-		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
-			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
-			    (stat_cnt_id - MAX_NUM_PFS);
-		else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD)
-			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
-			    (stat_cnt_id - MAX_NUM_PFS);
-	}
-
-	return offset;
-}
-
-/* Get mstorm VF producer offset by VF zone size mode */
-u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
-					 u8 vf_id,
-					 u8 vf_queue_id,
-					 u16 vf_zone_size_mode)
-{
-	u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id);
-
-	if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
-		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
-			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
-				   vf_id;
-		else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD)
-			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
-				  vf_id;
-	}
-
-	return offset;
-}
-
 #ifndef LINUX_REMOVE
 #define CRC8_INIT_VALUE 0xFF
 #endif
@@ -1964,101 +1443,6 @@ static u8 ecore_calc_cdu_validation_byte(struct ecore_hwfn *p_hwfn,
 	return validation_byte;
 }
 
-/* Calculate and set validation bytes for session context */
-void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
-				       void *p_ctx_mem, u16 ctx_size,
-				       u8 ctx_type, u32 cid)
-{
-	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
-
-	p_ctx = (u8 *)p_ctx_mem;
-
-	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
-	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
-	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
-
-	OSAL_MEMSET(p_ctx, 0, ctx_size);
-
-	*x_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 3, cid);
-	*t_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 4, cid);
-	*u_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 5, cid);
-}
-
-/* Calculate and set validation bytes for task context */
-void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-				    u16 ctx_size, u8 ctx_type, u32 tid)
-{
-	u8 *p_ctx, *region1_val_ptr;
-
-	p_ctx = (u8 *)p_ctx_mem;
-	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
-
-	OSAL_MEMSET(p_ctx, 0, ctx_size);
-
-	*region1_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 1,
-							  tid);
-}
-
-/* Memset session context to 0 while preserving validation bytes */
-void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-			      u32 ctx_size, u8 ctx_type)
-{
-	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
-	u8 x_val, t_val, u_val;
-
-	p_ctx = (u8 *)p_ctx_mem;
-
-	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
-	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
-	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
-
-	x_val = *x_val_ptr;
-	t_val = *t_val_ptr;
-	u_val = *u_val_ptr;
-
-	OSAL_MEMSET(p_ctx, 0, ctx_size);
-
-	*x_val_ptr = x_val;
-	*t_val_ptr = t_val;
-	*u_val_ptr = u_val;
-}
-
-/* Memset task context to 0 while preserving validation bytes */
-void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-			   u32 ctx_size, u8 ctx_type)
-{
-	u8 *p_ctx, *region1_val_ptr;
-	u8 region1_val;
-
-	p_ctx = (u8 *)p_ctx_mem;
-	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
-
-	region1_val = *region1_val_ptr;
-
-	OSAL_MEMSET(p_ctx, 0, ctx_size);
-
-	*region1_val_ptr = region1_val;
-}
-
-/* Enable and configure context validation */
-void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
-				     struct ecore_ptt *p_ptt)
-{
-	u32 ctx_validation;
-
-	/* Enable validation for connection region 3: CCFC_CTX_VALID0[31:24] */
-	ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 24;
-	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation);
-
-	/* Enable validation for connection region 5: CCFC_CTX_VALID1[15:8] */
-	ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 8;
-	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation);
-
-	/* Enable validation for connection region 1: TCFC_CTX_VALID0[15:8] */
-	ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 8;
-	ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
-}
-
 #define PHYS_ADDR_DWORDS        DIV_ROUND_UP(sizeof(dma_addr_t), 4)
 #define OVERLAY_HDR_SIZE_DWORDS (sizeof(struct fw_overlay_buf_hdr) / 4)
 
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index a393d088fe..54d169ed86 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -176,24 +176,6 @@ int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
 			 u16 rl_id,
 			 u32 rate_limit);
 
-/**
- * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
- * VPORT.
- *
- * @param p_hwfn -	       HW device data
- * @param p_ptt -	       ptt window used for writing the registers
- * @param vport_id -   VPORT ID
- * @param vport_rl -   rate limit in Mb/sec units
- * @param link_speed - link speed in Mbps.
- *
- * @return 0 on success, -1 on error.
- */
-int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						u8 vport_id,
-						u32 vport_rl,
-						u32 link_speed);
-
 /**
  * @brief ecore_send_qm_stop_cmd  Sends a stop command to the QM
  *
@@ -213,100 +195,6 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 							bool is_tx_pq,
 							u16 start_pq,
 							u16 num_pqs);
-#ifndef UNUSED_HSI_FUNC
-
-/**
- * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
- *
- * Based on weight/priority requirements per-TC.
- *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- the NIG ETS initialization requirements.
- * @param is_lb	- if set, the loopback port arbiter is initialized, otherwise
- *		  the physical port arbiter is initialized. The pure-LB TC
- *		  requirements are ignored when is_lb is cleared.
- */
-void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						struct init_ets_req *req,
-						bool is_lb);
-
-/**
- * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
- *
- * Based on global and per-TC rate requirements
- *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- the NIG LB RLs initialization requirements.
- */
-void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  struct init_nig_lb_rl_req *req);
-#endif /* UNUSED_HSI_FUNC */
-
-/**
- * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
- *
- * Assumes valid arguments.
- *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- required mapping from priorities to TCs.
- */
-void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   struct init_nig_pri_tc_map_req *req);
-
-#ifndef UNUSED_HSI_FUNC
-/**
- * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
- *
- * Based on weight/priority requirements per-TC.
- *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- the PRS ETS initialization requirements.
- */
-void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						struct init_ets_req *req);
-#endif /* UNUSED_HSI_FUNC */
-
-#ifndef UNUSED_HSI_FUNC
-/**
- * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC
- *
- * Based on weight/priority requirements per-TC.
- *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- the BRB RAM initialization requirements.
- */
-void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						struct init_brb_ram_req *req);
-#endif /* UNUSED_HSI_FUNC */
-
-/**
- * @brief ecore_set_vxlan_no_l2_enable - enable or disable VXLAN no L2 parsing
- *
- * @param p_ptt             - ptt window used for writing the registers.
- * @param enable            - VXLAN no L2 enable flag.
- */
-void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  bool enable);
-
-#ifndef UNUSED_HSI_FUNC
-/**
- * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to
- *                                           the input ethType. Should be
- *                                           called once per port.
- *
- * @param p_hwfn -	    HW device data
- * @param ethType - etherType to configure
- */
-void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
-				      u32 ethType);
-#endif /* UNUSED_HSI_FUNC */
-
 /**
  * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
  * port.
@@ -369,14 +257,6 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     bool ip_geneve_enable);
 #ifndef UNUSED_HSI_FUNC
 
-/**
-* @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header
-*
-* @param p_ptt          - ptt window used for writing the registers.
-*/
-void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
-				   struct ecore_ptt *p_ptt);
-
 /**
  * @brief ecore_gft_disable - Disable GFT
  *
@@ -410,113 +290,6 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 	enum gft_profile_type profile_type);
 #endif /* UNUSED_HSI_FUNC */
 
-/**
-* @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
-*                                         used before first ETH queue started.
-*
- * @param p_hwfn -      HW device data
-* @param p_ptt        -  ptt window used for writing the registers. Don't care
- *           if runtime_init used.
-* @param mode         -  VF zone size mode. Use enum vf_zone_size_mode.
- * @param runtime_init - Set 1 to init runtime registers in engine phase.
- *           Set 0 if VF zone size mode configured after engine
- *           phase.
-*/
-void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
-				    *p_ptt, u16 mode, bool runtime_init);
-
-/**
- * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
- * VF zone size mode.
-*
- * @param p_hwfn -         HW device data
-* @param stat_cnt_id         -  statistic counter id
-* @param vf_zone_size_mode   -  VF zone size mode. Use enum vf_zone_size_mode.
-*/
-u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
-				       u16 stat_cnt_id, u16 vf_zone_size_mode);
-
-/**
- * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
- * size mode.
-*
- * @param p_hwfn -           HW device data
-* @param vf_id               -  vf id.
-* @param vf_queue_id         -  per VF rx queue id.
-* @param vf_zone_size_mode   -  vf zone size mode. Use enum vf_zone_size_mode.
-*/
-u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
-					 vf_queue_id, u16 vf_zone_size_mode);
-/**
- * @brief ecore_enable_context_validation - Enable and configure context
- *                                          validation.
- *
- * @param p_hwfn -   HW device data
- * @param p_ptt - ptt window used for writing the registers.
- */
-void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
-				     struct ecore_ptt *p_ptt);
-/**
- * @brief ecore_calc_session_ctx_validation - Calculate validation byte for
- * session context.
- *
- * @param p_hwfn -		HW device data
- * @param p_ctx_mem -	pointer to context memory.
- * @param ctx_size -	context size.
- * @param ctx_type -	context type.
- * @param cid -		context cid.
- */
-void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
-				       void *p_ctx_mem,
-				       u16 ctx_size,
-				       u8 ctx_type,
-				       u32 cid);
-
-/**
- * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
- * context.
- *
- * @param p_hwfn -		HW device data
- * @param p_ctx_mem -	pointer to context memory.
- * @param ctx_size -	context size.
- * @param ctx_type -	context type.
- * @param tid -		    context tid.
- */
-void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn,
-				    void *p_ctx_mem,
-				    u16 ctx_size,
-				    u8 ctx_type,
-				    u32 tid);
-
-/**
- * @brief ecore_memset_session_ctx - Memset session context to 0 while
- * preserving validation bytes.
- *
- * @param p_hwfn -		  HW device data
- * @param p_ctx_mem - pointer to context memory.
- * @param ctx_size -  size to initialize.
- * @param ctx_type -  context type.
- */
-void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn,
-			      void *p_ctx_mem,
-			      u32 ctx_size,
-			      u8 ctx_type);
-
-/**
- * @brief ecore_memset_task_ctx - Memset task context to 0 while preserving
- * validation bytes.
- *
- * @param p_hwfn -		HW device data
- * @param p_ctx_mem - pointer to context memory.
- * @param ctx_size -  size to initialize.
- * @param ctx_type -  context type.
- */
-void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn,
-			   void *p_ctx_mem,
-			   u32 ctx_size,
-			   u8 ctx_type);
-
-
 /*******************************************************************************
  * File name : rdma_init.h
  * Author    : Michael Shteinbok
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index 4207b1853e..13464d060a 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -1565,16 +1565,6 @@ static void _ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-void ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt,
-			   struct ecore_sb_info *p_sb, u32 pi_index,
-			   enum ecore_coalescing_fsm coalescing_fsm,
-			   u8 timeset)
-{
-	_ecore_int_cau_conf_pi(p_hwfn, p_ptt, p_sb->igu_sb_id,
-			       pi_index, coalescing_fsm, timeset);
-}
-
 void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt,
 			   dma_addr_t sb_phys, u16 igu_sb_id,
@@ -1793,42 +1783,6 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn,
-					  struct ecore_sb_info *sb_info,
-					  u16 sb_id)
-{
-	struct ecore_igu_info *p_info;
-	struct ecore_igu_block *p_block;
-
-	if (sb_info == OSAL_NULL)
-		return ECORE_SUCCESS;
-
-	/* zero status block and ack counter */
-	sb_info->sb_ack = 0;
-	OSAL_MEMSET(sb_info->sb_virt, 0, sb_info->sb_size);
-
-	if (IS_VF(p_hwfn->p_dev)) {
-		ecore_vf_set_sb_info(p_hwfn, sb_id, OSAL_NULL);
-		return ECORE_SUCCESS;
-	}
-
-	p_info = p_hwfn->hw_info.p_igu_info;
-	p_block = &p_info->entry[sb_info->igu_sb_id];
-
-	/* Vector 0 is reserved to Default SB */
-	if (p_block->vector_number == 0) {
-		DP_ERR(p_hwfn, "Do Not free sp sb using this function");
-		return ECORE_INVAL;
-	}
-
-	/* Lose reference to client's SB info, and fix counters */
-	p_block->sb_info = OSAL_NULL;
-	p_block->status |= ECORE_IGU_STATUS_FREE;
-	p_info->usage.free_cnt++;
-
-	return ECORE_SUCCESS;
-}
-
 static void ecore_int_sp_sb_free(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_sb_sp_info *p_sb = p_hwfn->p_sp_sb;
@@ -1905,18 +1859,6 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, u8 pi)
-{
-	struct ecore_sb_sp_info *p_sp_sb = p_hwfn->p_sp_sb;
-
-	if (p_sp_sb->pi_info_arr[pi].comp_cb == OSAL_NULL)
-		return ECORE_NOMEM;
-
-	p_sp_sb->pi_info_arr[pi].comp_cb = OSAL_NULL;
-	p_sp_sb->pi_info_arr[pi].cookie = OSAL_NULL;
-	return ECORE_SUCCESS;
-}
-
 u16 ecore_int_get_sp_sb_id(struct ecore_hwfn *p_hwfn)
 {
 	return p_hwfn->p_sp_sb->sb_info.igu_sb_id;
@@ -2429,133 +2371,6 @@ enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t
-ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			  u16 sb_id, bool b_to_vf)
-{
-	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
-	struct ecore_igu_block *p_block = OSAL_NULL;
-	u16 igu_sb_id = 0, vf_num = 0;
-	u32 val = 0;
-
-	if (IS_VF(p_hwfn->p_dev) || !IS_PF_SRIOV(p_hwfn))
-		return ECORE_INVAL;
-
-	if (sb_id == ECORE_SP_SB_ID)
-		return ECORE_INVAL;
-
-	if (!p_info->b_allow_pf_vf_change) {
-		DP_INFO(p_hwfn, "Can't relocate SBs as MFW is too old.\n");
-		return ECORE_INVAL;
-	}
-
-	/* If we're moving a SB from PF to VF, the client had to specify
-	 * which vector it wants to move.
-	 */
-	if (b_to_vf) {
-		igu_sb_id = ecore_get_pf_igu_sb_id(p_hwfn, sb_id + 1);
-		if (igu_sb_id == ECORE_SB_INVALID_IDX)
-			return ECORE_INVAL;
-	}
-
-	/* If we're moving a SB from VF to PF, need to validate there isn't
-	 * already a line configured for that vector.
-	 */
-	if (!b_to_vf) {
-		if (ecore_get_pf_igu_sb_id(p_hwfn, sb_id + 1) !=
-		    ECORE_SB_INVALID_IDX)
-			return ECORE_INVAL;
-	}
-
-	/* We need to validate that the SB can actually be relocated.
-	 * This would also handle the previous case where we've explicitly
-	 * stated which IGU SB needs to move.
-	 */
-	for (; igu_sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
-	     igu_sb_id++) {
-		p_block = &p_info->entry[igu_sb_id];
-
-		if (!(p_block->status & ECORE_IGU_STATUS_VALID) ||
-		    !(p_block->status & ECORE_IGU_STATUS_FREE) ||
-		    (!!(p_block->status & ECORE_IGU_STATUS_PF) != b_to_vf)) {
-			if (b_to_vf)
-				return ECORE_INVAL;
-			else
-				continue;
-		}
-
-		break;
-	}
-
-	if (igu_sb_id == ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev)) {
-		DP_VERBOSE(p_hwfn, (ECORE_MSG_INTR | ECORE_MSG_IOV),
-			   "Failed to find a free SB to move\n");
-		return ECORE_INVAL;
-	}
-
-	/* At this point, p_block points to the SB we want to relocate */
-	if (b_to_vf) {
-		p_block->status &= ~ECORE_IGU_STATUS_PF;
-
-		/* It doesn't matter which VF number we choose, since we're
-		 * going to disable the line; But let's keep it in range.
-		 */
-		vf_num = (u16)p_hwfn->p_dev->p_iov_info->first_vf_in_pf;
-
-		p_block->function_id = (u8)vf_num;
-		p_block->is_pf = 0;
-		p_block->vector_number = 0;
-
-		p_info->usage.cnt--;
-		p_info->usage.free_cnt--;
-		p_info->usage.iov_cnt++;
-		p_info->usage.free_cnt_iov++;
-
-		/* TODO - if SBs aren't really the limiting factor,
-		 * then it might not be accurate [in the sense that
-		 * we might not need to decrement the feature].
-		 */
-		p_hwfn->hw_info.feat_num[ECORE_PF_L2_QUE]--;
-		p_hwfn->hw_info.feat_num[ECORE_VF_L2_QUE]++;
-	} else {
-		p_block->status |= ECORE_IGU_STATUS_PF;
-		p_block->function_id = p_hwfn->rel_pf_id;
-		p_block->is_pf = 1;
-		p_block->vector_number = sb_id + 1;
-
-		p_info->usage.cnt++;
-		p_info->usage.free_cnt++;
-		p_info->usage.iov_cnt--;
-		p_info->usage.free_cnt_iov--;
-
-		p_hwfn->hw_info.feat_num[ECORE_PF_L2_QUE]++;
-		p_hwfn->hw_info.feat_num[ECORE_VF_L2_QUE]--;
-	}
-
-	/* Update the IGU and CAU with the new configuration */
-	SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER,
-		  p_block->function_id);
-	SET_FIELD(val, IGU_MAPPING_LINE_PF_VALID, p_block->is_pf);
-	SET_FIELD(val, IGU_MAPPING_LINE_VALID, p_block->is_pf);
-	SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER,
-		  p_block->vector_number);
-
-	ecore_wr(p_hwfn, p_ptt,
-		 IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_sb_id,
-		 val);
-
-	ecore_int_cau_conf_sb(p_hwfn, p_ptt, 0,
-			      igu_sb_id, vf_num,
-			      p_block->is_pf ? 0 : 1);
-
-	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
-		   "Relocation: [SB 0x%04x] func_id = %d is_pf = %d vector_num = 0x%x\n",
-		   igu_sb_id, p_block->function_id,
-		   p_block->is_pf, p_block->vector_number);
-
-	return ECORE_SUCCESS;
-}
-
 /**
  * @brief Initialize igu runtime registers
  *
@@ -2661,14 +2476,6 @@ void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
 		    sizeof(*p_sb_cnt_info));
 }
 
-void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
-{
-	int i;
-
-	for_each_hwfn(p_dev, i)
-		p_dev->hwfns[i].b_int_requested = false;
-}
-
 void ecore_int_attn_clr_enable(struct ecore_dev *p_dev, bool clr_enable)
 {
 	p_dev->attn_clr_en = clr_enable;
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 5042cd1d18..83ab4c9a97 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -136,19 +136,6 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
 					   ecore_int_comp_cb_t comp_cb,
 					   void *cookie,
 					   u8 *sb_idx, __le16 **p_fw_cons);
-/**
- * @brief ecore_int_unregister_cb - Unregisters callback
- *      function from sp sb.
- *      Partner of ecore_int_register_cb -> should be called
- *      when no longer required.
- *
- * @param p_hwfn
- * @param pi
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, u8 pi);
-
 /**
  * @brief ecore_int_get_sp_sb_id - Get the slowhwfn sb id.
  *
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index d7b6b86cc1..3c9ad653bb 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -177,24 +177,6 @@ enum ecore_coalescing_fsm {
 	ECORE_COAL_TX_STATE_MACHINE
 };
 
-/**
- * @brief ecore_int_cau_conf_pi - configure cau for a given
- *        status block
- *
- * @param p_hwfn
- * @param p_ptt
- * @param p_sb
- * @param pi_index
- * @param state
- * @param timeset
- */
-void ecore_int_cau_conf_pi(struct ecore_hwfn		*p_hwfn,
-			   struct ecore_ptt		*p_ptt,
-			   struct ecore_sb_info		*p_sb,
-			   u32				pi_index,
-			   enum ecore_coalescing_fsm	coalescing_fsm,
-			   u8				timeset);
-
 /**
  *
  * @brief ecore_int_igu_enable_int - enable device interrupts
@@ -261,23 +243,6 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt, struct ecore_sb_info *sb_info);
 
-/**
- * @brief ecore_int_sb_release - releases the sb_info structure.
- *
- * once the structure is released, its memory can be freed
- *
- * @param p_hwfn
- * @param sb_info	points to an allocated sb_info structure
- * @param sb_id		the sb_id to be used (zero based in driver)
- *			should never be equal to ECORE_SP_SB_ID
- *			(SP Status block)
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn,
-					  struct ecore_sb_info *sb_info,
-					  u16 sb_id);
-
 /**
  * @brief ecore_int_sp_dpc - To be called when an interrupt is received on the
  *        default status block.
@@ -299,16 +264,6 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie);
 void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
 			   struct ecore_sb_cnt_info *p_sb_cnt_info);
 
-/**
- * @brief ecore_int_disable_post_isr_release - performs the cleanup post ISR
- *        release. The API need to be called after releasing all slowpath IRQs
- *        of the device.
- *
- * @param p_dev
- *
- */
-void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev);
-
 /**
  * @brief ecore_int_attn_clr_enable - sets whether the general behavior is
  *        preventing attentions from being reasserted, or following the
@@ -335,21 +290,6 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 					  struct ecore_sb_info *p_sb,
 					  struct ecore_sb_info_dbg *p_info);
 
-/**
- * @brief - Move a free Status block between PF and child VF
- *
- * @param p_hwfn
- * @param p_ptt
- * @param sb_id - The PF fastpath vector to be moved [re-assigned if claiming
- *                from VF, given-up if moving to VF]
- * @param b_to_vf - PF->VF == true, VF->PF == false
- *
- * @return ECORE_SUCCESS if SB successfully moved.
- */
-enum _ecore_status_t
-ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			  u16 sb_id, bool b_to_vf);
-
 /**
  * @brief - Doorbell Recovery handler.
  *          Run DB_REAL_DEAL doorbell recovery in case of PF overflow
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index bd7c5703f6..e0e39d309a 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -119,39 +119,6 @@ struct ecore_iov_vf_init_params {
 	u8 rss_eng_id;
 };
 
-#ifdef CONFIG_ECORE_SW_CHANNEL
-/* This is SW channel related only... */
-enum mbx_state {
-	VF_PF_UNKNOWN_STATE			= 0,
-	VF_PF_WAIT_FOR_START_REQUEST		= 1,
-	VF_PF_WAIT_FOR_NEXT_CHUNK_OF_REQUEST	= 2,
-	VF_PF_REQUEST_IN_PROCESSING		= 3,
-	VF_PF_RESPONSE_READY			= 4,
-};
-
-struct ecore_iov_sw_mbx {
-	enum mbx_state		mbx_state;
-
-	u32			request_size;
-	u32			request_offset;
-
-	u32			response_size;
-	u32			response_offset;
-};
-
-/**
- * @brief Get the vf sw mailbox params
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return struct ecore_iov_sw_mbx*
- */
-struct ecore_iov_sw_mbx*
-ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
-			u16 rel_vf_id);
-#endif
-
 /* This struct is part of ecore_dev and contains data relevant to all hwfns;
  * Initialized only if SR-IOV cpabability is exposed in PCIe config space.
  */
@@ -176,16 +143,6 @@ struct ecore_hw_sriov_info {
 
 #ifdef CONFIG_ECORE_SRIOV
 #ifndef LINUX_REMOVE
-/**
- * @brief mark/clear all VFs before/after an incoming PCIe sriov
- *        disable.
- *
- * @param p_dev
- * @param to_disable
- */
-void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev,
-				  u8 to_disable);
-
 /**
  * @brief mark/clear chosen VF before/after an incoming PCIe
  *        sriov disable.
@@ -227,35 +184,6 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       int vfid);
 
-/**
- * @brief ecore_iov_release_hw_for_vf - called once upper layer
- *        knows VF is done with - can release any resources
- *        allocated for VF at this point. this must be done once
- *        we know VF is no longer loaded in VM.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param rel_vf_id
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
-						 struct ecore_ptt *p_ptt,
-						 u16 rel_vf_id);
-
-/**
- * @brief ecore_iov_set_vf_ctx - set a context for a given VF
- *
- * @param p_hwfn
- * @param vf_id
- * @param ctx
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
-					  u16 vf_id,
-					  void *ctx);
-
 /**
  * @brief FLR cleanup for all VFs
  *
@@ -267,20 +195,6 @@ enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt);
 
-/**
- * @brief FLR cleanup for single VF
- *
- * @param p_hwfn
- * @param p_ptt
- * @param rel_vf_id
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
-				struct ecore_ptt *p_ptt,
-				u16 rel_vf_id);
-
 /**
  * @brief Update the bulletin with link information. Notice this does NOT
  *        send a bulletin update, only updates the PF's bulletin.
@@ -297,32 +211,6 @@ void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
 			struct ecore_mcp_link_state *link,
 			struct ecore_mcp_link_capabilities *p_caps);
 
-/**
- * @brief Returns link information as perceived by VF.
- *
- * @param p_hwfn
- * @param p_vf
- * @param p_params - the link params visible to vf.
- * @param p_link - the link state visible to vf.
- * @param p_caps - the link default capabilities visible to vf.
- */
-void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
-			u16 vfid,
-			struct ecore_mcp_link_params *params,
-			struct ecore_mcp_link_state *link,
-			struct ecore_mcp_link_capabilities *p_caps);
-
-/**
- * @brief return if the VF is pending FLR
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return bool
- */
-bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn,
-				 u16 rel_vf_id);
-
 /**
  * @brief Check if given VF ID @vfid is valid
  *        w.r.t. @b_enabled_only value
@@ -340,19 +228,6 @@ bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn,
 			     int rel_vf_id,
 			     bool b_enabled_only, bool b_non_malicious);
 
-/**
- * @brief Get VF's public info structure
- *
- * @param p_hwfn
- * @param vfid - Relative VF ID
- * @param b_enabled_only - false if want to access even if vf is disabled
- *
- * @return struct ecore_public_vf_info *
- */
-struct ecore_public_vf_info*
-ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn,
-			     u16 vfid, bool b_enabled_only);
-
 /**
  * @brief fills a bitmask of all VFs which have pending unhandled
  *        messages.
@@ -374,65 +249,6 @@ void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *ptt,
 					   int vfid);
-/**
- * @brief Set forced MAC address in PFs copy of bulletin board
- *        and configures FW/HW to support the configuration.
- *
- * @param p_hwfn
- * @param mac
- * @param vfid
- */
-void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn,
-				       u8 *mac, int vfid);
-
-/**
- * @brief Set MAC address in PFs copy of bulletin board without
- *        configuring FW/HW.
- *
- * @param p_hwfn
- * @param mac
- * @param vfid
- */
-enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
-						u8 *mac, int vfid);
-
-/**
- * @brief Set default behaviour of VF in case no vlans are configured for it
- *        whether to accept only untagged traffic or all.
- *        Must be called prior to the VF vport-start.
- *
- * @param p_hwfn
- * @param b_untagged_only
- * @param vfid
- *
- * @return ECORE_SUCCESS if configuration would stick.
- */
-enum _ecore_status_t
-ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
-					       bool b_untagged_only,
-					       int vfid);
-
-/**
- * @brief Get VFs opaque fid.
- *
- * @param p_hwfn
- * @param vfid
- * @param opaque_fid
- */
-void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
-				  u16 *opaque_fid);
-
-/**
- * @brief Set forced VLAN [pvid] in PFs copy of bulletin board
- *        and configures FW/HW to support the configuration.
- *        Setting of pvid 0 would clear the feature.
- * @param p_hwfn
- * @param pvid
- * @param vfid
- */
-void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
-					u16 pvid, int vfid);
-
 /**
  * @brief Check if VF has VPORT instance. This can be used
  *	  to check if VPORT is active.
@@ -454,38 +270,6 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
 						int vfid,
 						struct ecore_ptt *p_ptt);
 
-/**
- * @brief Check if given VF (@vfid) is marked as stopped
- *
- * @param p_hwfn
- * @param vfid
- *
- * @return bool : true if stopped
- */
-bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid);
-
-/**
- * @brief Configure VF anti spoofing
- *
- * @param p_hwfn
- * @param vfid
- * @param val - spoofchk value - true/false
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
-					    int vfid, bool val);
-
-/**
- * @brief Get VF's configured spoof value.
- *
- * @param p_hwfn
- * @param vfid
- *
- * @return bool - spoofchk value - true/false
- */
-bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid);
-
 /**
  * @brief Check for SRIOV sanity by PF.
  *
@@ -496,248 +280,8 @@ bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid);
  */
 bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid);
 
-/**
- * @brief Get the num of VF chains.
- *
- * @param p_hwfn
- *
- * @return u8
- */
-u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn);
-
-/**
- * @brief Get vf request mailbox params
- *
- * @param p_hwfn
- * @param rel_vf_id
- * @param pp_req_virt_addr
- * @param p_req_virt_size
- */
-void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn,
-					  u16 rel_vf_id,
-					  void **pp_req_virt_addr,
-					  u16 *p_req_virt_size);
-
-/**
- * @brief Get vf mailbox params
- *
- * @param p_hwfn
- * @param rel_vf_id
- * @param pp_reply_virt_addr
- * @param p_reply_virt_size
- */
-void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn,
-					    u16	rel_vf_id,
-					    void **pp_reply_virt_addr,
-					    u16	*p_reply_virt_size);
-
-/**
- * @brief Validate if the given length is a valid vfpf message
- *        length
- *
- * @param length
- *
- * @return bool
- */
-bool ecore_iov_is_valid_vfpf_msg_length(u32 length);
-
-/**
- * @brief Return the max pfvf message length
- *
- * @return u32
- */
-u32 ecore_iov_pfvf_msg_length(void);
-
-/**
- * @brief Returns MAC address if one is configured
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return OSAL_NULL if mac isn't set; Otherwise, returns MAC.
- */
-u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn,
-			       u16 rel_vf_id);
-
-/**
- * @brief Returns forced MAC address if one is configured
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return OSAL_NULL if mac isn't forced; Otherwise, returns MAC.
- */
-u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn,
-				      u16 rel_vf_id);
-
-/**
- * @brief Returns pvid if one is configured
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return 0 if no pvid is configured, otherwise the pvid.
- */
-u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn,
-				       u16 rel_vf_id);
-/**
- * @brief Configure VFs tx rate
- *
- * @param p_hwfn
- * @param p_ptt
- * @param vfid
- * @param val - tx rate value in Mb/sec.
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
-						 struct ecore_ptt *p_ptt,
-						 int vfid, int val);
-
-/**
- * @brief - Retrieves the statistics associated with a VF
- *
- * @param p_hwfn
- * @param p_ptt
- * @param vfid
- * @param p_stats - this will be filled with the VF statistics
- *
- * @return ECORE_SUCCESS iff statistics were retrieved. Error otherwise.
- */
-enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    int vfid,
-					    struct ecore_eth_stats *p_stats);
-
-/**
- * @brief - Retrieves num of rxqs chains
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return num of rxqs chains.
- */
-u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn,
-			     u16 rel_vf_id);
-
-/**
- * @brief - Retrieves num of active rxqs chains
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn,
-				    u16 rel_vf_id);
-
-/**
- * @brief - Retrieves ctx pointer
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn,
-			   u16 rel_vf_id);
-
-/**
- * @brief - Retrieves VF`s num sbs
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn,
-			    u16 rel_vf_id);
-
-/**
- * @brief - Return true if VF is waiting for acquire
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn,
-				      u16 rel_vf_id);
-
-/**
- * @brief - Return true if VF is acquired but not initialized
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-bool ecore_iov_is_vf_acquired_not_initialized(struct ecore_hwfn *p_hwfn,
-					      u16 rel_vf_id);
-
-/**
- * @brief - Return true if VF is acquired and initialized
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn,
-				 u16 rel_vf_id);
-
-/**
- * @brief - Return true if VF has started in FW
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn,
-			     u16 rel_vf_id);
-
-/**
- * @brief - Get VF's vport min rate configured.
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return - rate in Mbps
- */
-int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid);
-
-/**
- * @brief - Configure min rate for VF's vport.
- * @param p_dev
- * @param vfid
- * @param - rate in Mbps
- *
- * @return
- */
-enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
-						     int vfid, u32 rate);
 #endif
 
-/**
- * @brief ecore_pf_configure_vf_queue_coalesce - PF configure coalesce
- *    parameters of VFs for Rx and Tx queue.
- *    While the API allows setting coalescing per-qid, all queues sharing a SB
- *    should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
- *    otherwise configuration would break.
- *
- * @param p_hwfn
- * @param rx_coal - Rx Coalesce value in micro seconds.
- * @param tx_coal - TX Coalesce value in micro seconds.
- * @param vf_id
- * @param qid
- *
- * @return int
- **/
-enum _ecore_status_t
-ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
-					 u16 rx_coal, u16 tx_coal,
-					 u16 vf_id, u16 qid);
-
 /**
  * @brief - Given a VF index, return index of next [including that] active VF.
  *
@@ -751,19 +295,6 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid,
 				      u16 vxlan_port, u16 geneve_port);
 
-#ifdef CONFIG_ECORE_SW_CHANNEL
-/**
- * @brief Set whether PF should communicate with VF using SW/HW channel
- *        Needs to be called for an enabled VF before acquire is over
- *        [latest good point for doing that is OSAL_IOV_VF_ACQUIRE()]
- *
- * @param p_hwfn
- * @param vfid - relative vf index
- * @param b_is_hw - true iff PF is to use HW channel for communication
- */
-void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid,
-				 bool b_is_hw);
-#endif
 #endif /* CONFIG_ECORE_SRIOV */
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index af234dec84..f6180bf450 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2281,108 +2281,5 @@ enum _ecore_status_t ecore_get_txq_coalesce(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t
-ecore_get_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 *p_coal,
-			 void *handle)
-{
-	struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)handle;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	struct ecore_ptt *p_ptt;
-
-	if (IS_VF(p_hwfn->p_dev)) {
-		rc = ecore_vf_pf_get_coalesce(p_hwfn, p_coal, p_cid);
-		if (rc != ECORE_SUCCESS)
-			DP_NOTICE(p_hwfn, false,
-				  "Unable to read queue calescing\n");
-
-		return rc;
-	}
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_AGAIN;
-
-	if (p_cid->b_is_rx) {
-		rc = ecore_get_rxq_coalesce(p_hwfn, p_ptt, p_cid, p_coal);
-		if (rc != ECORE_SUCCESS)
-			goto out;
-	} else {
-		rc = ecore_get_txq_coalesce(p_hwfn, p_ptt, p_cid, p_coal);
-		if (rc != ECORE_SUCCESS)
-			goto out;
-	}
-
-out:
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
-enum _ecore_status_t
-ecore_eth_tx_queue_maxrate(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt,
-			   struct ecore_queue_cid *p_cid, u32 rate)
-{
-	u16 rl_id;
-	u8 vport;
-
-	vport = (u8)ecore_get_qm_vport_idx_rl(p_hwfn, p_cid->rel.queue_id);
-
-	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-		   "About to rate limit qm vport %d for queue %d with rate %d\n",
-		   vport, p_cid->rel.queue_id, rate);
-
-	rl_id = vport; /* The "rl_id" is set as the "vport_id" */
-	return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, rate);
-}
-
 #define RSS_TSTORM_UPDATE_STATUS_MAX_POLL_COUNT    100
 #define RSS_TSTORM_UPDATE_STATUS_POLL_PERIOD_US    1
-
-enum _ecore_status_t
-ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
-				     u8 vport_id,
-				     u8 ind_table_index,
-				     u16 ind_table_value)
-{
-	struct eth_tstorm_rss_update_data update_data = { 0 };
-	void OSAL_IOMEM *addr = OSAL_NULL;
-	enum _ecore_status_t rc;
-	u8 abs_vport_id;
-	u32 cnt = 0;
-
-	OSAL_BUILD_BUG_ON(sizeof(update_data) != sizeof(u64));
-
-	rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	addr = (u8 *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
-	       TSTORM_ETH_RSS_UPDATE_OFFSET(p_hwfn->rel_pf_id);
-
-	*(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr);
-
-	for (cnt = 0; update_data.valid &&
-	     cnt < RSS_TSTORM_UPDATE_STATUS_MAX_POLL_COUNT; cnt++) {
-		OSAL_UDELAY(RSS_TSTORM_UPDATE_STATUS_POLL_PERIOD_US);
-		*(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr);
-	}
-
-	if (update_data.valid) {
-		DP_NOTICE(p_hwfn, true,
-			  "rss update valid status is not clear! valid=0x%x vport id=%d ind_Table_idx=%d ind_table_value=%d.\n",
-			  update_data.valid, vport_id, ind_table_index,
-			  ind_table_value);
-
-		return ECORE_AGAIN;
-	}
-
-	update_data.valid	    = 1;
-	update_data.ind_table_index = ind_table_index;
-	update_data.ind_table_value = ind_table_value;
-	update_data.vport_id	    = abs_vport_id;
-
-	DIRECT_REG_WR64(p_hwfn, addr, *(u64 *)(&update_data));
-
-	return ECORE_SUCCESS;
-}
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index bebf412edb..0f2baedc3e 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -490,28 +490,4 @@ enum _ecore_status_t
 ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
 				  struct ecore_spq_comp_cb *p_cb,
 				  struct ecore_ntuple_filter_params *p_params);
-
-/**
- * @brief - ecore_update_eth_rss_ind_table_entry
- *
- * This function being used to update RSS indirection table entry to FW RAM
- * instead of using the SP vport update ramrod with rss params.
- *
- * Notice:
- * This function supports only one outstanding command per engine. Ecore
- * clients which use this function should call ecore_mcp_ind_table_lock() prior
- * to it and ecore_mcp_ind_table_unlock() after it.
- *
- * @params p_hwfn
- * @params vport_id
- * @params ind_table_index
- * @params ind_table_value
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
-				     u8 vport_id,
-				     u8 ind_table_index,
-				     u16 ind_table_value);
 #endif
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index cab089d816..a4e4583ecd 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -342,58 +342,6 @@ static void ecore_mcp_reread_offsets(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
-				     struct ecore_ptt *p_ptt)
-{
-	u32 prev_generic_por_0, seq, delay = ECORE_MCP_RESP_ITER_US, cnt = 0;
-	u32 retries = ECORE_MCP_RESET_RETRIES;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
-		delay = ECORE_EMUL_MCP_RESP_ITER_US;
-		retries = ECORE_EMUL_MCP_RESET_RETRIES;
-	}
-#endif
-	if (p_hwfn->mcp_info->b_block_cmd) {
-		DP_NOTICE(p_hwfn, false,
-			  "The MFW is not responsive. Avoid sending MCP_RESET mailbox command.\n");
-		return ECORE_ABORTED;
-	}
-
-	/* Ensure that only a single thread is accessing the mailbox */
-	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
-
-	prev_generic_por_0 = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
-
-	/* Set drv command along with the updated sequence */
-	ecore_mcp_reread_offsets(p_hwfn, p_ptt);
-	seq = ++p_hwfn->mcp_info->drv_mb_seq;
-	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (DRV_MSG_CODE_MCP_RESET | seq));
-
-	/* Give the MFW up to 500 second (50*1000*10usec) to resume */
-	do {
-		OSAL_UDELAY(delay);
-
-		if (ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0) !=
-		    prev_generic_por_0)
-			break;
-	} while (cnt++ < retries);
-
-	if (ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0) !=
-	    prev_generic_por_0) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "MCP was reset after %d usec\n", cnt * delay);
-	} else {
-		DP_ERR(p_hwfn, "Failed to reset MCP\n");
-		rc = ECORE_AGAIN;
-	}
-
-	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
-
-	return rc;
-}
-
 #ifndef ASIC_ONLY
 static void ecore_emul_mcp_load_req(struct ecore_hwfn *p_hwfn,
 				    struct ecore_mcp_mb_params *p_mb_params)
@@ -1844,17 +1792,6 @@ enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
-enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt)
-{
-	struct ecore_mdump_cmd_params mdump_cmd_params;
-
-	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
-	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_TRIGGER;
-
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
-}
-
 static enum _ecore_status_t
 ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct mdump_config_stc *p_mdump_config)
@@ -1931,17 +1868,6 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt)
-{
-	struct ecore_mdump_cmd_params mdump_cmd_params;
-
-	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
-	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLEAR_LOGS;
-
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
-}
-
 enum _ecore_status_t
 ecore_mcp_mdump_get_retain(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			   struct ecore_mdump_retain_data *p_mdump_retain)
@@ -1974,17 +1900,6 @@ ecore_mcp_mdump_get_retain(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt)
-{
-	struct ecore_mdump_cmd_params mdump_cmd_params;
-
-	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
-	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLR_RETAIN;
-
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
-}
-
 static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt)
 {
@@ -2282,37 +2197,6 @@ int ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
 	return 0;
 }
 
-enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u32 *p_media_type)
-{
-	*p_media_type = MEDIA_UNSPECIFIED;
-
-	/* TODO - Add support for VFs */
-	if (IS_VF(p_hwfn->p_dev))
-		return ECORE_INVAL;
-
-	if (!ecore_mcp_is_init(p_hwfn)) {
-#ifndef ASIC_ONLY
-		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-			DP_INFO(p_hwfn, "Emulation: Can't get media type\n");
-			return ECORE_NOTIMPL;
-		}
-#endif
-		DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
-		return ECORE_BUSY;
-	}
-
-	if (!p_ptt)
-		return ECORE_INVAL;
-
-	*p_media_type = ecore_rd(p_hwfn, p_ptt,
-				 p_hwfn->mcp_info->port_addr +
-				 OFFSETOF(struct public_port, media_type));
-
-	return ECORE_SUCCESS;
-}
-
 enum _ecore_status_t ecore_mcp_get_transceiver_data(struct ecore_hwfn *p_hwfn,
 						    struct ecore_ptt *p_ptt,
 						    u32 *p_transceiver_state,
@@ -2361,156 +2245,6 @@ static int is_transceiver_ready(u32 transceiver_state, u32 transceiver_type)
 	return 0;
 }
 
-enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						u32 *p_speed_mask)
-{
-	u32 transceiver_type = ETH_TRANSCEIVER_TYPE_NONE, transceiver_state;
-
-	ecore_mcp_get_transceiver_data(p_hwfn, p_ptt, &transceiver_state,
-				       &transceiver_type);
-
-
-	if (is_transceiver_ready(transceiver_state, transceiver_type) == 0)
-		return ECORE_INVAL;
-
-	switch (transceiver_type) {
-	case ETH_TRANSCEIVER_TYPE_1G_LX:
-	case ETH_TRANSCEIVER_TYPE_1G_SX:
-	case ETH_TRANSCEIVER_TYPE_1G_PCC:
-	case ETH_TRANSCEIVER_TYPE_1G_ACC:
-	case ETH_TRANSCEIVER_TYPE_1000BASET:
-		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_10G_SR:
-	case ETH_TRANSCEIVER_TYPE_10G_LR:
-	case ETH_TRANSCEIVER_TYPE_10G_LRM:
-	case ETH_TRANSCEIVER_TYPE_10G_ER:
-	case ETH_TRANSCEIVER_TYPE_10G_PCC:
-	case ETH_TRANSCEIVER_TYPE_10G_ACC:
-	case ETH_TRANSCEIVER_TYPE_4x10G:
-		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_40G_LR4:
-	case ETH_TRANSCEIVER_TYPE_40G_SR4:
-	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_SR:
-	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_LR:
-		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G |
-		 NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_100G_AOC:
-	case ETH_TRANSCEIVER_TYPE_100G_SR4:
-	case ETH_TRANSCEIVER_TYPE_100G_LR4:
-	case ETH_TRANSCEIVER_TYPE_100G_ER4:
-	case ETH_TRANSCEIVER_TYPE_100G_ACC:
-		*p_speed_mask =
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_25G_SR:
-	case ETH_TRANSCEIVER_TYPE_25G_LR:
-	case ETH_TRANSCEIVER_TYPE_25G_AOC:
-	case ETH_TRANSCEIVER_TYPE_25G_ACC_S:
-	case ETH_TRANSCEIVER_TYPE_25G_ACC_M:
-	case ETH_TRANSCEIVER_TYPE_25G_ACC_L:
-		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_25G_CA_N:
-	case ETH_TRANSCEIVER_TYPE_25G_CA_S:
-	case ETH_TRANSCEIVER_TYPE_25G_CA_L:
-	case ETH_TRANSCEIVER_TYPE_4x25G_CR:
-		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_40G_CR4:
-	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_CR:
-		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_100G_CR4:
-	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_CR:
-		*p_speed_mask =
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_20G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_SR:
-	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_LR:
-	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_AOC:
-		*p_speed_mask =
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_XLPPI:
-		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G;
-		break;
-
-	case ETH_TRANSCEIVER_TYPE_10G_BASET:
-		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
-			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
-		break;
-
-	default:
-		DP_INFO(p_hwfn, "Unknown transcevier type 0x%x\n",
-			transceiver_type);
-		*p_speed_mask = 0xff;
-		break;
-	}
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_get_board_config(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						u32 *p_board_config)
-{
-	u32 nvm_cfg_addr, nvm_cfg1_offset, port_cfg_addr;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	/* TODO - Add support for VFs */
-	if (IS_VF(p_hwfn->p_dev))
-		return ECORE_INVAL;
-
-	if (!ecore_mcp_is_init(p_hwfn)) {
-		DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
-		return ECORE_BUSY;
-	}
-	if (!p_ptt) {
-		*p_board_config = NVM_CFG1_PORT_PORT_TYPE_UNDEFINED;
-		rc = ECORE_INVAL;
-	} else {
-		nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt,
-					MISC_REG_GEN_PURP_CR0);
-		nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt,
-					   nvm_cfg_addr + 4);
-		port_cfg_addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
-			offsetof(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]);
-		*p_board_config  =  ecore_rd(p_hwfn, p_ptt,
-					     port_cfg_addr +
-					     offsetof(struct nvm_cfg1_port,
-					     board_cfg));
-	}
-
-	return rc;
-}
-
 /* @DPDK */
 /* Old MFW has a global configuration for all PFs regarding RDMA support */
 static void
@@ -2670,41 +2404,6 @@ enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-const struct ecore_mcp_function_info
-*ecore_mcp_get_function_info(struct ecore_hwfn *p_hwfn)
-{
-	if (!p_hwfn || !p_hwfn->mcp_info)
-		return OSAL_NULL;
-	return &p_hwfn->mcp_info->func_info;
-}
-
-int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt, u32 personalities)
-{
-	enum ecore_pci_personality protocol = ECORE_PCI_DEFAULT;
-	struct public_func shmem_info;
-	int i, count = 0, num_pfs;
-
-	num_pfs = NUM_OF_ENG_PFS(p_hwfn->p_dev);
-
-	for (i = 0; i < num_pfs; i++) {
-		ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info,
-					 MCP_PF_ID_BY_REL(p_hwfn, i));
-		if (shmem_info.config & FUNC_MF_CFG_FUNC_HIDE)
-			continue;
-
-		if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
-					      &protocol) !=
-		    ECORE_SUCCESS)
-			continue;
-
-		if ((1 << ((u32)protocol)) & personalities)
-			count++;
-	}
-
-	return count;
-}
-
 enum _ecore_status_t ecore_mcp_get_flash_size(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
 					      u32 *p_flash_size)
@@ -2731,24 +2430,6 @@ enum _ecore_status_t ecore_mcp_get_flash_size(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
-						  struct ecore_ptt *p_ptt)
-{
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-
-	if (p_dev->recov_in_prog) {
-		DP_NOTICE(p_hwfn, false,
-			  "Avoid triggering a recovery since such a process"
-			  " is already in progress\n");
-		return ECORE_AGAIN;
-	}
-
-	DP_NOTICE(p_hwfn, false, "Triggering a recovery process\n");
-	ecore_wr(p_hwfn, p_ptt, MISC_REG_AEU_GENERAL_ATTN_35, 0x1);
-
-	return ECORE_SUCCESS;
-}
-
 static enum _ecore_status_t
 ecore_mcp_config_vf_msix_bb(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
@@ -2928,38 +2609,6 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t
-ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
-				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_client client)
-{
-	u32 resp = 0, param = 0;
-	u32 drv_mb_param;
-	enum _ecore_status_t rc;
-
-	switch (client) {
-	case ECORE_OV_CLIENT_DRV:
-		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OS;
-		break;
-	case ECORE_OV_CLIENT_USER:
-		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OTHER;
-		break;
-	case ECORE_OV_CLIENT_VENDOR_SPEC:
-		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
-		break;
-	default:
-		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", client);
-		return ECORE_INVAL;
-	}
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_CURR_CFG,
-			   drv_mb_param, &resp, &param);
-	if (rc != ECORE_SUCCESS)
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
-
-	return rc;
-}
-
 enum _ecore_status_t
 ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
 				 struct ecore_ptt *p_ptt,
@@ -2992,13 +2641,6 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_mcp_ov_get_fc_npiv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			 struct ecore_fc_npiv_tbl *p_table)
-{
-	return 0;
-}
-
 enum _ecore_status_t
 ecore_mcp_ov_update_mtu(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u16 mtu)
@@ -3015,28 +2657,6 @@ ecore_mcp_ov_update_mtu(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			u8 *mac)
-{
-	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
-	enum _ecore_status_t rc;
-
-	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_CODE_SET_VMAC;
-	SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_VMAC_TYPE,
-		      DRV_MSG_CODE_VMAC_TYPE_MAC);
-	mb_params.param |= MCP_PF_ID(p_hwfn);
-	OSAL_MEMCPY(&union_data.raw_data, mac, ETH_ALEN);
-	mb_params.p_data_src = &union_data;
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
-	if (rc != ECORE_SUCCESS)
-		DP_ERR(p_hwfn, "Failed to send mac address, rc = %d\n", rc);
-
-	return rc;
-}
-
 enum _ecore_status_t
 ecore_mcp_ov_update_eswitch(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			    enum ecore_ov_eswitch eswitch)
@@ -3068,36 +2688,6 @@ ecore_mcp_ov_update_eswitch(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_set_led(struct ecore_hwfn *p_hwfn,
-				       struct ecore_ptt *p_ptt,
-				       enum ecore_led_mode mode)
-{
-	u32 resp = 0, param = 0, drv_mb_param;
-	enum _ecore_status_t rc;
-
-	switch (mode) {
-	case ECORE_LED_MODE_ON:
-		drv_mb_param = DRV_MB_PARAM_SET_LED_MODE_ON;
-		break;
-	case ECORE_LED_MODE_OFF:
-		drv_mb_param = DRV_MB_PARAM_SET_LED_MODE_OFF;
-		break;
-	case ECORE_LED_MODE_RESTORE:
-		drv_mb_param = DRV_MB_PARAM_SET_LED_MODE_OPER;
-		break;
-	default:
-		DP_NOTICE(p_hwfn, true, "Invalid LED mode %d\n", mode);
-		return ECORE_INVAL;
-	}
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_LED_MODE,
-			   drv_mb_param, &resp, &param);
-	if (rc != ECORE_SUCCESS)
-		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
-
-	return rc;
-}
-
 enum _ecore_status_t ecore_mcp_mask_parities(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt,
 					     u32 mask_parities)
@@ -3176,482 +2766,37 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
-					u32 addr, u8 *p_buf, u32 *p_len)
+enum _ecore_status_t
+ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt, u32 *num_images)
 {
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-	u32 resp = 0, param;
-	enum _ecore_status_t rc;
+	u32 drv_mb_param = 0, rsp;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_BIST_TEST_INDEX,
+		      DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES);
 
-	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
-				  (cmd == ECORE_PHY_CORE_READ) ?
-				  DRV_MSG_CODE_PHY_CORE_READ :
-				  DRV_MSG_CODE_PHY_RAW_READ,
-				  addr, &resp, &param, p_len, (u32 *)p_buf);
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+			   drv_mb_param, &rsp, num_images);
 	if (rc != ECORE_SUCCESS)
-		DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
+		return rc;
 
-	p_dev->mcp_nvm_resp = resp;
-	ecore_ptt_release(p_hwfn, p_ptt);
+	if (rsp == FW_MSG_CODE_UNSUPPORTED)
+		rc = ECORE_NOTIMPL;
+	else if (rsp != FW_MSG_CODE_OK)
+		rc = ECORE_UNKNOWN_ERROR;
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_nvm_resp(struct ecore_dev *p_dev, u8 *p_buf)
+enum _ecore_status_t
+ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt,
+				 struct bist_nvm_image_att *p_image_att,
+				 u32 image_index)
 {
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-
-	OSAL_MEMCPY(p_buf, &p_dev->mcp_nvm_resp, sizeof(p_dev->mcp_nvm_resp));
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev, u32 addr)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-	u32 resp = 0, param;
-	enum _ecore_status_t rc;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_DEL_FILE, addr,
-			   &resp, &param);
-	p_dev->mcp_nvm_resp = resp;
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
-						  u32 addr)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-	u32 resp = 0, param;
-	enum _ecore_status_t rc;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_PUT_FILE_BEGIN, addr,
-			   &resp, &param);
-	p_dev->mcp_nvm_resp = resp;
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
-/* rc receives ECORE_INVAL as default parameter because
- * it might not enter the while loop if the len is 0
- */
-enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
-					 u32 addr, u8 *p_buf, u32 len)
-{
-	u32 buf_idx, buf_size, nvm_cmd, nvm_offset;
-	u32 resp = FW_MSG_CODE_ERROR, param;
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	enum _ecore_status_t rc = ECORE_INVAL;
-	struct ecore_ptt *p_ptt;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-
-	switch (cmd) {
-	case ECORE_PUT_FILE_DATA:
-		nvm_cmd = DRV_MSG_CODE_NVM_PUT_FILE_DATA;
-		break;
-	case ECORE_NVM_WRITE_NVRAM:
-		nvm_cmd = DRV_MSG_CODE_NVM_WRITE_NVRAM;
-		break;
-	case ECORE_EXT_PHY_FW_UPGRADE:
-		nvm_cmd = DRV_MSG_CODE_EXT_PHY_FW_UPGRADE;
-		break;
-	default:
-		DP_NOTICE(p_hwfn, true, "Invalid nvm write command 0x%x\n",
-			  cmd);
-		rc = ECORE_INVAL;
-		goto out;
-	}
-
-	buf_idx = 0;
-	while (buf_idx < len) {
-		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
-				      MCP_DRV_NVM_BUF_LEN);
-		nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_OFFSET) |
-			      addr) +
-			     buf_idx;
-		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, nvm_offset,
-					  &resp, &param, buf_size,
-					  (u32 *)&p_buf[buf_idx]);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_dev, false,
-				  "ecore_mcp_nvm_write() failed, rc = %d\n",
-				  rc);
-			resp = FW_MSG_CODE_ERROR;
-			break;
-		}
-
-		if (resp != FW_MSG_CODE_OK &&
-		    resp != FW_MSG_CODE_NVM_OK &&
-		    resp != FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK) {
-			DP_NOTICE(p_dev, false,
-				  "nvm write failed, resp = 0x%08x\n", resp);
-			rc = ECORE_UNKNOWN_ERROR;
-			break;
-		}
-
-		/* This can be a lengthy process, and it's possible scheduler
-		 * isn't preemptible. Sleep a bit to prevent CPU hogging.
-		 */
-		if (buf_idx % 0x1000 >
-		    (buf_idx + buf_size) % 0x1000)
-			OSAL_MSLEEP(1);
-
-		buf_idx += buf_size;
-	}
-
-	p_dev->mcp_nvm_resp = resp;
-out:
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
-					 u32 addr, u8 *p_buf, u32 len)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	u32 resp = 0, param, nvm_cmd;
-	struct ecore_ptt *p_ptt;
-	enum _ecore_status_t rc;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-
-	nvm_cmd = (cmd == ECORE_PHY_CORE_WRITE) ?  DRV_MSG_CODE_PHY_CORE_WRITE :
-			DRV_MSG_CODE_PHY_RAW_WRITE;
-	rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, addr,
-				  &resp, &param, len, (u32 *)p_buf);
-	if (rc != ECORE_SUCCESS)
-		DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
-	p_dev->mcp_nvm_resp = resp;
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_nvm_set_secure_mode(struct ecore_dev *p_dev,
-						   u32 addr)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-	u32 resp = 0, param;
-	enum _ecore_status_t rc;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_SECURE_MODE, addr,
-			   &resp, &param);
-	p_dev->mcp_nvm_resp = resp;
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u32 port, u32 addr, u32 offset,
-					    u32 len, u8 *p_buf)
-{
-	u32 bytes_left, bytes_to_copy, buf_size, nvm_offset;
-	u32 resp, param;
-	enum _ecore_status_t rc;
-
-	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
-			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
-	addr = offset;
-	offset = 0;
-	bytes_left = len;
-	while (bytes_left > 0) {
-		bytes_to_copy = OSAL_MIN_T(u32, bytes_left,
-					   MAX_I2C_TRANSACTION_SIZE);
-		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
-			       DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
-		nvm_offset |= ((addr + offset) <<
-				DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
-		nvm_offset |= (bytes_to_copy <<
-			       DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
-		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
-					  DRV_MSG_CODE_TRANSCEIVER_READ,
-					  nvm_offset, &resp, &param, &buf_size,
-					  (u32 *)(p_buf + offset));
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, false,
-				  "Failed to send a transceiver read command to the MFW. rc = %d.\n",
-				  rc);
-			return rc;
-		}
-
-		if (resp == FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT)
-			return ECORE_NODEV;
-		else if (resp != FW_MSG_CODE_TRANSCEIVER_DIAG_OK)
-			return ECORE_UNKNOWN_ERROR;
-
-		offset += buf_size;
-		bytes_left -= buf_size;
-	}
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     u32 port, u32 addr, u32 offset,
-					     u32 len, u8 *p_buf)
-{
-	u32 buf_idx, buf_size, nvm_offset, resp, param;
-	enum _ecore_status_t rc;
-
-	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
-			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
-	buf_idx = 0;
-	while (buf_idx < len) {
-		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
-				      MAX_I2C_TRANSACTION_SIZE);
-		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
-				 DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
-		nvm_offset |= ((offset + buf_idx) <<
-				 DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
-		nvm_offset |= (buf_size <<
-			       DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
-		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt,
-					  DRV_MSG_CODE_TRANSCEIVER_WRITE,
-					  nvm_offset, &resp, &param, buf_size,
-					  (u32 *)&p_buf[buf_idx]);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, false,
-				  "Failed to send a transceiver write command to the MFW. rc = %d.\n",
-				  rc);
-			return rc;
-		}
-
-		if (resp == FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT)
-			return ECORE_NODEV;
-		else if (resp != FW_MSG_CODE_TRANSCEIVER_DIAG_OK)
-			return ECORE_UNKNOWN_ERROR;
-
-		buf_idx += buf_size;
-	}
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u16 gpio, u32 *gpio_val)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 drv_mb_param = 0, rsp = 0;
-
-	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET);
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_READ,
-			   drv_mb_param, &rsp, gpio_val);
-
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
-		return ECORE_UNKNOWN_ERROR;
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
-					  struct ecore_ptt *p_ptt,
-					  u16 gpio, u16 gpio_val)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 drv_mb_param = 0, param, rsp = 0;
-
-	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET) |
-		(gpio_val << DRV_MB_PARAM_GPIO_VALUE_OFFSET);
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_WRITE,
-			   drv_mb_param, &rsp, &param);
-
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
-		return ECORE_UNKNOWN_ERROR;
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u16 gpio, u32 *gpio_direction,
-					 u32 *gpio_ctrl)
-{
-	u32 drv_mb_param = 0, rsp, val = 0;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	drv_mb_param = gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET;
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_INFO,
-			   drv_mb_param, &rsp, &val);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	*gpio_direction = (val & DRV_MB_PARAM_GPIO_DIRECTION_MASK) >>
-			   DRV_MB_PARAM_GPIO_DIRECTION_OFFSET;
-	*gpio_ctrl = (val & DRV_MB_PARAM_GPIO_CTRL_MASK) >>
-		      DRV_MB_PARAM_GPIO_CTRL_OFFSET;
-
-	if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
-		return ECORE_UNKNOWN_ERROR;
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
-						  struct ecore_ptt *p_ptt)
-{
-	u32 drv_mb_param = 0, rsp, param;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	drv_mb_param = (DRV_MB_PARAM_BIST_REGISTER_TEST <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
-			   drv_mb_param, &rsp, &param);
-
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
-	    (param != DRV_MB_PARAM_BIST_RC_PASSED))
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
-					       struct ecore_ptt *p_ptt)
-{
-	u32 drv_mb_param, rsp, param;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	drv_mb_param = (DRV_MB_PARAM_BIST_CLOCK_TEST <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
-			   drv_mb_param, &rsp, &param);
-
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
-	    (param != DRV_MB_PARAM_BIST_RC_PASSED))
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
-	struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 *num_images)
-{
-	u32 drv_mb_param = 0, rsp = 0;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	drv_mb_param = (DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
-			   drv_mb_param, &rsp, num_images);
-
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK))
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
-	struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-	struct bist_nvm_image_att *p_image_att, u32 image_index)
-{
-	u32 buf_size, nvm_offset, resp, param;
-	enum _ecore_status_t rc;
-
-	nvm_offset = (DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX <<
-				    DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-	nvm_offset |= (image_index <<
-		       DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_OFFSET);
-	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
-				  nvm_offset, &resp, &param, &buf_size,
-				  (u32 *)p_image_att);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (((resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
-	    (p_image_att->return_code != 1))
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t
-ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt, u32 *num_images)
-{
-	u32 drv_mb_param = 0, rsp;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_BIST_TEST_INDEX,
-		      DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES);
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
-			   drv_mb_param, &rsp, num_images);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (rsp == FW_MSG_CODE_UNSUPPORTED)
-		rc = ECORE_NOTIMPL;
-	else if (rsp != FW_MSG_CODE_OK)
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t
-ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt,
-				 struct bist_nvm_image_att *p_image_att,
-				 u32 image_index)
-{
-	u32 buf_size, nvm_offset = 0, resp, param;
-	enum _ecore_status_t rc;
+	u32 buf_size, nvm_offset = 0, resp, param;
+	enum _ecore_status_t rc;
 
 	SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_BIST_TEST_INDEX,
 		      DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX);
@@ -3800,111 +2945,6 @@ ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
-					     enum ecore_nvm_images image_id,
-					     u8 *p_buffer, u32 buffer_len)
-{
-	struct ecore_nvm_image_att image_att;
-	enum _ecore_status_t rc;
-
-	OSAL_MEM_ZERO(p_buffer, buffer_len);
-
-	rc = ecore_mcp_get_nvm_image_att(p_hwfn, image_id, &image_att);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	/* Validate sizes - both the image's and the supplied buffer's */
-	if (image_att.length <= 4) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-			   "Image [%d] is too small - only %d bytes\n",
-			   image_id, image_att.length);
-		return ECORE_INVAL;
-	}
-
-	if (image_att.length > buffer_len) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-			   "Image [%d] is too big - %08x bytes where only %08x are available\n",
-			   image_id, image_att.length, buffer_len);
-		return ECORE_NOMEM;
-	}
-
-	return ecore_mcp_nvm_read(p_hwfn->p_dev, image_att.start_addr,
-				  (u8 *)p_buffer, image_att.length);
-}
-
-enum _ecore_status_t
-ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
-			       struct ecore_ptt *p_ptt,
-			       struct ecore_temperature_info *p_temp_info)
-{
-	struct ecore_temperature_sensor *p_temp_sensor;
-	struct temperature_status_stc mfw_temp_info;
-	struct ecore_mcp_mb_params mb_params;
-	u32 val;
-	enum _ecore_status_t rc;
-	u8 i;
-
-	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_CODE_GET_TEMPERATURE;
-	mb_params.p_data_dst = &mfw_temp_info;
-	mb_params.data_dst_size = sizeof(mfw_temp_info);
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	OSAL_BUILD_BUG_ON(ECORE_MAX_NUM_OF_SENSORS != MAX_NUM_OF_SENSORS);
-	p_temp_info->num_sensors = OSAL_MIN_T(u32, mfw_temp_info.num_of_sensors,
-					      ECORE_MAX_NUM_OF_SENSORS);
-	for (i = 0; i < p_temp_info->num_sensors; i++) {
-		val = mfw_temp_info.sensor[i];
-		p_temp_sensor = &p_temp_info->sensors[i];
-		p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
-						 SENSOR_LOCATION_OFFSET;
-		p_temp_sensor->threshold_high = (val & THRESHOLD_HIGH_MASK) >>
-						THRESHOLD_HIGH_OFFSET;
-		p_temp_sensor->critical = (val & CRITICAL_TEMPERATURE_MASK) >>
-					  CRITICAL_TEMPERATURE_OFFSET;
-		p_temp_sensor->current_temp = (val & CURRENT_TEMP_MASK) >>
-					      CURRENT_TEMP_OFFSET;
-	}
-
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_get_mba_versions(
-	struct ecore_hwfn *p_hwfn,
-	struct ecore_ptt *p_ptt,
-	struct ecore_mba_vers *p_mba_vers)
-{
-	u32 buf_size, resp, param;
-	enum _ecore_status_t rc;
-
-	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_MBA_VERSION,
-				  0, &resp, &param, &buf_size,
-				  &p_mba_vers->mba_vers[0]);
-
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if ((resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_NVM_OK)
-		rc = ECORE_UNKNOWN_ERROR;
-
-	if (buf_size != MCP_DRV_NVM_BUF_LEN)
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u64 *num_events)
-{
-	u32 rsp;
-
-	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MEM_ECC_EVENTS,
-			     0, &rsp, (u32 *)num_events);
-}
-
 static enum resource_id_enum
 ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
 {
@@ -3984,25 +3024,6 @@ struct ecore_resc_alloc_out_params {
 
 #define ECORE_RECOVERY_PROLOG_SLEEP_MS	100
 
-enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
-	enum _ecore_status_t rc;
-
-	/* Allow ongoing PCIe transactions to complete */
-	OSAL_MSLEEP(ECORE_RECOVERY_PROLOG_SLEEP_MS);
-
-	/* Clear the PF's internal FID_enable in the PXP */
-	rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_ptt, false);
-	if (rc != ECORE_SUCCESS)
-		DP_NOTICE(p_hwfn, false,
-			  "ecore_pglueb_set_pfid_enable() failed. rc = %d.\n",
-			  rc);
-
-	return rc;
-}
-
 static enum _ecore_status_t
 ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt,
@@ -4380,79 +3401,6 @@ enum _ecore_status_t ecore_mcp_set_capabilities(struct ecore_hwfn *p_hwfn,
 			     features, &mcp_resp, &mcp_param);
 }
 
-enum _ecore_status_t
-ecore_mcp_drv_attribute(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			struct ecore_mcp_drv_attr *p_drv_attr)
-{
-	struct attribute_cmd_write_stc attr_cmd_write;
-	enum _attribute_commands_e mfw_attr_cmd;
-	struct ecore_mcp_mb_params mb_params;
-	enum _ecore_status_t rc;
-
-	switch (p_drv_attr->attr_cmd) {
-	case ECORE_MCP_DRV_ATTR_CMD_READ:
-		mfw_attr_cmd = ATTRIBUTE_CMD_READ;
-		break;
-	case ECORE_MCP_DRV_ATTR_CMD_WRITE:
-		mfw_attr_cmd = ATTRIBUTE_CMD_WRITE;
-		break;
-	case ECORE_MCP_DRV_ATTR_CMD_READ_CLEAR:
-		mfw_attr_cmd = ATTRIBUTE_CMD_READ_CLEAR;
-		break;
-	case ECORE_MCP_DRV_ATTR_CMD_CLEAR:
-		mfw_attr_cmd = ATTRIBUTE_CMD_CLEAR;
-		break;
-	default:
-		DP_NOTICE(p_hwfn, false, "Unknown attribute command %d\n",
-			  p_drv_attr->attr_cmd);
-		return ECORE_INVAL;
-	}
-
-	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
-	mb_params.cmd = DRV_MSG_CODE_ATTRIBUTE;
-	SET_MFW_FIELD(mb_params.param, DRV_MB_PARAM_ATTRIBUTE_KEY,
-		      p_drv_attr->attr_num);
-	SET_MFW_FIELD(mb_params.param, DRV_MB_PARAM_ATTRIBUTE_CMD,
-		      mfw_attr_cmd);
-	if (p_drv_attr->attr_cmd == ECORE_MCP_DRV_ATTR_CMD_WRITE) {
-		OSAL_MEM_ZERO(&attr_cmd_write, sizeof(attr_cmd_write));
-		attr_cmd_write.val = p_drv_attr->val;
-		attr_cmd_write.mask = p_drv_attr->mask;
-		attr_cmd_write.offset = p_drv_attr->offset;
-
-		mb_params.p_data_src = &attr_cmd_write;
-		mb_params.data_src_size = sizeof(attr_cmd_write);
-	}
-
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
-		DP_INFO(p_hwfn,
-			"The attribute command is not supported by the MFW\n");
-		return ECORE_NOTIMPL;
-	} else if (mb_params.mcp_resp != FW_MSG_CODE_OK) {
-		DP_INFO(p_hwfn,
-			"Failed to send an attribute command [mcp_resp 0x%x, attr_cmd %d, attr_num %d]\n",
-			mb_params.mcp_resp, p_drv_attr->attr_cmd,
-			p_drv_attr->attr_num);
-		return ECORE_INVAL;
-	}
-
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Attribute Command: cmd %d [mfw_cmd %d], num %d, in={val 0x%08x, mask 0x%08x, offset 0x%08x}, out={val 0x%08x}\n",
-		   p_drv_attr->attr_cmd, mfw_attr_cmd, p_drv_attr->attr_num,
-		   p_drv_attr->val, p_drv_attr->mask, p_drv_attr->offset,
-		   mb_params.mcp_param);
-
-	if (p_drv_attr->attr_cmd == ECORE_MCP_DRV_ATTR_CMD_READ ||
-	    p_drv_attr->attr_cmd == ECORE_MCP_DRV_ATTR_CMD_READ_CLEAR)
-		p_drv_attr->val = mb_params.mcp_param;
-
-	return ECORE_SUCCESS;
-}
-
 enum _ecore_status_t ecore_mcp_get_engine_config(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt)
 {
@@ -4521,30 +3469,3 @@ enum _ecore_status_t ecore_mcp_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
 
 	return ECORE_SUCCESS;
 }
-
-void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		      u32 offset, u32 val)
-{
-	enum _ecore_status_t	   rc = ECORE_SUCCESS;
-	u32			   dword = val;
-	struct ecore_mcp_mb_params mb_params;
-
-	OSAL_MEMSET(&mb_params, 0, sizeof(struct ecore_mcp_mb_params));
-	mb_params.cmd = DRV_MSG_CODE_WRITE_WOL_REG;
-	mb_params.param = offset;
-	mb_params.p_data_src = &dword;
-	mb_params.data_src_size = sizeof(dword);
-
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, false,
-			  "Failed to wol write request, rc = %d\n", rc);
-	}
-
-	if (mb_params.mcp_resp != FW_MSG_CODE_WOL_READ_WRITE_OK) {
-		DP_NOTICE(p_hwfn, false,
-			  "Failed to write value 0x%x to offset 0x%x [mcp_resp 0x%x]\n",
-			  val, offset, mb_params.mcp_resp);
-		rc = ECORE_UNKNOWN_ERROR;
-	}
-}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 185cc23394..7dda431d99 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -253,17 +253,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 						    struct ecore_ptt *p_ptt);
 
-/**
- * @brief - Reset the MCP using mailbox command.
- *
- * @param p_hwfn
- * @param p_ptt
- *
- * @param return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
-				     struct ecore_ptt *p_ptt);
-
 /**
  * @brief indicates whether the MFW objects [under mcp_info] are accessible
  *
@@ -331,18 +320,6 @@ enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u32 epoch);
 
-/**
- * @brief - Triggers a MFW crash dump procedure.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param epoch
- *
- * @param return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt);
-
 struct ecore_mdump_retain_data {
 	u32 valid;
 	u32 epoch;
@@ -545,17 +522,6 @@ struct ecore_mcp_drv_attr {
 	u32 offset;
 };
 
-/**
- * @brief Handle the drivers' attributes that are kept by the MFW.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param p_drv_attr
- */
-enum _ecore_status_t
-ecore_mcp_drv_attribute(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			struct ecore_mcp_drv_attr *p_drv_attr);
-
 /**
  * @brief Read ufp config from the shared memory.
  *
@@ -565,9 +531,6 @@ ecore_mcp_drv_attribute(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 void
 ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
-void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		      u32 offset, u32 val);
-
 /**
  * @brief Get the engine affinity configuration.
  *
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index c3922ba43a..8bea0dc4a9 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -603,21 +603,6 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
 int ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt, u32 *p_mbi_ver);
 
-/**
- * @brief Get media type value of the port.
- *
- * @param p_dev      - ecore dev pointer
- * @param p_ptt
- * @param mfw_ver    - media type value
- *
- * @return enum _ecore_status_t -
- *      ECORE_SUCCESS - Operation was successful.
- *      ECORE_BUSY - Operation failed
- */
-enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u32 *media_type);
-
 /**
  * @brief Get transceiver data of the port.
  *
@@ -635,37 +620,6 @@ enum _ecore_status_t ecore_mcp_get_transceiver_data(struct ecore_hwfn *p_hwfn,
 						    u32 *p_transceiver_state,
 						    u32 *p_tranceiver_type);
 
-/**
- * @brief Get transceiver supported speed mask.
- *
- * @param p_dev      - ecore dev pointer
- * @param p_ptt
- * @param p_speed_mask - Bit mask of all supported speeds.
- *
- * @return enum _ecore_status_t -
- *      ECORE_SUCCESS - Operation was successful.
- *      ECORE_BUSY - Operation failed
- */
-
-enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						u32 *p_speed_mask);
-
-/**
- * @brief Get board configuration.
- *
- * @param p_dev      - ecore dev pointer
- * @param p_ptt
- * @param p_board_config - Board config.
- *
- * @return enum _ecore_status_t -
- *      ECORE_SUCCESS - Operation was successful.
- *      ECORE_BUSY - Operation failed
- */
-enum _ecore_status_t ecore_mcp_get_board_config(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						u32 *p_board_config);
-
 /**
  * @brief - Sends a command to the MCP mailbox.
  *
@@ -694,34 +648,6 @@ enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt);
 
-#ifndef LINUX_REMOVE
-/**
- * @brief - return the mcp function info of the hw function
- *
- * @param p_hwfn
- *
- * @returns pointer to mcp function info
- */
-const struct ecore_mcp_function_info
-*ecore_mcp_get_function_info(struct ecore_hwfn *p_hwfn);
-#endif
-
-#ifndef LINUX_REMOVE
-/**
- * @brief - count number of function with a matching personality on engine.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param personalities - a bitmask of ecore_pci_personality values
- *
- * @returns the count of all devices on engine whose personality match one of
- *          the bitsmasks.
- */
-int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  u32 personalities);
-#endif
-
 /**
  * @brief Get the flash size value
  *
@@ -760,42 +686,6 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 u32 ecore_get_process_kill_counter(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
 
-/**
- * @brief Trigger a recovery process
- *
- *  @param p_hwfn
- *  @param p_ptt
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
-						  struct ecore_ptt *p_ptt);
-
-/**
- * @brief A recovery handler must call this function as its first step.
- *        It is assumed that the handler is not run from an interrupt context.
- *
- *  @param p_dev
- *  @param p_ptt
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev);
-
-/**
- * @brief Notify MFW about the change in base device properties
- *
- *  @param p_hwfn
- *  @param p_ptt
- *  @param client - ecore client type
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t
-ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
-				   struct ecore_ptt *p_ptt,
-				   enum ecore_ov_client client);
-
 /**
  * @brief Notify MFW about the driver state
  *
@@ -810,21 +700,6 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
 				 struct ecore_ptt *p_ptt,
 				 enum ecore_ov_driver_state drv_state);
 
-/**
- * @brief Read NPIV settings form the MFW
- *
- *  @param p_hwfn
- *  @param p_ptt
- *  @param p_table - Array to hold the FC NPIV data. Client need allocate the
- *                   required buffer. The field 'count' specifies number of NPIV
- *                   entries. A value of 0 means the table was not populated.
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t
-ecore_mcp_ov_get_fc_npiv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			 struct ecore_fc_npiv_tbl *p_table);
-
 /**
  * @brief Send MTU size to MFW
  *
@@ -837,19 +712,6 @@ ecore_mcp_ov_get_fc_npiv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_mcp_ov_update_mtu(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt, u16 mtu);
 
-/**
- * @brief Send MAC address to MFW
- *
- *  @param p_hwfn
- *  @param p_ptt
- *  @param mac - MAC address
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t
-ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			u8 *mac);
-
 /**
  * @brief Send eswitch mode to MFW
  *
@@ -863,104 +725,6 @@ enum _ecore_status_t
 ecore_mcp_ov_update_eswitch(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			    enum ecore_ov_eswitch eswitch);
 
-/**
- * @brief Set LED status
- *
- *  @param p_hwfn
- *  @param p_ptt
- *  @param mode - LED mode
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_set_led(struct ecore_hwfn *p_hwfn,
-				       struct ecore_ptt *p_ptt,
-				       enum ecore_led_mode mode);
-
-/**
- * @brief Set secure mode
- *
- *  @param p_dev
- *  @param addr - nvm offset
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_set_secure_mode(struct ecore_dev *p_dev,
-						   u32 addr);
-
-/**
- * @brief Write to phy
- *
- *  @param p_dev
- *  @param addr - nvm offset
- *  @param cmd - nvm command
- *  @param p_buf - nvm write buffer
- *  @param len - buffer len
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
-					 u32 addr, u8 *p_buf, u32 len);
-
-/**
- * @brief Write to nvm
- *
- *  @param p_dev
- *  @param addr - nvm offset
- *  @param cmd - nvm command
- *  @param p_buf - nvm write buffer
- *  @param len - buffer len
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
-					 u32 addr, u8 *p_buf, u32 len);
-
-/**
- * @brief Put file begin
- *
- *  @param p_dev
- *  @param addr - nvm offset
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
-						  u32 addr);
-
-/**
- * @brief Delete file
- *
- *  @param p_dev
- *  @param addr - nvm offset
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev,
-					    u32 addr);
-
-/**
- * @brief Check latest response
- *
- *  @param p_dev
- *  @param p_buf - nvm write buffer
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_resp(struct ecore_dev *p_dev, u8 *p_buf);
-
-/**
- * @brief Read from phy
- *
- *  @param p_dev
- *  @param addr - nvm offset
- *  @param cmd - nvm command
- *  @param p_buf - nvm read buffer
- *  @param len - buffer len
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
-					u32 addr, u8 *p_buf, u32 *p_len);
-
 /**
  * @brief Read from nvm
  *
@@ -993,20 +757,6 @@ ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn,
 			    enum ecore_nvm_images image_id,
 			    struct ecore_nvm_image_att *p_image_att);
 
-/**
- * @brief Allows reading a whole nvram image
- *
- * @param p_hwfn
- * @param image_id - image requested for reading
- * @param p_buffer - allocated buffer into which to fill data
- * @param buffer_len - length of the allocated buffer.
- *
- * @return ECORE_SUCCESS if p_buffer now contains the nvram image.
- */
-enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
-					     enum ecore_nvm_images image_id,
-					     u8 *p_buffer, u32 buffer_len);
-
 /**
  * @brief - Sends an NVM write command request to the MFW with
  *          payload.
@@ -1057,183 +807,6 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 *o_txn_size,
 					  u32 *o_buf);
 
-/**
- * @brief Read from sfp
- *
- *  @param p_hwfn - hw function
- *  @param p_ptt  - PTT required for register access
- *  @param port   - transceiver port
- *  @param addr   - I2C address
- *  @param offset - offset in sfp
- *  @param len    - buffer length
- *  @param p_buf  - buffer to read into
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    u32 port, u32 addr, u32 offset,
-					    u32 len, u8 *p_buf);
-
-/**
- * @brief Write to sfp
- *
- *  @param p_hwfn - hw function
- *  @param p_ptt  - PTT required for register access
- *  @param port   - transceiver port
- *  @param addr   - I2C address
- *  @param offset - offset in sfp
- *  @param len    - buffer length
- *  @param p_buf  - buffer to write from
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
-					     struct ecore_ptt *p_ptt,
-					     u32 port, u32 addr, u32 offset,
-					     u32 len, u8 *p_buf);
-
-/**
- * @brief Gpio read
- *
- *  @param p_hwfn    - hw function
- *  @param p_ptt     - PTT required for register access
- *  @param gpio      - gpio number
- *  @param gpio_val  - value read from gpio
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u16 gpio, u32 *gpio_val);
-
-/**
- * @brief Gpio write
- *
- *  @param p_hwfn    - hw function
- *  @param p_ptt     - PTT required for register access
- *  @param gpio      - gpio number
- *  @param gpio_val  - value to write to gpio
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
-					  struct ecore_ptt *p_ptt,
-					  u16 gpio, u16 gpio_val);
-
-/**
- * @brief Gpio get information
- *
- *  @param p_hwfn          - hw function
- *  @param p_ptt           - PTT required for register access
- *  @param gpio            - gpio number
- *  @param gpio_direction  - gpio is output (0) or input (1)
- *  @param gpio_ctrl       - gpio control is uninitialized (0),
- *                         path 0 (1), path 1 (2) or shared(3)
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u16 gpio, u32 *gpio_direction,
-					 u32 *gpio_ctrl);
-
-/**
- * @brief Bist register test
- *
- *  @param p_hwfn    - hw function
- *  @param p_ptt     - PTT required for register access
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
-						   struct ecore_ptt *p_ptt);
-
-/**
- * @brief Bist clock test
- *
- *  @param p_hwfn    - hw function
- *  @param p_ptt     - PTT required for register access
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt);
-
-/**
- * @brief Bist nvm test - get number of images
- *
- *  @param p_hwfn       - hw function
- *  @param p_ptt        - PTT required for register access
- *  @param num_images   - number of images if operation was
- *			  successful. 0 if not.
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
-						struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						u32 *num_images);
-
-/**
- * @brief Bist nvm test - get image attributes by index
- *
- *  @param p_hwfn      - hw function
- *  @param p_ptt       - PTT required for register access
- *  @param p_image_att - Attributes of image
- *  @param image_index - Index of image to get information for
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
-					struct ecore_hwfn *p_hwfn,
-					struct ecore_ptt *p_ptt,
-					struct bist_nvm_image_att *p_image_att,
-					u32 image_index);
-
-/**
- * @brief ecore_mcp_get_temperature_info - get the status of the temperature
- *                                         sensors
- *
- *  @param p_hwfn        - hw function
- *  @param p_ptt         - PTT required for register access
- *  @param p_temp_status - A pointer to an ecore_temperature_info structure to
- *                         be filled with the temperature data
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t
-ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
-			       struct ecore_ptt *p_ptt,
-			       struct ecore_temperature_info *p_temp_info);
-
-/**
- * @brief Get MBA versions - get MBA sub images versions
- *
- *  @param p_hwfn      - hw function
- *  @param p_ptt       - PTT required for register access
- *  @param p_mba_vers  - MBA versions array to fill
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_get_mba_versions(
-	struct ecore_hwfn *p_hwfn,
-	struct ecore_ptt *p_ptt,
-	struct ecore_mba_vers *p_mba_vers);
-
-/**
- * @brief Count memory ecc events
- *
- *  @param p_hwfn      - hw function
- *  @param p_ptt       - PTT required for register access
- *  @param num_events  - number of memory ecc events
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      u64 *num_events);
-
 struct ecore_mdump_info {
 	u32 reason;
 	u32 version;
@@ -1256,28 +829,6 @@ enum _ecore_status_t
 ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			 struct ecore_mdump_info *p_mdump_info);
 
-/**
- * @brief - Clears the MFW crash dump logs.
- *
- * @param p_hwfn
- * @param p_ptt
- *
- * @param return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt);
-
-/**
- * @brief - Clear the mdump retained data.
- *
- * @param p_hwfn
- * @param p_ptt
- *
- * @param return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt);
-
 /**
  * @brief - Processes the TLV request from MFW i.e., get the required TLV info
  *          from the ecore client and send it to the MFW.
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 44ced135d6..86fceb36ba 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -486,70 +486,6 @@ u16 ecore_sp_rl_gd_denom(u32 gd)
 	return gd ? (u16)OSAL_MIN_T(u32, (u16)(~0U), FW_GD_RESOLUTION(gd)) : 0;
 }
 
-enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
-					struct ecore_rl_update_params *params)
-{
-	struct ecore_spq_entry *p_ent = OSAL_NULL;
-	enum _ecore_status_t rc = ECORE_NOTIMPL;
-	struct rl_update_ramrod_data *rl_update;
-	struct ecore_sp_init_data init_data;
-
-	/* Get SPQ entry */
-	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = ecore_spq_get_cid(p_hwfn);
-	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
-	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
-
-	rc = ecore_sp_init_request(p_hwfn, &p_ent,
-				   COMMON_RAMROD_RL_UPDATE, PROTOCOLID_COMMON,
-				   &init_data);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rl_update = &p_ent->ramrod.rl_update;
-
-	rl_update->qcn_update_param_flg = params->qcn_update_param_flg;
-	rl_update->dcqcn_update_param_flg = params->dcqcn_update_param_flg;
-	rl_update->rl_init_flg = params->rl_init_flg;
-	rl_update->rl_start_flg = params->rl_start_flg;
-	rl_update->rl_stop_flg = params->rl_stop_flg;
-	rl_update->rl_id_first = params->rl_id_first;
-	rl_update->rl_id_last = params->rl_id_last;
-	rl_update->rl_dc_qcn_flg = params->rl_dc_qcn_flg;
-	rl_update->dcqcn_reset_alpha_on_idle =
-		params->dcqcn_reset_alpha_on_idle;
-	rl_update->rl_bc_stage_th = params->rl_bc_stage_th;
-	rl_update->rl_timer_stage_th = params->rl_timer_stage_th;
-	rl_update->rl_bc_rate = OSAL_CPU_TO_LE32(params->rl_bc_rate);
-	rl_update->rl_max_rate =
-		OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_max_rate));
-	rl_update->rl_r_ai =
-		OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_ai));
-	rl_update->rl_r_hai =
-		OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_hai));
-	rl_update->dcqcn_g =
-		OSAL_CPU_TO_LE16(ecore_sp_rl_gd_denom(params->dcqcn_gd));
-	rl_update->dcqcn_k_us = OSAL_CPU_TO_LE32(params->dcqcn_k_us);
-	rl_update->dcqcn_timeuot_us =
-		OSAL_CPU_TO_LE32(params->dcqcn_timeuot_us);
-	rl_update->qcn_timeuot_us = OSAL_CPU_TO_LE32(params->qcn_timeuot_us);
-
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "rl_params: qcn_update_param_flg %x, dcqcn_update_param_flg %x, rl_init_flg %x, rl_start_flg %x, rl_stop_flg %x, rl_id_first %x, rl_id_last %x, rl_dc_qcn_flg %x,dcqcn_reset_alpha_on_idle %x, rl_bc_stage_th %x, rl_timer_stage_th %x, rl_bc_rate %x, rl_max_rate %x, rl_r_ai %x, rl_r_hai %x, dcqcn_g %x, dcqcn_k_us %x, dcqcn_timeuot_us %x, qcn_timeuot_us %x\n",
-		   rl_update->qcn_update_param_flg,
-		   rl_update->dcqcn_update_param_flg,
-		   rl_update->rl_init_flg, rl_update->rl_start_flg,
-		   rl_update->rl_stop_flg, rl_update->rl_id_first,
-		   rl_update->rl_id_last, rl_update->rl_dc_qcn_flg,
-		   rl_update->dcqcn_reset_alpha_on_idle,
-		   rl_update->rl_bc_stage_th, rl_update->rl_timer_stage_th,
-		   rl_update->rl_bc_rate, rl_update->rl_max_rate,
-		   rl_update->rl_r_ai, rl_update->rl_r_hai,
-		   rl_update->dcqcn_g, rl_update->dcqcn_k_us,
-		   rl_update->dcqcn_timeuot_us, rl_update->qcn_timeuot_us);
-
-	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-}
-
 /* Set pf update ramrod command params */
 enum _ecore_status_t
 ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
@@ -620,31 +556,6 @@ enum _ecore_status_t ecore_sp_pf_stop(struct ecore_hwfn *p_hwfn)
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn)
-{
-	struct ecore_spq_entry *p_ent = OSAL_NULL;
-	struct ecore_sp_init_data init_data;
-	enum _ecore_status_t rc;
-
-	/* Get SPQ entry */
-	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
-	init_data.cid = ecore_spq_get_cid(p_hwfn);
-	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
-	init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
-
-	rc = ecore_sp_init_request(p_hwfn, &p_ent,
-				   COMMON_RAMROD_EMPTY, PROTOCOLID_COMMON,
-				   &init_data);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits))
-		p_ent->ramrod.pf_update.mf_vlan |=
-			OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13));
-
-	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-}
-
 enum _ecore_status_t ecore_sp_pf_update_stag(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 524fe57a14..7d9ec82c7c 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -101,16 +101,6 @@ enum _ecore_status_t ecore_sp_pf_update_dcbx(struct ecore_hwfn *p_hwfn);
 
 enum _ecore_status_t ecore_sp_pf_stop(struct ecore_hwfn *p_hwfn);
 
-/**
- * @brief ecore_sp_heartbeat_ramrod - Send empty Ramrod
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn);
-
 struct ecore_rl_update_params {
 	u8 qcn_update_param_flg;
 	u8 dcqcn_update_param_flg;
@@ -133,17 +123,6 @@ struct ecore_rl_update_params {
 	u32 qcn_timeuot_us;
 };
 
-/**
- * @brief ecore_sp_rl_update - Update rate limiters
- *
- * @param p_hwfn
- * @param params
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
-					struct ecore_rl_update_params *params);
-
 /**
  * @brief ecore_sp_pf_update_stag - PF STAG value update Ramrod
  *
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index ed8cc695fe..a7a0a40a74 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -772,39 +772,6 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
 	}
 }
 
-void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev,
-				  u8 to_disable)
-{
-	u16 i;
-
-	if (!IS_ECORE_SRIOV(p_dev))
-		return;
-
-	for (i = 0; i < p_dev->p_iov_info->total_vfs; i++)
-		ecore_iov_set_vf_to_disable(p_dev, i, to_disable);
-}
-
-#ifndef LINUX_REMOVE
-/* @@@TBD Consider taking outside of ecore... */
-enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
-					  u16		    vf_id,
-					  void		    *ctx)
-{
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	struct ecore_vf_info *vf = ecore_iov_get_vf_info(p_hwfn, vf_id, true);
-
-	if (vf != OSAL_NULL) {
-		vf->ctx = ctx;
-#ifdef CONFIG_ECORE_SW_CHANNEL
-		vf->vf_mbx.sw_mbx.mbx_state = VF_PF_WAIT_FOR_START_REQUEST;
-#endif
-	} else {
-		rc = ECORE_UNKNOWN_ERROR;
-	}
-	return rc;
-}
-#endif
-
 static void ecore_iov_vf_pglue_clear_err(struct ecore_hwfn      *p_hwfn,
 					 struct ecore_ptt	*p_ptt,
 					 u8			abs_vfid)
@@ -1269,70 +1236,6 @@ static void ecore_emul_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
-enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
-						 struct ecore_ptt *p_ptt,
-						 u16 rel_vf_id)
-{
-	struct ecore_mcp_link_capabilities caps;
-	struct ecore_mcp_link_params params;
-	struct ecore_mcp_link_state link;
-	struct ecore_vf_info *vf = OSAL_NULL;
-
-	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!vf) {
-		DP_ERR(p_hwfn, "ecore_iov_release_hw_for_vf : vf is NULL\n");
-		return ECORE_UNKNOWN_ERROR;
-	}
-
-	if (vf->bulletin.p_virt)
-		OSAL_MEMSET(vf->bulletin.p_virt, 0,
-			    sizeof(*vf->bulletin.p_virt));
-
-	OSAL_MEMSET(&vf->p_vf_info, 0, sizeof(vf->p_vf_info));
-
-	/* Get the link configuration back in bulletin so
-	 * that when VFs are re-enabled they get the actual
-	 * link configuration.
-	 */
-	OSAL_MEMCPY(&params, ecore_mcp_get_link_params(p_hwfn), sizeof(params));
-	OSAL_MEMCPY(&link, ecore_mcp_get_link_state(p_hwfn), sizeof(link));
-	OSAL_MEMCPY(&caps, ecore_mcp_get_link_capabilities(p_hwfn),
-		    sizeof(caps));
-	ecore_iov_set_link(p_hwfn, rel_vf_id, &params, &link, &caps);
-
-	/* Forget the VF's acquisition message */
-	OSAL_MEMSET(&vf->acquire, 0, sizeof(vf->acquire));
-
-	/* disablng interrupts and resetting permission table was done during
-	 * vf-close, however, we could get here without going through vf_close
-	 */
-	/* Disable Interrupts for VF */
-	ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0);
-
-	/* Reset Permission table */
-	ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, 0);
-
-	vf->num_rxqs = 0;
-	vf->num_txqs = 0;
-	ecore_iov_free_vf_igu_sbs(p_hwfn, p_ptt, vf);
-
-	if (vf->b_init) {
-		vf->b_init = false;
-		p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] &=
-					~(1ULL << (vf->relative_vf_id / 64));
-
-		if (IS_LEAD_HWFN(p_hwfn))
-			p_hwfn->p_dev->p_iov_info->num_vfs--;
-	}
-
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
-		ecore_emul_iov_release_hw_for_vf(p_hwfn, p_ptt);
-#endif
-
-	return ECORE_SUCCESS;
-}
-
 static bool ecore_iov_tlv_supported(u16 tlvtype)
 {
 	return tlvtype > CHANNEL_TLV_NONE && tlvtype < CHANNEL_TLV_MAX;
@@ -1573,20 +1476,6 @@ static void ecore_iov_prepare_resp(struct ecore_hwfn *p_hwfn,
 	ecore_iov_send_response(p_hwfn, p_ptt, vf_info, length, status);
 }
 
-struct ecore_public_vf_info
-*ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn,
-			      u16 relative_vf_id,
-			      bool b_enabled_only)
-{
-	struct ecore_vf_info *vf = OSAL_NULL;
-
-	vf = ecore_iov_get_vf_info(p_hwfn, relative_vf_id, b_enabled_only);
-	if (!vf)
-		return OSAL_NULL;
-
-	return &vf->p_vf_info;
-}
-
 static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 				 struct ecore_vf_info *p_vf)
 {
@@ -3820,93 +3709,6 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 			       sizeof(struct pfvf_def_resp_tlv), status);
 }
 
-enum _ecore_status_t
-ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
-					 u16 rx_coal, u16 tx_coal,
-					 u16 vf_id, u16 qid)
-{
-	struct ecore_queue_cid *p_cid;
-	struct ecore_vf_info *vf;
-	struct ecore_ptt *p_ptt;
-	int rc = 0;
-	u32 i;
-
-	if (!ecore_iov_is_valid_vfid(p_hwfn, vf_id, true, true)) {
-		DP_NOTICE(p_hwfn, true,
-			  "VF[%d] - Can not set coalescing: VF is not active\n",
-			  vf_id);
-		return ECORE_INVAL;
-	}
-
-	vf = &p_hwfn->pf_iov_info->vfs_array[vf_id];
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_AGAIN;
-
-	if (!ecore_iov_validate_rxq(p_hwfn, vf, qid,
-				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
-	    rx_coal) {
-		DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
-		       vf->abs_vf_id, qid);
-		goto out;
-	}
-
-	if (!ecore_iov_validate_txq(p_hwfn, vf, qid,
-				    ECORE_IOV_VALIDATE_Q_ENABLE) &&
-	    tx_coal) {
-		DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
-		       vf->abs_vf_id, qid);
-		goto out;
-	}
-
-	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
-		   vf->abs_vf_id, rx_coal, tx_coal, qid);
-
-	if (rx_coal) {
-		p_cid = ecore_iov_get_vf_rx_queue_cid(&vf->vf_queues[qid]);
-
-		rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
-		if (rc != ECORE_SUCCESS) {
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%d]: Unable to set rx queue = %d coalesce\n",
-				   vf->abs_vf_id, vf->vf_queues[qid].fw_rx_qid);
-			goto out;
-		}
-		vf->rx_coal = rx_coal;
-	}
-
-	/* TODO - in future, it might be possible to pass this in a per-cid
-	 * granularity. For now, do this for all Tx queues.
-	 */
-	if (tx_coal) {
-		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
-
-		for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
-			if (p_queue->cids[i].p_cid == OSAL_NULL)
-				continue;
-
-			if (!p_queue->cids[i].b_is_tx)
-				continue;
-
-			rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal,
-						    p_queue->cids[i].p_cid);
-			if (rc != ECORE_SUCCESS) {
-				DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-					   "VF[%d]: Unable to set tx queue coalesce\n",
-					   vf->abs_vf_id);
-				goto out;
-			}
-		}
-		vf->tx_coal = tx_coal;
-	}
-
-out:
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 			   struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
@@ -4116,24 +3918,6 @@ enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
-				struct ecore_ptt *p_ptt, u16 rel_vf_id)
-{
-	u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	OSAL_MEM_ZERO(ack_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES);
-
-	/* Wait instead of polling the BRB <-> PRS interface */
-	OSAL_MSLEEP(100);
-
-	ecore_iov_execute_vf_flr_cleanup(p_hwfn, p_ptt, rel_vf_id, ack_vfs);
-
-	rc = ecore_mcp_ack_vf_flr(p_hwfn, p_ptt, ack_vfs);
-	return rc;
-}
-
 bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 {
 	bool found = false;
@@ -4184,28 +3968,6 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 	return found;
 }
 
-void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
-			u16 vfid,
-			struct ecore_mcp_link_params *p_params,
-			struct ecore_mcp_link_state *p_link,
-			struct ecore_mcp_link_capabilities *p_caps)
-{
-	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
-	struct ecore_bulletin_content *p_bulletin;
-
-	if (!p_vf)
-		return;
-
-	p_bulletin = p_vf->bulletin.p_virt;
-
-	if (p_params)
-		__ecore_vf_get_link_params(p_params, p_bulletin);
-	if (p_link)
-		__ecore_vf_get_link_state(p_link, p_bulletin);
-	if (p_caps)
-		__ecore_vf_get_link_caps(p_caps, p_bulletin);
-}
-
 void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt, int vfid)
 {
@@ -4466,12 +4228,6 @@ static enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
-	return !!(p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &
-		   (1ULL << (rel_vf_id % 64)));
-}
-
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 {
 	struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
@@ -4516,172 +4272,6 @@ enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn,
-				       u8 *mac, int vfid)
-{
-	struct ecore_vf_info *vf_info;
-	u64 feature;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info) {
-		DP_NOTICE(p_hwfn->p_dev, true,
-			  "Can not set forced MAC, invalid vfid [%d]\n", vfid);
-		return;
-	}
-	if (vf_info->b_malicious) {
-		DP_NOTICE(p_hwfn->p_dev, false,
-			  "Can't set forced MAC to malicious VF [%d]\n",
-			  vfid);
-		return;
-	}
-
-	if (p_hwfn->pf_params.eth_pf_params.allow_vf_mac_change ||
-	    vf_info->p_vf_info.is_trusted_configured) {
-		feature = 1 << VFPF_BULLETIN_MAC_ADDR;
-		/* Trust mode will disable Forced MAC */
-		vf_info->bulletin.p_virt->valid_bitmap &=
-			~(1 << MAC_ADDR_FORCED);
-	} else {
-		feature = 1 << MAC_ADDR_FORCED;
-		/* Forced MAC will disable MAC_ADDR */
-		vf_info->bulletin.p_virt->valid_bitmap &=
-			~(1 << VFPF_BULLETIN_MAC_ADDR);
-	}
-
-	OSAL_MEMCPY(vf_info->bulletin.p_virt->mac,
-		    mac, ETH_ALEN);
-
-	vf_info->bulletin.p_virt->valid_bitmap |= feature;
-
-	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
-}
-
-enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
-						u8 *mac, int vfid)
-{
-	struct ecore_vf_info *vf_info;
-	u64 feature;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info) {
-		DP_NOTICE(p_hwfn->p_dev, true,
-			  "Can not set MAC, invalid vfid [%d]\n", vfid);
-		return ECORE_INVAL;
-	}
-	if (vf_info->b_malicious) {
-		DP_NOTICE(p_hwfn->p_dev, false,
-			  "Can't set MAC to malicious VF [%d]\n",
-			  vfid);
-		return ECORE_INVAL;
-	}
-
-	if (vf_info->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED)) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Can not set MAC, Forced MAC is configured\n");
-		return ECORE_INVAL;
-	}
-
-	feature = 1 << VFPF_BULLETIN_MAC_ADDR;
-	OSAL_MEMCPY(vf_info->bulletin.p_virt->mac, mac, ETH_ALEN);
-
-	vf_info->bulletin.p_virt->valid_bitmap |= feature;
-
-	if (p_hwfn->pf_params.eth_pf_params.allow_vf_mac_change ||
-	    vf_info->p_vf_info.is_trusted_configured)
-		ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
-
-	return ECORE_SUCCESS;
-}
-
-#ifndef LINUX_REMOVE
-enum _ecore_status_t
-ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
-					       bool b_untagged_only, int vfid)
-{
-	struct ecore_vf_info *vf_info;
-	u64 feature;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info) {
-		DP_NOTICE(p_hwfn->p_dev, true,
-			  "Can not set untagged default, invalid vfid [%d]\n",
-			  vfid);
-		return ECORE_INVAL;
-	}
-	if (vf_info->b_malicious) {
-		DP_NOTICE(p_hwfn->p_dev, false,
-			  "Can't set untagged default to malicious VF [%d]\n",
-			  vfid);
-		return ECORE_INVAL;
-	}
-
-	/* Since this is configurable only during vport-start, don't take it
-	 * if we're past that point.
-	 */
-	if (vf_info->state == VF_ENABLED) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Can't support untagged change for vfid[%d] -"
-			   " VF is already active\n",
-			   vfid);
-		return ECORE_INVAL;
-	}
-
-	/* Set configuration; This will later be taken into account during the
-	 * VF initialization.
-	 */
-	feature = (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT) |
-	    (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED);
-	vf_info->bulletin.p_virt->valid_bitmap |= feature;
-
-	vf_info->bulletin.p_virt->default_only_untagged = b_untagged_only ? 1
-	    : 0;
-
-	return ECORE_SUCCESS;
-}
-
-void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
-				  u16 *opaque_fid)
-{
-	struct ecore_vf_info *vf_info;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info)
-		return;
-
-	*opaque_fid = vf_info->opaque_fid;
-}
-#endif
-
-void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
-					u16 pvid, int vfid)
-{
-	struct ecore_vf_info *vf_info;
-	u64 feature;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info) {
-		DP_NOTICE(p_hwfn->p_dev, true,
-			  "Can not set forced MAC, invalid vfid [%d]\n",
-			  vfid);
-		return;
-	}
-	if (vf_info->b_malicious) {
-		DP_NOTICE(p_hwfn->p_dev, false,
-			  "Can't set forced vlan to malicious VF [%d]\n",
-			  vfid);
-		return;
-	}
-
-	feature = 1 << VLAN_ADDR_FORCED;
-	vf_info->bulletin.p_virt->pvid = pvid;
-	if (pvid)
-		vf_info->bulletin.p_virt->valid_bitmap |= feature;
-	else
-		vf_info->bulletin.p_virt->valid_bitmap &= ~feature;
-
-	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
-}
-
 void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn,
 				      int vfid, u16 vxlan_port, u16 geneve_port)
 {
@@ -4715,360 +4305,3 @@ bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
 
 	return !!p_vf_info->vport_instance;
 }
-
-bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid)
-{
-	struct ecore_vf_info *p_vf_info;
-
-	p_vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!p_vf_info)
-		return true;
-
-	return p_vf_info->state == VF_STOPPED;
-}
-
-bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid)
-{
-	struct ecore_vf_info *vf_info;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info)
-		return false;
-
-	return vf_info->spoof_chk;
-}
-
-enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
-					    int vfid, bool val)
-{
-	struct ecore_vf_info *vf;
-	enum _ecore_status_t rc = ECORE_INVAL;
-
-	if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
-		DP_NOTICE(p_hwfn, true,
-			  "SR-IOV sanity check failed, can't set spoofchk\n");
-		goto out;
-	}
-
-	vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf)
-		goto out;
-
-	if (!ecore_iov_vf_has_vport_instance(p_hwfn, vfid)) {
-		/* After VF VPORT start PF will configure spoof check */
-		vf->req_spoofchk_val = val;
-		rc = ECORE_SUCCESS;
-		goto out;
-	}
-
-	rc = __ecore_iov_spoofchk_set(p_hwfn, vf, val);
-
-out:
-	return rc;
-}
-
-u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn)
-{
-	u8 max_chains_per_vf = p_hwfn->hw_info.max_chains_per_vf;
-
-	max_chains_per_vf = (max_chains_per_vf) ? max_chains_per_vf
-	    : ECORE_MAX_VF_CHAINS_PER_PF;
-
-	return max_chains_per_vf;
-}
-
-void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn,
-					  u16 rel_vf_id,
-					  void **pp_req_virt_addr,
-					  u16 *p_req_virt_size)
-{
-	struct ecore_vf_info *vf_info =
-	    ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-
-	if (!vf_info)
-		return;
-
-	if (pp_req_virt_addr)
-		*pp_req_virt_addr = vf_info->vf_mbx.req_virt;
-
-	if (p_req_virt_size)
-		*p_req_virt_size = sizeof(*vf_info->vf_mbx.req_virt);
-}
-
-void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn,
-					    u16 rel_vf_id,
-					    void **pp_reply_virt_addr,
-					    u16 *p_reply_virt_size)
-{
-	struct ecore_vf_info *vf_info =
-	    ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-
-	if (!vf_info)
-		return;
-
-	if (pp_reply_virt_addr)
-		*pp_reply_virt_addr = vf_info->vf_mbx.reply_virt;
-
-	if (p_reply_virt_size)
-		*p_reply_virt_size = sizeof(*vf_info->vf_mbx.reply_virt);
-}
-
-#ifdef CONFIG_ECORE_SW_CHANNEL
-struct ecore_iov_sw_mbx *ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
-						 u16 rel_vf_id)
-{
-	struct ecore_vf_info *vf_info =
-	    ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-
-	if (!vf_info)
-		return OSAL_NULL;
-
-	return &vf_info->vf_mbx.sw_mbx;
-}
-#endif
-
-bool ecore_iov_is_valid_vfpf_msg_length(u32 length)
-{
-	return (length >= sizeof(struct vfpf_first_tlv) &&
-		(length <= sizeof(union vfpf_tlvs)));
-}
-
-u32 ecore_iov_pfvf_msg_length(void)
-{
-	return sizeof(union pfvf_tlvs);
-}
-
-u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn,
-				      u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf || !p_vf->bulletin.p_virt)
-		return OSAL_NULL;
-
-	if (!(p_vf->bulletin.p_virt->valid_bitmap &
-		(1 << VFPF_BULLETIN_MAC_ADDR)))
-		return OSAL_NULL;
-
-	return p_vf->bulletin.p_virt->mac;
-}
-
-u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf || !p_vf->bulletin.p_virt)
-		return OSAL_NULL;
-
-	if (!(p_vf->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED)))
-		return OSAL_NULL;
-
-	return p_vf->bulletin.p_virt->mac;
-}
-
-u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn,
-				       u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf || !p_vf->bulletin.p_virt)
-		return 0;
-
-	if (!(p_vf->bulletin.p_virt->valid_bitmap & (1 << VLAN_ADDR_FORCED)))
-		return 0;
-
-	return p_vf->bulletin.p_virt->pvid;
-}
-
-enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
-						 struct ecore_ptt *p_ptt,
-						 int vfid, int val)
-{
-	struct ecore_vf_info *vf;
-	u8 abs_vp_id = 0;
-	u16 rl_id;
-	enum _ecore_status_t rc;
-
-	vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-
-	if (!vf)
-		return ECORE_INVAL;
-
-	rc = ecore_fw_vport(p_hwfn, vf->vport_id, &abs_vp_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	rl_id = abs_vp_id; /* The "rl_id" is set as the "vport_id" */
-	return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, (u32)val);
-}
-
-enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
-						     int vfid, u32 rate)
-{
-	struct ecore_vf_info *vf;
-	int i;
-
-	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
-		if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
-			DP_NOTICE(p_hwfn, true,
-				  "SR-IOV sanity check failed, can't set min rate\n");
-			return ECORE_INVAL;
-		}
-	}
-
-	vf = ecore_iov_get_vf_info(ECORE_LEADING_HWFN(p_dev), (u16)vfid, true);
-	if (!vf) {
-		DP_NOTICE(p_dev, true,
-			  "Getting vf info failed, can't set min rate\n");
-		return ECORE_INVAL;
-	}
-
-	return ecore_configure_vport_wfq(p_dev, vf->vport_id, rate);
-}
-
-enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt,
-					    int vfid,
-					    struct ecore_eth_stats *p_stats)
-{
-	struct ecore_vf_info *vf;
-
-	vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf)
-		return ECORE_INVAL;
-
-	if (vf->state != VF_ENABLED)
-		return ECORE_INVAL;
-
-	__ecore_get_vport_stats(p_hwfn, p_ptt, p_stats,
-				vf->abs_vf_id + 0x10, false);
-
-	return ECORE_SUCCESS;
-}
-
-u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf)
-		return 0;
-
-	return p_vf->num_rxqs;
-}
-
-u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf)
-		return 0;
-
-	return p_vf->num_active_rxqs;
-}
-
-void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf)
-		return OSAL_NULL;
-
-	return p_vf->ctx;
-}
-
-u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf)
-		return 0;
-
-	return p_vf->num_sbs;
-}
-
-bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf)
-		return false;
-
-	return (p_vf->state == VF_FREE);
-}
-
-bool ecore_iov_is_vf_acquired_not_initialized(struct ecore_hwfn *p_hwfn,
-					      u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf)
-		return false;
-
-	return (p_vf->state == VF_ACQUIRED);
-}
-
-bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf)
-		return false;
-
-	return (p_vf->state == VF_ENABLED);
-}
-
-bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn,
-			     u16 rel_vf_id)
-{
-	struct ecore_vf_info *p_vf;
-
-	p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-	if (!p_vf)
-		return false;
-
-	return (p_vf->state != VF_FREE && p_vf->state != VF_STOPPED);
-}
-
-int
-ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid)
-{
-	struct ecore_wfq_data *vf_vp_wfq;
-	struct ecore_vf_info *vf_info;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info)
-		return 0;
-
-	vf_vp_wfq = &p_hwfn->qm_info.wfq_data[vf_info->vport_id];
-
-	if (vf_vp_wfq->configured)
-		return vf_vp_wfq->min_speed;
-	else
-		return 0;
-}
-
-#ifdef CONFIG_ECORE_SW_CHANNEL
-void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid,
-				 bool b_is_hw)
-{
-	struct ecore_vf_info *vf_info;
-
-	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-	if (!vf_info)
-		return;
-
-	vf_info->b_hw_channel = b_is_hw;
-}
-#endif
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index db03bc494f..68a22283d1 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1926,55 +1926,7 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
 	return true;
 }
 
-void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
-				     u16 *p_vxlan_port,
-				     u16 *p_geneve_port)
-{
-	struct ecore_bulletin_content *p_bulletin;
-
-	p_bulletin = &p_hwfn->vf_iov_info->bulletin_shadow;
-
-	*p_vxlan_port = p_bulletin->vxlan_udp_port;
-	*p_geneve_port = p_bulletin->geneve_udp_port;
-}
-
-bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid)
-{
-	struct ecore_bulletin_content *bulletin;
-
-	bulletin = &hwfn->vf_iov_info->bulletin_shadow;
-
-	if (!(bulletin->valid_bitmap & (1 << VLAN_ADDR_FORCED)))
-		return false;
-
-	if (dst_pvid)
-		*dst_pvid = bulletin->pvid;
-
-	return true;
-}
-
 bool ecore_vf_get_pre_fp_hsi(struct ecore_hwfn *p_hwfn)
 {
 	return p_hwfn->vf_iov_info->b_pre_fp_hsi;
 }
-
-void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
-			     u16 *fw_major, u16 *fw_minor, u16 *fw_rev,
-			     u16 *fw_eng)
-{
-	struct pf_vf_pfdev_info *info;
-
-	info = &p_hwfn->vf_iov_info->acquire_resp.pfdev_info;
-
-	*fw_major = info->fw_major;
-	*fw_minor = info->fw_minor;
-	*fw_rev = info->fw_rev;
-	*fw_eng = info->fw_eng;
-}
-
-#ifdef CONFIG_ECORE_SW_CHANNEL
-void ecore_vf_set_hw_channel(struct ecore_hwfn *p_hwfn, bool b_is_hw)
-{
-	p_hwfn->vf_iov_info->b_hw_channel = b_is_hw;
-}
-#endif
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index 43951a9a34..68286355bf 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -125,16 +125,6 @@ bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac);
 bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
 				      u8 *p_is_forced);
 
-/**
- * @brief Check if force vlan is set and copy the forced vlan
- *        from bulletin board
- *
- * @param hwfn
- * @param dst_pvid
- * @return bool
- */
-bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid);
-
 /**
  * @brief Check if VF is based on PF whose driver is pre-fp-hsi version;
  *        This affects the fastpath implementation of the driver.
@@ -147,35 +137,5 @@ bool ecore_vf_get_pre_fp_hsi(struct ecore_hwfn *p_hwfn);
 
 #endif
 
-/**
- * @brief Set firmware version information in dev_info from VFs acquire
- *  response tlv
- *
- * @param p_hwfn
- * @param fw_major
- * @param fw_minor
- * @param fw_rev
- * @param fw_eng
- */
-void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
-			     u16 *fw_major,
-			     u16 *fw_minor,
-			     u16 *fw_rev,
-			     u16 *fw_eng);
-void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
-				     u16 *p_vxlan_port, u16 *p_geneve_port);
-
-#ifdef CONFIG_ECORE_SW_CHANNEL
-/**
- * @brief set the VF to use a SW/HW channel when communicating with PF.
- *        NOTICE: today the likely first place to call this from VF
- *        would be OSAL_VF_FILL_ACQUIRE_RESC_REQ(); Might want to consider
- *        something a bit more appropriate.
- *
- * @param p_hwfn
- * @param b_is_hw - true iff VF is to use a HW-channel
- */
-void ecore_vf_set_hw_channel(struct ecore_hwfn *p_hwfn, bool b_is_hw);
-#endif
 #endif
 #endif
diff --git a/drivers/net/qede/qede_debug.c b/drivers/net/qede/qede_debug.c
index 2297d245c4..ae4ebd186a 100644
--- a/drivers/net/qede/qede_debug.c
+++ b/drivers/net/qede/qede_debug.c
@@ -828,15 +828,6 @@ static u32 qed_read_unaligned_dword(u8 *buf)
 	return dword;
 }
 
-/* Sets the value of the specified GRC param */
-static void qed_grc_set_param(struct ecore_hwfn *p_hwfn,
-			      enum dbg_grc_params grc_param, u32 val)
-{
-	struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
-
-	dev_data->grc.param_val[grc_param] = val;
-}
-
 /* Returns the value of the specified GRC param */
 static u32 qed_grc_get_param(struct ecore_hwfn *p_hwfn,
 			     enum dbg_grc_params grc_param)
@@ -4893,69 +4884,6 @@ bool qed_read_fw_info(struct ecore_hwfn *p_hwfn,
 	return false;
 }
 
-enum dbg_status qed_dbg_grc_config(struct ecore_hwfn *p_hwfn,
-				   enum dbg_grc_params grc_param, u32 val)
-{
-	struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
-	enum dbg_status status;
-	int i;
-
-	DP_VERBOSE(p_hwfn->p_dev,
-		   ECORE_MSG_DEBUG,
-		   "dbg_grc_config: paramId = %d, val = %d\n", grc_param, val);
-
-	status = qed_dbg_dev_init(p_hwfn);
-	if (status != DBG_STATUS_OK)
-		return status;
-
-	/* Initializes the GRC parameters (if not initialized). Needed in order
-	 * to set the default parameter values for the first time.
-	 */
-	qed_dbg_grc_init_params(p_hwfn);
-
-	if (grc_param >= MAX_DBG_GRC_PARAMS)
-		return DBG_STATUS_INVALID_ARGS;
-	if (val < s_grc_param_defs[grc_param].min ||
-	    val > s_grc_param_defs[grc_param].max)
-		return DBG_STATUS_INVALID_ARGS;
-
-	if (s_grc_param_defs[grc_param].is_preset) {
-		/* Preset param */
-
-		/* Disabling a preset is not allowed. Call
-		 * dbg_grc_set_params_default instead.
-		 */
-		if (!val)
-			return DBG_STATUS_INVALID_ARGS;
-
-		/* Update all params with the preset values */
-		for (i = 0; i < MAX_DBG_GRC_PARAMS; i++) {
-			struct grc_param_defs *defs = &s_grc_param_defs[i];
-			u32 preset_val;
-			/* Skip persistent params */
-			if (defs->is_persistent)
-				continue;
-
-			/* Find preset value */
-			if (grc_param == DBG_GRC_PARAM_EXCLUDE_ALL)
-				preset_val =
-				    defs->exclude_all_preset_val;
-			else if (grc_param == DBG_GRC_PARAM_CRASH)
-				preset_val =
-				    defs->crash_preset_val[dev_data->chip_id];
-			else
-				return DBG_STATUS_INVALID_ARGS;
-
-			qed_grc_set_param(p_hwfn, i, preset_val);
-		}
-	} else {
-		/* Regular param - set its value */
-		qed_grc_set_param(p_hwfn, grc_param, val);
-	}
-
-	return DBG_STATUS_OK;
-}
-
 /* Assign default GRC param values */
 void qed_dbg_grc_set_params_default(struct ecore_hwfn *p_hwfn)
 {
@@ -5362,79 +5290,6 @@ static enum dbg_status qed_dbg_ilt_dump(struct ecore_hwfn *p_hwfn,
 	return DBG_STATUS_OK;
 }
 
-enum dbg_status qed_dbg_read_attn(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  enum block_id block_id,
-				  enum dbg_attn_type attn_type,
-				  bool clear_status,
-				  struct dbg_attn_block_result *results)
-{
-	enum dbg_status status = qed_dbg_dev_init(p_hwfn);
-	u8 reg_idx, num_attn_regs, num_result_regs = 0;
-	const struct dbg_attn_reg *attn_reg_arr;
-
-	if (status != DBG_STATUS_OK)
-		return status;
-
-	if (!p_hwfn->dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr ||
-	    !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_BLOCKS].ptr ||
-	    !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_REGS].ptr)
-		return DBG_STATUS_DBG_ARRAY_NOT_SET;
-
-	attn_reg_arr = qed_get_block_attn_regs(p_hwfn,
-					       block_id,
-					       attn_type, &num_attn_regs);
-
-	for (reg_idx = 0; reg_idx < num_attn_regs; reg_idx++) {
-		const struct dbg_attn_reg *reg_data = &attn_reg_arr[reg_idx];
-		struct dbg_attn_reg_result *reg_result;
-		u32 sts_addr, sts_val;
-		u16 modes_buf_offset;
-		bool eval_mode;
-
-		/* Check mode */
-		eval_mode = GET_FIELD(reg_data->mode.data,
-				      DBG_MODE_HDR_EVAL_MODE) > 0;
-		modes_buf_offset = GET_FIELD(reg_data->mode.data,
-					     DBG_MODE_HDR_MODES_BUF_OFFSET);
-		if (eval_mode && !qed_is_mode_match(p_hwfn, &modes_buf_offset))
-			continue;
-
-		/* Mode match - read attention status register */
-		sts_addr = DWORDS_TO_BYTES(clear_status ?
-					   reg_data->sts_clr_address :
-					   GET_FIELD(reg_data->data,
-						     DBG_ATTN_REG_STS_ADDRESS));
-		sts_val = ecore_rd(p_hwfn, p_ptt, sts_addr);
-		if (!sts_val)
-			continue;
-
-		/* Non-zero attention status - add to results */
-		reg_result = &results->reg_results[num_result_regs];
-		SET_FIELD(reg_result->data,
-			  DBG_ATTN_REG_RESULT_STS_ADDRESS, sts_addr);
-		SET_FIELD(reg_result->data,
-			  DBG_ATTN_REG_RESULT_NUM_REG_ATTN,
-			  GET_FIELD(reg_data->data, DBG_ATTN_REG_NUM_REG_ATTN));
-		reg_result->block_attn_offset = reg_data->block_attn_offset;
-		reg_result->sts_val = sts_val;
-		reg_result->mask_val = ecore_rd(p_hwfn,
-					      p_ptt,
-					      DWORDS_TO_BYTES
-					      (reg_data->mask_address));
-		num_result_regs++;
-	}
-
-	results->block_id = (u8)block_id;
-	results->names_offset =
-	    qed_get_block_attn_data(p_hwfn, block_id, attn_type)->names_offset;
-	SET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_ATTN_TYPE, attn_type);
-	SET_FIELD(results->data,
-		  DBG_ATTN_BLOCK_RESULT_NUM_REGS, num_result_regs);
-
-	return DBG_STATUS_OK;
-}
-
 /******************************* Data Types **********************************/
 
 /* REG fifo element */
@@ -6067,19 +5922,6 @@ static u32 qed_print_section_params(u32 *dump_buf,
 	return dump_offset;
 }
 
-/* Returns the block name that matches the specified block ID,
- * or NULL if not found.
- */
-static const char *qed_dbg_get_block_name(struct ecore_hwfn *p_hwfn,
-					  enum block_id block_id)
-{
-	const struct dbg_block_user *block =
-	    (const struct dbg_block_user *)
-	    p_hwfn->dbg_arrays[BIN_BUF_DBG_BLOCKS_USER_DATA].ptr + block_id;
-
-	return (const char *)block->name;
-}
-
 static struct dbg_tools_user_data *qed_dbg_get_user_data(struct ecore_hwfn
 							 *p_hwfn)
 {
@@ -7180,15 +7022,6 @@ enum dbg_status qed_print_idle_chk_results(struct ecore_hwfn *p_hwfn,
 				       num_errors, num_warnings);
 }
 
-void qed_dbg_mcp_trace_set_meta_data(struct ecore_hwfn *p_hwfn,
-				     const u32 *meta_buf)
-{
-	struct dbg_tools_user_data *dev_user_data =
-		qed_dbg_get_user_data(p_hwfn);
-
-	dev_user_data->mcp_trace_user_meta_buf = meta_buf;
-}
-
 enum dbg_status
 qed_get_mcp_trace_results_buf_size(struct ecore_hwfn *p_hwfn,
 				   u32 *dump_buf,
@@ -7211,31 +7044,6 @@ enum dbg_status qed_print_mcp_trace_results(struct ecore_hwfn *p_hwfn,
 					results_buf, &parsed_buf_size, true);
 }
 
-enum dbg_status qed_print_mcp_trace_results_cont(struct ecore_hwfn *p_hwfn,
-						 u32 *dump_buf,
-						 char *results_buf)
-{
-	u32 parsed_buf_size;
-
-	return qed_parse_mcp_trace_dump(p_hwfn, dump_buf, results_buf,
-					&parsed_buf_size, false);
-}
-
-enum dbg_status qed_print_mcp_trace_line(struct ecore_hwfn *p_hwfn,
-					 u8 *dump_buf,
-					 u32 num_dumped_bytes,
-					 char *results_buf)
-{
-	u32 parsed_results_bytes;
-
-	return qed_parse_mcp_trace_buf(p_hwfn,
-				       dump_buf,
-				       num_dumped_bytes,
-				       0,
-				       num_dumped_bytes,
-				       results_buf, &parsed_results_bytes);
-}
-
 /* Frees the specified MCP Trace meta data */
 void qed_mcp_trace_free_meta_data(struct ecore_hwfn *p_hwfn)
 {
@@ -7350,90 +7158,6 @@ qed_print_fw_asserts_results(__rte_unused struct ecore_hwfn *p_hwfn,
 					 results_buf, &parsed_buf_size);
 }
 
-enum dbg_status qed_dbg_parse_attn(struct ecore_hwfn *p_hwfn,
-				   struct dbg_attn_block_result *results)
-{
-	const u32 *block_attn_name_offsets;
-	const char *attn_name_base;
-	const char *block_name;
-	enum dbg_attn_type attn_type;
-	u8 num_regs, i, j;
-
-	num_regs = GET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_NUM_REGS);
-	attn_type = GET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_ATTN_TYPE);
-	block_name = qed_dbg_get_block_name(p_hwfn, results->block_id);
-	if (!block_name)
-		return DBG_STATUS_INVALID_ARGS;
-
-	if (!p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_INDEXES].ptr ||
-	    !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_NAME_OFFSETS].ptr ||
-	    !p_hwfn->dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr)
-		return DBG_STATUS_DBG_ARRAY_NOT_SET;
-
-	block_attn_name_offsets =
-	    (u32 *)p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_NAME_OFFSETS].ptr +
-	    results->names_offset;
-
-	attn_name_base = p_hwfn->dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr;
-
-	/* Go over registers with a non-zero attention status */
-	for (i = 0; i < num_regs; i++) {
-		struct dbg_attn_bit_mapping *bit_mapping;
-		struct dbg_attn_reg_result *reg_result;
-		u8 num_reg_attn, bit_idx = 0;
-
-		reg_result = &results->reg_results[i];
-		num_reg_attn = GET_FIELD(reg_result->data,
-					 DBG_ATTN_REG_RESULT_NUM_REG_ATTN);
-		bit_mapping = (struct dbg_attn_bit_mapping *)
-		    p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_INDEXES].ptr +
-		    reg_result->block_attn_offset;
-
-		/* Go over attention status bits */
-		for (j = 0; j < num_reg_attn; j++, bit_idx++) {
-			u16 attn_idx_val = GET_FIELD(bit_mapping[j].data,
-						     DBG_ATTN_BIT_MAPPING_VAL);
-			const char *attn_name, *attn_type_str, *masked_str;
-			u32 attn_name_offset;
-			u32 sts_addr;
-
-			/* Check if bit mask should be advanced (due to unused
-			 * bits).
-			 */
-			if (GET_FIELD(bit_mapping[j].data,
-				      DBG_ATTN_BIT_MAPPING_IS_UNUSED_BIT_CNT)) {
-				bit_idx += (u8)attn_idx_val;
-				continue;
-			}
-
-			/* Check current bit index */
-			if (!(reg_result->sts_val & OSAL_BIT(bit_idx)))
-				continue;
-
-			/* An attention bit with value=1 was found
-			 * Find attention name
-			 */
-			attn_name_offset =
-				block_attn_name_offsets[attn_idx_val];
-			attn_name = attn_name_base + attn_name_offset;
-			attn_type_str =
-				(attn_type ==
-				 ATTN_TYPE_INTERRUPT ? "Interrupt" :
-				 "Parity");
-			masked_str = reg_result->mask_val & OSAL_BIT(bit_idx) ?
-				     " [masked]" : "";
-			sts_addr = GET_FIELD(reg_result->data,
-					     DBG_ATTN_REG_RESULT_STS_ADDRESS);
-			DP_NOTICE(p_hwfn, false,
-				  "%s (%s) : %s [address 0x%08x, bit %d]%s\n",
-				  block_name, attn_type_str, attn_name,
-				  sts_addr * 4, bit_idx, masked_str);
-		}
-	}
-
-	return DBG_STATUS_OK;
-}
-
 /* Wrapper for unifying the idle_chk and mcp_trace api */
 static enum dbg_status
 qed_print_idle_chk_results_wrapper(struct ecore_hwfn *p_hwfn,
@@ -7683,22 +7407,6 @@ int qed_dbg_igu_fifo_size(struct ecore_dev *edev)
 	return qed_dbg_feature_size(edev, DBG_FEATURE_IGU_FIFO);
 }
 
-static int qed_dbg_nvm_image_length(struct ecore_hwfn *p_hwfn,
-				    enum ecore_nvm_images image_id, u32 *length)
-{
-	struct ecore_nvm_image_att image_att;
-	int rc;
-
-	*length = 0;
-	rc = ecore_mcp_get_nvm_image_att(p_hwfn, image_id, &image_att);
-	if (rc)
-		return rc;
-
-	*length = image_att.length;
-
-	return rc;
-}
-
 int qed_dbg_protection_override(struct ecore_dev *edev, void *buffer,
 				u32 *num_dumped_bytes)
 {
@@ -7777,225 +7485,6 @@ enum debug_print_features {
 	ILT_DUMP = 13,
 };
 
-static u32 qed_calc_regdump_header(struct ecore_dev *edev,
-				   enum debug_print_features feature,
-				   int engine, u32 feature_size, u8 omit_engine)
-{
-	u32 res = 0;
-
-	SET_FIELD(res, REGDUMP_HEADER_SIZE, feature_size);
-	if (res != feature_size)
-		DP_NOTICE(edev, false,
-			  "Feature %d is too large (size 0x%x) and will corrupt the dump\n",
-			  feature, feature_size);
-
-	SET_FIELD(res, REGDUMP_HEADER_FEATURE, feature);
-	SET_FIELD(res, REGDUMP_HEADER_OMIT_ENGINE, omit_engine);
-	SET_FIELD(res, REGDUMP_HEADER_ENGINE, engine);
-
-	return res;
-}
-
-int qed_dbg_all_data(struct ecore_dev *edev, void *buffer)
-{
-	u8 cur_engine, omit_engine = 0, org_engine;
-	struct ecore_hwfn *p_hwfn =
-		&edev->hwfns[edev->dbg_params.engine_for_debug];
-	struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
-	int grc_params[MAX_DBG_GRC_PARAMS], i;
-	u32 offset = 0, feature_size;
-	int rc;
-
-	for (i = 0; i < MAX_DBG_GRC_PARAMS; i++)
-		grc_params[i] = dev_data->grc.param_val[i];
-
-	if (!ECORE_IS_CMT(edev))
-		omit_engine = 1;
-
-	OSAL_MUTEX_ACQUIRE(&edev->dbg_lock);
-
-	org_engine = qed_get_debug_engine(edev);
-	for (cur_engine = 0; cur_engine < edev->num_hwfns; cur_engine++) {
-		/* Collect idle_chks and grcDump for each hw function */
-		DP_VERBOSE(edev, ECORE_MSG_DEBUG,
-			   "obtaining idle_chk and grcdump for current engine\n");
-		qed_set_debug_engine(edev, cur_engine);
-
-		/* First idle_chk */
-		rc = qed_dbg_idle_chk(edev, (u8 *)buffer + offset +
-				      REGDUMP_HEADER_SIZE, &feature_size);
-		if (!rc) {
-			*(u32 *)((u8 *)buffer + offset) =
-			    qed_calc_regdump_header(edev, IDLE_CHK, cur_engine,
-						    feature_size, omit_engine);
-			offset += (feature_size + REGDUMP_HEADER_SIZE);
-		} else {
-			DP_ERR(edev, "qed_dbg_idle_chk failed. rc = %d\n", rc);
-		}
-
-		/* Second idle_chk */
-		rc = qed_dbg_idle_chk(edev, (u8 *)buffer + offset +
-				      REGDUMP_HEADER_SIZE, &feature_size);
-		if (!rc) {
-			*(u32 *)((u8 *)buffer + offset) =
-			    qed_calc_regdump_header(edev, IDLE_CHK, cur_engine,
-						    feature_size, omit_engine);
-			offset += (feature_size + REGDUMP_HEADER_SIZE);
-		} else {
-			DP_ERR(edev, "qed_dbg_idle_chk failed. rc = %d\n", rc);
-		}
-
-		/* reg_fifo dump */
-		rc = qed_dbg_reg_fifo(edev, (u8 *)buffer + offset +
-				      REGDUMP_HEADER_SIZE, &feature_size);
-		if (!rc) {
-			*(u32 *)((u8 *)buffer + offset) =
-			    qed_calc_regdump_header(edev, REG_FIFO, cur_engine,
-						    feature_size, omit_engine);
-			offset += (feature_size + REGDUMP_HEADER_SIZE);
-		} else {
-			DP_ERR(edev, "qed_dbg_reg_fifo failed. rc = %d\n", rc);
-		}
-
-		/* igu_fifo dump */
-		rc = qed_dbg_igu_fifo(edev, (u8 *)buffer + offset +
-				      REGDUMP_HEADER_SIZE, &feature_size);
-		if (!rc) {
-			*(u32 *)((u8 *)buffer + offset) =
-			    qed_calc_regdump_header(edev, IGU_FIFO, cur_engine,
-						    feature_size, omit_engine);
-			offset += (feature_size + REGDUMP_HEADER_SIZE);
-		} else {
-			DP_ERR(edev, "qed_dbg_igu_fifo failed. rc = %d", rc);
-		}
-
-		/* protection_override dump */
-		rc = qed_dbg_protection_override(edev, (u8 *)buffer + offset +
-						 REGDUMP_HEADER_SIZE,
-						 &feature_size);
-		if (!rc) {
-			*(u32 *)((u8 *)buffer + offset) =
-			    qed_calc_regdump_header(edev, PROTECTION_OVERRIDE,
-						    cur_engine,
-						    feature_size, omit_engine);
-			offset += (feature_size + REGDUMP_HEADER_SIZE);
-		} else {
-			DP_ERR(edev,
-			       "qed_dbg_protection_override failed. rc = %d\n",
-			       rc);
-		}
-
-		/* fw_asserts dump */
-		rc = qed_dbg_fw_asserts(edev, (u8 *)buffer + offset +
-					REGDUMP_HEADER_SIZE, &feature_size);
-		if (!rc) {
-			*(u32 *)((u8 *)buffer + offset) =
-			    qed_calc_regdump_header(edev, FW_ASSERTS,
-						    cur_engine, feature_size,
-						    omit_engine);
-			offset += (feature_size + REGDUMP_HEADER_SIZE);
-		} else {
-			DP_ERR(edev, "qed_dbg_fw_asserts failed. rc = %d\n",
-			       rc);
-		}
-
-		/* GRC dump - must be last because when mcp stuck it will
-		 * clutter idle_chk, reg_fifo, ...
-		 */
-		for (i = 0; i < MAX_DBG_GRC_PARAMS; i++)
-			dev_data->grc.param_val[i] = grc_params[i];
-
-		rc = qed_dbg_grc(edev, (u8 *)buffer + offset +
-				 REGDUMP_HEADER_SIZE, &feature_size);
-		if (!rc) {
-			*(u32 *)((u8 *)buffer + offset) =
-			    qed_calc_regdump_header(edev, GRC_DUMP,
-						    cur_engine,
-						    feature_size, omit_engine);
-			offset += (feature_size + REGDUMP_HEADER_SIZE);
-		} else {
-			DP_ERR(edev, "qed_dbg_grc failed. rc = %d", rc);
-		}
-	}
-
-	qed_set_debug_engine(edev, org_engine);
-
-	/* mcp_trace */
-	rc = qed_dbg_mcp_trace(edev, (u8 *)buffer + offset +
-			       REGDUMP_HEADER_SIZE, &feature_size);
-	if (!rc) {
-		*(u32 *)((u8 *)buffer + offset) =
-		    qed_calc_regdump_header(edev, MCP_TRACE, cur_engine,
-					    feature_size, omit_engine);
-		offset += (feature_size + REGDUMP_HEADER_SIZE);
-	} else {
-		DP_ERR(edev, "qed_dbg_mcp_trace failed. rc = %d\n", rc);
-	}
-
-	OSAL_MUTEX_RELEASE(&edev->dbg_lock);
-
-	return 0;
-}
-
-int qed_dbg_all_data_size(struct ecore_dev *edev)
-{
-	struct ecore_hwfn *p_hwfn =
-		&edev->hwfns[edev->dbg_params.engine_for_debug];
-	u32 regs_len = 0, image_len = 0, ilt_len = 0, total_ilt_len = 0;
-	u8 cur_engine, org_engine;
-
-	edev->disable_ilt_dump = false;
-	org_engine = qed_get_debug_engine(edev);
-	for (cur_engine = 0; cur_engine < edev->num_hwfns; cur_engine++) {
-		/* Engine specific */
-		DP_VERBOSE(edev, ECORE_MSG_DEBUG,
-			   "calculating idle_chk and grcdump register length for current engine\n");
-		qed_set_debug_engine(edev, cur_engine);
-		regs_len += REGDUMP_HEADER_SIZE + qed_dbg_idle_chk_size(edev) +
-			    REGDUMP_HEADER_SIZE + qed_dbg_idle_chk_size(edev) +
-			    REGDUMP_HEADER_SIZE + qed_dbg_grc_size(edev) +
-			    REGDUMP_HEADER_SIZE + qed_dbg_reg_fifo_size(edev) +
-			    REGDUMP_HEADER_SIZE + qed_dbg_igu_fifo_size(edev) +
-			    REGDUMP_HEADER_SIZE +
-			    qed_dbg_protection_override_size(edev) +
-			    REGDUMP_HEADER_SIZE + qed_dbg_fw_asserts_size(edev);
-
-		ilt_len = REGDUMP_HEADER_SIZE + qed_dbg_ilt_size(edev);
-		if (ilt_len < ILT_DUMP_MAX_SIZE) {
-			total_ilt_len += ilt_len;
-			regs_len += ilt_len;
-		}
-	}
-
-	qed_set_debug_engine(edev, org_engine);
-
-	/* Engine common */
-	regs_len += REGDUMP_HEADER_SIZE + qed_dbg_mcp_trace_size(edev);
-	qed_dbg_nvm_image_length(p_hwfn, ECORE_NVM_IMAGE_NVM_CFG1, &image_len);
-	if (image_len)
-		regs_len += REGDUMP_HEADER_SIZE + image_len;
-	qed_dbg_nvm_image_length(p_hwfn, ECORE_NVM_IMAGE_DEFAULT_CFG,
-				 &image_len);
-	if (image_len)
-		regs_len += REGDUMP_HEADER_SIZE + image_len;
-	qed_dbg_nvm_image_length(p_hwfn, ECORE_NVM_IMAGE_NVM_META, &image_len);
-	if (image_len)
-		regs_len += REGDUMP_HEADER_SIZE + image_len;
-	qed_dbg_nvm_image_length(p_hwfn, ECORE_NVM_IMAGE_MDUMP, &image_len);
-	if (image_len)
-		regs_len += REGDUMP_HEADER_SIZE + image_len;
-
-	if (regs_len > REGDUMP_MAX_SIZE) {
-		DP_VERBOSE(edev, ECORE_MSG_DEBUG,
-			   "Dump exceeds max size 0x%x, disable ILT dump\n",
-			   REGDUMP_MAX_SIZE);
-		edev->disable_ilt_dump = true;
-		regs_len -= total_ilt_len;
-	}
-
-	return regs_len;
-}
-
 int qed_dbg_feature(struct ecore_dev *edev, void *buffer,
 		    enum ecore_dbg_features feature, u32 *num_dumped_bytes)
 {
@@ -8098,24 +7587,3 @@ void qed_dbg_pf_init(struct ecore_dev *edev)
 	/* Set the hwfn to be 0 as default */
 	edev->dbg_params.engine_for_debug = 0;
 }
-
-void qed_dbg_pf_exit(struct ecore_dev *edev)
-{
-	struct ecore_dbg_feature *feature = NULL;
-	enum ecore_dbg_features feature_idx;
-
-	PMD_INIT_FUNC_TRACE(edev);
-
-	/* debug features' buffers may be allocated if debug feature was used
-	 * but dump wasn't called
-	 */
-	for (feature_idx = 0; feature_idx < DBG_FEATURE_NUM; feature_idx++) {
-		feature = &edev->dbg_features[feature_idx];
-		if (feature->dump_buf) {
-			OSAL_VFREE(edev, feature->dump_buf);
-			feature->dump_buf = NULL;
-		}
-	}
-
-	OSAL_MUTEX_DEALLOC(&edev->dbg_lock);
-}
diff --git a/drivers/net/qede/qede_debug.h b/drivers/net/qede/qede_debug.h
index 93e1bd7109..90b55f1289 100644
--- a/drivers/net/qede/qede_debug.h
+++ b/drivers/net/qede/qede_debug.h
@@ -33,8 +33,6 @@ int qed_dbg_ilt_size(struct ecore_dev *edev);
 int qed_dbg_mcp_trace(struct ecore_dev *edev, void *buffer,
 		      u32 *num_dumped_bytes);
 int qed_dbg_mcp_trace_size(struct ecore_dev *edev);
-int qed_dbg_all_data(struct ecore_dev *edev, void *buffer);
-int qed_dbg_all_data_size(struct ecore_dev *edev);
 u8 qed_get_debug_engine(struct ecore_dev *edev);
 void qed_set_debug_engine(struct ecore_dev *edev, int engine_number);
 int qed_dbg_feature(struct ecore_dev *edev, void *buffer,
@@ -43,7 +41,6 @@ int
 qed_dbg_feature_size(struct ecore_dev *edev, enum ecore_dbg_features feature);
 
 void qed_dbg_pf_init(struct ecore_dev *edev);
-void qed_dbg_pf_exit(struct ecore_dev *edev);
 
 /***************************** Public Functions *******************************/
 
@@ -98,21 +95,6 @@ void qed_read_regs(struct ecore_hwfn *p_hwfn,
  */
 bool qed_read_fw_info(struct ecore_hwfn *p_hwfn,
 		      struct ecore_ptt *p_ptt, struct fw_info *fw_info);
-/**
- * @brief qed_dbg_grc_config - Sets the value of a GRC parameter.
- *
- * @param p_hwfn -	HW device data
- * @param grc_param -	GRC parameter
- * @param val -		Value to set.
- *
- * @return error if one of the following holds:
- *	- the version wasn't set
- *	- grc_param is invalid
- *	- val is outside the allowed boundaries
- */
-enum dbg_status qed_dbg_grc_config(struct ecore_hwfn *p_hwfn,
-				   enum dbg_grc_params grc_param, u32 val);
-
 /**
  * @brief qed_dbg_grc_set_params_default - Reverts all GRC parameters to their
  *	default value.
@@ -389,28 +371,6 @@ enum dbg_status qed_dbg_fw_asserts_dump(struct ecore_hwfn *p_hwfn,
 					u32 buf_size_in_dwords,
 					u32 *num_dumped_dwords);
 
-/**
- * @brief qed_dbg_read_attn - Reads the attention registers of the specified
- * block and type, and writes the results into the specified buffer.
- *
- * @param p_hwfn -	 HW device data
- * @param p_ptt -	 Ptt window used for writing the registers.
- * @param block -	 Block ID.
- * @param attn_type -	 Attention type.
- * @param clear_status - Indicates if the attention status should be cleared.
- * @param results -	 OUT: Pointer to write the read results into
- *
- * @return error if one of the following holds:
- *	- the version wasn't set
- * Otherwise, returns ok.
- */
-enum dbg_status qed_dbg_read_attn(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  enum block_id block,
-				  enum dbg_attn_type attn_type,
-				  bool clear_status,
-				  struct dbg_attn_block_result *results);
-
 /**
  * @brief qed_dbg_print_attn - Prints attention registers values in the
  *	specified results struct.
@@ -529,18 +489,6 @@ enum dbg_status qed_print_idle_chk_results(struct ecore_hwfn *p_hwfn,
 					   u32 *num_errors,
 					   u32 *num_warnings);
 
-/**
- * @brief qed_dbg_mcp_trace_set_meta_data - Sets the MCP Trace meta data.
- *
- * Needed in case the MCP Trace dump doesn't contain the meta data (e.g. due to
- * no NVRAM access).
- *
- * @param data - pointer to MCP Trace meta data
- * @param size - size of MCP Trace meta data in dwords
- */
-void qed_dbg_mcp_trace_set_meta_data(struct ecore_hwfn *p_hwfn,
-				     const u32 *meta_buf);
-
 /**
  * @brief qed_get_mcp_trace_results_buf_size - Returns the required buffer size
  *	for MCP Trace results (in bytes).
@@ -573,37 +521,6 @@ enum dbg_status qed_print_mcp_trace_results(struct ecore_hwfn *p_hwfn,
 					    u32 num_dumped_dwords,
 					    char *results_buf);
 
-/**
- * @brief qed_print_mcp_trace_results_cont - Prints MCP Trace results, and
- * keeps the MCP trace meta data allocated, to support continuous MCP Trace
- * parsing. After the continuous parsing ends, mcp_trace_free_meta_data should
- * be called to free the meta data.
- *
- * @param p_hwfn -	      HW device data
- * @param dump_buf -	      mcp trace dump buffer, starting from the header.
- * @param results_buf -	      buffer for printing the mcp trace results.
- *
- * @return error if the parsing fails, ok otherwise.
- */
-enum dbg_status qed_print_mcp_trace_results_cont(struct ecore_hwfn *p_hwfn,
-						 u32 *dump_buf,
-						 char *results_buf);
-
-/**
- * @brief print_mcp_trace_line - Prints MCP Trace results for a single line
- *
- * @param p_hwfn -	      HW device data
- * @param dump_buf -	      mcp trace dump buffer, starting from the header.
- * @param num_dumped_bytes -  number of bytes that were dumped.
- * @param results_buf -	      buffer for printing the mcp trace results.
- *
- * @return error if the parsing fails, ok otherwise.
- */
-enum dbg_status qed_print_mcp_trace_line(struct ecore_hwfn *p_hwfn,
-					 u8 *dump_buf,
-					 u32 num_dumped_bytes,
-					 char *results_buf);
-
 /**
  * @brief mcp_trace_free_meta_data - Frees the MCP Trace meta data.
  * Should be called after continuous MCP Trace parsing.
@@ -742,18 +659,4 @@ enum dbg_status qed_print_fw_asserts_results(struct ecore_hwfn *p_hwfn,
 					     u32 num_dumped_dwords,
 					     char *results_buf);
 
-/**
- * @brief qed_dbg_parse_attn - Parses and prints attention registers values in
- * the specified results struct.
- *
- * @param p_hwfn -  HW device data
- * @param results - Pointer to the attention read results
- *
- * @return error if one of the following holds:
- *	- the version wasn't set
- * Otherwise, returns ok.
- */
-enum dbg_status qed_dbg_parse_attn(struct ecore_hwfn *p_hwfn,
-				   struct dbg_attn_block_result *results);
-
 #endif
diff --git a/drivers/net/sfc/sfc_kvargs.c b/drivers/net/sfc/sfc_kvargs.c
index 13e9665bb4..6513c6db81 100644
--- a/drivers/net/sfc/sfc_kvargs.c
+++ b/drivers/net/sfc/sfc_kvargs.c
@@ -47,19 +47,6 @@ sfc_kvargs_cleanup(struct sfc_adapter *sa)
 	rte_kvargs_free(sa->kvargs);
 }
 
-static int
-sfc_kvarg_match_value(const char *value, const char * const *values,
-		      unsigned int n_values)
-{
-	unsigned int i;
-
-	for (i = 0; i < n_values; ++i)
-		if (strcasecmp(value, values[i]) == 0)
-			return 1;
-
-	return 0;
-}
-
 int
 sfc_kvargs_process(struct sfc_adapter *sa, const char *key_match,
 		   arg_handler_t handler, void *opaque_arg)
@@ -70,30 +57,6 @@ sfc_kvargs_process(struct sfc_adapter *sa, const char *key_match,
 	return -rte_kvargs_process(sa->kvargs, key_match, handler, opaque_arg);
 }
 
-int
-sfc_kvarg_bool_handler(__rte_unused const char *key,
-		       const char *value_str, void *opaque)
-{
-	const char * const true_strs[] = {
-		"1", "y", "yes", "on", "true"
-	};
-	const char * const false_strs[] = {
-		"0", "n", "no", "off", "false"
-	};
-	bool *value = opaque;
-
-	if (sfc_kvarg_match_value(value_str, true_strs,
-				  RTE_DIM(true_strs)))
-		*value = true;
-	else if (sfc_kvarg_match_value(value_str, false_strs,
-				       RTE_DIM(false_strs)))
-		*value = false;
-	else
-		return -EINVAL;
-
-	return 0;
-}
-
 int
 sfc_kvarg_long_handler(__rte_unused const char *key,
 		       const char *value_str, void *opaque)
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index 0c3660890c..e39f1191a9 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -74,8 +74,6 @@ void sfc_kvargs_cleanup(struct sfc_adapter *sa);
 int sfc_kvargs_process(struct sfc_adapter *sa, const char *key_match,
 		       arg_handler_t handler, void *opaque_arg);
 
-int sfc_kvarg_bool_handler(const char *key, const char *value_str,
-			   void *opaque);
 int sfc_kvarg_long_handler(const char *key, const char *value_str,
 			   void *opaque);
 int sfc_kvarg_string_handler(const char *key, const char *value_str,
diff --git a/drivers/net/softnic/parser.c b/drivers/net/softnic/parser.c
index ebcb10268a..3d94b3bfa9 100644
--- a/drivers/net/softnic/parser.c
+++ b/drivers/net/softnic/parser.c
@@ -38,44 +38,6 @@ get_hex_val(char c)
 	}
 }
 
-int
-softnic_parser_read_arg_bool(const char *p)
-{
-	p = skip_white_spaces(p);
-	int result = -EINVAL;
-
-	if (((p[0] == 'y') && (p[1] == 'e') && (p[2] == 's')) ||
-		((p[0] == 'Y') && (p[1] == 'E') && (p[2] == 'S'))) {
-		p += 3;
-		result = 1;
-	}
-
-	if (((p[0] == 'o') && (p[1] == 'n')) ||
-		((p[0] == 'O') && (p[1] == 'N'))) {
-		p += 2;
-		result = 1;
-	}
-
-	if (((p[0] == 'n') && (p[1] == 'o')) ||
-		((p[0] == 'N') && (p[1] == 'O'))) {
-		p += 2;
-		result = 0;
-	}
-
-	if (((p[0] == 'o') && (p[1] == 'f') && (p[2] == 'f')) ||
-		((p[0] == 'O') && (p[1] == 'F') && (p[2] == 'F'))) {
-		p += 3;
-		result = 0;
-	}
-
-	p = skip_white_spaces(p);
-
-	if (p[0] != '\0')
-		return -EINVAL;
-
-	return result;
-}
-
 int
 softnic_parser_read_int32(int32_t *value, const char *p)
 {
@@ -170,22 +132,6 @@ softnic_parser_read_uint32(uint32_t *value, const char *p)
 	return 0;
 }
 
-int
-softnic_parser_read_uint32_hex(uint32_t *value, const char *p)
-{
-	uint64_t val = 0;
-	int ret = softnic_parser_read_uint64_hex(&val, p);
-
-	if (ret < 0)
-		return ret;
-
-	if (val > UINT32_MAX)
-		return -ERANGE;
-
-	*value = val;
-	return 0;
-}
-
 int
 softnic_parser_read_uint16(uint16_t *value, const char *p)
 {
@@ -202,22 +148,6 @@ softnic_parser_read_uint16(uint16_t *value, const char *p)
 	return 0;
 }
 
-int
-softnic_parser_read_uint16_hex(uint16_t *value, const char *p)
-{
-	uint64_t val = 0;
-	int ret = softnic_parser_read_uint64_hex(&val, p);
-
-	if (ret < 0)
-		return ret;
-
-	if (val > UINT16_MAX)
-		return -ERANGE;
-
-	*value = val;
-	return 0;
-}
-
 int
 softnic_parser_read_uint8(uint8_t *value, const char *p)
 {
@@ -234,22 +164,6 @@ softnic_parser_read_uint8(uint8_t *value, const char *p)
 	return 0;
 }
 
-int
-softnic_parser_read_uint8_hex(uint8_t *value, const char *p)
-{
-	uint64_t val = 0;
-	int ret = softnic_parser_read_uint64_hex(&val, p);
-
-	if (ret < 0)
-		return ret;
-
-	if (val > UINT8_MAX)
-		return -ERANGE;
-
-	*value = val;
-	return 0;
-}
-
 int
 softnic_parse_tokenize_string(char *string, char *tokens[], uint32_t *n_tokens)
 {
@@ -310,44 +224,6 @@ softnic_parse_hex_string(char *src, uint8_t *dst, uint32_t *size)
 	return 0;
 }
 
-int
-softnic_parse_mpls_labels(char *string, uint32_t *labels, uint32_t *n_labels)
-{
-	uint32_t n_max_labels = *n_labels, count = 0;
-
-	/* Check for void list of labels */
-	if (strcmp(string, "<void>") == 0) {
-		*n_labels = 0;
-		return 0;
-	}
-
-	/* At least one label should be present */
-	for ( ; (*string != '\0'); ) {
-		char *next;
-		int value;
-
-		if (count >= n_max_labels)
-			return -1;
-
-		if (count > 0) {
-			if (string[0] != ':')
-				return -1;
-
-			string++;
-		}
-
-		value = strtol(string, &next, 10);
-		if (next == string)
-			return -1;
-		string = next;
-
-		labels[count++] = (uint32_t)value;
-	}
-
-	*n_labels = count;
-	return 0;
-}
-
 static struct rte_ether_addr *
 my_ether_aton(const char *a)
 {
@@ -427,97 +303,3 @@ softnic_parse_mac_addr(const char *token, struct rte_ether_addr *addr)
 	memcpy(addr, tmp, sizeof(struct rte_ether_addr));
 	return 0;
 }
-
-int
-softnic_parse_cpu_core(const char *entry,
-	struct softnic_cpu_core_params *p)
-{
-	size_t num_len;
-	char num[8];
-
-	uint32_t s = 0, c = 0, h = 0, val;
-	uint8_t s_parsed = 0, c_parsed = 0, h_parsed = 0;
-	const char *next = skip_white_spaces(entry);
-	char type;
-
-	if (p == NULL)
-		return -EINVAL;
-
-	/* Expect <CORE> or [sX][cY][h]. At least one parameter is required. */
-	while (*next != '\0') {
-		/* If everything parsed nothing should left */
-		if (s_parsed && c_parsed && h_parsed)
-			return -EINVAL;
-
-		type = *next;
-		switch (type) {
-		case 's':
-		case 'S':
-			if (s_parsed || c_parsed || h_parsed)
-				return -EINVAL;
-			s_parsed = 1;
-			next++;
-			break;
-		case 'c':
-		case 'C':
-			if (c_parsed || h_parsed)
-				return -EINVAL;
-			c_parsed = 1;
-			next++;
-			break;
-		case 'h':
-		case 'H':
-			if (h_parsed)
-				return -EINVAL;
-			h_parsed = 1;
-			next++;
-			break;
-		default:
-			/* If it start from digit it must be only core id. */
-			if (!isdigit(*next) || s_parsed || c_parsed || h_parsed)
-				return -EINVAL;
-
-			type = 'C';
-		}
-
-		for (num_len = 0; *next != '\0'; next++, num_len++) {
-			if (num_len == RTE_DIM(num))
-				return -EINVAL;
-
-			if (!isdigit(*next))
-				break;
-
-			num[num_len] = *next;
-		}
-
-		if (num_len == 0 && type != 'h' && type != 'H')
-			return -EINVAL;
-
-		if (num_len != 0 && (type == 'h' || type == 'H'))
-			return -EINVAL;
-
-		num[num_len] = '\0';
-		val = strtol(num, NULL, 10);
-
-		h = 0;
-		switch (type) {
-		case 's':
-		case 'S':
-			s = val;
-			break;
-		case 'c':
-		case 'C':
-			c = val;
-			break;
-		case 'h':
-		case 'H':
-			h = 1;
-			break;
-		}
-	}
-
-	p->socket_id = s;
-	p->core_id = c;
-	p->thread_id = h;
-	return 0;
-}
diff --git a/drivers/net/softnic/parser.h b/drivers/net/softnic/parser.h
index 6f408b2485..2c14af32dd 100644
--- a/drivers/net/softnic/parser.h
+++ b/drivers/net/softnic/parser.h
@@ -31,8 +31,6 @@ skip_digits(const char *src)
 	return i;
 }
 
-int softnic_parser_read_arg_bool(const char *p);
-
 int softnic_parser_read_int32(int32_t *value, const char *p);
 
 int softnic_parser_read_uint64(uint64_t *value, const char *p);
@@ -41,17 +39,12 @@ int softnic_parser_read_uint16(uint16_t *value, const char *p);
 int softnic_parser_read_uint8(uint8_t *value, const char *p);
 
 int softnic_parser_read_uint64_hex(uint64_t *value, const char *p);
-int softnic_parser_read_uint32_hex(uint32_t *value, const char *p);
-int softnic_parser_read_uint16_hex(uint16_t *value, const char *p);
-int softnic_parser_read_uint8_hex(uint8_t *value, const char *p);
 
 int softnic_parse_hex_string(char *src, uint8_t *dst, uint32_t *size);
 
 int softnic_parse_ipv4_addr(const char *token, struct in_addr *ipv4);
 int softnic_parse_ipv6_addr(const char *token, struct in6_addr *ipv6);
 int softnic_parse_mac_addr(const char *token, struct rte_ether_addr *addr);
-int softnic_parse_mpls_labels(char *string,
-		uint32_t *labels, uint32_t *n_labels);
 
 struct softnic_cpu_core_params {
 	uint32_t socket_id;
@@ -59,9 +52,6 @@ struct softnic_cpu_core_params {
 	uint32_t thread_id;
 };
 
-int softnic_parse_cpu_core(const char *entry,
-		struct softnic_cpu_core_params *p);
-
 int softnic_parse_tokenize_string(char *string,
 		char *tokens[], uint32_t *n_tokens);
 
diff --git a/drivers/net/softnic/rte_eth_softnic_cryptodev.c b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
index a1a4ca5650..0198e1e35d 100644
--- a/drivers/net/softnic/rte_eth_softnic_cryptodev.c
+++ b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
@@ -21,21 +21,6 @@ softnic_cryptodev_init(struct pmd_internals *p)
 	return 0;
 }
 
-void
-softnic_cryptodev_free(struct pmd_internals *p)
-{
-	for ( ; ; ) {
-		struct softnic_cryptodev *cryptodev;
-
-		cryptodev = TAILQ_FIRST(&p->cryptodev_list);
-		if (cryptodev == NULL)
-			break;
-
-		TAILQ_REMOVE(&p->cryptodev_list, cryptodev, node);
-		free(cryptodev);
-	}
-}
-
 struct softnic_cryptodev *
 softnic_cryptodev_find(struct pmd_internals *p,
 	const char *name)
diff --git a/drivers/net/softnic/rte_eth_softnic_internals.h b/drivers/net/softnic/rte_eth_softnic_internals.h
index 9c8737c9e2..414b79e068 100644
--- a/drivers/net/softnic/rte_eth_softnic_internals.h
+++ b/drivers/net/softnic/rte_eth_softnic_internals.h
@@ -793,9 +793,6 @@ softnic_tap_create(struct pmd_internals *p,
 int
 softnic_cryptodev_init(struct pmd_internals *p);
 
-void
-softnic_cryptodev_free(struct pmd_internals *p);
-
 struct softnic_cryptodev *
 softnic_cryptodev_find(struct pmd_internals *p,
 	const char *name);
@@ -1052,14 +1049,6 @@ softnic_pipeline_table_rule_delete_default(struct pmd_internals *p,
 	const char *pipeline_name,
 	uint32_t table_id);
 
-int
-softnic_pipeline_table_rule_stats_read(struct pmd_internals *p,
-	const char *pipeline_name,
-	uint32_t table_id,
-	void *data,
-	struct rte_table_action_stats_counters *stats,
-	int clear);
-
 int
 softnic_pipeline_table_mtr_profile_add(struct pmd_internals *p,
 	const char *pipeline_name,
@@ -1073,15 +1062,6 @@ softnic_pipeline_table_mtr_profile_delete(struct pmd_internals *p,
 	uint32_t table_id,
 	uint32_t meter_profile_id);
 
-int
-softnic_pipeline_table_rule_mtr_read(struct pmd_internals *p,
-	const char *pipeline_name,
-	uint32_t table_id,
-	void *data,
-	uint32_t tc_mask,
-	struct rte_table_action_mtr_counters *stats,
-	int clear);
-
 int
 softnic_pipeline_table_dscp_table_update(struct pmd_internals *p,
 	const char *pipeline_name,
@@ -1089,14 +1069,6 @@ softnic_pipeline_table_dscp_table_update(struct pmd_internals *p,
 	uint64_t dscp_mask,
 	struct rte_table_action_dscp_table *dscp_table);
 
-int
-softnic_pipeline_table_rule_ttl_read(struct pmd_internals *p,
-	const char *pipeline_name,
-	uint32_t table_id,
-	void *data,
-	struct rte_table_action_ttl_counters *stats,
-	int clear);
-
 /**
  * Thread
  */
diff --git a/drivers/net/softnic/rte_eth_softnic_thread.c b/drivers/net/softnic/rte_eth_softnic_thread.c
index a8c26a5b23..cfddb44cb2 100644
--- a/drivers/net/softnic/rte_eth_softnic_thread.c
+++ b/drivers/net/softnic/rte_eth_softnic_thread.c
@@ -1672,66 +1672,6 @@ softnic_pipeline_table_rule_delete_default(struct pmd_internals *softnic,
 	return status;
 }
 
-int
-softnic_pipeline_table_rule_stats_read(struct pmd_internals *softnic,
-	const char *pipeline_name,
-	uint32_t table_id,
-	void *data,
-	struct rte_table_action_stats_counters *stats,
-	int clear)
-{
-	struct pipeline *p;
-	struct pipeline_msg_req *req;
-	struct pipeline_msg_rsp *rsp;
-	int status;
-
-	/* Check input params */
-	if (pipeline_name == NULL ||
-		data == NULL ||
-		stats == NULL)
-		return -1;
-
-	p = softnic_pipeline_find(softnic, pipeline_name);
-	if (p == NULL ||
-		table_id >= p->n_tables)
-		return -1;
-
-	if (!pipeline_is_running(p)) {
-		struct rte_table_action *a = p->table[table_id].a;
-
-		status = rte_table_action_stats_read(a,
-			data,
-			stats,
-			clear);
-
-		return status;
-	}
-
-	/* Allocate request */
-	req = pipeline_msg_alloc();
-	if (req == NULL)
-		return -1;
-
-	/* Write request */
-	req->type = PIPELINE_REQ_TABLE_RULE_STATS_READ;
-	req->id = table_id;
-	req->table_rule_stats_read.data = data;
-	req->table_rule_stats_read.clear = clear;
-
-	/* Send request and wait for response */
-	rsp = pipeline_msg_send_recv(p, req);
-
-	/* Read response */
-	status = rsp->status;
-	if (status)
-		memcpy(stats, &rsp->table_rule_stats_read.stats, sizeof(*stats));
-
-	/* Free response */
-	pipeline_msg_free(rsp);
-
-	return status;
-}
-
 int
 softnic_pipeline_table_mtr_profile_add(struct pmd_internals *softnic,
 	const char *pipeline_name,
@@ -1864,69 +1804,6 @@ softnic_pipeline_table_mtr_profile_delete(struct pmd_internals *softnic,
 	return status;
 }
 
-int
-softnic_pipeline_table_rule_mtr_read(struct pmd_internals *softnic,
-	const char *pipeline_name,
-	uint32_t table_id,
-	void *data,
-	uint32_t tc_mask,
-	struct rte_table_action_mtr_counters *stats,
-	int clear)
-{
-	struct pipeline *p;
-	struct pipeline_msg_req *req;
-	struct pipeline_msg_rsp *rsp;
-	int status;
-
-	/* Check input params */
-	if (pipeline_name == NULL ||
-		data == NULL ||
-		stats == NULL)
-		return -1;
-
-	p = softnic_pipeline_find(softnic, pipeline_name);
-	if (p == NULL ||
-		table_id >= p->n_tables)
-		return -1;
-
-	if (!pipeline_is_running(p)) {
-		struct rte_table_action *a = p->table[table_id].a;
-
-		status = rte_table_action_meter_read(a,
-				data,
-				tc_mask,
-				stats,
-				clear);
-
-		return status;
-	}
-
-	/* Allocate request */
-	req = pipeline_msg_alloc();
-	if (req == NULL)
-		return -1;
-
-	/* Write request */
-	req->type = PIPELINE_REQ_TABLE_RULE_MTR_READ;
-	req->id = table_id;
-	req->table_rule_mtr_read.data = data;
-	req->table_rule_mtr_read.tc_mask = tc_mask;
-	req->table_rule_mtr_read.clear = clear;
-
-	/* Send request and wait for response */
-	rsp = pipeline_msg_send_recv(p, req);
-
-	/* Read response */
-	status = rsp->status;
-	if (status)
-		memcpy(stats, &rsp->table_rule_mtr_read.stats, sizeof(*stats));
-
-	/* Free response */
-	pipeline_msg_free(rsp);
-
-	return status;
-}
-
 int
 softnic_pipeline_table_dscp_table_update(struct pmd_internals *softnic,
 	const char *pipeline_name,
@@ -1993,66 +1870,6 @@ softnic_pipeline_table_dscp_table_update(struct pmd_internals *softnic,
 	return status;
 }
 
-int
-softnic_pipeline_table_rule_ttl_read(struct pmd_internals *softnic,
-	const char *pipeline_name,
-	uint32_t table_id,
-	void *data,
-	struct rte_table_action_ttl_counters *stats,
-	int clear)
-{
-	struct pipeline *p;
-	struct pipeline_msg_req *req;
-	struct pipeline_msg_rsp *rsp;
-	int status;
-
-	/* Check input params */
-	if (pipeline_name == NULL ||
-		data == NULL ||
-		stats == NULL)
-		return -1;
-
-	p = softnic_pipeline_find(softnic, pipeline_name);
-	if (p == NULL ||
-		table_id >= p->n_tables)
-		return -1;
-
-	if (!pipeline_is_running(p)) {
-		struct rte_table_action *a = p->table[table_id].a;
-
-		status = rte_table_action_ttl_read(a,
-				data,
-				stats,
-				clear);
-
-		return status;
-	}
-
-	/* Allocate request */
-	req = pipeline_msg_alloc();
-	if (req == NULL)
-		return -1;
-
-	/* Write request */
-	req->type = PIPELINE_REQ_TABLE_RULE_TTL_READ;
-	req->id = table_id;
-	req->table_rule_ttl_read.data = data;
-	req->table_rule_ttl_read.clear = clear;
-
-	/* Send request and wait for response */
-	rsp = pipeline_msg_send_recv(p, req);
-
-	/* Read response */
-	status = rsp->status;
-	if (status)
-		memcpy(stats, &rsp->table_rule_ttl_read.stats, sizeof(*stats));
-
-	/* Free response */
-	pipeline_msg_free(rsp);
-
-	return status;
-}
-
 /**
  * Data plane threads: message handling
  */
diff --git a/drivers/net/txgbe/base/txgbe_eeprom.c b/drivers/net/txgbe/base/txgbe_eeprom.c
index 72cd3ff307..fedaecf26d 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.c
+++ b/drivers/net/txgbe/base/txgbe_eeprom.c
@@ -274,42 +274,6 @@ s32 txgbe_ee_read32(struct txgbe_hw *hw, u32 addr, u32 *data)
 	return err;
 }
 
-/**
- *  txgbe_ee_read_buffer - Read EEPROM byte(s) using hostif
- *  @hw: pointer to hardware structure
- *  @addr: offset of bytes in the EEPROM to read
- *  @len: number of bytes
- *  @data: byte(s) read from the EEPROM
- *
- *  Reads a 8 bit byte(s) from the EEPROM using the hostif.
- **/
-s32 txgbe_ee_read_buffer(struct txgbe_hw *hw,
-				     u32 addr, u32 len, void *data)
-{
-	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
-	u8 *buf = (u8 *)data;
-	int err;
-
-	err = hw->mac.acquire_swfw_sync(hw, mask);
-	if (err)
-		return err;
-
-	while (len) {
-		u32 seg = (len <= TXGBE_PMMBX_DATA_SIZE
-				? len : TXGBE_PMMBX_DATA_SIZE);
-
-		err = txgbe_hic_sr_read(hw, addr, buf, seg);
-		if (err)
-			break;
-
-		len -= seg;
-		buf += seg;
-	}
-
-	hw->mac.release_swfw_sync(hw, mask);
-	return err;
-}
-
 /**
  *  txgbe_ee_write - Write EEPROM word using hostif
  *  @hw: pointer to hardware structure
@@ -420,42 +384,6 @@ s32 txgbe_ee_write32(struct txgbe_hw *hw, u32 addr, u32 data)
 	return err;
 }
 
-/**
- *  txgbe_ee_write_buffer - Write EEPROM byte(s) using hostif
- *  @hw: pointer to hardware structure
- *  @addr: offset of bytes in the EEPROM to write
- *  @len: number of bytes
- *  @data: word(s) write to the EEPROM
- *
- *  Write a 8 bit byte(s) to the EEPROM using the hostif.
- **/
-s32 txgbe_ee_write_buffer(struct txgbe_hw *hw,
-				      u32 addr, u32 len, void *data)
-{
-	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
-	u8 *buf = (u8 *)data;
-	int err;
-
-	err = hw->mac.acquire_swfw_sync(hw, mask);
-	if (err)
-		return err;
-
-	while (len) {
-		u32 seg = (len <= TXGBE_PMMBX_DATA_SIZE
-				? len : TXGBE_PMMBX_DATA_SIZE);
-
-		err = txgbe_hic_sr_write(hw, addr, buf, seg);
-		if (err)
-			break;
-
-		len -= seg;
-		buf += seg;
-	}
-
-	hw->mac.release_swfw_sync(hw, mask);
-	return err;
-}
-
 /**
  *  txgbe_calc_eeprom_checksum - Calculates and returns the checksum
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/txgbe/base/txgbe_eeprom.h b/drivers/net/txgbe/base/txgbe_eeprom.h
index d0e142dba5..78b8af978b 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.h
+++ b/drivers/net/txgbe/base/txgbe_eeprom.h
@@ -51,14 +51,12 @@ s32 txgbe_ee_readw_sw(struct txgbe_hw *hw, u32 offset, u16 *data);
 s32 txgbe_ee_readw_buffer(struct txgbe_hw *hw, u32 offset, u32 words,
 				void *data);
 s32 txgbe_ee_read32(struct txgbe_hw *hw, u32 addr, u32 *data);
-s32 txgbe_ee_read_buffer(struct txgbe_hw *hw, u32 addr, u32 len, void *data);
 
 s32 txgbe_ee_write16(struct txgbe_hw *hw, u32 offset, u16 data);
 s32 txgbe_ee_writew_sw(struct txgbe_hw *hw, u32 offset, u16 data);
 s32 txgbe_ee_writew_buffer(struct txgbe_hw *hw, u32 offset, u32 words,
 				void *data);
 s32 txgbe_ee_write32(struct txgbe_hw *hw, u32 addr, u32 data);
-s32 txgbe_ee_write_buffer(struct txgbe_hw *hw, u32 addr, u32 len, void *data);
 
 
 #endif /* _TXGBE_EEPROM_H_ */
diff --git a/drivers/raw/ifpga/base/opae_eth_group.c b/drivers/raw/ifpga/base/opae_eth_group.c
index be28954e05..97c20a8068 100644
--- a/drivers/raw/ifpga/base/opae_eth_group.c
+++ b/drivers/raw/ifpga/base/opae_eth_group.c
@@ -152,16 +152,6 @@ static int eth_group_reset_mac(struct eth_group_device *dev, u8 index,
 	return ret;
 }
 
-static void eth_group_mac_uinit(struct eth_group_device *dev)
-{
-	u8 i;
-
-	for (i = 0; i < dev->mac_num; i++) {
-		if (eth_group_reset_mac(dev, i, true))
-			dev_err(dev, "fail to disable mac %d\n", i);
-	}
-}
-
 static int eth_group_mac_init(struct eth_group_device *dev)
 {
 	int ret;
@@ -272,12 +262,6 @@ static int eth_group_hw_init(struct eth_group_device *dev)
 	return ret;
 }
 
-static void eth_group_hw_uinit(struct eth_group_device *dev)
-{
-	eth_group_mac_uinit(dev);
-	eth_group_phy_uinit(dev);
-}
-
 struct eth_group_device *eth_group_probe(void *base)
 {
 	struct eth_group_device *dev;
@@ -305,12 +289,3 @@ struct eth_group_device *eth_group_probe(void *base)
 
 	return dev;
 }
-
-void eth_group_release(struct eth_group_device *dev)
-{
-	if (dev) {
-		eth_group_hw_uinit(dev);
-		dev->status = ETH_GROUP_DEV_NOUSED;
-		opae_free(dev);
-	}
-}
diff --git a/drivers/raw/ifpga/base/opae_eth_group.h b/drivers/raw/ifpga/base/opae_eth_group.h
index 4868bd0e11..8dc23663b8 100644
--- a/drivers/raw/ifpga/base/opae_eth_group.h
+++ b/drivers/raw/ifpga/base/opae_eth_group.h
@@ -94,7 +94,6 @@ struct eth_group_device {
 };
 
 struct eth_group_device *eth_group_probe(void *base);
-void eth_group_release(struct eth_group_device *dev);
 int eth_group_read_reg(struct eth_group_device *dev,
 		u8 type, u8 index, u16 addr, u32 *data);
 int eth_group_write_reg(struct eth_group_device *dev,
diff --git a/drivers/raw/ifpga/base/opae_hw_api.c b/drivers/raw/ifpga/base/opae_hw_api.c
index d5cd5fe608..e2fdece4b4 100644
--- a/drivers/raw/ifpga/base/opae_hw_api.c
+++ b/drivers/raw/ifpga/base/opae_hw_api.c
@@ -84,50 +84,6 @@ opae_accelerator_alloc(const char *name, struct opae_accelerator_ops *ops,
 	return acc;
 }
 
-/**
- * opae_acc_reg_read - read accelerator's register from its reg region.
- * @acc: accelerator to read.
- * @region_idx: reg region index.
- * @offset: reg offset.
- * @byte: read operation width, e.g 4 byte = 32bit read.
- * @data: data to store the value read from the register.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_reg_read(struct opae_accelerator *acc, unsigned int region_idx,
-		      u64 offset, unsigned int byte, void *data)
-{
-	if (!acc || !data)
-		return -EINVAL;
-
-	if (acc->ops && acc->ops->read)
-		return acc->ops->read(acc, region_idx, offset, byte, data);
-
-	return -ENOENT;
-}
-
-/**
- * opae_acc_reg_write - write to accelerator's register from its reg region.
- * @acc: accelerator to write.
- * @region_idx: reg region index.
- * @offset: reg offset.
- * @byte: write operation width, e.g 4 byte = 32bit write.
- * @data: data stored the value to write to the register.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_reg_write(struct opae_accelerator *acc, unsigned int region_idx,
-		       u64 offset, unsigned int byte, void *data)
-{
-	if (!acc || !data)
-		return -EINVAL;
-
-	if (acc->ops && acc->ops->write)
-		return acc->ops->write(acc, region_idx, offset, byte, data);
-
-	return -ENOENT;
-}
-
 /**
  * opae_acc_get_info - get information of an accelerator.
  * @acc: targeted accelerator
@@ -635,50 +591,6 @@ opae_adapter_get_acc(struct opae_adapter *adapter, int acc_id)
 	return NULL;
 }
 
-/**
- * opae_manager_read_mac_rom - read the content of the MAC ROM
- * @mgr: opae_manager for MAC ROM
- * @port: the port number of retimer
- * @addr: buffer of the MAC address
- *
- * Return: return the bytes of read successfully
- */
-int opae_manager_read_mac_rom(struct opae_manager *mgr, int port,
-		struct opae_ether_addr *addr)
-{
-	if (!mgr || !mgr->network_ops)
-		return -EINVAL;
-
-	if (mgr->network_ops->read_mac_rom)
-		return mgr->network_ops->read_mac_rom(mgr,
-				port * sizeof(struct opae_ether_addr),
-				addr, sizeof(struct opae_ether_addr));
-
-	return -ENOENT;
-}
-
-/**
- * opae_manager_write_mac_rom - write data into MAC ROM
- * @mgr: opae_manager for MAC ROM
- * @port: the port number of the retimer
- * @addr: data of the MAC address
- *
- * Return: return written bytes
- */
-int opae_manager_write_mac_rom(struct opae_manager *mgr, int port,
-		struct opae_ether_addr *addr)
-{
-	if (!mgr || !mgr->network_ops)
-		return -EINVAL;
-
-	if (mgr->network_ops && mgr->network_ops->write_mac_rom)
-		return mgr->network_ops->write_mac_rom(mgr,
-				port * sizeof(struct opae_ether_addr),
-				addr, sizeof(struct opae_ether_addr));
-
-	return -ENOENT;
-}
-
 /**
  * opae_manager_get_eth_group_nums - get eth group numbers
  * @mgr: opae_manager for eth group
@@ -741,54 +653,6 @@ int opae_manager_get_eth_group_region_info(struct opae_manager *mgr,
 	return -ENOENT;
 }
 
-/**
- * opae_manager_eth_group_read_reg - read ETH group register
- * @mgr: opae_manager for ETH Group
- * @group_id: ETH group id
- * @type: eth type
- * @index: port index in eth group device
- * @addr: register address of ETH Group
- * @data: read buffer
- *
- * Return: 0 on success, otherwise error code
- */
-int opae_manager_eth_group_read_reg(struct opae_manager *mgr, u8 group_id,
-		u8 type, u8 index, u16 addr, u32 *data)
-{
-	if (!mgr || !mgr->network_ops)
-		return -EINVAL;
-
-	if (mgr->network_ops->eth_group_reg_read)
-		return mgr->network_ops->eth_group_reg_read(mgr, group_id,
-				type, index, addr, data);
-
-	return -ENOENT;
-}
-
-/**
- * opae_manager_eth_group_write_reg - write ETH group register
- * @mgr: opae_manager for ETH Group
- * @group_id: ETH group id
- * @type: eth type
- * @index: port index in eth group device
- * @addr: register address of ETH Group
- * @data: data will write to register
- *
- * Return: 0 on success, otherwise error code
- */
-int opae_manager_eth_group_write_reg(struct opae_manager *mgr, u8 group_id,
-		u8 type, u8 index, u16 addr, u32 data)
-{
-	if (!mgr || !mgr->network_ops)
-		return -EINVAL;
-
-	if (mgr->network_ops->eth_group_reg_write)
-		return mgr->network_ops->eth_group_reg_write(mgr, group_id,
-				type, index, addr, data);
-
-	return -ENOENT;
-}
-
 /**
  * opae_manager_get_retimer_info - get retimer info like PKVL chip
  * @mgr: opae_manager for retimer
@@ -866,62 +730,6 @@ opae_mgr_get_sensor_by_name(struct opae_manager *mgr,
 	return NULL;
 }
 
-/**
- * opae_manager_get_sensor_value_by_name - find the sensor by name and read out
- * the value
- * @mgr: opae_manager for sensor.
- * @name: the name of the sensor
- * @value: the readout sensor value
- *
- * Return: 0 on success, otherwise error code
- */
-int
-opae_mgr_get_sensor_value_by_name(struct opae_manager *mgr,
-		const char *name, unsigned int *value)
-{
-	struct opae_sensor_info *sensor;
-
-	if (!mgr)
-		return -EINVAL;
-
-	sensor = opae_mgr_get_sensor_by_name(mgr, name);
-	if (!sensor)
-		return -ENODEV;
-
-	if (mgr->ops && mgr->ops->get_sensor_value)
-		return mgr->ops->get_sensor_value(mgr, sensor, value);
-
-	return -ENOENT;
-}
-
-/**
- * opae_manager_get_sensor_value_by_id - find the sensor by id and readout the
- * value
- * @mgr: opae_manager for sensor
- * @id: the id of the sensor
- * @value: the readout sensor value
- *
- * Return: 0 on success, otherwise error code
- */
-int
-opae_mgr_get_sensor_value_by_id(struct opae_manager *mgr,
-		unsigned int id, unsigned int *value)
-{
-	struct opae_sensor_info *sensor;
-
-	if (!mgr)
-		return -EINVAL;
-
-	sensor = opae_mgr_get_sensor_by_id(mgr, id);
-	if (!sensor)
-		return -ENODEV;
-
-	if (mgr->ops && mgr->ops->get_sensor_value)
-		return mgr->ops->get_sensor_value(mgr, sensor, value);
-
-	return -ENOENT;
-}
-
 /**
  * opae_manager_get_sensor_value - get the current
  * sensor value
@@ -944,23 +752,3 @@ opae_mgr_get_sensor_value(struct opae_manager *mgr,
 
 	return -ENOENT;
 }
-
-/**
- * opae_manager_get_board_info - get board info
- * sensor value
- * @info: opae_board_info for the card
- *
- * Return: 0 on success, otherwise error code
- */
-int
-opae_mgr_get_board_info(struct opae_manager *mgr,
-		struct opae_board_info **info)
-{
-	if (!mgr || !info)
-		return -EINVAL;
-
-	if (mgr->ops && mgr->ops->get_board_info)
-		return mgr->ops->get_board_info(mgr, info);
-
-	return -ENOENT;
-}
diff --git a/drivers/raw/ifpga/base/opae_hw_api.h b/drivers/raw/ifpga/base/opae_hw_api.h
index e99ee4564c..32b603fc8a 100644
--- a/drivers/raw/ifpga/base/opae_hw_api.h
+++ b/drivers/raw/ifpga/base/opae_hw_api.h
@@ -92,10 +92,6 @@ struct opae_sensor_info *opae_mgr_get_sensor_by_name(struct opae_manager *mgr,
 		const char *name);
 struct opae_sensor_info *opae_mgr_get_sensor_by_id(struct opae_manager *mgr,
 		unsigned int id);
-int opae_mgr_get_sensor_value_by_name(struct opae_manager *mgr,
-		const char *name, unsigned int *value);
-int opae_mgr_get_sensor_value_by_id(struct opae_manager *mgr,
-		unsigned int id, unsigned int *value);
 int opae_mgr_get_sensor_value(struct opae_manager *mgr,
 		struct opae_sensor_info *sensor,
 		unsigned int *value);
@@ -200,28 +196,6 @@ opae_acc_get_mgr(struct opae_accelerator *acc)
 	return acc ? acc->mgr : NULL;
 }
 
-int opae_acc_reg_read(struct opae_accelerator *acc, unsigned int region_idx,
-		      u64 offset, unsigned int byte, void *data);
-int opae_acc_reg_write(struct opae_accelerator *acc, unsigned int region_idx,
-		       u64 offset, unsigned int byte, void *data);
-
-#define opae_acc_reg_read64(acc, region, offset, data) \
-	opae_acc_reg_read(acc, region, offset, 8, data)
-#define opae_acc_reg_write64(acc, region, offset, data) \
-	opae_acc_reg_write(acc, region, offset, 8, data)
-#define opae_acc_reg_read32(acc, region, offset, data) \
-	opae_acc_reg_read(acc, region, offset, 4, data)
-#define opae_acc_reg_write32(acc, region, offset, data) \
-	opae_acc_reg_write(acc, region, offset, 4, data)
-#define opae_acc_reg_read16(acc, region, offset, data) \
-	opae_acc_reg_read(acc, region, offset, 2, data)
-#define opae_acc_reg_write16(acc, region, offset, data) \
-	opae_acc_reg_write(acc, region, offset, 2, data)
-#define opae_acc_reg_read8(acc, region, offset, data) \
-	opae_acc_reg_read(acc, region, offset, 1, data)
-#define opae_acc_reg_write8(acc, region, offset, data) \
-	opae_acc_reg_write(acc, region, offset, 1, data)
-
 /*for data stream read/write*/
 int opae_acc_data_read(struct opae_accelerator *acc, unsigned int flags,
 		       u64 offset, unsigned int byte, void *data);
@@ -337,10 +311,6 @@ struct opae_ether_addr {
 } __rte_packed;
 
 /* OPAE vBNG network API*/
-int opae_manager_read_mac_rom(struct opae_manager *mgr, int port,
-		struct opae_ether_addr *addr);
-int opae_manager_write_mac_rom(struct opae_manager *mgr, int port,
-		struct opae_ether_addr *addr);
 int opae_manager_get_retimer_info(struct opae_manager *mgr,
 		struct opae_retimer_info *info);
 int opae_manager_get_retimer_status(struct opae_manager *mgr,
@@ -348,10 +318,4 @@ int opae_manager_get_retimer_status(struct opae_manager *mgr,
 int opae_manager_get_eth_group_nums(struct opae_manager *mgr);
 int opae_manager_get_eth_group_info(struct opae_manager *mgr,
 		u8 group_id, struct opae_eth_group_info *info);
-int opae_manager_eth_group_write_reg(struct opae_manager *mgr, u8 group_id,
-		u8 type, u8 index, u16 addr, u32 data);
-int opae_manager_eth_group_read_reg(struct opae_manager *mgr, u8 group_id,
-		u8 type, u8 index, u16 addr, u32 *data);
-int opae_mgr_get_board_info(struct opae_manager *mgr,
-		struct opae_board_info **info);
 #endif /* _OPAE_HW_API_H_*/
diff --git a/drivers/raw/ifpga/base/opae_i2c.c b/drivers/raw/ifpga/base/opae_i2c.c
index 598eab5742..5ea7ca3672 100644
--- a/drivers/raw/ifpga/base/opae_i2c.c
+++ b/drivers/raw/ifpga/base/opae_i2c.c
@@ -104,12 +104,6 @@ int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
 	return ret;
 }
 
-int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
-		u8 *buf, u32 count)
-{
-	return i2c_read(dev, 0, slave_addr, offset, buf, count);
-}
-
 int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
 		u8 *buf, u32 count)
 {
@@ -117,12 +111,6 @@ int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
 			buf, count);
 }
 
-int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
-		u8 *buf, u32 count)
-{
-	return i2c_write(dev, 0, slave_addr, offset, buf, count);
-}
-
 int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
 		u8 *buf, u32 count)
 {
diff --git a/drivers/raw/ifpga/base/opae_i2c.h b/drivers/raw/ifpga/base/opae_i2c.h
index 4f6b0b28bb..a21277b7cc 100644
--- a/drivers/raw/ifpga/base/opae_i2c.h
+++ b/drivers/raw/ifpga/base/opae_i2c.h
@@ -121,12 +121,8 @@ int i2c_read(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
 		u32 offset, u8 *buf, u32 count);
 int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
 		u32 offset, u8 *buffer, int len);
-int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
-		u8 *buf, u32 count);
 int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
 		u8 *buf, u32 count);
-int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
-		u8 *buf, u32 count);
 int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
 		u8 *buf, u32 count);
 #endif
diff --git a/drivers/raw/ifpga/base/opae_ifpga_hw_api.c b/drivers/raw/ifpga/base/opae_ifpga_hw_api.c
index 89c7b49203..ad5a9f2b6c 100644
--- a/drivers/raw/ifpga/base/opae_ifpga_hw_api.c
+++ b/drivers/raw/ifpga/base/opae_ifpga_hw_api.c
@@ -31,23 +31,6 @@ int opae_manager_ifpga_set_prop(struct opae_manager *mgr,
 	return ifpga_set_prop(fme->parent, FEATURE_FIU_ID_FME, 0, prop);
 }
 
-int opae_manager_ifpga_get_info(struct opae_manager *mgr,
-				struct fpga_fme_info *fme_info)
-{
-	struct ifpga_fme_hw *fme;
-
-	if (!mgr || !mgr->data || !fme_info)
-		return -EINVAL;
-
-	fme = mgr->data;
-
-	spinlock_lock(&fme->lock);
-	fme_info->capability = fme->capability;
-	spinlock_unlock(&fme->lock);
-
-	return 0;
-}
-
 int opae_manager_ifpga_set_err_irq(struct opae_manager *mgr,
 				   struct fpga_fme_err_irq_set *err_irq_set)
 {
@@ -61,85 +44,3 @@ int opae_manager_ifpga_set_err_irq(struct opae_manager *mgr,
 	return ifpga_set_irq(fme->parent, FEATURE_FIU_ID_FME, 0,
 			     IFPGA_FME_FEATURE_ID_GLOBAL_ERR, err_irq_set);
 }
-
-int opae_bridge_ifpga_get_prop(struct opae_bridge *br,
-			       struct feature_prop *prop)
-{
-	struct ifpga_port_hw *port;
-
-	if (!br || !br->data)
-		return -EINVAL;
-
-	port = br->data;
-
-	return ifpga_get_prop(port->parent, FEATURE_FIU_ID_PORT,
-			      port->port_id, prop);
-}
-
-int opae_bridge_ifpga_set_prop(struct opae_bridge *br,
-			       struct feature_prop *prop)
-{
-	struct ifpga_port_hw *port;
-
-	if (!br || !br->data)
-		return -EINVAL;
-
-	port = br->data;
-
-	return ifpga_set_prop(port->parent, FEATURE_FIU_ID_PORT,
-			      port->port_id, prop);
-}
-
-int opae_bridge_ifpga_get_info(struct opae_bridge *br,
-			       struct fpga_port_info *port_info)
-{
-	struct ifpga_port_hw *port;
-
-	if (!br || !br->data || !port_info)
-		return -EINVAL;
-
-	port = br->data;
-
-	spinlock_lock(&port->lock);
-	port_info->capability = port->capability;
-	port_info->num_uafu_irqs = port->num_uafu_irqs;
-	spinlock_unlock(&port->lock);
-
-	return 0;
-}
-
-int opae_bridge_ifpga_get_region_info(struct opae_bridge *br,
-				      struct fpga_port_region_info *info)
-{
-	struct ifpga_port_hw *port;
-
-	if (!br || !br->data || !info)
-		return -EINVAL;
-
-	/* Only support STP region now */
-	if (info->index != PORT_REGION_INDEX_STP)
-		return -EINVAL;
-
-	port = br->data;
-
-	spinlock_lock(&port->lock);
-	info->addr = port->stp_addr;
-	info->size = port->stp_size;
-	spinlock_unlock(&port->lock);
-
-	return 0;
-}
-
-int opae_bridge_ifpga_set_err_irq(struct opae_bridge *br,
-				  struct fpga_port_err_irq_set *err_irq_set)
-{
-	struct ifpga_port_hw *port;
-
-	if (!br || !br->data)
-		return -EINVAL;
-
-	port = br->data;
-
-	return ifpga_set_irq(port->parent, FEATURE_FIU_ID_PORT, port->port_id,
-			     IFPGA_PORT_FEATURE_ID_ERROR, err_irq_set);
-}
diff --git a/drivers/raw/ifpga/base/opae_ifpga_hw_api.h b/drivers/raw/ifpga/base/opae_ifpga_hw_api.h
index bab33862ee..104ab97edc 100644
--- a/drivers/raw/ifpga/base/opae_ifpga_hw_api.h
+++ b/drivers/raw/ifpga/base/opae_ifpga_hw_api.h
@@ -217,10 +217,6 @@ int opae_manager_ifpga_get_prop(struct opae_manager *mgr,
 				struct feature_prop *prop);
 int opae_manager_ifpga_set_prop(struct opae_manager *mgr,
 				struct feature_prop *prop);
-int opae_bridge_ifpga_get_prop(struct opae_bridge *br,
-			       struct feature_prop *prop);
-int opae_bridge_ifpga_set_prop(struct opae_bridge *br,
-			       struct feature_prop *prop);
 
 /*
  * Retrieve information about the fpga fme.
@@ -231,9 +227,6 @@ struct fpga_fme_info {
 #define FPGA_FME_CAP_ERR_IRQ	(1 << 0) /* Support fme error interrupt */
 };
 
-int opae_manager_ifpga_get_info(struct opae_manager *mgr,
-				struct fpga_fme_info *fme_info);
-
 /* Set eventfd information for ifpga FME error interrupt */
 struct fpga_fme_err_irq_set {
 	s32 evtfd;		/* Eventfd handler */
@@ -254,8 +247,6 @@ struct fpga_port_info {
 	u32 num_uafu_irqs;	/* The number of uafu interrupts */
 };
 
-int opae_bridge_ifpga_get_info(struct opae_bridge *br,
-			       struct fpga_port_info *port_info);
 /*
  * Retrieve region information about the fpga port.
  * Driver needs to fill the index of struct fpga_port_region_info.
@@ -267,15 +258,9 @@ struct fpga_port_region_info {
 	u8 *addr;	/* Base address of the region */
 };
 
-int opae_bridge_ifpga_get_region_info(struct opae_bridge *br,
-				      struct fpga_port_region_info *info);
-
 /* Set eventfd information for ifpga port error interrupt */
 struct fpga_port_err_irq_set {
 	s32 evtfd;		/* Eventfd handler */
 };
 
-int opae_bridge_ifpga_set_err_irq(struct opae_bridge *br,
-				  struct fpga_port_err_irq_set *err_irq_set);
-
 #endif /* _OPAE_IFPGA_HW_API_H_ */
diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 2c4877c37d..1ef5cfbda0 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -111,8 +111,6 @@ int mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
 
 /* mlx5_regex_fastpath.c */
 int mlx5_regexdev_setup_fastpath(struct mlx5_regex_priv *priv, uint32_t qp_id);
-void mlx5_regexdev_teardown_fastpath(struct mlx5_regex_priv *priv,
-				     uint32_t qp_id);
 uint16_t mlx5_regexdev_enqueue(struct rte_regexdev *dev, uint16_t qp_id,
 		       struct rte_regex_ops **ops, uint16_t nb_ops);
 uint16_t mlx5_regexdev_dequeue(struct rte_regexdev *dev, uint16_t qp_id,
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 254954776f..f38a3772cb 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -393,28 +393,3 @@ mlx5_regexdev_setup_fastpath(struct mlx5_regex_priv *priv, uint32_t qp_id)
 	setup_sqs(qp);
 	return 0;
 }
-
-static void
-free_buffers(struct mlx5_regex_qp *qp)
-{
-	if (qp->metadata) {
-		mlx5_glue->dereg_mr(qp->metadata);
-		rte_free(qp->metadata->addr);
-	}
-	if (qp->outputs) {
-		mlx5_glue->dereg_mr(qp->outputs);
-		rte_free(qp->outputs->addr);
-	}
-}
-
-void
-mlx5_regexdev_teardown_fastpath(struct mlx5_regex_priv *priv, uint32_t qp_id)
-{
-	struct mlx5_regex_qp *qp = &priv->qps[qp_id];
-
-	if (qp) {
-		free_buffers(qp);
-		if (qp->jobs)
-			rte_free(qp->jobs);
-	}
-}
diff --git a/drivers/regex/mlx5/mlx5_rxp.c b/drivers/regex/mlx5/mlx5_rxp.c
index 7936a5235b..21e4847744 100644
--- a/drivers/regex/mlx5/mlx5_rxp.c
+++ b/drivers/regex/mlx5/mlx5_rxp.c
@@ -50,8 +50,6 @@ write_shared_rules(struct mlx5_regex_priv *priv,
 		   uint8_t db_to_program);
 static int
 rxp_db_setup(struct mlx5_regex_priv *priv);
-static void
-rxp_dump_csrs(struct ibv_context *ctx, uint8_t id);
 static int
 rxp_write_rules_via_cp(struct ibv_context *ctx,
 		       struct mlx5_rxp_rof_entry *rules,
@@ -64,49 +62,6 @@ rxp_start_engine(struct ibv_context *ctx, uint8_t id);
 static int
 rxp_stop_engine(struct ibv_context *ctx, uint8_t id);
 
-static void __rte_unused
-rxp_dump_csrs(struct ibv_context *ctx __rte_unused, uint8_t id __rte_unused)
-{
-	uint32_t reg, i;
-
-	/* Main CSRs*/
-	for (i = 0; i < MLX5_RXP_CSR_NUM_ENTRIES; i++) {
-		if (mlx5_devx_regex_register_read(ctx, id,
-						  (MLX5_RXP_CSR_WIDTH * i) +
-						  MLX5_RXP_CSR_BASE_ADDRESS,
-						  &reg)) {
-			DRV_LOG(ERR, "Failed to read Main CSRs Engine %d!", id);
-			return;
-		}
-		DRV_LOG(DEBUG, "RXP Main CSRs (Eng%d) register (%d): %08x",
-			id, i, reg);
-	}
-	/* RTRU CSRs*/
-	for (i = 0; i < MLX5_RXP_CSR_NUM_ENTRIES; i++) {
-		if (mlx5_devx_regex_register_read(ctx, id,
-						  (MLX5_RXP_CSR_WIDTH * i) +
-						 MLX5_RXP_RTRU_CSR_BASE_ADDRESS,
-						  &reg)) {
-			DRV_LOG(ERR, "Failed to read RTRU CSRs Engine %d!", id);
-			return;
-		}
-		DRV_LOG(DEBUG, "RXP RTRU CSRs (Eng%d) register (%d): %08x",
-			id, i, reg);
-	}
-	/* STAT CSRs */
-	for (i = 0; i < MLX5_RXP_CSR_NUM_ENTRIES; i++) {
-		if (mlx5_devx_regex_register_read(ctx, id,
-						  (MLX5_RXP_CSR_WIDTH * i) +
-						MLX5_RXP_STATS_CSR_BASE_ADDRESS,
-						  &reg)) {
-			DRV_LOG(ERR, "Failed to read STAT CSRs Engine %d!", id);
-			return;
-		}
-		DRV_LOG(DEBUG, "RXP STAT CSRs (Eng%d) register (%d): %08x",
-			id, i, reg);
-	}
-}
-
 int
 mlx5_regex_info_get(struct rte_regexdev *dev __rte_unused,
 		    struct rte_regexdev_info *info)
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..7515dc44b3 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -56,64 +56,6 @@ otx2_ree_err_intr_unregister(const struct rte_regexdev *dev)
 	vf->err_intr_registered = 0;
 }
 
-static int
-ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
-			 uintptr_t base)
-{
-	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
-	struct rte_intr_handle *handle = &pci_dev->intr_handle;
-	int ret;
-
-	/* Disable error interrupts */
-	otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
-
-	/* Register error interrupt handler */
-	ret = otx2_register_irq(handle, ree_lf_err_intr_handler, (void *)base,
-				msix_off);
-	if (ret)
-		return ret;
-
-	/* Enable error interrupts */
-	otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1S);
-
-	return 0;
-}
-
-int
-otx2_ree_err_intr_register(const struct rte_regexdev *dev)
-{
-	struct otx2_ree_data *data = dev->data->dev_private;
-	struct otx2_ree_vf *vf = &data->vf;
-	uint32_t i, j, ret;
-	uintptr_t base;
-
-	for (i = 0; i < vf->nb_queues; i++) {
-		if (vf->lf_msixoff[i] == MSIX_VECTOR_INVALID) {
-			otx2_err("Invalid REE LF MSI-X offset: 0x%x",
-				    vf->lf_msixoff[i]);
-			return -EINVAL;
-		}
-	}
-
-	for (i = 0; i < vf->nb_queues; i++) {
-		base = OTX2_REE_LF_BAR2(vf, i);
-		ret = ree_lf_err_intr_register(dev, vf->lf_msixoff[i], base);
-		if (ret)
-			goto intr_unregister;
-	}
-
-	vf->err_intr_registered = 1;
-	return 0;
-
-intr_unregister:
-	/* Unregister the ones already registered */
-	for (j = 0; j < i; j++) {
-		base = OTX2_REE_LF_BAR2(vf, j);
-		ree_lf_err_intr_unregister(dev, vf->lf_msixoff[j], base);
-	}
-	return ret;
-}
-
 int
 otx2_ree_iq_enable(const struct rte_regexdev *dev, const struct otx2_ree_qp *qp,
 		   uint8_t pri, uint32_t size_div2)
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.h b/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
index dedf5f3282..4733febc0e 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
@@ -188,8 +188,6 @@ union otx2_ree_match {
 
 void otx2_ree_err_intr_unregister(const struct rte_regexdev *dev);
 
-int otx2_ree_err_intr_register(const struct rte_regexdev *dev);
-
 int otx2_ree_iq_enable(const struct rte_regexdev *dev,
 		       const struct otx2_ree_qp *qp,
 		       uint8_t pri, uint32_t size_div128);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_mbox.c b/drivers/regex/octeontx2/otx2_regexdev_mbox.c
index 6d58d367d4..726994e195 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_mbox.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_mbox.c
@@ -189,34 +189,6 @@ otx2_ree_af_reg_read(const struct rte_regexdev *dev, uint64_t reg,
 	return 0;
 }
 
-int
-otx2_ree_af_reg_write(const struct rte_regexdev *dev, uint64_t reg,
-		      uint64_t val)
-{
-	struct otx2_ree_data *data = dev->data->dev_private;
-	struct otx2_ree_vf *vf = &data->vf;
-	struct ree_rd_wr_reg_msg *msg;
-	struct otx2_mbox *mbox;
-
-	mbox = vf->otx2_dev.mbox;
-	msg = (struct ree_rd_wr_reg_msg *)otx2_mbox_alloc_msg_rsp(mbox, 0,
-						sizeof(*msg), sizeof(*msg));
-	if (msg == NULL) {
-		otx2_err("Could not allocate mailbox message");
-		return -EFAULT;
-	}
-
-	msg->hdr.id = MBOX_MSG_REE_RD_WR_REGISTER;
-	msg->hdr.sig = OTX2_MBOX_REQ_SIG;
-	msg->hdr.pcifunc = vf->otx2_dev.pf_func;
-	msg->is_write = 1;
-	msg->reg_offset = reg;
-	msg->val = val;
-	msg->blkaddr = vf->block_address;
-
-	return ree_send_mbox_msg(vf);
-}
-
 int
 otx2_ree_rule_db_get(const struct rte_regexdev *dev, char *rule_db,
 		uint32_t rule_db_len, char *rule_dbi, uint32_t rule_dbi_len)
diff --git a/drivers/regex/octeontx2/otx2_regexdev_mbox.h b/drivers/regex/octeontx2/otx2_regexdev_mbox.h
index 953efa6724..c36e6a5b7a 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_mbox.h
+++ b/drivers/regex/octeontx2/otx2_regexdev_mbox.h
@@ -22,9 +22,6 @@ int otx2_ree_config_lf(const struct rte_regexdev *dev, uint8_t lf, uint8_t pri,
 int otx2_ree_af_reg_read(const struct rte_regexdev *dev, uint64_t reg,
 			 uint64_t *val);
 
-int otx2_ree_af_reg_write(const struct rte_regexdev *dev, uint64_t reg,
-			  uint64_t val);
-
 int otx2_ree_rule_db_get(const struct rte_regexdev *dev, char *rule_db,
 		 uint32_t rule_db_len, char *rule_dbi, uint32_t rule_dbi_len);
 
diff --git a/examples/ip_pipeline/cryptodev.c b/examples/ip_pipeline/cryptodev.c
index b0d9f3d217..4c986e421d 100644
--- a/examples/ip_pipeline/cryptodev.c
+++ b/examples/ip_pipeline/cryptodev.c
@@ -38,14 +38,6 @@ cryptodev_find(const char *name)
 	return NULL;
 }
 
-struct cryptodev *
-cryptodev_next(struct cryptodev *cryptodev)
-{
-	return (cryptodev == NULL) ?
-			TAILQ_FIRST(&cryptodev_list) :
-			TAILQ_NEXT(cryptodev, node);
-}
-
 struct cryptodev *
 cryptodev_create(const char *name, struct cryptodev_params *params)
 {
diff --git a/examples/ip_pipeline/cryptodev.h b/examples/ip_pipeline/cryptodev.h
index d00434379e..c91b8a69f7 100644
--- a/examples/ip_pipeline/cryptodev.h
+++ b/examples/ip_pipeline/cryptodev.h
@@ -29,9 +29,6 @@ cryptodev_init(void);
 struct cryptodev *
 cryptodev_find(const char *name);
 
-struct cryptodev *
-cryptodev_next(struct cryptodev *cryptodev);
-
 struct cryptodev_params {
 	const char *dev_name;
 	uint32_t dev_id; /**< Valid only when *dev_name* is NULL. */
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356..d09609b9e9 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -248,24 +248,3 @@ link_create(const char *name, struct link_params *params)
 
 	return link;
 }
-
-int
-link_is_up(const char *name)
-{
-	struct rte_eth_link link_params;
-	struct link *link;
-
-	/* Check input params */
-	if (name == NULL)
-		return 0;
-
-	link = link_find(name);
-	if (link == NULL)
-		return 0;
-
-	/* Resource */
-	if (rte_eth_link_get(link->port_id, &link_params) < 0)
-		return 0;
-
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
-}
diff --git a/examples/ip_pipeline/link.h b/examples/ip_pipeline/link.h
index 34ff1149e0..a4f6aa0e73 100644
--- a/examples/ip_pipeline/link.h
+++ b/examples/ip_pipeline/link.h
@@ -60,7 +60,4 @@ struct link_params {
 struct link *
 link_create(const char *name, struct link_params *params);
 
-int
-link_is_up(const char *name);
-
 #endif /* _INCLUDE_LINK_H_ */
diff --git a/examples/ip_pipeline/parser.c b/examples/ip_pipeline/parser.c
index dfd71a71d3..9f4f91d213 100644
--- a/examples/ip_pipeline/parser.c
+++ b/examples/ip_pipeline/parser.c
@@ -39,44 +39,6 @@ get_hex_val(char c)
 	}
 }
 
-int
-parser_read_arg_bool(const char *p)
-{
-	p = skip_white_spaces(p);
-	int result = -EINVAL;
-
-	if (((p[0] == 'y') && (p[1] == 'e') && (p[2] == 's')) ||
-		((p[0] == 'Y') && (p[1] == 'E') && (p[2] == 'S'))) {
-		p += 3;
-		result = 1;
-	}
-
-	if (((p[0] == 'o') && (p[1] == 'n')) ||
-		((p[0] == 'O') && (p[1] == 'N'))) {
-		p += 2;
-		result = 1;
-	}
-
-	if (((p[0] == 'n') && (p[1] == 'o')) ||
-		((p[0] == 'N') && (p[1] == 'O'))) {
-		p += 2;
-		result = 0;
-	}
-
-	if (((p[0] == 'o') && (p[1] == 'f') && (p[2] == 'f')) ||
-		((p[0] == 'O') && (p[1] == 'F') && (p[2] == 'F'))) {
-		p += 3;
-		result = 0;
-	}
-
-	p = skip_white_spaces(p);
-
-	if (p[0] != '\0')
-		return -EINVAL;
-
-	return result;
-}
-
 int
 parser_read_uint64(uint64_t *value, const char *p)
 {
@@ -153,22 +115,6 @@ parser_read_uint32(uint32_t *value, const char *p)
 	return 0;
 }
 
-int
-parser_read_uint32_hex(uint32_t *value, const char *p)
-{
-	uint64_t val = 0;
-	int ret = parser_read_uint64_hex(&val, p);
-
-	if (ret < 0)
-		return ret;
-
-	if (val > UINT32_MAX)
-		return -ERANGE;
-
-	*value = val;
-	return 0;
-}
-
 int
 parser_read_uint16(uint16_t *value, const char *p)
 {
@@ -185,22 +131,6 @@ parser_read_uint16(uint16_t *value, const char *p)
 	return 0;
 }
 
-int
-parser_read_uint16_hex(uint16_t *value, const char *p)
-{
-	uint64_t val = 0;
-	int ret = parser_read_uint64_hex(&val, p);
-
-	if (ret < 0)
-		return ret;
-
-	if (val > UINT16_MAX)
-		return -ERANGE;
-
-	*value = val;
-	return 0;
-}
-
 int
 parser_read_uint8(uint8_t *value, const char *p)
 {
@@ -293,44 +223,6 @@ parse_hex_string(char *src, uint8_t *dst, uint32_t *size)
 	return 0;
 }
 
-int
-parse_mpls_labels(char *string, uint32_t *labels, uint32_t *n_labels)
-{
-	uint32_t n_max_labels = *n_labels, count = 0;
-
-	/* Check for void list of labels */
-	if (strcmp(string, "<void>") == 0) {
-		*n_labels = 0;
-		return 0;
-	}
-
-	/* At least one label should be present */
-	for ( ; (*string != '\0'); ) {
-		char *next;
-		int value;
-
-		if (count >= n_max_labels)
-			return -1;
-
-		if (count > 0) {
-			if (string[0] != ':')
-				return -1;
-
-			string++;
-		}
-
-		value = strtol(string, &next, 10);
-		if (next == string)
-			return -1;
-		string = next;
-
-		labels[count++] = (uint32_t) value;
-	}
-
-	*n_labels = count;
-	return 0;
-}
-
 static struct rte_ether_addr *
 my_ether_aton(const char *a)
 {
@@ -410,97 +302,3 @@ parse_mac_addr(const char *token, struct rte_ether_addr *addr)
 	memcpy(addr, tmp, sizeof(struct rte_ether_addr));
 	return 0;
 }
-
-int
-parse_cpu_core(const char *entry,
-	struct cpu_core_params *p)
-{
-	size_t num_len;
-	char num[8];
-
-	uint32_t s = 0, c = 0, h = 0, val;
-	uint8_t s_parsed = 0, c_parsed = 0, h_parsed = 0;
-	const char *next = skip_white_spaces(entry);
-	char type;
-
-	if (p == NULL)
-		return -EINVAL;
-
-	/* Expect <CORE> or [sX][cY][h]. At least one parameter is required. */
-	while (*next != '\0') {
-		/* If everything parsed nothing should left */
-		if (s_parsed && c_parsed && h_parsed)
-			return -EINVAL;
-
-		type = *next;
-		switch (type) {
-		case 's':
-		case 'S':
-			if (s_parsed || c_parsed || h_parsed)
-				return -EINVAL;
-			s_parsed = 1;
-			next++;
-			break;
-		case 'c':
-		case 'C':
-			if (c_parsed || h_parsed)
-				return -EINVAL;
-			c_parsed = 1;
-			next++;
-			break;
-		case 'h':
-		case 'H':
-			if (h_parsed)
-				return -EINVAL;
-			h_parsed = 1;
-			next++;
-			break;
-		default:
-			/* If it start from digit it must be only core id. */
-			if (!isdigit(*next) || s_parsed || c_parsed || h_parsed)
-				return -EINVAL;
-
-			type = 'C';
-		}
-
-		for (num_len = 0; *next != '\0'; next++, num_len++) {
-			if (num_len == RTE_DIM(num))
-				return -EINVAL;
-
-			if (!isdigit(*next))
-				break;
-
-			num[num_len] = *next;
-		}
-
-		if (num_len == 0 && type != 'h' && type != 'H')
-			return -EINVAL;
-
-		if (num_len != 0 && (type == 'h' || type == 'H'))
-			return -EINVAL;
-
-		num[num_len] = '\0';
-		val = strtol(num, NULL, 10);
-
-		h = 0;
-		switch (type) {
-		case 's':
-		case 'S':
-			s = val;
-			break;
-		case 'c':
-		case 'C':
-			c = val;
-			break;
-		case 'h':
-		case 'H':
-			h = 1;
-			break;
-		}
-	}
-
-	p->socket_id = s;
-	p->core_id = c;
-	p->thread_id = h;
-	return 0;
-}
diff --git a/examples/ip_pipeline/parser.h b/examples/ip_pipeline/parser.h
index 4538f675d4..826ed8d136 100644
--- a/examples/ip_pipeline/parser.h
+++ b/examples/ip_pipeline/parser.h
@@ -31,16 +31,12 @@ skip_digits(const char *src)
 	return i;
 }
 
-int parser_read_arg_bool(const char *p);
-
 int parser_read_uint64(uint64_t *value, const char *p);
 int parser_read_uint32(uint32_t *value, const char *p);
 int parser_read_uint16(uint16_t *value, const char *p);
 int parser_read_uint8(uint8_t *value, const char *p);
 
 int parser_read_uint64_hex(uint64_t *value, const char *p);
-int parser_read_uint32_hex(uint32_t *value, const char *p);
-int parser_read_uint16_hex(uint16_t *value, const char *p);
 int parser_read_uint8_hex(uint8_t *value, const char *p);
 
 int parse_hex_string(char *src, uint8_t *dst, uint32_t *size);
@@ -48,7 +44,6 @@ int parse_hex_string(char *src, uint8_t *dst, uint32_t *size);
 int parse_ipv4_addr(const char *token, struct in_addr *ipv4);
 int parse_ipv6_addr(const char *token, struct in6_addr *ipv6);
 int parse_mac_addr(const char *token, struct rte_ether_addr *addr);
-int parse_mpls_labels(char *string, uint32_t *labels, uint32_t *n_labels);
 
 struct cpu_core_params {
 	uint32_t socket_id;
@@ -56,8 +51,6 @@ struct cpu_core_params {
 	uint32_t thread_id;
 };
 
-int parse_cpu_core(const char *entry, struct cpu_core_params *p);
-
 int parse_tokenize_string(char *string, char *tokens[], uint32_t *n_tokens);
 
 #endif
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 84bbcf2b2d..424f281213 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -315,27 +315,6 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	return link;
 }
 
-int
-link_is_up(struct obj *obj, const char *name)
-{
-	struct rte_eth_link link_params;
-	struct link *link;
-
-	/* Check input params */
-	if (!obj || !name)
-		return 0;
-
-	link = link_find(obj, name);
-	if (link == NULL)
-		return 0;
-
-	/* Resource */
-	if (rte_eth_link_get(link->port_id, &link_params) < 0)
-		return 0;
-
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
-}
-
 struct link *
 link_find(struct obj *obj, const char *name)
 {
diff --git a/examples/pipeline/obj.h b/examples/pipeline/obj.h
index e6351fd279..3d729e43b0 100644
--- a/examples/pipeline/obj.h
+++ b/examples/pipeline/obj.h
@@ -95,9 +95,6 @@ link_create(struct obj *obj,
 	    const char *name,
 	    struct link_params *params);
 
-int
-link_is_up(struct obj *obj, const char *name);
-
 struct link *
 link_find(struct obj *obj, const char *name);
 
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 03a4f2dd2d..4480e5249a 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -238,14 +238,6 @@ static int huge_wrap_sigsetjmp(void)
 	return sigsetjmp(huge_jmpenv, 1);
 }
 
-#ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES
-/* Callback for numa library. */
-void numa_error(char *where)
-{
-	RTE_LOG(ERR, EAL, "%s failed: %s\n", where, strerror(errno));
-}
-#endif
-
 /*
  * Mmap all hugepages of hugepage table: it first open a file in
  * hugetlbfs, then mmap() hugepage_sz data in it. If orig is set, the
diff --git a/lib/librte_vhost/fd_man.c b/lib/librte_vhost/fd_man.c
index 55d4856f9e..942c5f145b 100644
--- a/lib/librte_vhost/fd_man.c
+++ b/lib/librte_vhost/fd_man.c
@@ -100,21 +100,6 @@ fdset_add_fd(struct fdset *pfdset, int idx, int fd,
 	pfd->revents = 0;
 }
 
-void
-fdset_init(struct fdset *pfdset)
-{
-	int i;
-
-	if (pfdset == NULL)
-		return;
-
-	for (i = 0; i < MAX_FDS; i++) {
-		pfdset->fd[i].fd = -1;
-		pfdset->fd[i].dat = NULL;
-	}
-	pfdset->num = 0;
-}
-
 /**
  * Register the fd in the fdset with read/write handler and context.
  */
diff --git a/lib/librte_vhost/fd_man.h b/lib/librte_vhost/fd_man.h
index 3ab5cfdd60..f0157eeeed 100644
--- a/lib/librte_vhost/fd_man.h
+++ b/lib/librte_vhost/fd_man.h
@@ -39,8 +39,6 @@ struct fdset {
 };
 
 
-void fdset_init(struct fdset *pfdset);
-
 int fdset_add(struct fdset *pfdset, int fd,
 	fd_cb rcb, fd_cb wcb, void *dat);
 
-- 
2.26.2


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH] app/testpmd: fix MTU after device configure
  @ 2020-11-16 18:50  3%   ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-11-16 18:50 UTC (permalink / raw)
  To: Wenzhuo Lu, Beilei Xing, Bernard Iremonger
  Cc: dev, Qi Zhang, Steve Yang, Thomas Monjalon, Andrew Rybchenko,
	Konstantin Ananyev, Olivier Matz, Lance Richardson,
	David Marchand

On 11/13/2020 11:44 AM, Ferruh Yigit wrote:
> In 'rte_eth_dev_configure()', if 'DEV_RX_OFFLOAD_JUMBO_FRAME' is not set
> the max frame size is limited to 'RTE_ETHER_MAX_LEN' (1518).
> This is mistake because for the PMDs that has frame size bigger than
> "RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN" (18 bytes), the MTU becomes
> less than 1500, causing a valid frame with 1500 bytes payload to be
> dropped.
> 
> Since 'rte_eth_dev_set_mtu()' works as expected, it is called after
> 'rte_eth_dev_configure()' to fix the MTU.
> It may look redundant to set MTU after 'rte_eth_dev_configure()', both
> with default values, but it is not, the resulting MTU config can be
> different in the device based on frame overhead of the PMD.
> 
> And instead of setting the MTU to default value, it is first get via
> 'rte_eth_dev_get_mtu()' and set again, this is to cover cases MTU
> changed from testpmd command line.
> 
> 'rte_eth_dev_set_mtu()', '-ENOTSUP' error is ignored to prevent
> irrelevant warning messages for the virtual PMDs.
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> Cc: Steve Yang <stevex.yang@intel.com>
> Cc: Thomas Monjalon <thomas@monjalon.net>
> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Cc: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Cc: Olivier Matz <olivier.matz@6wind.com>
> Cc: Lance Richardson <lance.richardson@broadcom.com>
> ---
>   app/test-pmd/testpmd.c | 19 +++++++++++++++++++
>   1 file changed, 19 insertions(+)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 33fc0fddf5..48e9647fc7 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2537,6 +2537,8 @@ start_port(portid_t pid)
>   		}
>   
>   		if (port->need_reconfig > 0) {
> +			uint16_t mtu = RTE_ETHER_MTU;
> +
>   			port->need_reconfig = 0;
>   
>   			if (flow_isolate_all) {
> @@ -2570,6 +2572,23 @@ start_port(portid_t pid)
>   				port->need_reconfig = 1;
>   				return -1;
>   			}
> +
> +			/*
> +			 * Workaround for rte_eth_dev_configure(), max_rx_pkt_len
> +			 * set MTU wrong for the PMDs that have frame overhead
> +			 * bigger than RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN.
> +			 * For a PMD that has 26 bytes overhead, rte_eth_dev_configure()
> +			 * can set MTU to max 1492, not to expected 1500 bytes.
> +			 * Using rte_eth_dev_set_mtu() to be able to set MTU correctly,
> +			 * default MTU value is 1500.
> +			 */
> +			diag = rte_eth_dev_get_mtu(pi, &mtu);
> +			if (diag)
> +				printf("Failed to get MTU for port %d\n", pi);
> +			diag = rte_eth_dev_set_mtu(pi, mtu);
> +			if (diag != 0 && diag != -ENOTSUP)
> +				printf("Failed to set MTU to %u for port %d\n",
> +						mtu, pi);
>   		}
>   		if (port->need_reconfig_queues > 0) {
>   			port->need_reconfig_queues = 0;
> 

@David highlighted that 'scatter' tests are failing in the lab with this commit,
https://lab.dpdk.org/results/dashboard/patchsets/14492/

With the above commit only 'mtu' is taken into account, so in testpmd neither 
the "--max-pkt-len=N" parameter nor the "port config all max-pkt-len #" command 
works as expected any more. This seems to be the reason for the failure.
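
(As a worked example of the overhead arithmetic, taking the 26 bytes of frame 
overhead mentioned in the comment above: with DEV_RX_OFFLOAD_JUMBO_FRAME off, 
rte_eth_dev_configure() caps the max frame size at RTE_ETHER_MAX_LEN = 1518, 
which leaves an MTU of 1518 - 26 = 1492, while rte_eth_dev_set_mtu(pi, 1500) 
programs a max frame size of 1500 + 26 = 1526.)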

Technically it is possible to fix the DTS test case by adding the following commands:
port stop all
port config mtu 0 9000
port start all

But there may be other side effects from "max-pkt-len" not working in 
testpmd as expected, so reverting this one too may be the safest option.

For now we need to live with the issue this patch is fixing; hopefully we can 
fix it in the next release by fixing testpmd, ethdev and the drivers together. 
There is still an open question whether the ethdev change will be an ABI break 
or not; we will see.

And there is a longer-term target to deprecate 'max_rx_pkt_len' and 'mtu' in 
order to unify them:
https://patches.dpdk.org/patch/81591/

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH 4/5] net/iavf: fix protocol size for virtchnl copy
  @ 2020-11-16 16:23  3%   ` Ferruh Yigit
  2020-11-22 13:28  0%     ` Jack Min
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-11-16 16:23 UTC (permalink / raw)
  To: Xiaoyu Min, Jingjing Wu, Beilei Xing
  Cc: dev, Xiaoyu Min, Thomas Monjalon, Andrew Rybchenko, Ori Kam, Dekel Peled

On 11/16/2020 7:55 AM, Xiaoyu Min wrote:
> From: Xiaoyu Min <jackmin@nvidia.com>
> 
> The rte_flow_item_vlan items are refined.
> The structs do not exactly represent the packet bits captured on the
> wire anymore so should only copy real header instead of the whole struct.
> 
> Replace the rte_flow_item_* with the existing corresponding rte_*_hdr.
> 
> Fixes: 09315fc83861 ("ethdev: add VLAN attributes to ethernet and VLAN items")
> 
> Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
> ---
>   drivers/net/iavf/iavf_fdir.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
> index d683a468c1..7054bde0b9 100644
> --- a/drivers/net/iavf/iavf_fdir.c
> +++ b/drivers/net/iavf/iavf_fdir.c
> @@ -541,7 +541,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
>   				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ETH, ETHERTYPE);
>   
>   				rte_memcpy(hdr->buffer,
> -					eth_spec, sizeof(*eth_spec));
> +					eth_spec, sizeof(struct rte_ether_hdr));

This requires that 'struct rte_flow_item_eth' have 'struct rte_ether_hdr' as its 
first element, and I suspect this usage exists in a few more locations, but I 
wonder whether this assumption really holds and is documented anywhere?
I am not only talking about 'struct rte_flow_item_eth', but about all 
'rte_flow_item_*'...
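
If that layout guarantee is intended, a minimal compile-time check next to the 
copy could document it. The following is only an illustrative sketch (using the 
20.11 field names and the existing RTE_BUILD_BUG_ON() macro from rte_common.h), 
not something from the patch:

	/* Sketch: the flow item must begin with the on-wire Ethernet header,
	 * otherwise copying sizeof(struct rte_ether_hdr) bytes from eth_spec
	 * would read the wrong fields.
	 */
	RTE_BUILD_BUG_ON(offsetof(struct rte_flow_item_eth, dst) !=
			 offsetof(struct rte_ether_hdr, d_addr));
	RTE_BUILD_BUG_ON(offsetof(struct rte_flow_item_eth, src) !=
			 offsetof(struct rte_ether_hdr, s_addr));
	RTE_BUILD_BUG_ON(offsetof(struct rte_flow_item_eth, type) !=
			 offsetof(struct rte_ether_hdr, ether_type));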



Btw, while checking 'struct rte_flow_item_eth', pahole shows it takes 
20 bytes, and I suspect this is not the intention with the reserved field:

struct rte_flow_item_eth {
	struct rte_ether_addr      dst;                  /*     0     6 */
	struct rte_ether_addr      src;                  /*     6     6 */
	uint16_t                   type;                 /*    12     2 */

	/* Bitfield combined with previous fields */

	uint32_t                   has_vlan:1;           /*    12:15  4 */

	/* XXX 31 bits hole, try to pack */

	uint32_t                   reserved:31;          /*    16: 1  4 */

	/* size: 20, cachelines: 1, members: 5 */
	/* bit holes: 1, sum bit holes: 31 bits */
	/* bit_padding: 1 bits */
	/* last cacheline: 20 bytes */
};

'has_vlan' seems to be combined with the previous fields to make up 32 bits 
together, so the 'reserved' field occupies a new 32 bits all by itself.

What about changing the struct as follows, while we can still change the ABI:
struct rte_flow_item_eth {
	struct rte_ether_addr      dst;                  /*     0     6 */
	struct rte_ether_addr      src;                  /*     6     6 */
	uint16_t                   type;                 /*    12     2 */
	uint16_t                   has_vlan:1;           /*    14:15  2 */
	uint16_t                   reserved:15;          /*    14: 0  2 */

	/* size: 16, cachelines: 1, members: 5 */
	/* last cacheline: 16 bytes */
};
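
For reference on copy size vs. item size (assuming the usual 6 + 6 + 2 byte 
Ethernet header): sizeof(struct rte_ether_hdr) is 14, so the rte_memcpy() in 
the patch copies 14 bytes with either layout; the repack above only shrinks 
the flow item itself from 20 to 16 bytes and does not change what ends up in 
hdr->buffer.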





^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH] devtools: fix x86-default env when installing
@ 2020-11-12 13:38  4% David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-11-12 13:38 UTC (permalink / raw)
  To: dev; +Cc: thomas, stable

While testing Thomas' patch on this script's verbosity, I noticed that we
load the x86-default environment after installing this target.
I did not see any problem with it, yet we should load the corresponding
environment before installing a target.

Fixes: bd253daa7717 ("devtools: fix test of ninja install")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 devtools/test-meson-builds.sh | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 469251b6ef..7b0d05ac3f 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -253,17 +253,15 @@ done
 
 # Test installation of the x86-default target, to be used for checking
 # the sample apps build using the pkg-config file for cflags and libs
+load_env cc
 build_path=$(readlink -f $builds_dir/build-x86-default)
 export DESTDIR=$build_path/install
 # No need to reinstall if ABI checks are enabled
 if [ -z "$DPDK_ABI_REF_VERSION" ]; then
 	install_target $build_path $DESTDIR
 fi
-
-load_env cc
 pc_file=$(find $DESTDIR -name libdpdk.pc)
 export PKG_CONFIG_PATH=$(dirname $pc_file):$PKG_CONFIG_PATH
-
 # if pkg-config defines the necessary flags, test building some examples
 if pkg-config --define-prefix libdpdk >/dev/null 2>&1; then
 	export PKGCONF="pkg-config --define-prefix"
-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v10 4/7] doc: update documentation to reflect new options
  @ 2020-11-10 22:55  1%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-11-10 22:55 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

Replace old option syntax -w with -a and update any wording
around blacklisting.
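
For example (purely illustrative PCI address), an EAL command line such as

    dpdk-testpmd -w 0000:02:00.0 -- -i

is now written as

    dpdk-testpmd -a 0000:02:00.0 -- -i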

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/cryptodevs/dpaa2_sec.rst           |  6 ++--
 doc/guides/cryptodevs/dpaa_sec.rst            |  6 ++--
 doc/guides/cryptodevs/qat.rst                 | 12 ++++----
 doc/guides/eventdevs/octeontx2.rst            | 20 ++++++-------
 doc/guides/freebsd_gsg/build_sample_apps.rst  |  2 +-
 doc/guides/linux_gsg/build_sample_apps.rst    |  2 +-
 doc/guides/linux_gsg/eal_args.include.rst     | 14 +++++-----
 doc/guides/linux_gsg/linux_drivers.rst        |  4 +--
 doc/guides/mempool/octeontx2.rst              |  4 +--
 doc/guides/nics/bnxt.rst                      | 18 ++++++------
 doc/guides/nics/cxgbe.rst                     | 12 ++++----
 doc/guides/nics/dpaa.rst                      |  6 ++--
 doc/guides/nics/dpaa2.rst                     |  6 ++--
 doc/guides/nics/enic.rst                      |  6 ++--
 doc/guides/nics/fail_safe.rst                 | 20 ++++++-------
 doc/guides/nics/features.rst                  |  2 +-
 doc/guides/nics/i40e.rst                      | 16 +++++------
 doc/guides/nics/ice.rst                       | 28 +++++++++++++------
 doc/guides/nics/ixgbe.rst                     |  4 +--
 doc/guides/nics/mlx4.rst                      | 18 ++++++------
 doc/guides/nics/mlx5.rst                      | 14 +++++-----
 doc/guides/nics/nfb.rst                       |  2 +-
 doc/guides/nics/octeontx2.rst                 | 22 +++++++--------
 doc/guides/nics/sfc_efx.rst                   |  2 +-
 doc/guides/nics/tap.rst                       |  2 +-
 doc/guides/nics/thunderx.rst                  |  4 +--
 .../prog_guide/env_abstraction_layer.rst      |  8 +++---
 doc/guides/prog_guide/multi_proc_support.rst  |  4 +--
 doc/guides/prog_guide/poll_mode_drv.rst       |  6 ++--
 .../prog_guide/switch_representation.rst      |  6 ++--
 doc/guides/rel_notes/release_20_11.rst        |  5 ++++
 doc/guides/sample_app_ug/bbdev_app.rst        | 14 +++++-----
 .../sample_app_ug/eventdev_pipeline.rst       |  4 +--
 doc/guides/sample_app_ug/ipsec_secgw.rst      | 12 ++++----
 doc/guides/sample_app_ug/l3_forward.rst       |  8 ++++--
 .../sample_app_ug/l3_forward_access_ctrl.rst  |  2 +-
 .../sample_app_ug/l3_forward_power_man.rst    |  3 +-
 doc/guides/sample_app_ug/vdpa.rst             |  2 +-
 doc/guides/tools/cryptoperf.rst               |  6 ++--
 doc/guides/tools/flow-perf.rst                |  2 +-
 doc/guides/tools/testregex.rst                |  2 +-
 41 files changed, 178 insertions(+), 158 deletions(-)

diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
index 080768a2e766..83565d71752d 100644
--- a/doc/guides/cryptodevs/dpaa2_sec.rst
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -134,10 +134,10 @@ Supported DPAA2 SoCs
 * LS2088A/LS2048A
 * LS1088A/LS1048A
 
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
 
-For blacklisting a DPAA2 SEC device, following commands can be used.
+The DPAA2 SEC device can be blocked with the following:
 
  .. code-block:: console
 
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index da14a68d9cff..bac82421bca2 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -82,10 +82,10 @@ Supported DPAA SoCs
 * LS1046A/LS1026A
 * LS1043A/LS1023A
 
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
 
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, following commands can be used.
 
  .. code-block:: console
 
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index 566423948f79..cf16f0350303 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -127,7 +127,7 @@ Limitations
   optimisations in the GEN3 device. And if a GCM session is initialised on a
   GEN3 device, then attached to an op sent to a GEN1/GEN2 device, it will not be
   enqueued to the device and will be marked as failed. The simplest way to
-  mitigate this is to use the bdf whitelist to avoid mixing devices of different
+  mitigate this is to use the PCI allowlist to avoid mixing devices of different
   generations in the same process if planning to use for GCM.
 * The mixed algo feature on GEN2 is not supported by all kernel drivers. Check
   the notes under the Available Kernel Drivers table below for specific details.
@@ -237,7 +237,7 @@ adjusted to the number of VFs which the QAT common code will need to handle.
         QAT VF may expose two crypto devices, sym and asym, it may happen that the
         number of devices will be bigger than MAX_DEVS and the process will show an error
         during PMD initialisation. To avoid this problem RTE_CRYPTO_MAX_DEVS may be
-        increased or -w, pci-whitelist domain:bus:devid:func option may be used.
+        increased or -a, allow domain:bus:devid:func option may be used.
 
 
 QAT compression PMD needs intermediate buffers to support Deflate compression
@@ -275,7 +275,7 @@ return 0 (thereby avoiding an MMIO) if the device is congested and number of pac
 possible to enqueue is smaller.
 To use this feature the user must set the parameter on process start as a device additional parameter::
 
-  -w 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
+  -a 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
 
 All parameters can be used with the same device regardless of order. Parameters are separated
 by comma. When the same parameter is used more than once first occurrence of the parameter
@@ -638,19 +638,19 @@ Testing
 QAT SYM crypto PMD can be tested by running the test application::
 
     cd ./<build_dir>/app/test
-    ./dpdk-test -l1 -n1 -w <your qat bdf>
+    ./dpdk-test -l1 -n1 -a <your qat bdf>
     RTE>>cryptodev_qat_autotest
 
 QAT ASYM crypto PMD can be tested by running the test application::
 
     cd ./<build_dir>/app/test
-    ./dpdk-test -l1 -n1 -w <your qat bdf>
+    ./dpdk-test -l1 -n1 -a <your qat bdf>
     RTE>>cryptodev_qat_asym_autotest
 
 QAT compression PMD can be tested by running the test application::
 
     cd ./<build_dir>/app/test
-    ./dpdk-test -l1 -n1 -w <your qat bdf>
+    ./dpdk-test -l1 -n1 -a <your qat bdf>
     RTE>>compressdev_autotest
 
 
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 242d283965f9..485a375c4f2c 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -55,7 +55,7 @@ Runtime Config Options
   upper limit for in-flight events.
   For example::
 
-    -w 0002:0e:00.0,xae_cnt=16384
+    -a 0002:0e:00.0,xae_cnt=16384
 
 - ``Force legacy mode``
 
@@ -63,7 +63,7 @@ Runtime Config Options
   single workslot mode in SSO and disable the default dual workslot mode.
   For example::
 
-    -w 0002:0e:00.0,single_ws=1
+    -a 0002:0e:00.0,single_ws=1
 
 - ``Event Group QoS support``
 
@@ -78,7 +78,7 @@ Runtime Config Options
   default.
   For example::
 
-    -w 0002:0e:00.0,qos=[1-50-50-50]
+    -a 0002:0e:00.0,qos=[1-50-50-50]
 
 - ``Selftest``
 
@@ -87,7 +87,7 @@ Runtime Config Options
   The tests are run once the vdev creation is successfully complete.
   For example::
 
-    -w 0002:0e:00.0,selftest=1
+    -a 0002:0e:00.0,selftest=1
 
 - ``TIM disable NPA``
 
@@ -96,7 +96,7 @@ Runtime Config Options
   parameter disables NPA and uses software mempool to manage chunks
   For example::
 
-    -w 0002:0e:00.0,tim_disable_npa=1
+    -a 0002:0e:00.0,tim_disable_npa=1
 
 - ``TIM modify chunk slots``
 
@@ -107,7 +107,7 @@ Runtime Config Options
   to SSO. The default value is 255 and the max value is 4095.
   For example::
 
-    -w 0002:0e:00.0,tim_chnk_slots=1023
+    -a 0002:0e:00.0,tim_chnk_slots=1023
 
 - ``TIM enable arm/cancel statistics``
 
@@ -115,7 +115,7 @@ Runtime Config Options
   event timer adapter.
   For example::
 
-    -w 0002:0e:00.0,tim_stats_ena=1
+    -a 0002:0e:00.0,tim_stats_ena=1
 
 - ``TIM limit max rings reserved``
 
@@ -125,7 +125,7 @@ Runtime Config Options
   rings.
   For example::
 
-    -w 0002:0e:00.0,tim_rings_lmt=5
+    -a 0002:0e:00.0,tim_rings_lmt=5
 
 - ``TIM ring control internal parameters``
 
@@ -135,7 +135,7 @@ Runtime Config Options
   default values.
   For Example::
 
-    -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+    -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
 
 - ``Lock NPA contexts in NDC``
 
@@ -145,7 +145,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:0e:00.0,npa_lock_mask=0xf
+      -a 0002:0e:00.0,npa_lock_mask=0xf
 
 Debugging Options
 -----------------
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 2a68f5fc3820..4fba671e4f5b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -67,7 +67,7 @@ DPDK application. Some of the EAL options for FreeBSD are as follows:
     is a list of cores to use instead of a core mask.
 
 *   ``-b <domain:bus:devid.func>``:
-    Blacklisting of ports; prevent EAL from using specified PCI device
+    Blocklisting of ports; prevent EAL from using specified PCI device
     (multiple ``-b`` options are allowed).
 
 *   ``--use-device``:
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 542246df686a..043a1dcee109 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -53,7 +53,7 @@ The EAL options are as follows:
   Number of memory channels per processor socket.
 
 * ``-b <domain:bus:devid.func>``:
-  Blacklisting of ports; prevent EAL from using specified PCI device
+  Blocklisting of ports; prevent EAL from using specified PCI device
   (multiple ``-b`` options are allowed).
 
 * ``--use-device``:
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 01afa1b42f94..dbd48ab4fafa 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -44,20 +44,20 @@ Lcore-related options
 Device-related options
 ~~~~~~~~~~~~~~~~~~~~~~
 
-*   ``-b, --pci-blacklist <[domain:]bus:devid.func>``
+*   ``-b, --block <[domain:]bus:devid.func>``
 
-    Blacklist a PCI device to prevent EAL from using it. Multiple -b options are
-    allowed.
+    Skip probing a PCI device to prevent EAL from using it.
+    Multiple -b options are allowed.
 
 .. Note::
-    PCI blacklist cannot be used with ``-w`` option.
+    PCI skip probe cannot be used with the only list ``-a`` option.
 
-*   ``-w, --pci-whitelist <[domain:]bus:devid.func>``
+*   ``-a, --allow <[domain:]bus:devid.func>``
 
-    Add a PCI device in white list.
+    Add a PCI device in to the list of probed devices.
 
 .. Note::
-    PCI whitelist cannot be used with ``-b`` option.
+    PCI only list cannot be used with the skip probe ``-b`` option.
 
 *   ``--vdev <device arguments>``
 
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 080b44955a11..ef8798569a80 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -93,11 +93,11 @@ parameter ``--vfio-vf-token``.
     3. echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
 
     4. Start the PF:
-        <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -w 86:00.0 \
+        <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -a 86:00.0 \
          --vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=pf -- -i
 
     5. Start the VF:
-        <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -w 86:02.0 \
+        <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -a 86:02.0 \
          --vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=vf0 -- -i
 
 Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 53f09a52dbb5..1272c1e72b7b 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -42,7 +42,7 @@ Runtime Config Options
   for the application.
   For example::
 
-    -w 0002:02:00.0,max_pools=512
+    -a 0002:02:00.0,max_pools=512
 
   With the above configuration, the driver will set up only 512 mempools for
   the given application to save HW resources.
@@ -61,7 +61,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,npa_lock_mask=0xf
+      -a 0002:02:00.0,npa_lock_mask=0xf
 
 Debugging Options
 ~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index ab093c3f4df6..d9a7d8793092 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -258,8 +258,8 @@ The BNXT PMD supports hardware-based packet filtering:
 Unicast MAC Filter
 ^^^^^^^^^^^^^^^^^^
 
-The application adds (or removes) MAC addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) MAC addresses to enable (or disable)
+filtering on MAC address used to accept packets.
 
 .. code-block:: console
 
@@ -269,8 +269,8 @@ whitelist filtering to accept packets.
 Multicast MAC Filter
 ^^^^^^^^^^^^^^^^^^^^
 
-Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) Multicast addresses that enable (or disable)
+filtering on multicast MAC address used to accept packets.
 
 .. code-block:: console
 
@@ -278,7 +278,7 @@ whitelist filtering to accept packets.
     testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
 
 Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+allowlist filtering to accept packets.
 
 Note that the BNXT PMD supports up to 16 MC MAC filters. if the user adds more
 than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
@@ -683,7 +683,7 @@ The feature uses a newly implemented control-plane firmware interface which
 optimizes flow insertions and deletions.
 
 This is a tech preview feature, and is disabled by default. It can be enabled
-using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
+using bnxt devargs. For ex: "-a 0000:0d:00.0,host-based-truflow=1”.
 
 Notes
 -----
@@ -745,7 +745,7 @@ when the PMD is initialized on a PF or trusted-VF. The user can specify the list
 of VF IDs of the VFs for which the representors are needed by using the
 ``devargs`` option ``representor``.::
 
-  -w DBDF,representor=[0,1,4]
+  -a DBDF,representor=[0,1,4]
 
 Note that currently hot-plugging of representor ports is not supported so all
 the required representors must be specified on the creation of the PF or the
@@ -770,12 +770,12 @@ same host domain, additional dev args have been added to the PMD.
 
 The sample command line with the new ``devargs`` looks like this::
 
-  -w 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
+  -a 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
 	rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
 
 .. code-block:: console
 
-	testpmd -l1-4 -n2 -w 0008:01:00.0,host-based-truflow=1,\
+	testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
 	representor=[0], rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
 	rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
 
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 3fa77d7458c0..f01cd65603f6 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -40,8 +40,8 @@ expose a single PCI bus address, thus, librte_net_cxgbe registers
 itself as a PCI driver that allocates one Ethernet device per detected
 port.
 
-For this reason, one cannot whitelist/blacklist a single port without
-whitelisting/blacklisting the other ports on the same device.
+For this reason, one cannot allow/block a single port without
+allowing/blocking the other ports on the same device.
 
 .. _t5-nics:
 
@@ -96,7 +96,7 @@ be passed as part of EAL arguments. For example,
 
 .. code-block:: console
 
-   dpdk-testpmd -w 02:00.4,keep_ovlan=1 -- -i
+   dpdk-testpmd -a 02:00.4,keep_ovlan=1 -- -i
 
 Common Runtime Options
 ~~~~~~~~~~~~~~~~~~~~~~
@@ -301,7 +301,7 @@ CXGBE PF Only Runtime Options
 
   .. code-block:: console
 
-     dpdk-testpmd -w 02:00.4,filtermode=0x88 -- -i
+     dpdk-testpmd -a 02:00.4,filtermode=0x88 -- -i
 
 - ``filtermask`` (default **0**)
 
@@ -328,7 +328,7 @@ CXGBE PF Only Runtime Options
 
   .. code-block:: console
 
-     dpdk-testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+     dpdk-testpmd -a 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
 
 .. _driver-compilation:
 
@@ -760,7 +760,7 @@ devices managed by librte_net_cxgbe in FreeBSD operating system.
 
    .. code-block:: console
 
-      ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0000:02:00.4 -- -i
+      ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:02:00.4 -- -i
 
    Example output:
 
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index ae1642b15ec3..917482dbe2a5 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -163,10 +163,10 @@ Manager.
   this pool.
 
 
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
 
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, following commands can be used.
 
  .. code-block:: console
 
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index c9deb53349ab..f98c31e4695e 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -503,10 +503,10 @@ which are lower than logging ``level``.
 Using ``pmd.net.dpaa2`` as log matching criteria, all PMD logs can be enabled
 which are lower than logging ``level``.
 
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
 
-For blacklisting a DPAA2 device, following commands can be used.
+For blocking a DPAA2 device, following commands can be used.
 
  .. code-block:: console
 
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index c62448768376..163ae3f47b11 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -305,7 +305,7 @@ enables overlay offload, it prints the following message on the console.
 By default, PMD enables overlay offload if hardware supports it. To disable
 it, set ``devargs`` parameter ``disable-overlay=1``. For example::
 
-    -w 12:00.0,disable-overlay=1
+    -a 12:00.0,disable-overlay=1
 
 By default, the NIC uses 4789 as the VXLAN port. The user may change
 it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
@@ -371,7 +371,7 @@ vectorized handler, take the following steps.
   PMD consider the vectorized handler when selecting the receive handler.
   For example::
 
-    -w 12:00.0,enable-avx2-rx=1
+    -a 12:00.0,enable-avx2-rx=1
 
   As the current implementation is intended for field trials, by default, the
   vectorized handler is not considered (``enable-avx2-rx=0``).
@@ -420,7 +420,7 @@ DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
 PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
 ``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
 
-    -w 12:00.0,ig-vlan-rewrite=untag
+    -a 12:00.0,ig-vlan-rewrite=untag
 
 - **SR-IOV**
 
diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst
index 27ff306b1a9b..ae9f08ec8d1d 100644
--- a/doc/guides/nics/fail_safe.rst
+++ b/doc/guides/nics/fail_safe.rst
@@ -48,7 +48,7 @@ Fail-safe command line parameters
 
   This parameter allows the user to define a sub-device. The ``<iface>`` part of
   this parameter must be a valid device definition. It follows the same format
-  provided to any ``-w`` or ``--vdev`` options.
+  provided to any ``-a`` or ``--vdev`` options.
 
   Enclosing the device definition within parentheses here allows using
   additional sub-device parameters if need be. They will be passed on to the
@@ -56,11 +56,11 @@ Fail-safe command line parameters
 
 .. note::
 
-   In case where the sub-device is also used as a whitelist device, using ``-w``
+   In case where the sub-device is also used as an allowed device, using ``-a``
    on the EAL command line, the fail-safe PMD will use the device with the
    options provided to the EAL instead of its own parameters.
 
-   When trying to use a PCI device automatically probed by the blacklist mode,
+   When trying to use a PCI device automatically probed by the command line,
    the name for the fail-safe sub-device must be the full PCI id:
    Domain:Bus:Device.Function, *i.e.* ``00:00:00.0`` instead of ``00:00.0``,
    as the second form is historically accepted by the DPDK.
@@ -111,8 +111,8 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
 #. To build a PMD and configure DPDK, refer to the document
    :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.
 
-#. Start testpmd. The sub-device ``84:00.0`` should be blacklisted from normal EAL
-   operations to avoid probing it twice, as the PCI bus is in blacklist mode.
+#. Start testpmd. The sub-device ``84:00.0`` should be blocked from normal EAL
+   operations to avoid probing it twice, as the PCI bus is in blocklist mode.
 
    .. code-block:: console
 
@@ -120,25 +120,25 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
          --vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
          -b 84:00.0 -b 00:04.0 -- -i
 
-   If the sub-device ``84:00.0`` is not blacklisted, it will be probed by the
+   If the sub-device ``84:00.0`` is not blocked, it will be probed by the
    EAL first. When the fail-safe then tries to initialize it the probe operation
    fails.
 
-   Note that PCI blacklist mode is the default PCI operating mode.
+   Note that PCI blocklist mode is the default PCI operating mode.
 
-#. Alternatively, it can be used alongside any other device in whitelist mode.
+#. Alternatively, it can be used alongside any other device in allow mode.
 
    .. code-block:: console
 
       ./<build_dir>/app/dpdk-testpmd -c 0xff -n 4 \
          --vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
-         -w 81:00.0 -- -i
+         -a 81:00.0 -- -i
 
 #. Start testpmd using a flexible device definition
 
    .. code-block:: console
 
-      ./<build_dir>/app/dpdk-testpmd -c 0xff -n 4 -w ff:ff.f \
+      ./<build_dir>/app/dpdk-testpmd -c 0xff -n 4 -a ff:ff.f \
          --vdev='net_failsafe0,exec(echo 84:00.0)' -- -i
 
 #. Start testpmd, automatically probing the device 84:00.0 and using it with
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index a4b288abcf5f..43f74e02abf3 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -261,7 +261,7 @@ Supports enabling/disabling receiving multicast frames.
 Unicast MAC filter
 ------------------
 
-Supports adding MAC addresses to enable whitelist filtering to accept packets.
+Supports adding MAC addresses to enable incoming filtering of packets.
 
 * **[implements] eth_dev_ops**: ``mac_addr_set``, ``mac_addr_add``, ``mac_addr_remove``.
 * **[implements] rte_eth_dev_data**: ``mac_addrs``.
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index 828a25988e34..ab0a6ee36e51 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -172,7 +172,7 @@ Runtime Config Options
 
   The number of reserved queue per VF is determined by its host PF. If the
   PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
-  VF can be configured with EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
+  VF can be configured with EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
   The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
   number of reserved queues per VF is 4 by default. If VF request more than
   reserved queues per VF, PF will able to allocate max to 16 queues after a VF
@@ -185,7 +185,7 @@ Runtime Config Options
   Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
   parameter ``support-multi-driver`` is introduced, for example::
 
-    -w 84:00.0,support-multi-driver=1
+    -a 84:00.0,support-multi-driver=1
 
   With the above configuration, DPDK PMD will not change global registers, and
   will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
@@ -200,7 +200,7 @@ Runtime Config Options
   port representors for on initialization of the PF PMD by passing the VF IDs of
   the VFs which are required.::
 
-  -w DBDF,representor=[0,1,4]
+  -a DBDF,representor=[0,1,4]
 
   Currently hot-plugging of representor ports is not supported so all required
   representors must be specified on the creation of the PF.
@@ -212,7 +212,7 @@ Runtime Config Options
   since it can get better perf in some real work loading cases. So ``devargs`` param
   ``use-latest-supported-vec`` is introduced, for example::
 
-  -w 84:00.0,use-latest-supported-vec=1
+  -a 84:00.0,use-latest-supported-vec=1
 
 - ``Enable validation for VF message`` (default ``not enabled``)
 
@@ -222,7 +222,7 @@ Runtime Config Options
   Format -- "maximal-message@period-seconds:ignore-seconds"
   For example::
 
-  -w 84:00.0,vf_msg_cfg=80@120:180
+  -a 84:00.0,vf_msg_cfg=80@120:180
 
 Vector RX Pre-conditions
 ~~~~~~~~~~~~~~~~~~~~~~~~
@@ -452,7 +452,7 @@ no physical uplink on the associated NIC port.
 To enable this feature, the user should pass a ``devargs`` parameter to the
 EAL, for example::
 
-    -w 84:00.0,enable_floating_veb=1
+    -a 84:00.0,enable_floating_veb=1
 
 In this configuration the PMD will use the floating VEB feature for all the
 VFs created by this PF device.
@@ -460,7 +460,7 @@ VFs created by this PF device.
 Alternatively, the user can specify which VFs need to connect to this floating
 VEB using the ``floating_veb_list`` argument::
 
-    -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
+    -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
 
 In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
 while other VFs connect to the normal VEB.
@@ -796,7 +796,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
 
 7. The command line of running l3fwd would be something like the following::
 
-      ./dpdk-l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+      ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
               -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
 
    This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 11c7420ed502..f03103704014 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -30,7 +30,7 @@ Runtime Config Options
   But if user intend to use the device without OS package, user can take ``devargs``
   parameter ``safe-mode-support``, for example::
 
-    -w 80:00.0,safe-mode-support=1
+    -a 80:00.0,safe-mode-support=1
 
   Then the driver will be initialized successfully and the device will enter Safe Mode.
   NOTE: In Safe mode, only very limited features are available, features like RSS,
@@ -41,7 +41,7 @@ Runtime Config Options
   In pipeline mode, a flow can be set at one specific stage by setting parameter
   ``priority``. Currently, we support two stages: priority = 0 or !0. Flows with
   priority 0 located at the first pipeline stage which typically be used as a firewall
-  to drop the packet on a blacklist(we called it permission stage). At this stage,
+  to drop the packet on a blocklist (we called it permission stage). At this stage,
   flow rules are created for the device's exact match engine: switch. Flows with priority
   !0 located at the second stage, typically packets are classified here and be steered to
   specific queue or queue group (we called it distribution stage), At this stage, flow
@@ -53,7 +53,19 @@ Runtime Config Options
   use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``,
   for example::
 
-    -w 80:00.0,pipeline-mode-support=1
+    -a 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+  This is a hint to the driver to select the data path that supports flow mark extraction
+  by default.
+  NOTE: This is an experimental devarg; it will be removed when any of the conditions
+  below is met.
+  1) all data paths support flow mark (currently vPMD does not)
+  2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.
+  Example::
+
+    -a 80:00.0,flow-mark-support=1
 
 - ``Protocol extraction for per queue``
 
@@ -62,8 +74,8 @@ Runtime Config Options
 
   The argument format is::
 
-      -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
-      -w 18:00.0,proto_xtr=<protocol>
+      -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+      -a 18:00.0,proto_xtr=<protocol>
 
   Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
   is used as a range separator and ``,`` is used as a single number separator.
@@ -74,14 +86,14 @@ Runtime Config Options
 
   .. code-block:: console
 
-    dpdk-testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+    dpdk-testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
 
   This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
   VLAN extraction, other queues run with no protocol extraction.
 
   .. code-block:: console
 
-    dpdk-testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+    dpdk-testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
 
   This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
   IPv6 extraction, other queues use the default VLAN extraction.
@@ -233,7 +245,7 @@ responses for the same from PF.
 
 #. Bind the VF0,  and run testpmd with 'cap=dcf' devarg::
 
-      dpdk-testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i
+      dpdk-testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i
 
 #. Monitor the VF2 interface network traffic::
 
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 1f424b38ac3d..c801dbae8146 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,7 +89,7 @@ be passed as part of EAL arguments. For example,
 
 .. code-block:: console
 
-   testpmd -w af:10.0,pflink_fullchk=1 -- -i
+   testpmd -a af:10.0,pflink_fullchk=1 -- -i
 
 - ``pflink_fullchk`` (default **0**)
 
@@ -277,7 +277,7 @@ option ``representor`` the user can specify which virtual functions to create
 port representors for on initialization of the PF PMD by passing the VF IDs of
 the VFs which are required.::
 
-  -w DBDF,representor=[0,1,4]
+  -a DBDF,representor=[0,1,4]
 
 Currently hot-plugging of representor ports is not supported so all required
 representors must be specified on the creation of the PF.
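
   As a minimal sketch of how this devargs form combines with the new allow
   option (the PCI address below is only a placeholder, not taken from this
   patch)::

      # placeholder PF address; creates representor ports for VFs 0, 1 and 4
      dpdk-testpmd -l 0-3 -n 4 -a 0000:af:00.0,representor=[0,1,4] -- -i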
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index c408ab71385b..10660ce853b4 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -24,8 +24,8 @@ Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
 bus address, thus unlike most drivers, librte_net_mlx4 registers itself as a
 PCI driver that allocates one Ethernet device per detected port.
 
-For this reason, one cannot white/blacklist a single port without also
-white/blacklisting the others on the same device.
+For this reason, one cannot block (or allow) a single port without also
+blocking (or allowing) the others on the same device.
 
 Besides its dependency on libibverbs (that implies libmlx4 and associated
 kernel support), librte_net_mlx4 relies heavily on system calls for control
@@ -381,7 +381,7 @@ devices managed by librte_net_mlx4.
       eth4
       eth5
 
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses to be used with the allow argument::
 
       {
           for intf in eth2 eth3 eth4 eth5;
@@ -389,14 +389,14 @@ devices managed by librte_net_mlx4.
               (cd "/sys/class/net/${intf}/device/" && pwd -P);
           done;
       } |
-      sed -n 's,.*/\(.*\),-w \1,p'
+      sed -n 's,.*/\(.*\),-a \1,p'
 
    Example output::
 
-      -w 0000:83:00.0
-      -w 0000:83:00.0
-      -w 0000:84:00.0
-      -w 0000:84:00.0
+      -a 0000:83:00.0
+      -a 0000:83:00.0
+      -a 0000:84:00.0
+      -a 0000:84:00.0
 
    .. note::
 
@@ -409,7 +409,7 @@ devices managed by librte_net_mlx4.
 
 #. Start testpmd with basic parameters::
 
-      testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
+      testpmd -l 8-15 -n 4 -a 0000:83:00.0 -a 0000:84:00.0 -- --rxq=2 --txq=2 -i
 
    Example output::
 
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 59b2bf4036b9..e96aca21eb9a 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1524,7 +1524,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.
       eth32
       eth33
 
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses to be used with the allow list::
 
       {
           for intf in eth2 eth3 eth4 eth5;
@@ -1532,14 +1532,14 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.
               (cd "/sys/class/net/${intf}/device/" && pwd -P);
           done;
       } |
-      sed -n 's,.*/\(.*\),-w \1,p'
+      sed -n 's,.*/\(.*\),-a \1,p'
 
    Example output::
 
-      -w 0000:05:00.1
-      -w 0000:06:00.0
-      -w 0000:06:00.1
-      -w 0000:05:00.0
+      -a 0000:05:00.1
+      -a 0000:06:00.0
+      -a 0000:06:00.1
+      -a 0000:05:00.0
 
 #. Request huge pages::
 
@@ -1547,7 +1547,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.
 
 #. Start testpmd with basic parameters::
 
-      testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
+      testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
 
    Example output::
 
diff --git a/doc/guides/nics/nfb.rst b/doc/guides/nics/nfb.rst
index ecea3ecff074..e987f331048c 100644
--- a/doc/guides/nics/nfb.rst
+++ b/doc/guides/nics/nfb.rst
@@ -63,7 +63,7 @@ products) and the device argument `timestamp=1` must be used.
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -w b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
+    ./<build_dir>/app/dpdk-testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
 
 When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the MBUFs
 containing received frames and timestamp is inserted into the `rte_mbuf` struct.
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 18566a2c6665..a4f224424ef5 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -63,7 +63,7 @@ for details.
 
    .. code-block:: console
 
-      ./<build_dir>/app/dpdk-testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+      ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
       EAL: Detected 24 lcore(s)
       EAL: Detected 1 NUMA nodes
       EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
@@ -116,7 +116,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,reta_size=256
+      -a 0002:02:00.0,reta_size=256
 
    With the above configuration, reta table of size 256 is populated.
 
@@ -127,7 +127,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,flow_max_priority=10
+      -a 0002:02:00.0,flow_max_priority=10
 
    With the above configuration, priority level was set to 10 (0-9). Max
    priority level supported is 32.
@@ -139,7 +139,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,flow_prealloc_size=4
+      -a 0002:02:00.0,flow_prealloc_size=4
 
    With the above configuration, pre alloc size was set to 4. Max pre alloc
    size supported is 32.
@@ -151,7 +151,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,max_sqb_count=64
+      -a 0002:02:00.0,max_sqb_count=64
 
    With the above configuration, each send queue's decscriptor buffer count is
    limited to a maximum of 64 buffers.
@@ -163,7 +163,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,switch_header="higig2"
+      -a 0002:02:00.0,switch_header="higig2"
 
    With the above configuration, higig2 will be enabled on that port and the
    traffic on this port should be higig2 traffic only. Supported switch header
@@ -185,7 +185,7 @@ Runtime Config Options
 
    For example to select the legacy mode(RSS tag adder as XOR)::
 
-      -w 0002:02:00.0,tag_as_xor=1
+      -a 0002:02:00.0,tag_as_xor=1
 
 - ``Max SPI for inbound inline IPsec`` (default ``1``)
 
@@ -194,7 +194,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,ipsec_in_max_spi=128
+      -a 0002:02:00.0,ipsec_in_max_spi=128
 
    With the above configuration, application can enable inline IPsec processing
    on 128 SAs (SPI 0-127).
@@ -205,7 +205,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,lock_rx_ctx=1
+      -a 0002:02:00.0,lock_rx_ctx=1
 
 - ``Lock Tx contexts in NDC cache``
 
@@ -213,7 +213,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,lock_tx_ctx=1
+      -a 0002:02:00.0,lock_tx_ctx=1
 
 .. note::
 
@@ -229,7 +229,7 @@ Runtime Config Options
 
    For example::
 
-      -w 0002:02:00.0,npa_lock_mask=0xf
+      -a 0002:02:00.0,npa_lock_mask=0xf
 
 .. _otx2_tmapi:
 
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index cc5b9f120c97..962e54389fbc 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -350,7 +350,7 @@ Per-Device Parameters
 ~~~~~~~~~~~~~~~~~~~~~
 
 The following per-device parameters can be passed via EAL PCI device
-whitelist option like "-w 02:00.0,arg1=value1,...".
+allow option like "-a 02:00.0,arg1=value1,...".
 
 Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
 boolean parameters value.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 7e44f846206c..3ce696b605d1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -191,7 +191,7 @@ following::
 
 .. Note:
 
-   Change the ``-b`` options to blacklist all of your physical ports. The
+   Change the ``-b`` options to exclude all of your physical ports. The
    following command line is all one line.
 
    Also, ``-f themes/black-yellow.theme`` is optional if the default colors
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 6f9900883495..12d43ce93e28 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -157,7 +157,7 @@ This section provides instructions to configure SR-IOV with Linux OS.
 
    .. code-block:: console
 
-      ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
+      ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0002:01:00.2 \
         -- -i --no-flush-rx \
         --port-topology=loop
 
@@ -377,7 +377,7 @@ This scheme is useful when application would like to insert vlan header without
 Example:
    .. code-block:: console
 
-      -w 0002:01:00.2,skip_data_bytes=8
+      -a 0002:01:00.2,skip_data_bytes=8
 
 Limitations
 -----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index a470fd7f29bb..1f30e13b8bf3 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -407,12 +407,12 @@ device having emitted a Device Removal Event. In such case, calling
 callback. Care must be taken not to close the device from the interrupt handler
 context. It is necessary to reschedule such closing operation.
 
-Blacklisting
-~~~~~~~~~~~~
+Block list
+~~~~~~~~~~
 
-The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
+The EAL PCI device block list functionality can be used to mark certain NIC ports as unavailable,
 so they are ignored by the DPDK.
-The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
+The ports to be blocked are identified using the PCIe* description (Domain:Bus:Device.Function).
 
 Misc Functions
 ~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a84083b96c8a..57fd7425a15d 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -30,7 +30,7 @@ after a primary process has already configured the hugepage shared memory for th
     Secondary processes should run alongside primary process with same DPDK version.
 
     Secondary processes which requires access to physical devices in Primary process, must
-    be passed with the same whitelist and blacklist options.
+    be passed with the same allow and block options.
 
 To support these two process types, and other multi-process setups described later,
 two additional command-line parameters are available to the EAL:
@@ -131,7 +131,7 @@ can use).
 .. note::
 
     Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
-    Any network ports being used by one process should be blacklisted in every other process.
+    Any network ports being used by one process should be blocked by every other process.
 
 Running Multiple Independent Groups of DPDK Applications
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 86e0a141e6c7..239ec820eaf5 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
   this argument allows user to specify which switch ports to enable port
   representors for.::
 
-   -w DBDF,representor=0
-   -w DBDF,representor=[0,4,6,9]
-   -w DBDF,representor=[0-31]
+   -a DBDF,representor=0
+   -a DBDF,representor=[0,4,6,9]
+   -a DBDF,representor=[0-31]
 
 Note: PMDs are not required to support the standard device arguments and users
 should consult the relevant PMD documentation to see support devargs.
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index cc1d0d7569cb..07ba12bea67e 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -59,9 +59,9 @@ which can be thought as a software "patch panel" front-end for applications.
 
 ::
 
-   -w pci:dbdf,representor=0
-   -w pci:dbdf,representor=[0-3]
-   -w pci:dbdf,representor=[0,5-11]
+   -a pci:dbdf,representor=0
+   -a pci:dbdf,representor=[0-3]
+   -a pci:dbdf,representor=[0,5-11]
 
 - As virtual devices, they may be more limited than their physical
   counterparts, for instance by exposing only a subset of device
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6bbd6ee93922..5da3a9cd05c5 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -644,6 +644,11 @@ API Changes
 * sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
   from ``struct rte_sched_subport_params``.
 
+* eal: The definitions related to including and excluding devices
+  have been changed from blacklist/whitelist to block/allow list.
+  There are compatibility macros and command line mapping to accept
+  the old values, but applications and scripts are strongly encouraged
+  to migrate to the new names.
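
  A rough sketch of that mapping in practice, with placeholder PCI addresses
  (the ``-b`` short option keeps its name; only the include option is
  renamed)::

     # old spelling, still accepted through the compatibility mapping
     dpdk-testpmd -w 0000:81:00.0 -w 0000:81:00.1 -- -i
     # new spelling
     dpdk-testpmd -a 0000:81:00.0 -a 0000:81:00.1 -- -i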
 
 ABI Changes
 -----------
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 7c5a45b72afb..b2af9a0755d6 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -61,19 +61,19 @@ This means that HW baseband device/s must be bound to a DPDK driver or
 a SW baseband device/s (virtual BBdev) must be created (using --vdev).
 
 To run the application in linux environment with the turbo_sw baseband device
-using the whitelisted port running on 1 encoding lcore and 1 decoding lcore
+using the allow option for the PCI device, running on 1 encoding lcore and 1 decoding lcore,
 issue the command:
 
 .. code-block:: console
 
-    $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -w <NIC0PCIADDR> \
+    $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -a <NIC0PCIADDR> \
     -c 0x38 --socket-mem=2,2 --file-prefix=bbdev -- -e 0x10 -d 0x20
 
 where, NIC0PCIADDR is the PCI address of the Rx port
 
 This command creates one virtual bbdev devices ``baseband_turbo_sw`` where the
-device gets linked to a corresponding ethernet port as whitelisted by
-the parameter -w.
+device gets linked to a corresponding ethernet port as allowed by
+the parameter -a.
 3 cores are allocated to the application, and assigned as:
 
  - core 3 is the main and used to print the stats live on screen,
@@ -93,20 +93,20 @@ Using Packet Generator with baseband device sample application
 To allow the bbdev sample app to do the loopback, an influx of traffic is required.
 This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
 it will print the transmitted along with the looped-back traffic on Rx ports.
-Executing the command below will generate traffic on the two whitelisted ethernet
+Executing the command below will generate traffic on the two allowed ethernet
 ports.
 
 .. code-block:: console
 
     $ ./pktgen-3.4.0/app/x86_64-native-linux-gcc/pktgen -c 0x3 \
-    --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+    --socket-mem=1,1 --file-prefix=pg -a <NIC1PCIADDR> -- -m 1.0 -P
 
 where:
 
 * ``-c COREMASK``: A hexadecimal bitmask of cores to run on
 * ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
 * ``--file-prefix``: Prefix for hugepage filenames
-* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-a <NIC1PCIADDR>``: Add a PCI device to the allow list. The argument format is <[domain:]bus:devid.func>.
 * ``-m <string>``: Matrix for mapping ports to logical cores.
 * ``-P``: PROMISCUOUS mode
 
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index b4fc587a09e2..41ee8b7ee3f4 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -46,8 +46,8 @@ these settings is shown below:
 
 .. code-block:: console
 
-    ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 /
-    -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+    ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 \
+    -e4 -a FF00 -s4 -n0 -c32 -W1000 -D
 
 The application has some sanity checking built-in, so if there is a function
 (e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 1f37dccf8bb7..faf00c75d135 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -323,15 +323,15 @@ This means that if the application is using a single core and both hardware
 and software crypto devices are detected, hardware devices will be used.
 
 A way to achieve the case where you want to force the use of virtual crypto
-devices is to whitelist the Ethernet devices needed and therefore implicitly
-blacklisting all hardware crypto devices.
+devices is to only use the Ethernet devices needed (via the allow flag),
+thereby implicitly blocking all hardware crypto devices.
 
 For example, something like the following command line:
 
 .. code-block:: console
 
     ./<build_dir>/examples/dpdk-ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
-            -w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
+            -a 81:00.0 -a 81:00.1 -a 81:00.2 -a 81:00.3 \
             --vdev "crypto_aesni_mb" --vdev "crypto_null" \
 	    -- \
             -p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
@@ -929,13 +929,13 @@ The user must setup the following environment variables:
 
 *   ``REMOTE_IFACE``: interface name for the test-port on the DUT.
 
-*   ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-w <pci-id>')
+*   ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-a <pci-id>')
 
 Also the user can optionally setup:
 
 *   ``SGW_LCORE``: lcore to run ipsec-secgw on (default value is 0)
 
-*   ``CRYPTO_DEV``: crypto device to be used ('-w <pci-id>'). If none specified
+*   ``CRYPTO_DEV``: crypto device to be used ('-a <pci-id>'). If none specified
     appropriate vdevs will be created by the script
 
 Scripts can be used for multiple test scenarios. To check all available
@@ -1023,4 +1023,4 @@ Available options:
 *   ``-h`` Show usage.
 
 If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
\ No newline at end of file
+list of available modes please refer to run_test.sh.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 7acbd7404e3b..e7875f8dcd7e 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -138,17 +138,19 @@ Following is the sample command:
 
 .. code-block:: console
 
-    ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+    ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x3 --eventq-sched=ordered
 
 or
 
 .. code-block:: console
 
-    ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+    ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> \
+		-- -p 0x03 --mode=eventdev --eventq-sched=ordered
 
 In this command:
 
-*   -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform.
+*   The -a option adds the event device supported by the platform to the allow list.
+    The syntax used to indicate this device may vary based on the platform.
 
 *   The --mode option defines PMD to be used for packet I/O.
 
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 4a96800ec648..eee5d8185061 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -18,7 +18,7 @@ The application loads two types of rules at initialization:
 
 *   Route information rules, which are used for L3 forwarding
 
-*   Access Control List (ACL) rules that blacklist (or block) packets with a specific characteristic
+*   Access Control List (ACL) rules that block packets with a specific characteristic
 
 When packets are received from a port,
 the application extracts the necessary information from the TCP/IP header of the received packet and
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index d7e1dc581328..831f2bf58f99 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -378,7 +378,8 @@ See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Program
 
 .. code-block:: console
 
-    ./<build_dir>/examples/dpdk-l3fwd-power -l xxx   -n 4   -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+    ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -a 0000:xx:00.0 -a 0000:xx:00.1 \
+    	-- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
 
 Where,
 
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index a8bedbab5321..cb9c4f216986 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -52,7 +52,7 @@ Take IFCVF driver for example:
 .. code-block:: console
 
         ./dpdk-vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
-                -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+                -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
                 -- --interactive
 
 .. note::
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 29340d94e801..73cabf0098d3 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -394,7 +394,7 @@ Call application for performance throughput test of single Aesni MB PMD
 for cipher encryption aes-cbc and auth generation sha1-hmac,
 one million operations, burst size 32, packet size 64::
 
-   dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -w 0000:00:00.0 --
+   dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -a 0000:00:00.0 --
    --ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
    --cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
    sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 12
@@ -404,7 +404,7 @@ Call application for performance latency test of two Aesni MB PMD executed
 on two cores for cipher encryption aes-cbc, ten operations in silent mode::
 
    dpdk-test-crypto-perf -l 4-7 --vdev crypto_aesni_mb1
-   --vdev crypto_aesni_mb2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
+   --vdev crypto_aesni_mb2 -a 0000:00:00.0 -- --devtype crypto_aesni_mb
    --cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
    --cipher-op encrypt --optype cipher-only --silent
    --ptest latency --total-ops 10
@@ -414,7 +414,7 @@ for cipher encryption aes-gcm and auth generation aes-gcm,ten operations
 in silent mode, test vector provide in file "test_aes_gcm.data"
 with packet verification::
 
-   dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -w 0000:00:00.0 --
+   dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -a 0000:00:00.0 --
    --devtype crypto_openssl --aead-algo aes-gcm --aead-key-sz 16
    --aead-iv-sz 16 --aead-op encrypt --aead-aad-sz 16 --digest-sz 16
    --optype aead --silent --ptest verify --total-ops 10
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 018358ac1719..634009cceea9 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -65,7 +65,7 @@ with a ``--`` separator:
 
 .. code-block:: console
 
-	sudo ./dpdk-test-flow_perf -n 4 -w 08:00.0 -- --ingress --ether --ipv4 --queue --rules-count=1000000
+	sudo ./dpdk-test-flow_perf -n 4 -a 08:00.0 -- --ingress --ether --ipv4 --queue --rules-count=1000000
 
 The command line options are:
 
diff --git a/doc/guides/tools/testregex.rst b/doc/guides/tools/testregex.rst
index 4317aab533e2..112b2bb773e7 100644
--- a/doc/guides/tools/testregex.rst
+++ b/doc/guides/tools/testregex.rst
@@ -70,4 +70,4 @@ The data file, will be used as a source data for the RegEx to work on.
 
 The tool has a number of command line options. Here is the sample command line::
 
-   ./dpdk-test-regex -w 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
+   ./dpdk-test-regex -a 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
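
For scripts that still use the old spellings, a quick scan along the lines of
the following can help locate the options to migrate (the pattern and the
directory name are illustrative only)::

   # illustrative pattern and directory; adjust to the scripts being migrated
   grep -rnE -e '(-w |--pci-whitelist|--pci-blacklist)' scripts/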
-- 
2.27.0

