* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
2020-11-23 13:50 ` Andrew Rybchenko
@ 2020-11-23 14:17 ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-11-23 14:17 UTC (permalink / raw)
To: Andrew Rybchenko, Ray Kinsella, Neil Horman; +Cc: dev, Thomas Monjalon, Ori Kam
On 11/23/2020 1:50 PM, Andrew Rybchenko wrote:
> On 11/23/20 4:40 PM, Ferruh Yigit wrote:
>> Proposing to replace protocol header fields in the ``rte_flow_item_*``
>> structures with the protocol structs, like:
>>
>> Current ``struct rte_flow_item_eth``,
>>
>> struct rte_flow_item_eth {
>> struct rte_ether_addr dst;
>> struct rte_ether_addr src;
>> rte_be16_t type;
>> uint32_t has_vlan:1;
>> uint32_t reserved:31;
>> }
>>
>> will become
>>
>> struct rte_flow_item_eth {
>> struct rte_ether_hdr hdr;
>> uint32_t has_vlan:1;
>> uint32_t reserved:31;
>> }
>>
>> This is both for documenting the intention and to be sure
>> ``rte_flow_item_*`` always starts with the complete protocol header.
>>
>> Many ``rte_flow_item_*`` structs already embed the protocol
>> struct; the target is to convert all of them to this usage.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
> a minor note below
>
>> ---
>> Cc: Thomas Monjalon <thomas@monjalon.net>
>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Cc: Ori Kam <orika@nvidia.com>
>> ---
>> doc/guides/rel_notes/deprecation.rst | 7 +++++++
>> 1 file changed, 7 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index 96986fabd598..a2fa0c196472 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -88,6 +88,13 @@ Deprecation Notices
>> will be limited to maximum 256 queues.
>> Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
>>
>> +* ethdev: The flow API matching pattern structures, ``struct rte_flow_item_*``,
>> + should start with the relevant protocol header.
>> + Some matching pattern structures implement this by duplicating the protocol
>> + header fields in the struct. To clarify the intention and to be sure the
>> + protocol header is intact, those fields will be replaced with the relevant
>> + protocol header struct. The target is the v21.02 release; this should not change the ABI.
>> +
>> * sched: To allow more traffic classes, flexible mapping of pipe queues to
>> traffic classes, and subport level configuration of pipes and queues
>> changes will be made to macros, data structures and API functions defined
>>
>
> Just want to highlight that even the API could be kept intact by using
> an unnamed union for hdr and an unnamed structure for the existing
> protocol header fields.
>
Then we may never clean the protocol header fields out of it.
Yes, this will impact the user, but I believe the impact is small and trivial,
so I prefer replacing the fields with the protocol struct.
* Re: [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
2020-11-23 13:40 [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes Ferruh Yigit
@ 2020-11-23 13:50 ` Andrew Rybchenko
2020-11-23 14:17 ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-11-23 13:50 UTC (permalink / raw)
To: Ferruh Yigit, Ray Kinsella, Neil Horman; +Cc: dev, Thomas Monjalon, Ori Kam
On 11/23/20 4:40 PM, Ferruh Yigit wrote:
> Proposing to replace protocol header fields in the ``rte_flow_item_*``
> structures with the protocol structs, like:
>
> Current ``struct rte_flow_item_eth``,
>
> struct rte_flow_item_eth {
> struct rte_ether_addr dst;
> struct rte_ether_addr src;
> rte_be16_t type;
> uint32_t has_vlan:1;
> uint32_t reserved:31;
> }
>
> will become
>
> struct rte_flow_item_eth {
> struct rte_ether_hdr hdr;
> uint32_t has_vlan:1;
> uint32_t reserved:31;
> }
>
> This is both for documenting the intention and to be sure
> ``rte_flow_item_*`` always starts with the complete protocol header.
>
> Many ``rte_flow_item_*`` structs already embed the protocol
> struct; the target is to convert all of them to this usage.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
a minor note below
> ---
> Cc: Thomas Monjalon <thomas@monjalon.net>
> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Cc: Ori Kam <orika@nvidia.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 96986fabd598..a2fa0c196472 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -88,6 +88,13 @@ Deprecation Notices
> will be limited to maximum 256 queues.
> Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
>
> +* ethdev: The flow API matching pattern structures, ``struct rte_flow_item_*``,
> + should start with the relevant protocol header.
> + Some matching pattern structures implement this by duplicating the protocol
> + header fields in the struct. To clarify the intention and to be sure the
> + protocol header is intact, those fields will be replaced with the relevant
> + protocol header struct. The target is the v21.02 release; this should not change the ABI.
> +
> * sched: To allow more traffic classes, flexible mapping of pipe queues to
> traffic classes, and subport level configuration of pipes and queues
> changes will be made to macros, data structures and API functions defined
>
Just want to highlight that even the API could be kept intact by using
an unnamed union for hdr and an unnamed structure for the existing
protocol header fields.
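A minimal sketch of that layout (keeping the existing field names; anonymous
members need C11, e.g. DPDK's RTE_STD_C11 marker):

    struct rte_flow_item_eth {
            RTE_STD_C11
            union {
                    struct {
                            struct rte_ether_addr dst;
                            struct rte_ether_addr src;
                            rte_be16_t type;
                    };
                    struct rte_ether_hdr hdr;
            };
            uint32_t has_vlan:1;
            uint32_t reserved:31;
    };
Existing accesses to dst/src/type would keep compiling, while hdr becomes the
documented way in.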
* [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes
@ 2020-11-23 13:40 Ferruh Yigit
2020-11-23 13:50 ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-11-23 13:40 UTC (permalink / raw)
To: Ray Kinsella, Neil Horman
Cc: Ferruh Yigit, dev, Thomas Monjalon, Andrew Rybchenko, Ori Kam
Proposing to replace protocol header fields in the ``rte_flow_item_*``
structures with the protocol structs, like:
Current ``struct rte_flow_item_eth``,
struct rte_flow_item_eth {
struct rte_ether_addr dst;
struct rte_ether_addr src;
rte_be16_t type;
uint32_t has_vlan:1;
uint32_t reserved:31;
}
will become
struct rte_flow_item_eth {
struct rte_ether_hdr hdr;
uint32_t has_vlan:1;
uint32_t reserved:31;
}
This is both for documenting the intention and to be sure
``rte_flow_item_*`` always starts with the complete protocol header.
Many ``rte_flow_item_*`` structs already embed the protocol
struct; the target is to convert all of them to this usage.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Thomas Monjalon <thomas@monjalon.net>
Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: Ori Kam <orika@nvidia.com>
---
doc/guides/rel_notes/deprecation.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 96986fabd598..a2fa0c196472 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -88,6 +88,13 @@ Deprecation Notices
will be limited to maximum 256 queues.
Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
+* ethdev: The flow API matching pattern structures, ``struct rte_flow_item_*``,
+ should start with the relevant protocol header.
+ Some matching pattern structures implement this by duplicating the protocol
+ header fields in the struct. To clarify the intention and to be sure the
+ protocol header is intact, those fields will be replaced with the relevant
+ protocol header struct. The target is the v21.02 release; this should not change the ABI.
+
* sched: To allow more traffic classes, flexible mapping of pipe queues to
traffic classes, and subport level configuration of pipes and queues
changes will be made to macros, data structures and API functions defined
--
2.26.2
* Re: [dpdk-dev] [dpdk-techboard] Minutes of Technical Board Meeting, 2020-11-18
2020-11-23 10:00 ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
@ 2020-11-23 11:16 ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2020-11-23 11:16 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: Bruce Richardson, dev, techboard
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Monday, November 23, 2020 11:00 AM
>
> 23/11/2020 10:30, Morten Brørup:
> > Bruce,
> >
> > Here's my input as a developer of hardware appliances. It is my
> opinion, and as such may contradict the trend towards making DPDK a
> library, rather than a development kit.
> >
> > > DPDK build configuration - future enhancements
> > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > There are multiple requests (sometimes controversial) for new
> > > abilities to add into the DPDK build system.
> > > In particular, requests from a few different teams:
> > > - add ability to enable/disable individual apps/libs
> > > - override some build settings for specific libs/drivers
> >
> > My wish list, in prioritized order:
> >
> > 1. The ability to remove features to reduce complexity - and thus the
> likelihood of bugs!
> >
> > Remember to consider this in application context.
> >
> > Background: Our previous firmware used the Linux kernel, and some
> loadable modules. We ran into a lot of extremely rare and unexpected
> cases where the Linux kernel network stack did something completely
> unusual, and our firmware needed to consider all these exceptional
> cases. This is one of the key reasons we switched to DPDK - the fast
> path libraries are clean and simple, and don't do anything we didn't
> ask them to do.
> >
> > DPDK example: If support for segmented packets is considered
> "required" by DPDK libraries and drivers, is it also required for
> applications to support segmented packets? If the application doesn’t
> need segmented packets, can it safely assume that no DPDK libraries or
> drivers create segmented packets under any circumstances? If support
> for segmented packets is a compile time option, there is an implicit
> guarantee that they don't appear.
>
> The primary rule in DPDK is that the application remains in control.
> If the application does not call the API function for a feature,
> it won't be enabled. So no need to remove the unused libraries.
I think that this principle - the application remaining in control - is extremely important for DPDK, and we must always remember this principle when adding features to DPDK.
However, being able to disable some features at compile time elevates the certainty that these features are not being unexpectedly used from "trust" to "absolute certainty".
The DPDK core and libraries are growing in complexity, and I am starting to worry about this. Once bitten twice shy.
By the way, I consider the Dynamic MBUF concept a great enhancement in this area. The cleanup part of the Dynamic MBUF patch set made non-essential fields in the mbuf truly optional.
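As an aside, for readers not familiar with it: registering an application-private
field takes only a few lines. A minimal sketch (the field name is made up):

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static int app_field_offset; /* set once at init */

    static int
    app_register_field(void)
    {
            static const struct rte_mbuf_dynfield desc = {
                    .name = "app_example_field", /* hypothetical name */
                    .size = sizeof(uint32_t),
                    .align = __alignof__(uint32_t),
            };

            /* returns the byte offset inside struct rte_mbuf, or -1 */
            app_field_offset = rte_mbuf_dynfield_register(&desc);
            return app_field_offset < 0 ? -1 : 0;
    }

    /* later, per packet: */
    /* *RTE_MBUF_DYNFIELD(m, app_field_offset, uint32_t *) = 42; */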
>
>
> > 2. The ability to remove/tweak features to improve *application*
> performance in specific environments would be good.
> >
> > E.g. removing support for multiple mbuf pools would free up an mbuf
> field (m->pool) for application use.
> > So would removing support for segmented packets (m->nb_segs,
> > m->next).
> >
> > Both of these modifications would also reduce complexity, although
> they would increase source code complexity in all the libraries and
> drivers needing to support a multidimensional matrix of features. (I
> highly doubt that all libraries support the combination of all features
> today... I remember having to argue strongly for the DPDK eBPF library
> to support reading data inside segmented packets.)
>
> Because code must remain simple, the mbuf layout is fixed
> (except dynamic fields).
The mbuf layout could remain fixed (so vector implementations can rely on the layout), but the removed fields would become unused and available for application use instead, thus improving the application performance.
In the example of removing support for multiple mbuf pools, the functions free()'ing mbufs in the DPDK mbuf library and the DPDK drivers would be simpler, thus improving the performance. Removing support for segmented packets would also allow simpler (and thus higher performing) implementations of a few DPDK core functions.
It is somewhat difficult to formulate in writing, but I will try rephrasing my original point: Tweaking DPDK can provide performance improvements in the application itself, not only in the DPDK libraries/drivers.
>
>
> > 3. Removing cruft that has no effect on performance or similar is
> "nice to have".
> >
> > E.g. drivers for hardware that we do not use.
> >
> > > As a first step to move forward - produce design doc of current
> > > build system.
> > > Discuss further enhancements based on that doc.
> >
> > > While planning changes to the build system backward compatibility
> > > with 20.11 should be considered.
> >
> > Backward compatibility is not a high priority for us. It is an
> extremely rare event for us to upgrade to a new version of any external
> software (Linux Kernel, DPDK and other libraries) or build tools,
> because we consider switching any of it to another version high effort
> (e.g. it requires extensive testing). In this perspective, having to
> change some details in the build system is a relatively small effort.
> >
> > With this said, the documentation of each DPDK release should include
> a chapter describing what an application developer should do differently
> than with the previous release. E.g. the Release Note enumerates the
> key modifications as bullet points, but it is not always obvious how
> that affects an application being developed. (DPDK generally has great
> documentation, but is somewhat lacking in this area.)
> >
> > I know that ABI Stability is supposed to make much of this go away,
> but DPDK is clearly not there yet.
> >
> > > AR to Bruce to create initial version of the DD.
> > >
> >
> > The following may be scope creep, so just consider it me thinking out
> loud:
> >
> > Consider a general design document in the form of a "life of an
> mbuf" document, describing how mbufs are pre-allocated for driver RX
> descriptors, and then handed over to the application through the receive
> function, and then possibly going through defragmentation and
> reordering libraries, and then handed over to another driver's transmit
> function, which uses the mbufs to set up TX descriptors, and after
> transmission frees the mbufs to their original pool, where they are
> ultimately allocated again by a driver to refill its RX descriptor
> pool.
> >
> > The document can start off with the simple case with a single non-
> segmented, non-fragmented, in-order packet. And then it can be extended
> with variations, e.g. adding the description of segmented packets would
> explain how the m->nb_segs and m->next are being used when the packet
> is handled by the drivers and libraries.
> >
> > In the context of being able to enable/disable libraries and
> features, the purpose of this document would be to help show
> interdependencies.
>
> I agree we need this kind of doc.
> It could be part of the prog guide.
> Feel free to draft a skeleton.
>
>
>
* Re: [dpdk-dev] [dpdk-techboard] Minutes of Technical Board Meeting, 2020-11-18
2020-11-23 9:30 ` Morten Brørup
@ 2020-11-23 10:00 ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
2020-11-23 11:16 ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-11-23 10:00 UTC (permalink / raw)
To: Morten Brørup; +Cc: Bruce Richardson, dev, techboard
23/11/2020 10:30, Morten Brørup:
> Bruce,
>
> Here's my input as a developer of hardware appliances. It is my opinion, and as such may contradict the trend towards making DPDK a library, rather than a development kit.
>
> > DPDK build configuration - future enhancements
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > There are multiple requests (sometimes controversial) for new abilities
> > to add into the DPDK build system.
> > In particular, requests from a few different teams:
> > - add ability to enable/disable individual apps/libs
> > - override some build settings for specific libs/drivers
>
> My wish list, in prioritized order:
>
> 1. The ability to remove features to reduce complexity - and thus the likelihood of bugs!
>
> Remember to consider this in application context.
>
> Background: Our previous firmware used the Linux kernel, and some loadable modules. We ran into a lot of extremely rare and unexpected cases where the Linux kernel network stack did something completely unusual, and our firmware needed to consider all these exceptional cases. This is one of the key reasons we switched to DPDK - the fast path libraries are clean and simple, and don't do anything we didn't ask them to do.
>
> DPDK example: If support for segmented packets is considered "required" by DPDK libraries and drivers, is it also required for applications to support segmented packets? If the application doesn’t need segmented packets, can it safely assume that no DPDK libraries or drivers create segmented packets under any circumstances? If support for segmented packets is a compile-time option, there is an implicit guarantee that they don't appear.
The primary rule in DPDK is that the application remains in control.
If the application does not call the API function for a feature,
it won't be enabled. So no need to remove the unused libraries.
> 2. The ability to remove/tweak features to improve *application* performance in specific environments would be good.
>
> E.g. removing support for multiple mbuf pools would free up an mbuf field (m->pool) for application use.
> So would removing support for segmented packets (m->nb_segs, m->next).
>
> Both of these modifications would also reduce complexity, although they would increase source code complexity in all the libraries and drivers needing to support a multidimensional matrix of features. (I highly doubt that all libraries support the combination of all features today... I remember having to argue strongly for the DPDK eBPF library to support reading data inside segmented packets.)
Because code must remain simple, the mbuf layout is fixed
(except dynamic fields).
> 3. Removing cruft that has no effect on performance or similar is "nice to have".
>
> E.g. drivers for hardware that we do not use.
>
> > As a first step to move forward - produce design doc of current build
> > system.
> > Discuss further enhancements based on that doc.
>
> > While planning changes to the build system backward compatibility
> > with 20.11 should be considered.
>
> Backward compatibility is not a high priority for us. It is an extremely rare event for us to upgrade to a new version of any external software (Linux Kernel, DPDK and other libraries) or build tools, because we consider switching any of it to another version high effort (e.g. it requires extensive testing). In this perspective, having to change some details in the build system is a relatively small effort.
>
> With this said, the documentation of each DPDK release should include a chapter describing what an application developer should do differently than with the previous release. E.g. the Release Note enumerates the key modifications as bullet points, but it is not always obvious how that affects an application being developed. (DPDK generally has great documentation, but is somewhat lacking in this area.)
>
> I know that ABI Stability is supposed to make much of this go away, but DPDK is clearly not there yet.
>
> > AR to Bruce to create initial version of the DD.
> >
>
> The following may be scope creep, so just consider it me thinking out loud:
>
> Consider a general design document in the form of a "life of an mbuf" document, describing how mbufs are pre-allocated for driver RX descriptors, and then handed over to the application through the receive function, and then possibly going through defragmentation and reordering libraries, and then handed over to another driver's transmit function, which uses the mbufs to set up TX descriptors, and after transmission frees the mbufs to their original pool, where they are ultimately allocated again by a driver to refill its RX descriptor pool.
>
> The document can start off with the simple case with a single non-segmented, non-fragmented, in-order packet. And then it can be extended with variations, e.g. adding the description of segmented packets would explain how the m->nb_segs and m->next are being used when the packet is handled by the drivers and libraries.
>
> In the context of being able to enable/disable libraries and features, the purpose of this document would be to help show interdependencies.
I agree we need this kind of doc.
It could be part of the prog guide.
Feel free to draft a skeleton.
* Re: [dpdk-dev] Minutes of Technical Board Meeting, 2020-11-18
@ 2020-11-23 9:30 ` Morten Brørup
2020-11-23 10:00 ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-23 9:30 UTC (permalink / raw)
To: Bruce Richardson, dev; +Cc: techboard
Bruce,
Here's my input as a developer of hardware appliances. It is my opinion, and as such may contradict the trend towards making DPDK a library, rather than a development kit.
> DPDK build configuration - future enhancements
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> There are multiple requests (sometimes controversial) for new abilities
> to add into the DPDK build system.
> In particular, requests from a few different teams:
> - add ability to enable/disable individual apps/libs
> - override some build settings for specific libs/drivers
My wish list, in prioritized order:
1. The ability to remove features to reduce complexity - and thus the likelihood of bugs!
Remember to consider this in application context.
Background: Our previous firmware used the Linux kernel, and some loadable modules. We ran into a lot of extremely rare and unexpected cases where the Linux kernel network stack did something completely unusual, and our firmware needed to consider all these exceptional cases. This is one of the key reasons we switched to DPDK - the fast path libraries are clean and simple, and don't do anything we didn't ask them to do.
DPDK example: If support for segmented packets is considered "required" by DPDK libraries and drivers, is it also required for applications to support segmented packets? If the application doesn’t need segmented packets, can it safely assume that no DPDK libraries or drivers create segmented packets under any circumstances? If support for segmented packets is a compile-time option, there is an implicit guarantee that they don't appear.
2. The ability to remove/tweak features to improve *application* performance in specific environments would be good.
E.g. removing support for multiple mbuf pools would free up an mbuf field (m->pool) for application use.
So would removing support for segmented packets (m->nb_segs, m->next); see the sketch below this list.
Both of these modifications would also reduce complexity, although they would increase source code complexity in all the libraries and drivers needing to support a multidimensional matrix of features. (I highly doubt that all libraries support the combination of all features today... I remember having to argue strongly for the DPDK eBPF library to support reading data inside segmented packets.)
3. Removing cruft that has no effect on performance or similar is "nice to have".
E.g. drivers for hardware that we do not use.
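To illustrate the complexity mentioned under point 2: any code that must cope
with segmented packets ends up walking the chain, roughly like this minimal
sketch:

    #include <rte_mbuf.h>

    /* sum the payload bytes across all segments of one packet
     * (the same value m->pkt_len holds on the first segment) */
    static uint32_t
    count_bytes(const struct rte_mbuf *m)
    {
            uint32_t bytes = 0;

            while (m != NULL) {
                    bytes += rte_pktmbuf_data_len(m);
                    m = m->next;
            }
            return bytes;
    }

With a single-segment guarantee, all such loops collapse to one
rte_pktmbuf_data_len() call.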
> As a first step to move forward - produce design doc of current build
> system.
> Discuss further enhancements based on that doc.
> While planning changes to the build system backward compatibility
> with 20.11 should be considered.
Backward compatibility is not a high priority for us. It is an extremely rare event for us to upgrade to a new version of any external software (Linux Kernel, DPDK and other libraries) or build tools, because we consider switching any of it to another version high effort (e.g. it requires extensive testing). In this perspective, having to change some details in the build system is a relatively small effort.
With this said, the documentation of each DPDK release should include a chapter describing what an application developer should do differently than with the previous release. E.g. the Release Note enumerates the key modifications as bullet points, but it is not always obvious how that affects an application being developed. (DPDK generally has great documentation, but is somewhat lacking in this area.)
I know that ABI Stability is supposed to make much of this go away, but DPDK is clearly not there yet.
> AR to Bruce to create initial version of the DD.
>
The following may be scope creep, so just consider it me thinking out loud:
Consider a general design document in the form of a "life of an mbuf" document, describing how mbufs are pre-allocated for driver RX descriptors, and then handed over to the application through the receive function, and then possibly going through defragmentation and reordering libraries, and then handed over to another driver's transmit function, which uses the mbufs to set up TX descriptors, and after transmission frees the mbufs to their original pool, where they are ultimately allocated again by a driver to refill its RX descriptor pool.
The document can start off with the simple case with a single non-segmented, non-fragmented, in-order packet. And then it can be extended with variations, e.g. adding the description of segmented packets would explain how the m->nb_segs and m->next are being used when the packet is handled by the drivers and libraries.
In the context of being able to enable/disable libraries and features, the purpose of this document would be to help show interdependencies.
Med venlig hilsen / kind regards
- Morten Brørup
* Re: [dpdk-dev] [PATCH 4/5] net/iavf: fix protocol size for virtchnl copy
2020-11-16 16:23 ` Ferruh Yigit
@ 2020-11-22 13:28 ` Jack Min
0 siblings, 0 replies; 200+ results
From: Jack Min @ 2020-11-22 13:28 UTC (permalink / raw)
To: Ferruh Yigit, Xiaoyu Min, Jingjing Wu, Beilei Xing
Cc: dev, NBU-Contact-Thomas Monjalon, Andrew Rybchenko, Ori Kam, Dekel Peled
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Tuesday, November 17, 2020 00:23
> To: Xiaoyu Min <jackmin@mellanox.com>; Jingjing Wu <jingjing.wu@intel.com>;
> Beilei Xing <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Jack Min <jackmin@nvidia.com>; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> <arybchenko@solarflare.com>; Ori Kam <orika@nvidia.com>; Dekel Peled
> <dekelp@nvidia.com>
> Subject: Re: [dpdk-dev] [PATCH 4/5] net/iavf: fix protocol size for virtchnl copy
>
> On 11/16/2020 7:55 AM, Xiaoyu Min wrote:
> > From: Xiaoyu Min <jackmin@nvidia.com>
> >
> > The rte_flow_item_vlan items are refined.
> > The structs do not exactly represent the packet bits captured on the
> > wire anymore, so only the real header should be copied instead of the whole struct.
> >
> > Replace the rte_flow_item_* with the existing corresponding rte_*_hdr.
> >
> > Fixes: 09315fc83861 ("ethdev: add VLAN attributes to ethernet and VLAN
> items")
> >
> > Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
> > ---
> > drivers/net/iavf/iavf_fdir.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
> > index d683a468c1..7054bde0b9 100644
> > --- a/drivers/net/iavf/iavf_fdir.c
> > +++ b/drivers/net/iavf/iavf_fdir.c
> > @@ -541,7 +541,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
> > VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ETH, ETHERTYPE);
> >
> > rte_memcpy(hdr->buffer,
> > - eth_spec, sizeof(*eth_spec));
> > + eth_spec, sizeof(struct rte_ether_hdr));
>
> This requires that 'struct rte_flow_item_eth' have 'struct rte_ether_hdr' as
> its first element, and I suspect this usage exists in a few more locations, but
> I wonder if this assumption is real and documented somewhere?
> I am not talking just about 'struct rte_flow_item_eth', but about all
> 'rte_flow_item_*'...
>
I think this is not documented and this assumption is not real.
I've created one ticket on Bugzilla (https://bugs.dpdk.org/show_bug.cgi?id=581) to track this.
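If the hdr-first layout is adopted, the assumption could also be made explicit
with a compile-time guard, e.g. (a sketch, valid only once the 'hdr' field from
the deprecation notice exists):

    #include <stddef.h>
    #include <rte_flow.h>

    /* the virtchnl buffer copy is only safe if the flow item
     * starts with the protocol header */
    _Static_assert(offsetof(struct rte_flow_item_eth, hdr) == 0,
            "rte_flow_item_eth must start with rte_ether_hdr");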
>
>
> btw, while checking 'struct rte_flow_item_eth', pahole shows it is using
> 20 bytes, and I suspect this is not the intention with the reserved field:
>
> struct rte_flow_item_eth {
> struct rte_ether_addr dst; /* 0 6 */
> struct rte_ether_addr src; /* 6 6 */
> uint16_t type; /* 12 2 */
>
> /* Bitfield combined with previous fields */
>
> uint32_t has_vlan:1; /* 12:15 4 */
>
> /* XXX 31 bits hole, try to pack */
>
> uint32_t reserved:31; /* 16: 1 4 */
>
> /* size: 20, cachelines: 1, members: 5 */
> /* bit holes: 1, sum bit holes: 31 bits */
> /* bit_padding: 1 bits */
> /* last cacheline: 20 bytes */
> };
>
> 'has_vlan' seems to be combined with the previous fields to make up 32 bits
> together. So the 'reserved' field occupies a new 32 bits all by itself.
>
> What about changing the struct as follows, while we can change the ABI:
> struct rte_flow_item_eth {
> struct rte_ether_addr dst; /* 0 6 */
> struct rte_ether_addr src; /* 6 6 */
> uint16_t type; /* 12 2 */
> uint16_t has_vlan:1; /* 14:15 2 */
> uint16_t reserved:15; /* 14: 0 2 */
>
> /* size: 16, cachelines: 1, members: 5 */
> /* last cacheline: 16 bytes */
> };
>
Well, we probably need to discuss this in the next release.
It's too late to change this API at this moment.
-Jack
* [dpdk-dev] [PATCH v1 1/1] build: alias default build as generic
@ 2020-11-20 12:27 Juraj Linkeš
0 siblings, 0 replies; 200+ results
From: Juraj Linkeš @ 2020-11-20 12:27 UTC (permalink / raw)
To: thomas, bruce.richardson, Honnappa.Nagarahalli; +Cc: dev, Juraj Linkeš
The current machine='default' build name is not descriptive. The actual
default build is machine='native'. Add an alternative string which does
the same build and better describes what we're building:
machine='generic'. Leave machine='default' for backwards compatibility.
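Intended usage, shown here only for illustration (both forms select the same
per-arch baseline):

    meson -Dmachine=generic build
    meson -Dmachine=default build # kept for backwards compatibility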
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
config/arm/meson.build | 5 +++--
config/meson.build | 13 +++++++------
devtools/test-meson-builds.sh | 12 ++++++------
doc/guides/prog_guide/build-sdk-meson.rst | 4 ++--
meson_options.txt | 2 +-
5 files changed, 19 insertions(+), 17 deletions(-)
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 42b4e43c7..d4066ade8 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -1,12 +1,13 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2017 Intel Corporation.
# Copyright(c) 2017 Cavium, Inc
+# Copyright(c) 2020 PANTHEON.tech s.r.o.
# for checking defines we need to use the correct compiler flags
march_opt = '-march=@0@'.format(machine)
arm_force_native_march = false
-arm_force_default_march = (machine == 'default')
+arm_force_generic_march = (machine == 'generic')
flags_common_default = [
# Accelarate rte_memcpy. Be sure to run unit test (memcpy_perf_autotest)
@@ -148,7 +149,7 @@ else
cmd_generic = ['generic', '', '', 'default', '']
cmd_output = cmd_generic # Set generic by default
machine_args = [] # Clear previous machine args
- if arm_force_default_march and not meson.is_cross_build()
+ if arm_force_generic_march and not meson.is_cross_build()
machine = impl_generic
impl_pn = 'default'
elif not meson.is_cross_build()
diff --git a/config/meson.build b/config/meson.build
index a29693b88..3db2f55e0 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -70,21 +70,22 @@ else
machine = get_option('machine')
endif
-# machine type 'default' is special, it defaults to the per arch agreed common
-# minimal baseline needed for DPDK.
+# machine type 'generic' is special, it selects the per arch agreed common
+# minimal baseline needed for DPDK. Machine type 'default' is also supported
+# with the same meaning for backwards compatibility.
# That might not be the most optimized, but the most portable version while
# still being able to support the CPU features required for DPDK.
# This can be bumped up by the DPDK project, but it can never be an
# invariant like 'native'
-if machine == 'default'
+if machine == 'default' or machine == 'generic'
if host_machine.cpu_family().startswith('x86')
- # matches the old pre-meson build systems default
+ # matches the old pre-meson build systems generic machine
machine = 'corei7'
elif host_machine.cpu_family().startswith('arm')
machine = 'armv7-a'
elif host_machine.cpu_family().startswith('aarch')
- # arm64 manages defaults in config/arm/meson.build
- machine = 'default'
+ # arm64 manages generic config in config/arm/meson.build
+ machine = 'generic'
elif host_machine.cpu_family().startswith('ppc')
machine = 'power8'
endif
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 3ce49368c..11aa9bf11 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -209,11 +209,11 @@ done
# test compilation with minimal x86 instruction set
# Set the install path for libraries to "lib" explicitly to prevent problems
# with pkg-config prefixes if installed in "lib/x86_64-linux-gnu" later.
-default_machine='nehalem'
-if ! check_cc_flags "-march=$default_machine" ; then
- default_machine='corei7'
+generic_machine='nehalem'
+if ! check_cc_flags "-march=$generic_machine" ; then
+ generic_machine='corei7'
fi
-build build-x86-default cc -Dlibdir=lib -Dmachine=$default_machine $use_shared
+build build-x86-generic cc -Dlibdir=lib -Dmachine=$generic_machine $use_shared
# 32-bit with default compiler
if check_cc_flags '-m32' ; then
@@ -253,10 +253,10 @@ for f in $srcdir/config/ppc/ppc* ; do
build build-$(basename $f | cut -d'-' -f-2) $f $use_shared
done
-# Test installation of the x86-default target, to be used for checking
+# Test installation of the x86-generic target, to be used for checking
# the sample apps build using the pkg-config file for cflags and libs
load_env cc
-build_path=$(readlink -f $builds_dir/build-x86-default)
+build_path=$(readlink -f $builds_dir/build-x86-generic)
export DESTDIR=$build_path/install
# No need to reinstall if ABI checks are enabled
if [ -z "$DPDK_ABI_REF_VERSION" ]; then
diff --git a/doc/guides/prog_guide/build-sdk-meson.rst b/doc/guides/prog_guide/build-sdk-meson.rst
index 3429e2647..c7e12eedf 100644
--- a/doc/guides/prog_guide/build-sdk-meson.rst
+++ b/doc/guides/prog_guide/build-sdk-meson.rst
Project-specific options are passed using -Doption=value::
meson -Denable_docs=true fullbuild # build and install docs
- meson -Dmachine=default # use builder-independent baseline -march
+ meson -Dmachine=generic # use builder-independent baseline -march
meson -Ddisable_drivers=event/*,net/tap # disable tap driver and all
# eventdev PMDs for a smaller build
@@ -114,7 +114,7 @@ Examples of setting some of the same options using meson configure::
re-scan from meson.
.. note::
- machine=default uses a config that works on all supported architectures
+ machine=generic uses a config that works on all supported architectures
regardless of the capabilities of the machine where the build is happening.
As well as those settings taken from ``meson configure``, other options
diff --git a/meson_options.txt b/meson_options.txt
index e384e6dbb..bb4c0279e 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -21,7 +21,7 @@ option('kernel_dir', type: 'string', value: '',
option('lib_musdk_dir', type: 'string', value: '',
description: 'path to the MUSDK library installation directory')
option('machine', type: 'string', value: 'native',
- description: 'set the target machine type')
+ description: 'set the target machine type. Special values: "generic" is a build usable on all machines of the build machine architecture, "native" lets the compiler pick the architecture of the build machine.')
option('max_ethports', type: 'integer', value: 32,
description: 'maximum number of Ethernet devices')
option('max_lcores', type: 'integer', value: 128,
--
2.20.1
* [dpdk-dev] [RFC] remove unused functions
@ 2020-11-19 3:52 Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-11-19 3:52 UTC (permalink / raw)
To: Jerin Jacob, Cristian Dumitrescu, Hemant Agrawal, Sachin Saxena,
Ray Kinsella, Neil Horman, Rosen Xu, Jingjing Wu, Beilei Xing,
Nithin Dabilpuram, Ajit Khaparde, Raveendra Padasalagi,
Vikas Gupta, Gagandeep Singh, Somalapuram Amaranath, Akhil Goyal,
Jay Zhou, Timothy McDaniel, Liang Ma, Peter Mccarthy,
Shepard Siegel, Ed Czeck, John Miller, Igor Russkikh,
Pavel Belous, Rasesh Mody, Shahed Shaikh, Somnath Kotur,
Chas Williams, Min Hu (Connor),
Rahul Lakkireddy, Jeff Guo, Haiyue Wang, Marcin Wojtas,
Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin, Igor Chauskin,
Qi Zhang, Xiao Wang, Qiming Yang, Alfredo Cardigliano,
Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko, Zyta Szpak,
Liron Himi, Stephen Hemminger, K. Y. Srinivasan, Haiyang Zhang,
Long Li, Heinrich Kuhn, Harman Kalra, Kiran Kumar K,
Andrew Rybchenko, Jasvinder Singh, Jiawen Wu, Jian Wang,
Tianfei zhang, Ori Kam, Guy Kaneti, Anatoly Burakov,
Maxime Coquelin, Chenbo Xia
Cc: Ferruh Yigit, dev
Remove unused functions, as reported by cppcheck.
This is an easy way to remove clutter; since the code is already in the
git repo, the functions can be added back when needed.
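For reference, the list was produced with cppcheck's unused-function check,
along the lines of (exact invocation assumed):

    cppcheck --enable=unusedFunction <source directories>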
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
app/test-eventdev/parser.c | 88 -
app/test-eventdev/parser.h | 6 -
app/test/test_table_pipeline.c | 36 -
drivers/bus/dpaa/base/fman/fman_hw.c | 182 -
drivers/bus/dpaa/base/fman/netcfg_layer.c | 11 -
drivers/bus/dpaa/base/qbman/bman.c | 34 -
drivers/bus/dpaa/base/qbman/bman_driver.c | 16 -
drivers/bus/dpaa/base/qbman/process.c | 94 -
drivers/bus/dpaa/base/qbman/qman.c | 778 ----
drivers/bus/dpaa/base/qbman/qman_priv.h | 9 -
drivers/bus/dpaa/dpaa_bus.c | 20 -
drivers/bus/dpaa/include/fsl_bman.h | 15 -
drivers/bus/dpaa/include/fsl_fman.h | 28 -
drivers/bus/dpaa/include/fsl_qman.h | 307 --
drivers/bus/dpaa/include/fsl_usd.h | 11 -
drivers/bus/dpaa/include/netcfg.h | 6 -
drivers/bus/dpaa/rte_dpaa_bus.h | 13 -
drivers/bus/dpaa/version.map | 10 -
drivers/bus/fslmc/fslmc_bus.c | 19 -
drivers/bus/fslmc/mc/dpbp.c | 141 -
drivers/bus/fslmc/mc/dpci.c | 320 --
drivers/bus/fslmc/mc/dpcon.c | 241 --
drivers/bus/fslmc/mc/dpdmai.c | 144 -
drivers/bus/fslmc/mc/dpio.c | 191 -
drivers/bus/fslmc/mc/fsl_dpbp.h | 20 -
drivers/bus/fslmc/mc/fsl_dpci.h | 49 -
drivers/bus/fslmc/mc/fsl_dpcon.h | 37 -
drivers/bus/fslmc/mc/fsl_dpdmai.h | 20 -
drivers/bus/fslmc/mc/fsl_dpio.h | 26 -
drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c | 7 -
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 3 -
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 2 -
.../fslmc/qbman/include/fsl_qbman_portal.h | 463 ---
drivers/bus/fslmc/qbman/qbman_debug.c | 5 -
drivers/bus/fslmc/qbman/qbman_portal.c | 437 ---
drivers/bus/fslmc/rte_fslmc.h | 10 -
drivers/bus/fslmc/version.map | 6 -
drivers/bus/ifpga/ifpga_common.c | 23 -
drivers/bus/ifpga/ifpga_common.h | 3 -
drivers/common/dpaax/dpaa_of.c | 27 -
drivers/common/dpaax/dpaa_of.h | 5 -
drivers/common/dpaax/dpaax_iova_table.c | 39 -
drivers/common/dpaax/dpaax_iova_table.h | 2 -
drivers/common/dpaax/version.map | 1 -
drivers/common/iavf/iavf_common.c | 425 ---
drivers/common/iavf/iavf_prototype.h | 17 -
drivers/common/octeontx2/otx2_mbox.c | 13 -
drivers/common/octeontx2/otx2_mbox.h | 1 -
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 19 -
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 3 -
drivers/crypto/bcmfs/bcmfs_vfio.c | 24 -
drivers/crypto/bcmfs/bcmfs_vfio.h | 4 -
drivers/crypto/caam_jr/caam_jr_pvt.h | 1 -
drivers/crypto/caam_jr/caam_jr_uio.c | 28 -
drivers/crypto/ccp/ccp_dev.c | 65 -
drivers/crypto/ccp/ccp_dev.h | 8 -
drivers/crypto/dpaa2_sec/mc/dpseci.c | 401 --
drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h | 52 -
drivers/crypto/virtio/virtio_pci.c | 13 -
drivers/crypto/virtio/virtio_pci.h | 5 -
drivers/event/dlb/dlb_priv.h | 2 -
drivers/event/dlb/dlb_xstats.c | 7 -
drivers/event/dlb2/dlb2_priv.h | 2 -
drivers/event/dlb2/dlb2_xstats.c | 7 -
drivers/event/opdl/opdl_ring.c | 210 --
drivers/event/opdl/opdl_ring.h | 236 --
drivers/net/ark/ark_ddm.c | 13 -
drivers/net/ark/ark_ddm.h | 1 -
drivers/net/ark/ark_pktchkr.c | 52 -
drivers/net/ark/ark_pktchkr.h | 3 -
drivers/net/ark/ark_pktdir.c | 22 -
drivers/net/ark/ark_pktdir.h | 3 -
drivers/net/ark/ark_pktgen.c | 27 -
drivers/net/ark/ark_pktgen.h | 2 -
drivers/net/ark/ark_udm.c | 15 -
drivers/net/ark/ark_udm.h | 2 -
drivers/net/atlantic/hw_atl/hw_atl_b0.c | 14 -
drivers/net/atlantic/hw_atl/hw_atl_b0.h | 2 -
drivers/net/atlantic/hw_atl/hw_atl_llh.c | 318 --
drivers/net/atlantic/hw_atl/hw_atl_llh.h | 153 -
drivers/net/atlantic/hw_atl/hw_atl_utils.c | 36 -
drivers/net/atlantic/hw_atl/hw_atl_utils.h | 4 -
drivers/net/bnx2x/ecore_sp.c | 17 -
drivers/net/bnx2x/ecore_sp.h | 2 -
drivers/net/bnx2x/elink.c | 1367 -------
drivers/net/bnx2x/elink.h | 57 -
drivers/net/bnxt/tf_core/bitalloc.c | 156 -
drivers/net/bnxt/tf_core/bitalloc.h | 26 -
drivers/net/bnxt/tf_core/stack.c | 25 -
drivers/net/bnxt/tf_core/stack.h | 12 -
drivers/net/bnxt/tf_core/tf_core.c | 241 --
drivers/net/bnxt/tf_core/tf_core.h | 81 -
drivers/net/bnxt/tf_core/tf_msg.c | 40 -
drivers/net/bnxt/tf_core/tf_msg.h | 31 -
drivers/net/bnxt/tf_core/tf_session.c | 33 -
drivers/net/bnxt/tf_core/tf_session.h | 16 -
drivers/net/bnxt/tf_core/tf_shadow_tbl.c | 53 -
drivers/net/bnxt/tf_core/tf_shadow_tbl.h | 14 -
drivers/net/bnxt/tf_core/tf_tcam.c | 7 -
drivers/net/bnxt/tf_core/tf_tcam.h | 17 -
drivers/net/bnxt/tf_core/tfp.c | 27 -
drivers/net/bnxt/tf_core/tfp.h | 4 -
drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c | 78 -
drivers/net/bnxt/tf_ulp/ulp_port_db.c | 31 -
drivers/net/bnxt/tf_ulp/ulp_port_db.h | 14 -
drivers/net/bnxt/tf_ulp/ulp_utils.c | 11 -
drivers/net/bnxt/tf_ulp/ulp_utils.h | 3 -
drivers/net/bonding/eth_bond_private.h | 4 -
drivers/net/bonding/rte_eth_bond.h | 38 -
drivers/net/bonding/rte_eth_bond_api.c | 39 -
drivers/net/bonding/rte_eth_bond_pmd.c | 22 -
drivers/net/cxgbe/base/common.h | 5 -
drivers/net/cxgbe/base/t4_hw.c | 41 -
drivers/net/dpaa/fmlib/fm_vsp.c | 19 -
drivers/net/dpaa/fmlib/fm_vsp_ext.h | 3 -
drivers/net/dpaa2/mc/dpdmux.c | 725 ----
drivers/net/dpaa2/mc/dpni.c | 818 +----
drivers/net/dpaa2/mc/dprtc.c | 365 --
drivers/net/dpaa2/mc/fsl_dpdmux.h | 108 -
drivers/net/dpaa2/mc/fsl_dpni.h | 134 -
drivers/net/dpaa2/mc/fsl_dprtc.h | 57 -
drivers/net/e1000/base/e1000_82542.c | 97 -
drivers/net/e1000/base/e1000_82543.c | 78 -
drivers/net/e1000/base/e1000_82543.h | 4 -
drivers/net/e1000/base/e1000_82571.c | 35 -
drivers/net/e1000/base/e1000_82571.h | 1 -
drivers/net/e1000/base/e1000_82575.c | 298 --
drivers/net/e1000/base/e1000_82575.h | 8 -
drivers/net/e1000/base/e1000_api.c | 530 ---
drivers/net/e1000/base/e1000_api.h | 40 -
drivers/net/e1000/base/e1000_base.c | 78 -
drivers/net/e1000/base/e1000_base.h | 1 -
drivers/net/e1000/base/e1000_ich8lan.c | 266 --
drivers/net/e1000/base/e1000_ich8lan.h | 3 -
drivers/net/e1000/base/e1000_mac.c | 14 -
drivers/net/e1000/base/e1000_mac.h | 1 -
drivers/net/e1000/base/e1000_manage.c | 192 -
drivers/net/e1000/base/e1000_manage.h | 2 -
drivers/net/e1000/base/e1000_nvm.c | 129 -
drivers/net/e1000/base/e1000_nvm.h | 5 -
drivers/net/e1000/base/e1000_phy.c | 201 -
drivers/net/e1000/base/e1000_phy.h | 4 -
drivers/net/e1000/base/e1000_vf.c | 19 -
drivers/net/e1000/base/e1000_vf.h | 1 -
drivers/net/ena/base/ena_com.c | 222 --
drivers/net/ena/base/ena_com.h | 144 -
drivers/net/ena/base/ena_eth_com.c | 11 -
drivers/net/ena/base/ena_eth_com.h | 2 -
drivers/net/fm10k/base/fm10k_api.c | 104 -
drivers/net/fm10k/base/fm10k_api.h | 11 -
drivers/net/fm10k/base/fm10k_tlv.c | 183 -
drivers/net/fm10k/base/fm10k_tlv.h | 1 -
drivers/net/i40e/base/i40e_common.c | 2989 ++-------------
drivers/net/i40e/base/i40e_dcb.c | 43 -
drivers/net/i40e/base/i40e_dcb.h | 3 -
drivers/net/i40e/base/i40e_diag.c | 146 -
drivers/net/i40e/base/i40e_diag.h | 30 -
drivers/net/i40e/base/i40e_lan_hmc.c | 264 --
drivers/net/i40e/base/i40e_lan_hmc.h | 6 -
drivers/net/i40e/base/i40e_nvm.c | 988 -----
drivers/net/i40e/base/i40e_prototype.h | 202 -
drivers/net/i40e/base/meson.build | 1 -
drivers/net/iavf/iavf.h | 2 -
drivers/net/iavf/iavf_vchnl.c | 72 -
drivers/net/ice/base/ice_acl.c | 108 -
drivers/net/ice/base/ice_acl.h | 13 -
drivers/net/ice/base/ice_common.c | 2084 ++---------
drivers/net/ice/base/ice_common.h | 70 -
drivers/net/ice/base/ice_dcb.c | 161 -
drivers/net/ice/base/ice_dcb.h | 11 -
drivers/net/ice/base/ice_fdir.c | 262 --
drivers/net/ice/base/ice_fdir.h | 16 -
drivers/net/ice/base/ice_flex_pipe.c | 103 -
drivers/net/ice/base/ice_flex_pipe.h | 4 -
drivers/net/ice/base/ice_flow.c | 207 --
drivers/net/ice/base/ice_flow.h | 15 -
drivers/net/ice/base/ice_nvm.c | 200 -
drivers/net/ice/base/ice_nvm.h | 8 -
drivers/net/ice/base/ice_sched.c | 1440 +-------
drivers/net/ice/base/ice_sched.h | 78 -
drivers/net/ice/base/ice_switch.c | 1646 +--------
drivers/net/ice/base/ice_switch.h | 62 -
drivers/net/igc/base/igc_api.c | 598 ---
drivers/net/igc/base/igc_api.h | 41 -
drivers/net/igc/base/igc_base.c | 78 -
drivers/net/igc/base/igc_base.h | 1 -
drivers/net/igc/base/igc_hw.h | 3 -
drivers/net/igc/base/igc_i225.c | 159 -
drivers/net/igc/base/igc_i225.h | 4 -
drivers/net/igc/base/igc_mac.c | 853 -----
drivers/net/igc/base/igc_mac.h | 22 -
drivers/net/igc/base/igc_manage.c | 262 --
drivers/net/igc/base/igc_manage.h | 4 -
drivers/net/igc/base/igc_nvm.c | 679 ----
drivers/net/igc/base/igc_nvm.h | 16 -
drivers/net/igc/base/igc_osdep.c | 25 -
drivers/net/igc/base/igc_phy.c | 3256 +----------------
drivers/net/igc/base/igc_phy.h | 49 -
drivers/net/ionic/ionic.h | 2 -
drivers/net/ionic/ionic_dev.c | 39 -
drivers/net/ionic/ionic_dev.h | 4 -
drivers/net/ionic/ionic_lif.c | 11 -
drivers/net/ionic/ionic_lif.h | 1 -
drivers/net/ionic/ionic_main.c | 33 -
drivers/net/ionic/ionic_rx_filter.c | 14 -
drivers/net/ionic/ionic_rx_filter.h | 1 -
drivers/net/mlx5/mlx5.h | 1 -
drivers/net/mlx5/mlx5_utils.c | 21 -
drivers/net/mlx5/mlx5_utils.h | 25 -
drivers/net/mvneta/mvneta_ethdev.c | 18 -
drivers/net/netvsc/hn_rndis.c | 31 -
drivers/net/netvsc/hn_rndis.h | 1 -
drivers/net/netvsc/hn_var.h | 3 -
drivers/net/netvsc/hn_vf.c | 25 -
drivers/net/nfp/nfpcore/nfp_cpp.h | 213 --
drivers/net/nfp/nfpcore/nfp_cppcore.c | 218 --
drivers/net/nfp/nfpcore/nfp_mip.c | 6 -
drivers/net/nfp/nfpcore/nfp_mip.h | 1 -
drivers/net/nfp/nfpcore/nfp_mutex.c | 93 -
drivers/net/nfp/nfpcore/nfp_nsp.c | 41 -
drivers/net/nfp/nfpcore/nfp_nsp.h | 16 -
drivers/net/nfp/nfpcore/nfp_nsp_cmds.c | 79 -
drivers/net/nfp/nfpcore/nfp_nsp_eth.c | 206 --
drivers/net/nfp/nfpcore/nfp_resource.c | 12 -
drivers/net/nfp/nfpcore/nfp_resource.h | 7 -
drivers/net/nfp/nfpcore/nfp_rtsym.c | 34 -
drivers/net/nfp/nfpcore/nfp_rtsym.h | 4 -
drivers/net/octeontx/base/octeontx_bgx.c | 54 -
drivers/net/octeontx/base/octeontx_bgx.h | 2 -
drivers/net/octeontx/base/octeontx_pkivf.c | 22 -
drivers/net/octeontx/base/octeontx_pkivf.h | 1 -
drivers/net/octeontx2/otx2_ethdev.c | 26 -
drivers/net/octeontx2/otx2_ethdev.h | 3 -
drivers/net/octeontx2/otx2_ethdev_debug.c | 55 -
drivers/net/octeontx2/otx2_flow.h | 2 -
drivers/net/octeontx2/otx2_flow_utils.c | 18 -
drivers/net/pfe/base/pfe.h | 12 -
drivers/net/pfe/pfe_hal.c | 144 -
drivers/net/pfe/pfe_hif_lib.c | 20 -
drivers/net/pfe/pfe_hif_lib.h | 1 -
drivers/net/qede/base/ecore.h | 3 -
drivers/net/qede/base/ecore_cxt.c | 229 --
drivers/net/qede/base/ecore_cxt.h | 27 -
drivers/net/qede/base/ecore_dcbx.c | 266 --
drivers/net/qede/base/ecore_dcbx_api.h | 27 -
drivers/net/qede/base/ecore_dev.c | 306 --
drivers/net/qede/base/ecore_dev_api.h | 127 -
drivers/net/qede/base/ecore_hw.c | 16 -
drivers/net/qede/base/ecore_hw.h | 10 -
drivers/net/qede/base/ecore_init_fw_funcs.c | 616 ----
drivers/net/qede/base/ecore_init_fw_funcs.h | 227 --
drivers/net/qede/base/ecore_int.c | 193 -
drivers/net/qede/base/ecore_int.h | 13 -
drivers/net/qede/base/ecore_int_api.h | 60 -
drivers/net/qede/base/ecore_iov_api.h | 469 ---
drivers/net/qede/base/ecore_l2.c | 103 -
drivers/net/qede/base/ecore_l2_api.h | 24 -
drivers/net/qede/base/ecore_mcp.c | 1121 +-----
drivers/net/qede/base/ecore_mcp.h | 37 -
drivers/net/qede/base/ecore_mcp_api.h | 449 ---
drivers/net/qede/base/ecore_sp_commands.c | 89 -
drivers/net/qede/base/ecore_sp_commands.h | 21 -
drivers/net/qede/base/ecore_sriov.c | 767 ----
drivers/net/qede/base/ecore_vf.c | 48 -
drivers/net/qede/base/ecore_vf_api.h | 40 -
drivers/net/qede/qede_debug.c | 532 ---
drivers/net/qede/qede_debug.h | 97 -
drivers/net/sfc/sfc_kvargs.c | 37 -
drivers/net/sfc/sfc_kvargs.h | 2 -
drivers/net/softnic/parser.c | 218 --
drivers/net/softnic/parser.h | 10 -
.../net/softnic/rte_eth_softnic_cryptodev.c | 15 -
.../net/softnic/rte_eth_softnic_internals.h | 28 -
drivers/net/softnic/rte_eth_softnic_thread.c | 183 -
drivers/net/txgbe/base/txgbe_eeprom.c | 72 -
drivers/net/txgbe/base/txgbe_eeprom.h | 2 -
drivers/raw/ifpga/base/opae_eth_group.c | 25 -
drivers/raw/ifpga/base/opae_eth_group.h | 1 -
drivers/raw/ifpga/base/opae_hw_api.c | 212 --
drivers/raw/ifpga/base/opae_hw_api.h | 36 -
drivers/raw/ifpga/base/opae_i2c.c | 12 -
drivers/raw/ifpga/base/opae_i2c.h | 4 -
drivers/raw/ifpga/base/opae_ifpga_hw_api.c | 99 -
drivers/raw/ifpga/base/opae_ifpga_hw_api.h | 15 -
drivers/regex/mlx5/mlx5_regex.h | 2 -
drivers/regex/mlx5/mlx5_regex_fastpath.c | 25 -
drivers/regex/mlx5/mlx5_rxp.c | 45 -
.../regex/octeontx2/otx2_regexdev_hw_access.c | 58 -
.../regex/octeontx2/otx2_regexdev_hw_access.h | 2 -
drivers/regex/octeontx2/otx2_regexdev_mbox.c | 28 -
drivers/regex/octeontx2/otx2_regexdev_mbox.h | 3 -
examples/ip_pipeline/cryptodev.c | 8 -
examples/ip_pipeline/cryptodev.h | 3 -
examples/ip_pipeline/link.c | 21 -
examples/ip_pipeline/link.h | 3 -
examples/ip_pipeline/parser.c | 202 -
examples/ip_pipeline/parser.h | 7 -
examples/pipeline/obj.c | 21 -
examples/pipeline/obj.h | 3 -
lib/librte_eal/linux/eal_memory.c | 8 -
lib/librte_vhost/fd_man.c | 15 -
lib/librte_vhost/fd_man.h | 2 -
302 files changed, 833 insertions(+), 38856 deletions(-)
delete mode 100644 drivers/net/i40e/base/i40e_diag.c
delete mode 100644 drivers/net/i40e/base/i40e_diag.h
diff --git a/app/test-eventdev/parser.c b/app/test-eventdev/parser.c
index 24f1855e9a..131f7383d9 100644
--- a/app/test-eventdev/parser.c
+++ b/app/test-eventdev/parser.c
@@ -37,44 +37,6 @@ get_hex_val(char c)
}
}
-int
-parser_read_arg_bool(const char *p)
-{
- p = skip_white_spaces(p);
- int result = -EINVAL;
-
- if (((p[0] == 'y') && (p[1] == 'e') && (p[2] == 's')) ||
- ((p[0] == 'Y') && (p[1] == 'E') && (p[2] == 'S'))) {
- p += 3;
- result = 1;
- }
-
- if (((p[0] == 'o') && (p[1] == 'n')) ||
- ((p[0] == 'O') && (p[1] == 'N'))) {
- p += 2;
- result = 1;
- }
-
- if (((p[0] == 'n') && (p[1] == 'o')) ||
- ((p[0] == 'N') && (p[1] == 'O'))) {
- p += 2;
- result = 0;
- }
-
- if (((p[0] == 'o') && (p[1] == 'f') && (p[2] == 'f')) ||
- ((p[0] == 'O') && (p[1] == 'F') && (p[2] == 'F'))) {
- p += 3;
- result = 0;
- }
-
- p = skip_white_spaces(p);
-
- if (p[0] != '\0')
- return -EINVAL;
-
- return result;
-}
-
int
parser_read_uint64(uint64_t *value, const char *p)
{
@@ -115,24 +77,6 @@ parser_read_uint64(uint64_t *value, const char *p)
return 0;
}
-int
-parser_read_int32(int32_t *value, const char *p)
-{
- char *next;
- int32_t val;
-
- p = skip_white_spaces(p);
- if (!isdigit(*p))
- return -EINVAL;
-
- val = strtol(p, &next, 10);
- if (p == next)
- return -EINVAL;
-
- *value = val;
- return 0;
-}
-
int
parser_read_uint64_hex(uint64_t *value, const char *p)
{
@@ -169,22 +113,6 @@ parser_read_uint32(uint32_t *value, const char *p)
return 0;
}
-int
-parser_read_uint32_hex(uint32_t *value, const char *p)
-{
- uint64_t val = 0;
- int ret = parser_read_uint64_hex(&val, p);
-
- if (ret < 0)
- return ret;
-
- if (val > UINT32_MAX)
- return -ERANGE;
-
- *value = val;
- return 0;
-}
-
int
parser_read_uint16(uint16_t *value, const char *p)
{
@@ -201,22 +129,6 @@ parser_read_uint16(uint16_t *value, const char *p)
return 0;
}
-int
-parser_read_uint16_hex(uint16_t *value, const char *p)
-{
- uint64_t val = 0;
- int ret = parser_read_uint64_hex(&val, p);
-
- if (ret < 0)
- return ret;
-
- if (val > UINT16_MAX)
- return -ERANGE;
-
- *value = val;
- return 0;
-}
-
int
parser_read_uint8(uint8_t *value, const char *p)
{
diff --git a/app/test-eventdev/parser.h b/app/test-eventdev/parser.h
index 673ff22d78..94856e66e3 100644
--- a/app/test-eventdev/parser.h
+++ b/app/test-eventdev/parser.h
@@ -28,20 +28,14 @@ skip_digits(const char *src)
return i;
}
-int parser_read_arg_bool(const char *p);
-
int parser_read_uint64(uint64_t *value, const char *p);
int parser_read_uint32(uint32_t *value, const char *p);
int parser_read_uint16(uint16_t *value, const char *p);
int parser_read_uint8(uint8_t *value, const char *p);
int parser_read_uint64_hex(uint64_t *value, const char *p);
-int parser_read_uint32_hex(uint32_t *value, const char *p);
-int parser_read_uint16_hex(uint16_t *value, const char *p);
int parser_read_uint8_hex(uint8_t *value, const char *p);
-int parser_read_int32(int32_t *value, const char *p);
-
int parse_hex_string(char *src, uint8_t *dst, uint32_t *size);
int parse_tokenize_string(char *string, char *tokens[], uint32_t *n_tokens);
diff --git a/app/test/test_table_pipeline.c b/app/test/test_table_pipeline.c
index aabf4375db..4e5926a7c0 100644
--- a/app/test/test_table_pipeline.c
+++ b/app/test/test_table_pipeline.c
@@ -61,46 +61,10 @@ rte_pipeline_port_out_action_handler port_action_stub(struct rte_mbuf **pkts,
#endif
-rte_pipeline_table_action_handler_hit
-table_action_0x00(struct rte_pipeline *p, struct rte_mbuf **pkts,
- uint64_t pkts_mask, struct rte_pipeline_table_entry **entry, void *arg);
-
-rte_pipeline_table_action_handler_hit
-table_action_stub_hit(struct rte_pipeline *p, struct rte_mbuf **pkts,
- uint64_t pkts_mask, struct rte_pipeline_table_entry **entry, void *arg);
-
static int
table_action_stub_miss(struct rte_pipeline *p, struct rte_mbuf **pkts,
uint64_t pkts_mask, struct rte_pipeline_table_entry *entry, void *arg);
-rte_pipeline_table_action_handler_hit
-table_action_0x00(__rte_unused struct rte_pipeline *p,
- __rte_unused struct rte_mbuf **pkts,
- uint64_t pkts_mask,
- __rte_unused struct rte_pipeline_table_entry **entry,
- __rte_unused void *arg)
-{
- printf("Table Action, setting pkts_mask to 0x00\n");
- pkts_mask = ~0x00;
- rte_pipeline_ah_packet_drop(p, pkts_mask);
- return 0;
-}
-
-rte_pipeline_table_action_handler_hit
-table_action_stub_hit(__rte_unused struct rte_pipeline *p,
- __rte_unused struct rte_mbuf **pkts,
- uint64_t pkts_mask,
- __rte_unused struct rte_pipeline_table_entry **entry,
- __rte_unused void *arg)
-{
- printf("STUB Table Action Hit - doing nothing\n");
- printf("STUB Table Action Hit - setting mask to 0x%"PRIx64"\n",
- override_hit_mask);
- pkts_mask = (~override_hit_mask) & 0x3;
- rte_pipeline_ah_packet_drop(p, pkts_mask);
- return 0;
-}
-
static int
table_action_stub_miss(struct rte_pipeline *p,
__rte_unused struct rte_mbuf **pkts,
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 4ab49f7853..b69b133a90 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -56,74 +56,6 @@ fman_if_reset_mcast_filter_table(struct fman_if *p)
out_be32(hashtable_ctrl, i & ~HASH_CTRL_MCAST_EN);
}
-static
-uint32_t get_mac_hash_code(uint64_t eth_addr)
-{
- uint64_t mask1, mask2;
- uint32_t xorVal = 0;
- uint8_t i, j;
-
- for (i = 0; i < 6; i++) {
- mask1 = eth_addr & (uint64_t)0x01;
- eth_addr >>= 1;
-
- for (j = 0; j < 7; j++) {
- mask2 = eth_addr & (uint64_t)0x01;
- mask1 ^= mask2;
- eth_addr >>= 1;
- }
-
- xorVal |= (mask1 << (5 - i));
- }
-
- return xorVal;
-}
-
-int
-fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
-{
- uint64_t eth_addr;
- void *hashtable_ctrl;
- uint32_t hash;
-
- struct __fman_if *__if = container_of(p, struct __fman_if, __if);
-
- eth_addr = ETH_ADDR_TO_UINT64(eth);
-
- if (!(eth_addr & GROUP_ADDRESS))
- return -1;
-
- hash = get_mac_hash_code(eth_addr) & HASH_CTRL_ADDR_MASK;
- hash = hash | HASH_CTRL_MCAST_EN;
-
- hashtable_ctrl = &((struct memac_regs *)__if->ccsr_map)->hashtable_ctrl;
- out_be32(hashtable_ctrl, hash);
-
- return 0;
-}
-
-int
-fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
-{
- struct __fman_if *__if = container_of(p, struct __fman_if, __if);
- void *mac_reg =
- &((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
- u32 val = in_be32(mac_reg);
-
- eth[0] = (val & 0x000000ff) >> 0;
- eth[1] = (val & 0x0000ff00) >> 8;
- eth[2] = (val & 0x00ff0000) >> 16;
- eth[3] = (val & 0xff000000) >> 24;
-
- mac_reg = &((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_u;
- val = in_be32(mac_reg);
-
- eth[4] = (val & 0x000000ff) >> 0;
- eth[5] = (val & 0x0000ff00) >> 8;
-
- return 0;
-}
-
void
fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
{
@@ -180,38 +112,6 @@ fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
return 0;
}
-void
-fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable)
-{
- struct __fman_if *__if = container_of(p, struct __fman_if, __if);
- u32 value = 0;
- void *cmdcfg;
-
- assert(fman_ccsr_map_fd != -1);
-
- /* Set Rx Ignore Pause Frames */
- cmdcfg = &((struct memac_regs *)__if->ccsr_map)->command_config;
- if (enable)
- value = in_be32(cmdcfg) | CMD_CFG_PAUSE_IGNORE;
- else
- value = in_be32(cmdcfg) & ~CMD_CFG_PAUSE_IGNORE;
-
- out_be32(cmdcfg, value);
-}
-
-void
-fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len)
-{
- struct __fman_if *__if = container_of(p, struct __fman_if, __if);
- unsigned int *maxfrm;
-
- assert(fman_ccsr_map_fd != -1);
-
- /* Set Max frame length */
- maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
- out_be32(maxfrm, (MAXFRM_RX_MASK & max_frame_len));
-}
-
void
fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats)
{
@@ -422,23 +322,6 @@ fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta)
return 0;
}
-int
-fman_if_get_fdoff(struct fman_if *fm_if)
-{
- u32 fmbm_rebm;
- int fdoff;
-
- struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
-
- assert(fman_ccsr_map_fd != -1);
-
- fmbm_rebm = in_be32(&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rebm);
-
- fdoff = (fmbm_rebm >> FMAN_SP_EXT_BUF_MARG_START_SHIFT) & 0x1ff;
-
- return fdoff;
-}
-
void
fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
{
@@ -451,28 +334,6 @@ fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid)
out_be32(fmbm_refqid, err_fqid);
}
-int
-fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp)
-{
- struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
- int val = 0;
- int iceof_mask = 0x001f0000;
- int icsz_mask = 0x0000001f;
- int iciof_mask = 0x00000f00;
-
- assert(fman_ccsr_map_fd != -1);
-
- unsigned int *fmbm_ricp =
- &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
- val = in_be32(fmbm_ricp);
-
- icp->iceof = (val & iceof_mask) >> 12;
- icp->iciof = (val & iciof_mask) >> 4;
- icp->icsz = (val & icsz_mask) << 4;
-
- return 0;
-}
-
int
fman_if_set_ic_params(struct fman_if *fm_if,
const struct fman_if_ic_params *icp)
@@ -526,19 +387,6 @@ fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm)
out_be32(reg_maxfrm, (in_be32(reg_maxfrm) & 0xFFFF0000) | max_frm);
}
-uint16_t
-fman_if_get_maxfrm(struct fman_if *fm_if)
-{
- struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
- unsigned int *reg_maxfrm;
-
- assert(fman_ccsr_map_fd != -1);
-
- reg_maxfrm = &((struct memac_regs *)__if->ccsr_map)->maxfrm;
-
- return (in_be32(reg_maxfrm) | 0x0000FFFF);
-}
-
/* MSB in fmbm_rebm register
* 0 - If BMI cannot store the frame in a single buffer it may select a buffer
* of smaller size and store the frame in scatter gather (S/G) buffers
@@ -580,36 +428,6 @@ fman_if_set_sg(struct fman_if *fm_if, int enable)
out_be32(fmbm_rebm, (in_be32(fmbm_rebm) & ~fmbm_mask) | val);
}
-void
-fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia)
-{
- struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
- unsigned int *fmqm_pndn;
-
- assert(fman_ccsr_map_fd != -1);
-
- fmqm_pndn = &((struct fman_port_qmi_regs *)__if->qmi_map)->fmqm_pndn;
-
- out_be32(fmqm_pndn, nia);
-}
-
-void
-fman_if_discard_rx_errors(struct fman_if *fm_if)
-{
- struct __fman_if *__if = container_of(fm_if, struct __fman_if, __if);
- unsigned int *fmbm_rfsdm, *fmbm_rfsem;
-
- fmbm_rfsem = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsem;
- out_be32(fmbm_rfsem, 0);
-
- /* Configure the discard mask to discard the error packets which have
- * DMA errors, Frame size error, Header error etc. The mask 0x010EE3F0
- * is to configured discard all the errors which come in the FD[STATUS]
- */
- fmbm_rfsdm = &((struct rx_bmi_regs *)__if->bmi_map)->fmbm_rfsdm;
- out_be32(fmbm_rfsdm, 0x010EE3F0);
-}
-
void
fman_if_receive_rx_errors(struct fman_if *fm_if,
unsigned int err_eq)
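(For reference, the deleted get_mac_hash_code() packs the multicast hash index as six parity bits, one per byte of the 48-bit group address; an equivalent standalone fold, as a sketch rather than driver API:

static uint32_t
mac_hash6(uint64_t addr)
{
	uint32_t hash = 0;
	int i;

	for (i = 0; i < 6; i++) {
		/* Bit (5 - i) of the index is the parity of byte i. */
		uint8_t b = (addr >> (8 * i)) & 0xff;

		hash |= (uint32_t)__builtin_parity(b) << (5 - i);
	}
	return hash;
}

The deleted fman_if_add_hash_mac_addr() then masked this index with HASH_CTRL_ADDR_MASK, OR-ed in HASH_CTRL_MCAST_EN and wrote the result to the hashtable control register.)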
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index b7009f2299..1d6460f1d1 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -148,14 +148,3 @@ netcfg_acquire(void)
return NULL;
}
-
-void
-netcfg_release(struct netcfg_info *cfg_ptr)
-{
- rte_free(cfg_ptr);
- /* Close socket for shared interfaces */
- if (skfd >= 0) {
- close(skfd);
- skfd = -1;
- }
-}
diff --git a/drivers/bus/dpaa/base/qbman/bman.c b/drivers/bus/dpaa/base/qbman/bman.c
index 8a6290734f..95215bb24e 100644
--- a/drivers/bus/dpaa/base/qbman/bman.c
+++ b/drivers/bus/dpaa/base/qbman/bman.c
@@ -321,41 +321,7 @@ int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
return ret;
}
-int bman_query_pools(struct bm_pool_state *state)
-{
- struct bman_portal *p = get_affine_portal();
- struct bm_mc_result *mcr;
-
- bm_mc_start(&p->p);
- bm_mc_commit(&p->p, BM_MCC_VERB_CMD_QUERY);
- while (!(mcr = bm_mc_result(&p->p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & BM_MCR_VERB_CMD_MASK) ==
- BM_MCR_VERB_CMD_QUERY);
- *state = mcr->query;
- state->as.state.state[0] = be32_to_cpu(state->as.state.state[0]);
- state->as.state.state[1] = be32_to_cpu(state->as.state.state[1]);
- state->ds.state.state[0] = be32_to_cpu(state->ds.state.state[0]);
- state->ds.state.state[1] = be32_to_cpu(state->ds.state.state[1]);
- return 0;
-}
-
u32 bman_query_free_buffers(struct bman_pool *pool)
{
return bm_pool_free_buffers(pool->params.bpid);
}
-
-int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds)
-{
- u32 bpid;
-
- bpid = bman_get_params(pool)->bpid;
-
- return bm_pool_set(bpid, thresholds);
-}
-
-int bman_shutdown_pool(u32 bpid)
-{
- struct bman_portal *p = get_affine_portal();
- return bm_shutdown_pool(&p->p, bpid);
-}
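(The query helpers removed here all follow the BMan portal's management-command round trip, with the result payload arriving big-endian; condensed, as a sketch reusing the driver's own accessors:

struct bman_portal *p = get_affine_portal();
struct bm_mc_result *mcr;

bm_mc_start(&p->p);                          /* claim the command register */
bm_mc_commit(&p->p, BM_MCC_VERB_CMD_QUERY);  /* issue the query verb */
while (!(mcr = bm_mc_result(&p->p)))         /* poll until hardware replies */
	cpu_relax();
/* convert each payload word before use, e.g.: */
state->as.state.state[0] = be32_to_cpu(mcr->query.as.state.state[0]);
)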
diff --git a/drivers/bus/dpaa/base/qbman/bman_driver.c b/drivers/bus/dpaa/base/qbman/bman_driver.c
index 750b756b93..8763ac6215 100644
--- a/drivers/bus/dpaa/base/qbman/bman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/bman_driver.c
@@ -109,11 +109,6 @@ static int fsl_bman_portal_finish(void)
return ret;
}
-int bman_thread_fd(void)
-{
- return bmfd;
-}
-
int bman_thread_init(void)
{
/* Convert from contiguous/virtual cpu numbering to real cpu when
@@ -127,17 +122,6 @@ int bman_thread_finish(void)
return fsl_bman_portal_finish();
}
-void bman_thread_irq(void)
-{
- qbman_invoke_irq(pcfg.irq);
- /* Now we need to uninhibit interrupts. This is the only code outside
- * the regular portal driver that manipulates any portal register, so
- * rather than breaking that encapsulation I am simply hard-coding the
- * offset to the inhibit register here.
- */
- out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
-}
-
int bman_init_ccsr(const struct device_node *node)
{
static int ccsr_map_fd;
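(The removed bman_thread_irq() documented the one place this file touched a portal register directly: after the kernel IRQ handler disables the interrupt line, userspace re-arms it by clearing the inhibit register at offset 0xe0c of the cache-inhibited region — per the comment in the deleted code:

/* re-enable (uninhibit) portal interrupts */
out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
)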
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 9bc92681cd..9ce8ac8b12 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -204,100 +204,6 @@ struct dpaa_ioctl_raw_portal {
#define DPAA_IOCTL_FREE_RAW_PORTAL \
_IOR(DPAA_IOCTL_MAGIC, 0x0D, struct dpaa_ioctl_raw_portal)
-static int process_portal_allocate(struct dpaa_ioctl_raw_portal *portal)
-{
- int ret = check_fd();
-
- if (ret)
- return ret;
-
- ret = ioctl(fd, DPAA_IOCTL_ALLOC_RAW_PORTAL, portal);
- if (ret) {
- perror("ioctl(DPAA_IOCTL_ALLOC_RAW_PORTAL)");
- return ret;
- }
- return 0;
-}
-
-static int process_portal_free(struct dpaa_ioctl_raw_portal *portal)
-{
- int ret = check_fd();
-
- if (ret)
- return ret;
-
- ret = ioctl(fd, DPAA_IOCTL_FREE_RAW_PORTAL, portal);
- if (ret) {
- perror("ioctl(DPAA_IOCTL_FREE_RAW_PORTAL)");
- return ret;
- }
- return 0;
-}
-
-int qman_allocate_raw_portal(struct dpaa_raw_portal *portal)
-{
- struct dpaa_ioctl_raw_portal input;
- int ret;
-
- input.type = dpaa_portal_qman;
- input.index = portal->index;
- input.enable_stash = portal->enable_stash;
- input.cpu = portal->cpu;
- input.cache = portal->cache;
- input.window = portal->window;
- input.sdest = portal->sdest;
-
- ret = process_portal_allocate(&input);
- if (ret)
- return ret;
- portal->index = input.index;
- portal->cinh = input.cinh;
- portal->cena = input.cena;
- return 0;
-}
-
-int qman_free_raw_portal(struct dpaa_raw_portal *portal)
-{
- struct dpaa_ioctl_raw_portal input;
-
- input.type = dpaa_portal_qman;
- input.index = portal->index;
- input.cinh = portal->cinh;
- input.cena = portal->cena;
-
- return process_portal_free(&input);
-}
-
-int bman_allocate_raw_portal(struct dpaa_raw_portal *portal)
-{
- struct dpaa_ioctl_raw_portal input;
- int ret;
-
- input.type = dpaa_portal_bman;
- input.index = portal->index;
- input.enable_stash = 0;
-
- ret = process_portal_allocate(&input);
- if (ret)
- return ret;
- portal->index = input.index;
- portal->cinh = input.cinh;
- portal->cena = input.cena;
- return 0;
-}
-
-int bman_free_raw_portal(struct dpaa_raw_portal *portal)
-{
- struct dpaa_ioctl_raw_portal input;
-
- input.type = dpaa_portal_bman;
- input.index = portal->index;
- input.cinh = portal->cinh;
- input.cena = portal->cena;
-
- return process_portal_free(&input);
-}
-
#define DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT \
_IOW(DPAA_IOCTL_MAGIC, 0x0E, struct usdpaa_ioctl_link_status)
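(All four removed raw-portal wrappers funnelled into the same ioctl shape: fill a struct dpaa_ioctl_raw_portal, ask the kernel module to grant a portal, and read back the index and both register windows. A trimmed sketch — error reporting and the bman variant omitted; 'fd' is the module descriptor this file already keeps:

static int
raw_portal_alloc_sketch(struct dpaa_raw_portal *portal)
{
	struct dpaa_ioctl_raw_portal input = {
		.type = dpaa_portal_qman,
		.index = portal->index,
	};

	if (ioctl(fd, DPAA_IOCTL_ALLOC_RAW_PORTAL, &input))
		return -1;
	/* the kernel reports which portal it granted */
	portal->index = input.index;
	portal->cinh = input.cinh;
	portal->cena = input.cena;
	return 0;
}
)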
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 447c091770..a8deecf689 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -199,14 +199,6 @@ static int find_empty_fq_table_entry(u32 *entry, struct qman_fq *fq)
return -ENOMEM;
}
-static void clear_fq_table_entry(u32 entry)
-{
- spin_lock(&fq_hash_table_lock);
- DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
- qman_fq_lookup_table[entry] = NULL;
- spin_unlock(&fq_hash_table_lock);
-}
-
static inline struct qman_fq *get_fq_table_entry(u32 entry)
{
DPAA_BUG_ON(entry >= qman_fq_lookup_table_size);
@@ -235,13 +227,6 @@ static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)
fqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);
}
-static inline void cpu_to_hw_fd(struct qm_fd *fd)
-{
- fd->addr = cpu_to_be40(fd->addr);
- fd->status = cpu_to_be32(fd->status);
- fd->opaque = cpu_to_be32(fd->opaque);
-}
-
static inline void hw_fd_to_cpu(struct qm_fd *fd)
{
fd->addr = be40_to_cpu(fd->addr);
@@ -285,15 +270,6 @@ static irqreturn_t portal_isr(__always_unused int irq, void *ptr)
return IRQ_HANDLED;
}
-/* This inner version is used privately by qman_create_affine_portal(), as well
- * as by the exported qman_stop_dequeues().
- */
-static inline void qman_stop_dequeues_ex(struct qman_portal *p)
-{
- if (!(p->dqrr_disable_ref++))
- qm_dqrr_set_maxfill(&p->p, 0);
-}
-
static int drain_mr_fqrni(struct qm_portal *p)
{
const struct qm_mr_entry *msg;
@@ -1173,17 +1149,6 @@ int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits)
return 0;
}
-u16 qman_affine_channel(int cpu)
-{
- if (cpu < 0) {
- struct qman_portal *portal = get_affine_portal();
-
- cpu = portal->config->cpu;
- }
- DPAA_BUG_ON(!CPU_ISSET(cpu, &affine_mask));
- return affine_channels[cpu];
-}
-
unsigned int qman_portal_poll_rx(unsigned int poll_limit,
void **bufs,
struct qman_portal *p)
@@ -1247,14 +1212,6 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit,
return rx_number;
}
-void qman_clear_irq(void)
-{
- struct qman_portal *p = get_affine_portal();
- u32 clear = QM_DQAVAIL_MASK | (p->irq_sources &
- ~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));
- qm_isr_status_clear(&p->p, clear);
-}
-
u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
void **bufs)
{
@@ -1370,51 +1327,6 @@ void qman_dqrr_consume(struct qman_fq *fq,
qm_dqrr_next(&p->p);
}
-int qman_poll_dqrr(unsigned int limit)
-{
- struct qman_portal *p = get_affine_portal();
- int ret;
-
- ret = __poll_portal_fast(p, limit);
- return ret;
-}
-
-void qman_poll(void)
-{
- struct qman_portal *p = get_affine_portal();
-
- if ((~p->irq_sources) & QM_PIRQ_SLOW) {
- if (!(p->slowpoll--)) {
- u32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;
- u32 active = __poll_portal_slow(p, is);
-
- if (active) {
- qm_isr_status_clear(&p->p, active);
- p->slowpoll = SLOW_POLL_BUSY;
- } else
- p->slowpoll = SLOW_POLL_IDLE;
- }
- }
- if ((~p->irq_sources) & QM_PIRQ_DQRI)
- __poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);
-}
-
-void qman_stop_dequeues(void)
-{
- struct qman_portal *p = get_affine_portal();
-
- qman_stop_dequeues_ex(p);
-}
-
-void qman_start_dequeues(void)
-{
- struct qman_portal *p = get_affine_portal();
-
- DPAA_ASSERT(p->dqrr_disable_ref > 0);
- if (!(--p->dqrr_disable_ref))
- qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
-}
-
void qman_static_dequeue_add(u32 pools, struct qman_portal *qp)
{
struct qman_portal *p = qp ? qp : get_affine_portal();
@@ -1424,28 +1336,6 @@ void qman_static_dequeue_add(u32 pools, struct qman_portal *qp)
qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
}
-void qman_static_dequeue_del(u32 pools, struct qman_portal *qp)
-{
- struct qman_portal *p = qp ? qp : get_affine_portal();
-
- pools &= p->config->pools;
- p->sdqcr &= ~pools;
- qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
-}
-
-u32 qman_static_dequeue_get(struct qman_portal *qp)
-{
- struct qman_portal *p = qp ? qp : get_affine_portal();
- return p->sdqcr;
-}
-
-void qman_dca(const struct qm_dqrr_entry *dq, int park_request)
-{
- struct qman_portal *p = get_affine_portal();
-
- qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
-}
-
void qman_dca_index(u8 index, int park_request)
{
struct qman_portal *p = get_affine_portal();
@@ -1563,42 +1453,11 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
return -EIO;
}
-void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)
-{
- /*
- * We don't need to lock the FQ as it is a pre-condition that the FQ be
- * quiesced. Instead, run some checks.
- */
- switch (fq->state) {
- case qman_fq_state_parked:
- DPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);
- /* Fallthrough */
- case qman_fq_state_oos:
- if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))
- qman_release_fqid(fq->fqid);
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
- clear_fq_table_entry(fq->key);
-#endif
- return;
- default:
- break;
- }
- DPAA_ASSERT(NULL == "qman_free_fq() on unquiesced FQ!");
-}
-
u32 qman_fq_fqid(struct qman_fq *fq)
{
return fq->fqid;
}
-void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)
-{
- if (state)
- *state = fq->state;
- if (flags)
- *flags = fq->flags;
-}
-
int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
{
struct qm_mc_command *mcc;
@@ -1695,48 +1554,6 @@ int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
return 0;
}
-int qman_schedule_fq(struct qman_fq *fq)
-{
- struct qm_mc_command *mcc;
- struct qm_mc_result *mcr;
- struct qman_portal *p;
-
- int ret = 0;
- u8 res;
-
- if (fq->state != qman_fq_state_parked)
- return -EINVAL;
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
- if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
- return -EINVAL;
-#endif
- /* Issue a ALTERFQ_SCHED management command */
- p = get_affine_portal();
-
- FQLOCK(fq);
- if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
- (fq->state != qman_fq_state_parked))) {
- ret = -EBUSY;
- goto out;
- }
- mcc = qm_mc_start(&p->p);
- mcc->alterfq.fqid = cpu_to_be32(fq->fqid);
- qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
- while (!(mcr = qm_mc_result(&p->p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
- res = mcr->result;
- if (res != QM_MCR_RESULT_OK) {
- ret = -EIO;
- goto out;
- }
- fq->state = qman_fq_state_sched;
-out:
- FQUNLOCK(fq);
-
- return ret;
-}
-
int qman_retire_fq(struct qman_fq *fq, u32 *flags)
{
struct qm_mc_command *mcc;
@@ -1866,98 +1683,6 @@ int qman_oos_fq(struct qman_fq *fq)
return ret;
}
-int qman_fq_flow_control(struct qman_fq *fq, int xon)
-{
- struct qm_mc_command *mcc;
- struct qm_mc_result *mcr;
- struct qman_portal *p;
-
- int ret = 0;
- u8 res;
- u8 myverb;
-
- if ((fq->state == qman_fq_state_oos) ||
- (fq->state == qman_fq_state_retired) ||
- (fq->state == qman_fq_state_parked))
- return -EINVAL;
-
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
- if (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))
- return -EINVAL;
-#endif
- /* Issue a ALTER_FQXON or ALTER_FQXOFF management command */
- p = get_affine_portal();
- FQLOCK(fq);
- if (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||
- (fq->state == qman_fq_state_parked) ||
- (fq->state == qman_fq_state_oos) ||
- (fq->state == qman_fq_state_retired))) {
- ret = -EBUSY;
- goto out;
- }
- mcc = qm_mc_start(&p->p);
- mcc->alterfq.fqid = fq->fqid;
- mcc->alterfq.count = 0;
- myverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;
-
- qm_mc_commit(&p->p, myverb);
- while (!(mcr = qm_mc_result(&p->p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
-
- res = mcr->result;
- if (res != QM_MCR_RESULT_OK) {
- ret = -EIO;
- goto out;
- }
-out:
- FQUNLOCK(fq);
- return ret;
-}
-
-int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)
-{
- struct qm_mc_command *mcc;
- struct qm_mc_result *mcr;
- struct qman_portal *p = get_affine_portal();
-
- u8 res;
-
- mcc = qm_mc_start(&p->p);
- mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
- qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);
- while (!(mcr = qm_mc_result(&p->p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
- res = mcr->result;
- if (res == QM_MCR_RESULT_OK)
- *fqd = mcr->queryfq.fqd;
- hw_fqd_to_cpu(fqd);
- if (res != QM_MCR_RESULT_OK)
- return -EIO;
- return 0;
-}
-
-int qman_query_fq_has_pkts(struct qman_fq *fq)
-{
- struct qm_mc_command *mcc;
- struct qm_mc_result *mcr;
- struct qman_portal *p = get_affine_portal();
-
- int ret = 0;
- u8 res;
-
- mcc = qm_mc_start(&p->p);
- mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
- qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
- while (!(mcr = qm_mc_result(&p->p)))
- cpu_relax();
- res = mcr->result;
- if (res == QM_MCR_RESULT_OK)
- ret = !!mcr->queryfq_np.frm_cnt;
- return ret;
-}
-
int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
{
struct qm_mc_command *mcc;
@@ -2022,65 +1747,6 @@ int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt)
return 0;
}
-int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
-{
- struct qm_mc_command *mcc;
- struct qm_mc_result *mcr;
- struct qman_portal *p = get_affine_portal();
-
- u8 res, myverb;
-
- myverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :
- QM_MCR_VERB_QUERYWQ;
- mcc = qm_mc_start(&p->p);
- mcc->querywq.channel.id = cpu_to_be16(wq->channel.id);
- qm_mc_commit(&p->p, myverb);
- while (!(mcr = qm_mc_result(&p->p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);
- res = mcr->result;
- if (res == QM_MCR_RESULT_OK) {
- int i, array_len;
-
- wq->channel.id = be16_to_cpu(mcr->querywq.channel.id);
- array_len = ARRAY_SIZE(mcr->querywq.wq_len);
- for (i = 0; i < array_len; i++)
- wq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);
- }
- if (res != QM_MCR_RESULT_OK) {
- pr_err("QUERYWQ failed: %s\n", mcr_result_str(res));
- return -EIO;
- }
- return 0;
-}
-
-int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
- struct qm_mcr_cgrtestwrite *result)
-{
- struct qm_mc_command *mcc;
- struct qm_mc_result *mcr;
- struct qman_portal *p = get_affine_portal();
-
- u8 res;
-
- mcc = qm_mc_start(&p->p);
- mcc->cgrtestwrite.cgid = cgr->cgrid;
- mcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);
- mcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;
- qm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);
- while (!(mcr = qm_mc_result(&p->p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);
- res = mcr->result;
- if (res == QM_MCR_RESULT_OK)
- *result = mcr->cgrtestwrite;
- if (res != QM_MCR_RESULT_OK) {
- pr_err("CGR TEST WRITE failed: %s\n", mcr_result_str(res));
- return -EIO;
- }
- return 0;
-}
-
int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
{
struct qm_mc_command *mcc;
@@ -2116,32 +1782,6 @@ int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)
return 0;
}
-int qman_query_congestion(struct qm_mcr_querycongestion *congestion)
-{
- struct qm_mc_result *mcr;
- struct qman_portal *p = get_affine_portal();
- u8 res;
- unsigned int i;
-
- qm_mc_start(&p->p);
- qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
- while (!(mcr = qm_mc_result(&p->p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
- QM_MCC_VERB_QUERYCONGESTION);
- res = mcr->result;
- if (res == QM_MCR_RESULT_OK)
- *congestion = mcr->querycongestion;
- if (res != QM_MCR_RESULT_OK) {
- pr_err("QUERY_CONGESTION failed: %s\n", mcr_result_str(res));
- return -EIO;
- }
- for (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)
- congestion->state.state[i] =
- be32_to_cpu(congestion->state.state[i]);
- return 0;
-}
-
int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags)
{
struct qman_portal *p = get_affine_portal();
@@ -2179,128 +1819,6 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags)
return ret;
}
-int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
- u32 vdqcr)
-{
- struct qman_portal *p;
- int ret = -EBUSY;
-
- if ((fq->state != qman_fq_state_parked) &&
- (fq->state != qman_fq_state_retired))
- return -EINVAL;
- if (vdqcr & QM_VDQCR_FQID_MASK)
- return -EINVAL;
- if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
- return -EBUSY;
- vdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;
-
- p = get_affine_portal();
-
- if (!p->vdqcr_owned) {
- FQLOCK(fq);
- if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
- goto escape;
- fq_set(fq, QMAN_FQ_STATE_VDQCR);
- FQUNLOCK(fq);
- p->vdqcr_owned = fq;
- ret = 0;
- }
-escape:
- if (ret)
- return ret;
-
- /* VDQCR is set */
- qm_dqrr_vdqcr_set(&p->p, vdqcr);
- return 0;
-}
-
-static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)
-{
- if (avail)
- qm_eqcr_cce_prefetch(&p->p);
- else
- qm_eqcr_cce_update(&p->p);
-}
-
-int qman_eqcr_is_empty(void)
-{
- struct qman_portal *p = get_affine_portal();
- u8 avail;
-
- update_eqcr_ci(p, 0);
- avail = qm_eqcr_get_fill(&p->p);
- return (avail == 0);
-}
-
-void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)
-{
- if (affine) {
- struct qman_portal *p = get_affine_portal();
-
- p->cb_dc_ern = handler;
- } else
- cb_dc_ern = handler;
-}
-
-static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,
- struct qman_fq *fq,
- const struct qm_fd *fd,
- u32 flags)
-{
- struct qm_eqcr_entry *eq;
- u8 avail;
-
- if (p->use_eqcr_ci_stashing) {
- /*
- * The stashing case is easy, only update if we need to in
- * order to try and liberate ring entries.
- */
- eq = qm_eqcr_start_stash(&p->p);
- } else {
- /*
- * The non-stashing case is harder, need to prefetch ahead of
- * time.
- */
- avail = qm_eqcr_get_avail(&p->p);
- if (avail < 2)
- update_eqcr_ci(p, avail);
- eq = qm_eqcr_start_no_stash(&p->p);
- }
-
- if (unlikely(!eq))
- return NULL;
-
- if (flags & QMAN_ENQUEUE_FLAG_DCA)
- eq->dca = QM_EQCR_DCA_ENABLE |
- ((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?
- QM_EQCR_DCA_PARK : 0) |
- ((flags >> 8) & QM_EQCR_DCA_IDXMASK);
- eq->fqid = cpu_to_be32(fq->fqid);
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
- eq->tag = cpu_to_be32(fq->key);
-#else
- eq->tag = cpu_to_be32((u32)(uintptr_t)fq);
-#endif
- eq->fd = *fd;
- cpu_to_hw_fd(&eq->fd);
- return eq;
-}
-
-int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
-{
- struct qman_portal *p = get_affine_portal();
- struct qm_eqcr_entry *eq;
-
- eq = try_p_eq_start(p, fq, fd, flags);
- if (!eq)
- return -EBUSY;
- /* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
- qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |
- (flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
- /* Factor the below out, it's used from qman_enqueue_orp() too */
- return 0;
-}
-
int qman_enqueue_multi(struct qman_fq *fq,
const struct qm_fd *fd, u32 *flags,
int frames_to_send)
@@ -2442,37 +1960,6 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
return sent;
}
-int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
- struct qman_fq *orp, u16 orp_seqnum)
-{
- struct qman_portal *p = get_affine_portal();
- struct qm_eqcr_entry *eq;
-
- eq = try_p_eq_start(p, fq, fd, flags);
- if (!eq)
- return -EBUSY;
- /* Process ORP-specifics here */
- if (flags & QMAN_ENQUEUE_FLAG_NLIS)
- orp_seqnum |= QM_EQCR_SEQNUM_NLIS;
- else {
- orp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;
- if (flags & QMAN_ENQUEUE_FLAG_NESN)
- orp_seqnum |= QM_EQCR_SEQNUM_NESN;
- else
- /* No need to check 4 QMAN_ENQUEUE_FLAG_HOLE */
- orp_seqnum &= ~QM_EQCR_SEQNUM_NESN;
- }
- eq->seqnum = cpu_to_be16(orp_seqnum);
- eq->orp = cpu_to_be32(orp->fqid);
- /* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */
- qm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |
- ((flags & (QMAN_ENQUEUE_FLAG_HOLE | QMAN_ENQUEUE_FLAG_NESN)) ?
- 0 : QM_EQCR_VERB_CMD_ENQUEUE) |
- (flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));
-
- return 0;
-}
-
int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
struct qm_mcc_initcgr *opts)
{
@@ -2581,52 +2068,6 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
return ret;
}
-int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
- struct qm_mcc_initcgr *opts)
-{
- struct qm_mcc_initcgr local_opts;
- struct qm_mcr_querycgr cgr_state;
- int ret;
-
- if ((qman_ip_rev & 0xFF00) < QMAN_REV30) {
- pr_warn("QMan version doesn't support CSCN => DCP portal\n");
- return -EINVAL;
- }
- /* We have to check that the provided CGRID is within the limits of the
- * data-structures, for obvious reasons. However we'll let h/w take
- * care of determining whether it's within the limits of what exists on
- * the SoC.
- */
- if (cgr->cgrid >= __CGR_NUM)
- return -EINVAL;
-
- ret = qman_query_cgr(cgr, &cgr_state);
- if (ret)
- return ret;
-
- memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
- if (opts)
- local_opts = *opts;
-
- if ((qman_ip_rev & 0xFF00) >= QMAN_REV30)
- local_opts.cgr.cscn_targ_upd_ctrl =
- QM_CGR_TARG_UDP_CTRL_WRITE_BIT |
- QM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;
- else
- local_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |
- TARG_DCP_MASK(dcp_portal);
- local_opts.we_mask |= QM_CGR_WE_CSCN_TARG;
-
- /* send init if flags indicate so */
- if (opts && (flags & QMAN_CGR_FLAG_USE_INIT))
- ret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,
- &local_opts);
- else
- ret = qman_modify_cgr(cgr, 0, &local_opts);
-
- return ret;
-}
-
int qman_delete_cgr(struct qman_cgr *cgr)
{
struct qm_mcr_querycgr cgr_state;
@@ -2674,222 +2115,3 @@ int qman_delete_cgr(struct qman_cgr *cgr)
put_portal:
return ret;
}
-
-int qman_shutdown_fq(u32 fqid)
-{
- struct qman_portal *p;
- struct qm_portal *low_p;
- struct qm_mc_command *mcc;
- struct qm_mc_result *mcr;
- u8 state;
- int orl_empty, fq_empty, drain = 0;
- u32 result;
- u32 channel, wq;
- u16 dest_wq;
-
- p = get_affine_portal();
- low_p = &p->p;
-
- /* Determine the state of the FQID */
- mcc = qm_mc_start(low_p);
- mcc->queryfq_np.fqid = cpu_to_be32(fqid);
- qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);
- while (!(mcr = qm_mc_result(low_p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
- state = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;
- if (state == QM_MCR_NP_STATE_OOS)
- return 0; /* Already OOS, no need to do anymore checks */
-
- /* Query which channel the FQ is using */
- mcc = qm_mc_start(low_p);
- mcc->queryfq.fqid = cpu_to_be32(fqid);
- qm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);
- while (!(mcr = qm_mc_result(low_p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);
-
- /* Need to store these since the MCR gets reused */
- dest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);
- channel = dest_wq & 0x7;
- wq = dest_wq >> 3;
-
- switch (state) {
- case QM_MCR_NP_STATE_TEN_SCHED:
- case QM_MCR_NP_STATE_TRU_SCHED:
- case QM_MCR_NP_STATE_ACTIVE:
- case QM_MCR_NP_STATE_PARKED:
- orl_empty = 0;
- mcc = qm_mc_start(low_p);
- mcc->alterfq.fqid = cpu_to_be32(fqid);
- qm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);
- while (!(mcr = qm_mc_result(low_p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
- QM_MCR_VERB_ALTER_RETIRE);
- result = mcr->result; /* Make a copy as we reuse MCR below */
-
- if (result == QM_MCR_RESULT_PENDING) {
- /* Need to wait for the FQRN in the message ring, which
- * will only occur once the FQ has been drained. In
- * order for the FQ to drain the portal needs to be set
- * to dequeue from the channel the FQ is scheduled on
- */
- const struct qm_mr_entry *msg;
- const struct qm_dqrr_entry *dqrr = NULL;
- int found_fqrn = 0;
- __maybe_unused u16 dequeue_wq = 0;
-
- /* Flag that we need to drain FQ */
- drain = 1;
-
- if (channel >= qm_channel_pool1 &&
- channel < (u16)(qm_channel_pool1 + 15)) {
- /* Pool channel, enable the bit in the portal */
- dequeue_wq = (channel -
- qm_channel_pool1 + 1) << 4 | wq;
- } else if (channel < qm_channel_pool1) {
- /* Dedicated channel */
- dequeue_wq = wq;
- } else {
- pr_info("Cannot recover FQ 0x%x,"
- " it is scheduled on channel 0x%x",
- fqid, channel);
- return -EBUSY;
- }
- /* Set the sdqcr to drain this channel */
- if (channel < qm_channel_pool1)
- qm_dqrr_sdqcr_set(low_p,
- QM_SDQCR_TYPE_ACTIVE |
- QM_SDQCR_CHANNELS_DEDICATED);
- else
- qm_dqrr_sdqcr_set(low_p,
- QM_SDQCR_TYPE_ACTIVE |
- QM_SDQCR_CHANNELS_POOL_CONV
- (channel));
- while (!found_fqrn) {
- /* Keep draining DQRR while checking the MR*/
- qm_dqrr_pvb_update(low_p);
- dqrr = qm_dqrr_current(low_p);
- while (dqrr) {
- qm_dqrr_cdc_consume_1ptr(
- low_p, dqrr, 0);
- qm_dqrr_pvb_update(low_p);
- qm_dqrr_next(low_p);
- dqrr = qm_dqrr_current(low_p);
- }
- /* Process message ring too */
- qm_mr_pvb_update(low_p);
- msg = qm_mr_current(low_p);
- while (msg) {
- if ((msg->ern.verb &
- QM_MR_VERB_TYPE_MASK)
- == QM_MR_VERB_FQRN)
- found_fqrn = 1;
- qm_mr_next(low_p);
- qm_mr_cci_consume_to_current(low_p);
- qm_mr_pvb_update(low_p);
- msg = qm_mr_current(low_p);
- }
- cpu_relax();
- }
- }
- if (result != QM_MCR_RESULT_OK &&
- result != QM_MCR_RESULT_PENDING) {
- /* error */
- pr_err("qman_retire_fq failed on FQ 0x%x,"
- " result=0x%x\n", fqid, result);
- return -1;
- }
- if (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {
- /* ORL had no entries, no need to wait until the
- * ERNs come in.
- */
- orl_empty = 1;
- }
- /* Retirement succeeded, check to see if FQ needs
- * to be drained.
- */
- if (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) {
- /* FQ is Not Empty, drain using volatile DQ commands */
- fq_empty = 0;
- do {
- const struct qm_dqrr_entry *dqrr = NULL;
- u32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);
-
- qm_dqrr_vdqcr_set(low_p, vdqcr);
-
- /* Wait for a dequeue to occur */
- while (dqrr == NULL) {
- qm_dqrr_pvb_update(low_p);
- dqrr = qm_dqrr_current(low_p);
- if (!dqrr)
- cpu_relax();
- }
- /* Process the dequeues, making sure to
- * empty the ring completely.
- */
- while (dqrr) {
- if (dqrr->fqid == fqid &&
- dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)
- fq_empty = 1;
- qm_dqrr_cdc_consume_1ptr(low_p,
- dqrr, 0);
- qm_dqrr_pvb_update(low_p);
- qm_dqrr_next(low_p);
- dqrr = qm_dqrr_current(low_p);
- }
- } while (fq_empty == 0);
- }
- qm_dqrr_sdqcr_set(low_p, 0);
-
- /* Wait for the ORL to have been completely drained */
- while (orl_empty == 0) {
- const struct qm_mr_entry *msg;
-
- qm_mr_pvb_update(low_p);
- msg = qm_mr_current(low_p);
- while (msg) {
- if ((msg->ern.verb & QM_MR_VERB_TYPE_MASK) ==
- QM_MR_VERB_FQRL)
- orl_empty = 1;
- qm_mr_next(low_p);
- qm_mr_cci_consume_to_current(low_p);
- qm_mr_pvb_update(low_p);
- msg = qm_mr_current(low_p);
- }
- cpu_relax();
- }
- mcc = qm_mc_start(low_p);
- mcc->alterfq.fqid = cpu_to_be32(fqid);
- qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
- while (!(mcr = qm_mc_result(low_p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
- QM_MCR_VERB_ALTER_OOS);
- if (mcr->result != QM_MCR_RESULT_OK) {
- pr_err(
- "OOS after drain Failed on FQID 0x%x, result 0x%x\n",
- fqid, mcr->result);
- return -1;
- }
- return 0;
-
- case QM_MCR_NP_STATE_RETIRED:
- /* Send OOS Command */
- mcc = qm_mc_start(low_p);
- mcc->alterfq.fqid = cpu_to_be32(fqid);
- qm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);
- while (!(mcr = qm_mc_result(low_p)))
- cpu_relax();
- DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==
- QM_MCR_VERB_ALTER_OOS);
- if (mcr->result) {
- pr_err("OOS Failed on FQID 0x%x\n", fqid);
- return -1;
- }
- return 0;
-
- }
- return -1;
-}
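(Every management helper deleted from this file — schedule, XON/XOFF flow control, the queries, shutdown — is built on one portal round trip: claim the command register, fill big-endian fields, commit a verb, spin for the result, check mcr->result. Condensed to its skeleton, a sketch using the schedule verb:

static int
alter_sched_sketch(struct qman_portal *p, u32 fqid)
{
	struct qm_mc_command *mcc;
	struct qm_mc_result *mcr;

	mcc = qm_mc_start(&p->p);               /* claim the command register */
	mcc->alterfq.fqid = cpu_to_be32(fqid);  /* command fields are big-endian */
	qm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);
	while (!(mcr = qm_mc_result(&p->p)))    /* poll until hardware replies */
		cpu_relax();
	DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);
	return mcr->result == QM_MCR_RESULT_OK ? 0 : -EIO;
}
)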
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 8254729e66..25306804a5 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -165,15 +165,6 @@ struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
void qm_put_unused_portal(struct qm_portal_config *pcfg);
void qm_set_liodns(struct qm_portal_config *pcfg);
-/* This CGR feature is supported by h/w and required by unit-tests and the
- * debugfs hooks, so is implemented in the driver. However it allows an explicit
- * corruption of h/w fields by s/w that are usually incorruptible (because the
- * counters are usually maintained entirely within h/w). As such, we declare
- * this API internally.
- */
-int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,
- struct qm_mcr_cgrtestwrite *result);
-
#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
/* If the fq object pointer is greater than the size of context_b field,
* than a lookup table is required.
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 3098e23093..ca1e27aeaf 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -359,11 +359,6 @@ rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
return 0;
}
-int rte_dpaa_portal_fq_close(struct qman_fq *fq)
-{
- return fsl_qman_fq_portal_destroy(fq->qp);
-}
-
void
dpaa_portal_finish(void *arg)
{
@@ -488,21 +483,6 @@ rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
driver->dpaa_bus = &rte_dpaa_bus;
}
-/* un-register a dpaa bus based dpaa driver */
-void
-rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
-{
- struct rte_dpaa_bus *dpaa_bus;
-
- BUS_INIT_FUNC_TRACE();
-
- dpaa_bus = driver->dpaa_bus;
-
- TAILQ_REMOVE(&dpaa_bus->driver_list, driver, next);
- /* Update Bus references */
- driver->dpaa_bus = NULL;
-}
-
static int
rte_dpaa_device_match(struct rte_dpaa_driver *drv,
struct rte_dpaa_device *dev)
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
index 82da2fcfe0..a06d29eb2d 100644
--- a/drivers/bus/dpaa/include/fsl_bman.h
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -252,8 +252,6 @@ static inline int bman_reserve_bpid(u32 bpid)
void bman_seed_bpid_range(u32 bpid, unsigned int count);
-int bman_shutdown_pool(u32 bpid);
-
/**
* bman_new_pool - Allocates a Buffer Pool object
* @params: parameters specifying the buffer pool ID and behaviour
@@ -310,12 +308,6 @@ __rte_internal
int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
u32 flags);
-/**
- * bman_query_pools - Query all buffer pool states
- * @state: storage for the queried availability and depletion states
- */
-int bman_query_pools(struct bm_pool_state *state);
-
/**
* bman_query_free_buffers - Query how many free buffers are in buffer pool
* @pool: the buffer pool object to query
@@ -325,13 +317,6 @@ int bman_query_pools(struct bm_pool_state *state);
__rte_internal
u32 bman_query_free_buffers(struct bman_pool *pool);
-/**
- * bman_update_pool_thresholds - Change the buffer pool's depletion thresholds
- * @pool: the buffer pool object to which the thresholds will be set
- * @thresholds: the new thresholds
- */
-int bman_update_pool_thresholds(struct bman_pool *pool, const u32 *thresholds);
-
/**
* bm_pool_set_hw_threshold - Change the buffer pool's thresholds
* @pool: Pool id
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index a3cf77f0e3..71f5a2f8cf 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -64,12 +64,6 @@ void fman_if_stats_reset(struct fman_if *p);
__rte_internal
void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
-/* Set ignore pause option for a specific interface */
-void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
-
-/* Set max frame length */
-void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
-
/* Enable/disable Rx promiscuous mode on specified interface */
__rte_internal
void fman_if_promiscuous_enable(struct fman_if *p);
@@ -114,18 +108,11 @@ int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
__rte_internal
void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
-/* Get IC transfer params */
-int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
-
/* Set IC transfer params */
__rte_internal
int fman_if_set_ic_params(struct fman_if *fm_if,
const struct fman_if_ic_params *icp);
-/* Get interface fd->offset value */
-__rte_internal
-int fman_if_get_fdoff(struct fman_if *fm_if);
-
/* Set interface fd->offset value */
__rte_internal
void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
@@ -138,20 +125,10 @@ int fman_if_get_sg_enable(struct fman_if *fm_if);
__rte_internal
void fman_if_set_sg(struct fman_if *fm_if, int enable);
-/* Get interface Max Frame length (MTU) */
-uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
-
/* Set interface Max Frame length (MTU) */
__rte_internal
void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
-/* Set interface next invoked action for dequeue operation */
-void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
-
-/* discard error packets on rx */
-__rte_internal
-void fman_if_discard_rx_errors(struct fman_if *fm_if);
-
__rte_internal
void fman_if_receive_rx_errors(struct fman_if *fm_if,
unsigned int err_eq);
@@ -162,11 +139,6 @@ void fman_if_set_mcast_filter_table(struct fman_if *p);
__rte_internal
void fman_if_reset_mcast_filter_table(struct fman_if *p);
-int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
-
-int fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth);
-
-
/* Enable/disable Rx on all interfaces */
static inline void fman_if_enable_all_rx(void)
{
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 10212f0fd5..b24aa76409 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1379,16 +1379,6 @@ int qman_irqsource_remove(u32 bits);
__rte_internal
int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
-/**
- * qman_affine_channel - return the channel ID of an portal
- * @cpu: the cpu whose affine portal is the subject of the query
- *
- * If @cpu is -1, the affine portal for the current CPU will be used. It is a
- * bug to call this function for any value of @cpu (other than -1) that is not a
- * member of the cpu mask.
- */
-u16 qman_affine_channel(int cpu);
-
__rte_internal
unsigned int qman_portal_poll_rx(unsigned int poll_limit,
void **bufs, struct qman_portal *q);
@@ -1428,55 +1418,6 @@ __rte_internal
void qman_dqrr_consume(struct qman_fq *fq,
struct qm_dqrr_entry *dq);
-/**
- * qman_poll_dqrr - process DQRR (fast-path) entries
- * @limit: the maximum number of DQRR entries to process
- *
- * Use of this function requires that DQRR processing not be interrupt-driven.
- * Ie. the value returned by qman_irqsource_get() should not include
- * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,
- * this function will return -EINVAL, otherwise the return value is >=0 and
- * represents the number of DQRR entries processed.
- */
-__rte_internal
-int qman_poll_dqrr(unsigned int limit);
-
-/**
- * qman_poll
- *
- * Dispatcher logic on a cpu can use this to trigger any maintenance of the
- * affine portal. There are two classes of portal processing in question;
- * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking
- * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR
- * thresholds, congestion state changes, etc). This function does whatever
- * processing is not triggered by interrupts.
- *
- * Note, if DQRR and some slow-path processing are poll-driven (rather than
- * interrupt-driven) then this function uses a heuristic to determine how often
- * to run slow-path processing - as slow-path processing introduces at least a
- * minimum latency each time it is run, whereas fast-path (DQRR) processing is
- * close to zero-cost if there is no work to be done.
- */
-void qman_poll(void);
-
-/**
- * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal
- *
- * Disables DQRR processing of the portal. This is reference-counted, so
- * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
- * truly re-enable dequeuing.
- */
-void qman_stop_dequeues(void);
-
-/**
- * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal
- *
- * Enables DQRR processing of the portal. This is reference-counted, so
- * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to
- * truly re-enable dequeuing.
- */
-void qman_start_dequeues(void);
-
/**
* qman_static_dequeue_add - Add pool channels to the portal SDQCR
* @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
@@ -1488,39 +1429,6 @@ void qman_start_dequeues(void);
__rte_internal
void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
-/**
- * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
- * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)
- *
- * Removes a set of pool channels from the portal's static dequeue command
- * register (SDQCR). The requested pools are limited to those the portal has
- * dequeue access to.
- */
-void qman_static_dequeue_del(u32 pools, struct qman_portal *qp);
-
-/**
- * qman_static_dequeue_get - return the portal's current SDQCR
- *
- * Returns the portal's current static dequeue command register (SDQCR). The
- * entire register is returned, so if only the currently-enabled pool channels
- * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
- */
-u32 qman_static_dequeue_get(struct qman_portal *qp);
-
-/**
- * qman_dca - Perform a Discrete Consumption Acknowledgment
- * @dq: the DQRR entry to be consumed
- * @park_request: indicates whether the held-active @fq should be parked
- *
- * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
- * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
- * does not take a 'portal' argument but implies the core affine portal from the
- * cpu that is currently executing the function. For reasons of locking, this
- * function must be called from the same CPU as that which processed the DQRR
- * entry in the first place.
- */
-void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
-
/**
* qman_dca_index - Perform a Discrete Consumption Acknowledgment
* @index: the DQRR index to be consumed
@@ -1536,36 +1444,6 @@ void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
__rte_internal
void qman_dca_index(u8 index, int park_request);
-/**
- * qman_eqcr_is_empty - Determine if portal's EQCR is empty
- *
- * For use in situations where a cpu-affine caller needs to determine when all
- * enqueues for the local portal have been processed by Qman but can't use the
- * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().
- * The function forces tracking of EQCR consumption (which normally doesn't
- * happen until enqueue processing needs to find space to put new enqueue
- * commands), and returns zero if the ring still has unprocessed entries,
- * non-zero if it is empty.
- */
-int qman_eqcr_is_empty(void);
-
-/**
- * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications
- * @handler: callback for processing DCP ERNs
- * @affine: whether this handler is specific to the locally affine portal
- *
- * If a hardware block's interface to Qman (ie. its direct-connect portal, or
- * DCP) is configured not to receive enqueue rejections, then any enqueues
- * through that DCP that are rejected will be sent to a given software portal.
- * If @affine is non-zero, then this handler will only be used for DCP ERNs
- * received on the portal affine to the current CPU. If multiple CPUs share a
- * portal and they all call this function, they will be setting the handler for
- * the same portal! If @affine is zero, then this handler will be global to all
- * portals handled by this instance of the driver. Only those portals that do
- * not have their own affine handler will use the global handler.
- */
-void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
-
/* FQ management */
/* ------------- */
/**
@@ -1594,18 +1472,6 @@ void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
__rte_internal
int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
-/**
- * qman_destroy_fq - Deallocates a FQ
- * @fq: the frame queue object to release
- * @flags: bit-mask of QMAN_FQ_FREE_*** options
- *
- * The memory for this frame queue object ('fq' provided in qman_create_fq()) is
- * not deallocated but the caller regains ownership, to do with as desired. The
- * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag
- * is specified, in which case it may also be in the 'parked' state.
- */
-void qman_destroy_fq(struct qman_fq *fq, u32 flags);
-
/**
* qman_fq_fqid - Queries the frame queue ID of a FQ object
* @fq: the frame queue object to query
@@ -1613,19 +1479,6 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags);
__rte_internal
u32 qman_fq_fqid(struct qman_fq *fq);
-/**
- * qman_fq_state - Queries the state of a FQ object
- * @fq: the frame queue object to query
- * @state: pointer to state enum to return the FQ scheduling state
- * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask
- *
- * Queries the state of the FQ object, without performing any h/w commands.
- * This captures the state, as seen by the driver, at the time the function
- * executes.
- */
-__rte_internal
-void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
-
/**
* qman_init_fq - Initialises FQ fields, leaves the FQ "parked" or "scheduled"
* @fq: the frame queue object to modify, must be 'parked' or new.
@@ -1663,15 +1516,6 @@ void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
__rte_internal
int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
-/**
- * qman_schedule_fq - Schedules a FQ
- * @fq: the frame queue object to schedule, must be 'parked'
- *
- * Schedules the frame queue, which must be Parked, which takes it to
- * Tentatively-Scheduled or Truly-Scheduled depending on its fill-level.
- */
-int qman_schedule_fq(struct qman_fq *fq);
-
/**
* qman_retire_fq - Retires a FQ
* @fq: the frame queue object to retire
@@ -1703,32 +1547,6 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags);
__rte_internal
int qman_oos_fq(struct qman_fq *fq);
-/**
- * qman_fq_flow_control - Set the XON/XOFF state of a FQ
- * @fq: the frame queue object to be set to XON/XOFF state, must not be 'oos',
- * or 'retired' or 'parked' state
- * @xon: boolean to set fq in XON or XOFF state
- *
- * The frame should be in Tentatively Scheduled state or Truly Schedule sate,
- * otherwise the IFSI interrupt will be asserted.
- */
-int qman_fq_flow_control(struct qman_fq *fq, int xon);
-
-/**
- * qman_query_fq - Queries FQD fields (via h/w query command)
- * @fq: the frame queue object to be queried
- * @fqd: storage for the queried FQD fields
- */
-int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);
-
-/**
- * qman_query_fq_has_pkts - Queries non-programmable FQD fields and returns '1'
- * if packets are in the frame queue. If there are no packets on frame
- * queue '0' is returned.
- * @fq: the frame queue object to be queried
- */
-int qman_query_fq_has_pkts(struct qman_fq *fq);
-
/**
* qman_query_fq_np - Queries non-programmable FQD fields
* @fq: the frame queue object to be queried
@@ -1745,73 +1563,6 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
__rte_internal
int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
-/**
- * qman_query_wq - Queries work queue lengths
- * @query_dedicated: If non-zero, query length of WQs in the channel dedicated
- * to this software portal. Otherwise, query length of WQs in a
- * channel specified in wq.
- * @wq: storage for the queried WQs lengths. Also specified the channel to
- * to query if query_dedicated is zero.
- */
-int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
-
-/**
- * qman_volatile_dequeue - Issue a volatile dequeue command
- * @fq: the frame queue object to dequeue from
- * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options
- * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()
- *
- * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.
- * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and
- * the VDQCR is already in use, otherwise returns non-zero for failure. If
- * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once
- * the VDQCR command has finished executing (ie. once the callback for the last
- * DQRR entry resulting from the VDQCR command has been called). If not using
- * the FINISH flag, completion can be determined either by detecting the
- * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits
- * in the "stat" field of the "struct qm_dqrr_entry" passed to the FQ's dequeue
- * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
- * "flags" retrieved from qman_fq_state().
- */
-__rte_internal
-int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
-
-/**
- * qman_enqueue - Enqueue a frame to a frame queue
- * @fq: the frame queue object to enqueue to
- * @fd: a descriptor of the frame to be enqueued
- * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
- *
- * Fills an entry in the EQCR of portal @qm to enqueue the frame described by
- * @fd. The descriptor details are copied from @fd to the EQCR entry, the 'pid'
- * field is ignored. The return value is non-zero on error, such as ring full
- * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR
- * specified), etc. If the ring is full and FLAG_WAIT is specified, this
- * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal
- * interrupt will assert when Qman consumes the EQCR entry (subject to "status
- * disable", "enable", and "inhibit" registers). If FLAG_DCA is set, Qman will
- * perform an implied "discrete consumption acknowledgment" on the dequeue
- * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)
- * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,
- * this implicit DCA can delay the release of a "held active" frame queue
- * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing
- * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is
- * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption
- * acknowledgment should "park request" the "held active" frame queue. Ie.
- * when the portal eventually releases that frame queue, it will be left in the
- * Parked state rather than Tentatively Scheduled or Truly Scheduled. If the
- * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag
- * is requested, and the FQ is a member of a congestion group, then this
- * function returns -EAGAIN if the congestion group is currently congested.
- * Note, this does not eliminate ERNs, as the async interface means we can be
- * sending enqueue commands to an un-congested FQ that becomes congested before
- * the enqueue commands are processed, but it does minimise needless thrashing
- * of an already busy hardware resource by throttling many of the to-be-dropped
- * enqueues "at the source".
- */
-__rte_internal
-int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
-
__rte_internal
int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
int frames_to_send);
@@ -1846,45 +1597,6 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
typedef int (*qman_cb_precommit) (void *arg);
-/**
- * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP
- * @fq: the frame queue object to enqueue to
- * @fd: a descriptor of the frame to be enqueued
- * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options
- * @orp: the frame queue object used as an order restoration point.
- * @orp_seqnum: the sequence number of this frame in the order restoration path
- *
- * Similar to qman_enqueue(), but with the addition of an Order Restoration
- * Point (@orp) and corresponding sequence number (@orp_seqnum) for this
- * enqueue operation to employ order restoration. Each frame queue object acts
- * as an Order Definition Point (ODP) by providing each frame dequeued from it
- * with an incrementing sequence number, this value is generally ignored unless
- * that sequence of dequeued frames will need order restoration later. Each
- * frame queue object also encapsulates an Order Restoration Point (ORP), which
- * is a re-assembly context for re-ordering frames relative to their sequence
- * numbers as they are enqueued. The ORP does not have to be within the frame
- * queue that receives the enqueued frame, in fact it is usually the frame
- * queue from which the frames were originally dequeued. For the purposes of
- * order restoration, multiple frames (or "fragments") can be enqueued for a
- * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all
- * enqueues except the final fragment of a given sequence number. Ordering
- * between sequence numbers is guaranteed, even if fragments of different
- * sequence numbers are interlaced with one another. Fragments of the same
- * sequence number will retain the order in which they are enqueued. If no
- * enqueue is to performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given
- * sequence number is to be "skipped" by the ORP logic (eg. if a frame has been
- * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given
- * sequence number should become the ORP's "Next Expected Sequence Number".
- *
- * Side note: a frame queue object can be used purely as an ORP, without
- * carrying any frames at all. Care should be taken not to deallocate a frame
- * queue object that is being actively used as an ORP, as a future allocation
- * of the frame queue object may start using the internal ORP before the
- * previous use has finished.
- */
-int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
- struct qman_fq *orp, u16 orp_seqnum);
-
/**
* qman_alloc_fqid_range - Allocate a contiguous range of FQIDs
* @result: is set by the API to the base FQID of the allocated range
@@ -1922,8 +1634,6 @@ static inline void qman_release_fqid(u32 fqid)
void qman_seed_fqid_range(u32 fqid, unsigned int count);
-int qman_shutdown_fq(u32 fqid);
-
/**
* qman_reserve_fqid_range - Reserve the specified range of frame queue IDs
* @fqid: the base FQID of the range to deallocate
@@ -2001,17 +1711,6 @@ __rte_internal
int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
struct qm_mcc_initcgr *opts);
-/**
- * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal
- * @cgr: the 'cgr' object, with fields filled in
- * @flags: QMAN_CGR_FLAG_* values
- * @dcp_portal: the DCP portal to which the cgr object is registered.
- * @opts: optional state of CGR settings
- *
- */
-int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
- struct qm_mcc_initcgr *opts);
-
/**
* qman_delete_cgr - Deregisters a congestion group object
* @cgr: the 'cgr' object to deregister
@@ -2048,12 +1747,6 @@ int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
*/
int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);
-/**
- * qman_query_congestion - Queries the state of all congestion groups
- * @congestion: storage for the queried state of all congestion groups
- */
-int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
-
/**
* qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs
* @result: is set by the API to the base CGR ID of the allocated range
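(With the single-frame qman_enqueue() gone from this header, the retained qman_enqueue_multi() remains the enqueue entry point and a single frame is simply a burst of one. A sketch — the helper name is illustrative; qman_enqueue_multi() returns the number of frames accepted:

static inline int
enqueue_one(struct qman_fq *fq, const struct qm_fd *fd)
{
	u32 flags = 0;

	return qman_enqueue_multi(fq, fd, &flags, 1) == 1 ? 0 : -EBUSY;
}
)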
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index dcf35e4adb..3a5df9bf7e 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -51,16 +51,9 @@ struct dpaa_raw_portal {
uint64_t cena;
};
-int qman_allocate_raw_portal(struct dpaa_raw_portal *portal);
-int qman_free_raw_portal(struct dpaa_raw_portal *portal);
-
-int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
-int bman_free_raw_portal(struct dpaa_raw_portal *portal);
-
/* Obtain thread-local UIO file-descriptors */
__rte_internal
int qman_thread_fd(void);
-int bman_thread_fd(void);
/* Post-process interrupts. NB, the kernel IRQ handler disables the interrupt
* line before notifying us, and this post-processing re-enables it once
@@ -70,12 +63,8 @@ int bman_thread_fd(void);
__rte_internal
void qman_thread_irq(void);
-__rte_internal
-void bman_thread_irq(void);
__rte_internal
void qman_fq_portal_thread_irq(struct qman_portal *qp);
-__rte_internal
-void qman_clear_irq(void);
/* Global setup */
int qman_global_init(void);
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
index d7d1befd24..815b3ba087 100644
--- a/drivers/bus/dpaa/include/netcfg.h
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -49,12 +49,6 @@ struct netcfg_interface {
__rte_internal
struct netcfg_info *netcfg_acquire(void);
-/* cfg_ptr: configuration information pointer.
- * Frees the resources allocated by the configuration layer.
- */
-__rte_internal
-void netcfg_release(struct netcfg_info *cfg_ptr);
-
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
/* cfg_ptr: configuration information pointer.
* This function dumps configuration data to stdout.
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 48d5cf4625..40d82412df 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -214,16 +214,6 @@ rte_dpaa_mem_vtop(void *vaddr)
__rte_internal
void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
-/**
- * Unregister a DPAA driver.
- *
- * @param driver
- * A pointer to a rte_dpaa_driver structure describing the driver
- * to be unregistered.
- */
-__rte_internal
-void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
-
/**
* Initialize a DPAA portal
*
@@ -239,9 +229,6 @@ int rte_dpaa_portal_init(void *arg);
__rte_internal
int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
-__rte_internal
-int rte_dpaa_portal_fq_close(struct qman_fq *fq);
-
/**
* Cleanup a DPAA Portal
*/
diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map
index fe4f9ac5aa..98f1e00582 100644
--- a/drivers/bus/dpaa/version.map
+++ b/drivers/bus/dpaa/version.map
@@ -7,7 +7,6 @@ INTERNAL {
bman_new_pool;
bman_query_free_buffers;
bman_release;
- bman_thread_irq;
dpaa_get_ioctl_version_number;
dpaa_get_eth_port_cfg;
dpaa_get_qm_channel_caam;
@@ -25,11 +24,9 @@ INTERNAL {
fman_if_add_mac_addr;
fman_if_clear_mac_addr;
fman_if_disable_rx;
- fman_if_discard_rx_errors;
fman_if_enable_rx;
fman_if_get_fc_quanta;
fman_if_get_fc_threshold;
- fman_if_get_fdoff;
fman_if_get_sg_enable;
fman_if_loopback_disable;
fman_if_loopback_enable;
@@ -52,19 +49,16 @@ INTERNAL {
fman_if_receive_rx_errors;
fsl_qman_fq_portal_create;
netcfg_acquire;
- netcfg_release;
per_lcore_dpaa_io;
qman_alloc_cgrid_range;
qman_alloc_fqid_range;
qman_alloc_pool_range;
- qman_clear_irq;
qman_create_cgr;
qman_create_fq;
qman_dca_index;
qman_delete_cgr;
qman_dequeue;
qman_dqrr_consume;
- qman_enqueue;
qman_enqueue_multi;
qman_enqueue_multi_fq;
qman_ern_poll_free;
@@ -79,7 +73,6 @@ INTERNAL {
qman_irqsource_remove;
qman_modify_cgr;
qman_oos_fq;
- qman_poll_dqrr;
qman_portal_dequeue;
qman_portal_poll_rx;
qman_query_fq_frm_cnt;
@@ -92,10 +85,7 @@ INTERNAL {
qman_static_dequeue_add;
qman_thread_fd;
qman_thread_irq;
- qman_volatile_dequeue;
rte_dpaa_driver_register;
- rte_dpaa_driver_unregister;
- rte_dpaa_portal_fq_close;
rte_dpaa_portal_fq_init;
rte_dpaa_portal_init;
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 58435589b2..51749764e7 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -521,25 +521,6 @@ rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
driver->fslmc_bus = &rte_fslmc_bus;
}
-/* un-register an fslmc bus based dpaa2 driver */
-void
-rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
-{
- struct rte_fslmc_bus *fslmc_bus;
-
- fslmc_bus = driver->fslmc_bus;
-
- /* Cleanup the PA->VA translation table, from wherever this function
- * is called.
- */
- if (rte_eal_iova_mode() == RTE_IOVA_PA)
- dpaax_iova_table_depopulate();
-
- TAILQ_REMOVE(&fslmc_bus->driver_list, driver, next);
- /* Update Bus references */
- driver->fslmc_bus = NULL;
-}
-
/*
* All device has iova as va
*/
diff --git a/drivers/bus/fslmc/mc/dpbp.c b/drivers/bus/fslmc/mc/dpbp.c
index d9103409cf..f3af33b658 100644
--- a/drivers/bus/fslmc/mc/dpbp.c
+++ b/drivers/bus/fslmc/mc/dpbp.c
@@ -77,78 +77,6 @@ int dpbp_close(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpbp_create() - Create the DPBP object.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg: Configuration structure
- * @obj_id: Returned object id; use in subsequent API calls
- *
- * Create the DPBP object, allocate required resources and
- * perform required initialization.
- *
- * This function accepts an authentication token of a parent
- * container that this object should be assigned to and returns
- * an object id. This object_id will be used in all subsequent calls to
- * this specific object.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpbp_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpbp_cfg *cfg,
- uint32_t *obj_id)
-{
- struct mc_command cmd = { 0 };
- int err;
-
- (void)(cfg); /* unused */
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPBP_CMDID_CREATE,
- cmd_flags, dprc_token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- *obj_id = mc_cmd_read_object_id(&cmd);
-
- return 0;
-}
-
-/**
- * dpbp_destroy() - Destroy the DPBP object and release all its resources.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @obj_id: ID of DPBP object
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpbp_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t obj_id)
-{
- struct dpbp_cmd_destroy *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPBP_CMDID_DESTROY,
- cmd_flags, dprc_token);
-
- cmd_params = (struct dpbp_cmd_destroy *)cmd.params;
- cmd_params->object_id = cpu_to_le32(obj_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
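
All of the dpxxx_create/destroy/query helpers removed from these mc/*.c files share the three-step portal protocol visible in the deleted bodies: encode a header, send the command, decode the response. A sketch of that pattern; the command id and response struct are hypothetical, only mc_encode_cmd_header() and mc_send_command() are real:

#define EXAMPLE_CMDID 0x0011		/* hypothetical command id */

struct example_rsp {
	uint32_t value;			/* hypothetical response field */
};

static int example_mc_call(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
			   uint16_t token, uint32_t *out)
{
	struct mc_command cmd = { 0 };
	struct example_rsp *rsp;
	int err;

	/* prepare command */
	cmd.header = mc_encode_cmd_header(EXAMPLE_CMDID, cmd_flags, token);

	/* send command to mc */
	err = mc_send_command(mc_io, &cmd);
	if (err)
		return err;

	/* retrieve response parameters */
	rsp = (struct example_rsp *)cmd.params;
	*out = le32_to_cpu(rsp->value);

	return 0;
}
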
/**
* dpbp_enable() - Enable the DPBP.
* @mc_io: Pointer to MC portal's I/O object
@@ -193,40 +121,6 @@ int dpbp_disable(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpbp_is_enabled() - Check if the DPBP is enabled.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPBP object
- * @en: Returns '1' if object is enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpbp_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct dpbp_rsp_is_enabled *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPBP_CMDID_IS_ENABLED, cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpbp_rsp_is_enabled *)cmd.params;
- *en = rsp_params->enabled & DPBP_ENABLE;
-
- return 0;
-}
-
/**
* dpbp_reset() - Reset the DPBP, returns the object to initial state.
* @mc_io: Pointer to MC portal's I/O object
@@ -284,41 +178,6 @@ int dpbp_get_attributes(struct fsl_mc_io *mc_io,
return 0;
}
-/**
- * dpbp_get_api_version() - Get Data Path Buffer Pool API version
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver: Major version of Buffer Pool API
- * @minor_ver: Minor version of Buffer Pool API
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpbp_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver)
-{
- struct dpbp_rsp_get_api_version *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPBP_CMDID_GET_API_VERSION,
- cmd_flags, 0);
-
- /* send command to mc */
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpbp_rsp_get_api_version *)cmd.params;
- *major_ver = le16_to_cpu(rsp_params->major);
- *minor_ver = le16_to_cpu(rsp_params->minor);
-
- return 0;
-}
-
/**
* dpbp_get_num_free_bufs() - Get number of free buffers in the buffer pool
* @mc_io: Pointer to MC portal's I/O object
diff --git a/drivers/bus/fslmc/mc/dpci.c b/drivers/bus/fslmc/mc/dpci.c
index 7e31327afa..cd558d507c 100644
--- a/drivers/bus/fslmc/mc/dpci.c
+++ b/drivers/bus/fslmc/mc/dpci.c
@@ -53,116 +53,6 @@ int dpci_open(struct fsl_mc_io *mc_io,
return 0;
}
-/**
- * dpci_close() - Close the control session of the object
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCI object
- *
- * After this function is called, no further operations are
- * allowed on the object without opening a new control session.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpci_close(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCI_CMDID_CLOSE,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpci_create() - Create the DPCI object.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg: Configuration structure
- * @obj_id: Returned object id
- *
- * Create the DPCI object, allocate required resources and perform required
- * initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpci_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpci_cfg *cfg,
- uint32_t *obj_id)
-{
- struct dpci_cmd_create *cmd_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCI_CMDID_CREATE,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpci_cmd_create *)cmd.params;
- cmd_params->num_of_priorities = cfg->num_of_priorities;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- *obj_id = mc_cmd_read_object_id(&cmd);
-
- return 0;
-}
-
-/**
- * dpci_destroy() - Destroy the DPCI object and release all its resources.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id: The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched for within the parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpci_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id)
-{
- struct dpci_cmd_destroy *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCI_CMDID_DESTROY,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpci_cmd_destroy *)cmd.params;
- cmd_params->dpci_id = cpu_to_le32(object_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpci_enable() - Enable the DPCI, allow sending and receiving frames.
* @mc_io: Pointer to MC portal's I/O object
@@ -186,86 +76,6 @@ int dpci_enable(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpci_disable() - Disable the DPCI, stop sending and receiving frames.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCI object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpci_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCI_CMDID_DISABLE,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpci_is_enabled() - Check if the DPCI is enabled.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCI object
- * @en: Returns '1' if object is enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpci_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct dpci_rsp_is_enabled *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCI_CMDID_IS_ENABLED, cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpci_rsp_is_enabled *)cmd.params;
- *en = dpci_get_field(rsp_params->en, ENABLE);
-
- return 0;
-}
-
-/**
- * dpci_reset() - Reset the DPCI, returns the object to initial state.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCI object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpci_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCI_CMDID_RESET,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpci_get_attributes() - Retrieve DPCI attributes.
* @mc_io: Pointer to MC portal's I/O object
@@ -431,133 +241,3 @@ int dpci_get_tx_queue(struct fsl_mc_io *mc_io,
return 0;
}
-
-/**
- * dpci_get_api_version() - Get communication interface API version
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver: Major version of data path communication interface API
- * @minor_ver: Minor version of data path communication interface API
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpci_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver)
-{
- struct dpci_rsp_get_api_version *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- cmd.header = mc_encode_cmd_header(DPCI_CMDID_GET_API_VERSION,
- cmd_flags,
- 0);
-
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- rsp_params = (struct dpci_rsp_get_api_version *)cmd.params;
- *major_ver = le16_to_cpu(rsp_params->major);
- *minor_ver = le16_to_cpu(rsp_params->minor);
-
- return 0;
-}
-
-/**
- * dpci_set_opr() - Set Order Restoration configuration.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCI object
- * @index: The queue index
- * @options: Configuration mode options
- * can be OPR_OPT_CREATE or OPR_OPT_RETIRE
- * @cfg: Configuration options for the OPR
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpci_set_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t index,
- uint8_t options,
- struct opr_cfg *cfg)
-{
- struct dpci_cmd_set_opr *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCI_CMDID_SET_OPR,
- cmd_flags,
- token);
- cmd_params = (struct dpci_cmd_set_opr *)cmd.params;
- cmd_params->index = index;
- cmd_params->options = options;
- cmd_params->oloe = cfg->oloe;
- cmd_params->oeane = cfg->oeane;
- cmd_params->olws = cfg->olws;
- cmd_params->oa = cfg->oa;
- cmd_params->oprrws = cfg->oprrws;
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpci_get_opr() - Retrieve Order Restoration config and query.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCI object
- * @index: The queue index
- * @cfg: Returned OPR configuration
- * @qry: Returned OPR query
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpci_get_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t index,
- struct opr_cfg *cfg,
- struct opr_qry *qry)
-{
- struct dpci_rsp_get_opr *rsp_params;
- struct dpci_cmd_get_opr *cmd_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCI_CMDID_GET_OPR,
- cmd_flags,
- token);
- cmd_params = (struct dpci_cmd_get_opr *)cmd.params;
- cmd_params->index = index;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpci_rsp_get_opr *)cmd.params;
- cfg->oloe = rsp_params->oloe;
- cfg->oeane = rsp_params->oeane;
- cfg->olws = rsp_params->olws;
- cfg->oa = rsp_params->oa;
- cfg->oprrws = rsp_params->oprrws;
- qry->rip = dpci_get_field(rsp_params->flags, RIP);
- qry->enable = dpci_get_field(rsp_params->flags, OPR_ENABLE);
- qry->nesn = le16_to_cpu(rsp_params->nesn);
- qry->ndsn = le16_to_cpu(rsp_params->ndsn);
- qry->ea_tseq = le16_to_cpu(rsp_params->ea_tseq);
- qry->tseq_nlis = dpci_get_field(rsp_params->tseq_nlis, TSEQ_NLIS);
- qry->ea_hseq = le16_to_cpu(rsp_params->ea_hseq);
- qry->hseq_nlis = dpci_get_field(rsp_params->hseq_nlis, HSEQ_NLIS);
- qry->ea_hptr = le16_to_cpu(rsp_params->ea_hptr);
- qry->ea_tptr = le16_to_cpu(rsp_params->ea_tptr);
- qry->opr_vid = le16_to_cpu(rsp_params->opr_vid);
- qry->opr_id = le16_to_cpu(rsp_params->opr_id);
-
- return 0;
-}
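
For reference, the two removed OPR accessors were a pair: dpci_set_opr() writes the order restoration configuration, dpci_get_opr() reads the configuration and live state back. A minimal sketch, assuming an open DPCI token; CMD_PRI_LOW and the window-size value are taken as given here, and the helper name is illustrative:

static int setup_opr(struct fsl_mc_io *mc_io, uint16_t token, uint8_t q_index)
{
	struct opr_cfg cfg = { .oprrws = 3 };	/* illustrative window size */
	struct opr_qry qry;
	int err;

	err = dpci_set_opr(mc_io, CMD_PRI_LOW, token, q_index,
			   OPR_OPT_CREATE, &cfg);
	if (err)
		return err;

	/* read the configuration and current ORP state back */
	return dpci_get_opr(mc_io, CMD_PRI_LOW, token, q_index, &cfg, &qry);
}
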
diff --git a/drivers/bus/fslmc/mc/dpcon.c b/drivers/bus/fslmc/mc/dpcon.c
index 2c46638dcb..e9bf364507 100644
--- a/drivers/bus/fslmc/mc/dpcon.c
+++ b/drivers/bus/fslmc/mc/dpcon.c
@@ -53,212 +53,6 @@ int dpcon_open(struct fsl_mc_io *mc_io,
return 0;
}
-/**
- * dpcon_close() - Close the control session of the object
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCON object
- *
- * After this function is called, no further operations are
- * allowed on the object without opening a new control session.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpcon_close(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCON_CMDID_CLOSE,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpcon_create() - Create the DPCON object.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg: Configuration structure
- * @obj_id: Returned object id; use in subsequent API calls
- *
- * Create the DPCON object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * This function accepts an authentication token of a parent
- * container that this object should be assigned to and returns
- * an object id. This object_id will be used in all subsequent calls to
- * this specific object.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpcon_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpcon_cfg *cfg,
- uint32_t *obj_id)
-{
- struct dpcon_cmd_create *dpcon_cmd;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCON_CMDID_CREATE,
- cmd_flags,
- dprc_token);
- dpcon_cmd = (struct dpcon_cmd_create *)cmd.params;
- dpcon_cmd->num_priorities = cfg->num_priorities;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- *obj_id = mc_cmd_read_object_id(&cmd);
-
- return 0;
-}
-
-/**
- * dpcon_destroy() - Destroy the DPCON object and release all its resources.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @obj_id: ID of DPCON object
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpcon_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t obj_id)
-{
- struct dpcon_cmd_destroy *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCON_CMDID_DESTROY,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpcon_cmd_destroy *)cmd.params;
- cmd_params->object_id = cpu_to_le32(obj_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpcon_enable() - Enable the DPCON
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCON object
- *
- * Return: '0' on Success; Error code otherwise
- */
-int dpcon_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCON_CMDID_ENABLE,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpcon_disable() - Disable the DPCON
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCON object
- *
- * Return: '0' on Success; Error code otherwise
- */
-int dpcon_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCON_CMDID_DISABLE,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpcon_is_enabled() - Check if the DPCON is enabled.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCON object
- * @en: Returns '1' if object is enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpcon_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct dpcon_rsp_is_enabled *dpcon_rsp;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCON_CMDID_IS_ENABLED,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- dpcon_rsp = (struct dpcon_rsp_is_enabled *)cmd.params;
- *en = dpcon_rsp->enabled & DPCON_ENABLE;
-
- return 0;
-}
-
-/**
- * dpcon_reset() - Reset the DPCON, returns the object to initial state.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPCON object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpcon_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCON_CMDID_RESET,
- cmd_flags, token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpcon_get_attributes() - Retrieve DPCON attributes.
* @mc_io: Pointer to MC portal's I/O object
@@ -295,38 +89,3 @@ int dpcon_get_attributes(struct fsl_mc_io *mc_io,
return 0;
}
-
-/**
- * dpcon_get_api_version() - Get Data Path Concentrator API version
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver: Major version of DPCON API
- * @minor_ver: Minor version of DPCON API
- *
- * Return: '0' on Success; Error code otherwise
- */
-int dpcon_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver)
-{
- struct dpcon_rsp_get_api_version *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPCON_CMDID_GET_API_VERSION,
- cmd_flags, 0);
-
- /* send command to mc */
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpcon_rsp_get_api_version *)cmd.params;
- *major_ver = le16_to_cpu(rsp_params->major);
- *minor_ver = le16_to_cpu(rsp_params->minor);
-
- return 0;
-}
diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c
index dcb9d516a1..30640fd353 100644
--- a/drivers/bus/fslmc/mc/dpdmai.c
+++ b/drivers/bus/fslmc/mc/dpdmai.c
@@ -76,92 +76,6 @@ int dpdmai_close(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpdmai_create() - Create the DPDMAI object
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg: Configuration structure
- * @obj_id: Returned object id
- *
- * Create the DPDMAI object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmai_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpdmai_cfg *cfg,
- uint32_t *obj_id)
-{
- struct dpdmai_cmd_create *cmd_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMAI_CMDID_CREATE,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpdmai_cmd_create *)cmd.params;
- cmd_params->num_queues = cfg->num_queues;
- cmd_params->priorities[0] = cfg->priorities[0];
- cmd_params->priorities[1] = cfg->priorities[1];
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- *obj_id = mc_cmd_read_object_id(&cmd);
-
- return 0;
-}
-
-/**
- * dpdmai_destroy() - Destroy the DPDMAI object and release all its resources.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id: The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched for within the parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpdmai_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id)
-{
- struct dpdmai_cmd_destroy *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMAI_CMDID_DESTROY,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpdmai_cmd_destroy *)cmd.params;
- cmd_params->dpdmai_id = cpu_to_le32(object_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpdmai_enable() - Enable the DPDMAI, allow sending and receiving frames.
* @mc_io: Pointer to MC portal's I/O object
@@ -208,64 +122,6 @@ int dpdmai_disable(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpdmai_is_enabled() - Check if the DPDMAI is enabled.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMAI object
- * @en: Returns '1' if object is enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmai_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct dpdmai_rsp_is_enabled *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMAI_CMDID_IS_ENABLED,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpdmai_rsp_is_enabled *)cmd.params;
- *en = dpdmai_get_field(rsp_params->en, ENABLE);
-
- return 0;
-}
-
-/**
- * dpdmai_reset() - Reset the DPDMAI, returns the object to initial state.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMAI object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmai_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMAI_CMDID_RESET,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpdmai_get_attributes() - Retrieve DPDMAI attributes.
* @mc_io: Pointer to MC portal's I/O object
diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c
index a3382ed142..317924c856 100644
--- a/drivers/bus/fslmc/mc/dpio.c
+++ b/drivers/bus/fslmc/mc/dpio.c
@@ -76,95 +76,6 @@ int dpio_close(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpio_create() - Create the DPIO object.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg: Configuration structure
- * @obj_id: Returned object id
- *
- * Create the DPIO object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpio_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpio_cfg *cfg,
- uint32_t *obj_id)
-{
- struct dpio_cmd_create *cmd_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPIO_CMDID_CREATE,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpio_cmd_create *)cmd.params;
- cmd_params->num_priorities = cfg->num_priorities;
- dpio_set_field(cmd_params->channel_mode,
- CHANNEL_MODE,
- cfg->channel_mode);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- *obj_id = mc_cmd_read_object_id(&cmd);
-
- return 0;
-}
-
-/**
- * dpio_destroy() - Destroy the DPIO object and release all its resources.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id: The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched for within the parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return: '0' on Success; Error code otherwise
- */
-int dpio_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id)
-{
- struct dpio_cmd_destroy *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPIO_CMDID_DESTROY,
- cmd_flags,
- dprc_token);
-
- /* set object id to destroy */
- cmd_params = (struct dpio_cmd_destroy *)cmd.params;
- cmd_params->dpio_id = cpu_to_le32(object_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpio_enable() - Enable the DPIO, allow I/O portal operations.
* @mc_io: Pointer to MC portal's I/O object
@@ -211,40 +122,6 @@ int dpio_disable(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpio_is_enabled() - Check if the DPIO is enabled.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPIO object
- * @en: Returns '1' if object is enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpio_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct dpio_rsp_is_enabled *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPIO_CMDID_IS_ENABLED, cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpio_rsp_is_enabled *)cmd.params;
- *en = dpio_get_field(rsp_params->en, ENABLE);
-
- return 0;
-}
-
/**
* dpio_reset() - Reset the DPIO, returns the object to initial state.
* @mc_io: Pointer to MC portal's I/O object
@@ -341,41 +218,6 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpio_get_stashing_destination() - Get the stashing destination.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPIO object
- * @sdest: Returns the stashing destination value
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t *sdest)
-{
- struct dpio_stashing_dest *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_STASHING_DEST,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpio_stashing_dest *)cmd.params;
- *sdest = rsp_params->sdest;
-
- return 0;
-}
-
/**
* dpio_add_static_dequeue_channel() - Add a static dequeue channel.
* @mc_io: Pointer to MC portal's I/O object
@@ -444,36 +286,3 @@ int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
}
-
-/**
- * dpio_get_api_version() - Get Data Path I/O API version
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver: Major version of data path i/o API
- * @minor_ver: Minor version of data path i/o API
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpio_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver)
-{
- struct dpio_rsp_get_api_version *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_API_VERSION,
- cmd_flags,
- 0);
-
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- rsp_params = (struct dpio_rsp_get_api_version *)cmd.params;
- *major_ver = le16_to_cpu(rsp_params->major);
- *minor_ver = le16_to_cpu(rsp_params->minor);
-
- return 0;
-}
diff --git a/drivers/bus/fslmc/mc/fsl_dpbp.h b/drivers/bus/fslmc/mc/fsl_dpbp.h
index 8a021f55f1..f50131ba45 100644
--- a/drivers/bus/fslmc/mc/fsl_dpbp.h
+++ b/drivers/bus/fslmc/mc/fsl_dpbp.h
@@ -34,17 +34,6 @@ struct dpbp_cfg {
uint32_t options;
};
-int dpbp_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpbp_cfg *cfg,
- uint32_t *obj_id);
-
-int dpbp_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t obj_id);
-
__rte_internal
int dpbp_enable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -55,11 +44,6 @@ int dpbp_disable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
-int dpbp_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
__rte_internal
int dpbp_reset(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -90,10 +74,6 @@ int dpbp_get_attributes(struct fsl_mc_io *mc_io,
* BPSCN write will attempt to allocate into a cache (coherent write)
*/
#define DPBP_NOTIF_OPT_COHERENT_WRITE 0x00000001
-int dpbp_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver);
__rte_internal
int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io,
diff --git a/drivers/bus/fslmc/mc/fsl_dpci.h b/drivers/bus/fslmc/mc/fsl_dpci.h
index 81fd3438aa..9fdc3a8ea5 100644
--- a/drivers/bus/fslmc/mc/fsl_dpci.h
+++ b/drivers/bus/fslmc/mc/fsl_dpci.h
@@ -37,10 +37,6 @@ int dpci_open(struct fsl_mc_io *mc_io,
int dpci_id,
uint16_t *token);
-int dpci_close(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* Enable the Order Restoration support
*/
@@ -66,34 +62,10 @@ struct dpci_cfg {
uint8_t num_of_priorities;
};
-int dpci_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpci_cfg *cfg,
- uint32_t *obj_id);
-
-int dpci_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id);
-
int dpci_enable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
-int dpci_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
-int dpci_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
-int dpci_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* struct dpci_attr - Structure representing DPCI attributes
* @id: DPCI object ID
@@ -224,25 +196,4 @@ int dpci_get_tx_queue(struct fsl_mc_io *mc_io,
uint8_t priority,
struct dpci_tx_queue_attr *attr);
-int dpci_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver);
-
-__rte_internal
-int dpci_set_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t index,
- uint8_t options,
- struct opr_cfg *cfg);
-
-__rte_internal
-int dpci_get_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t index,
- struct opr_cfg *cfg,
- struct opr_qry *qry);
-
#endif /* __FSL_DPCI_H */
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index 7caa6c68a1..0b3add5d52 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -26,10 +26,6 @@ int dpcon_open(struct fsl_mc_io *mc_io,
int dpcon_id,
uint16_t *token);
-int dpcon_close(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* struct dpcon_cfg - Structure representing DPCON configuration
* @num_priorities: Number of priorities for the DPCON channel (1-8)
@@ -38,34 +34,6 @@ struct dpcon_cfg {
uint8_t num_priorities;
};
-int dpcon_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpcon_cfg *cfg,
- uint32_t *obj_id);
-
-int dpcon_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t obj_id);
-
-int dpcon_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
-int dpcon_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
-int dpcon_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
-int dpcon_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* struct dpcon_attr - Structure representing DPCON attributes
* @id: DPCON object ID
@@ -84,9 +52,4 @@ int dpcon_get_attributes(struct fsl_mc_io *mc_io,
uint16_t token,
struct dpcon_attr *attr);
-int dpcon_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver);
-
#endif /* __FSL_DPCON_H */
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
index 19328c00a0..eb1d3c1658 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
@@ -47,17 +47,6 @@ struct dpdmai_cfg {
uint8_t priorities[DPDMAI_PRIO_NUM];
};
-int dpdmai_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpdmai_cfg *cfg,
- uint32_t *obj_id);
-
-int dpdmai_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id);
-
__rte_internal
int dpdmai_enable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -68,15 +57,6 @@ int dpdmai_disable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
-int dpdmai_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
-int dpdmai_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* struct dpdmai_attr - Structure representing DPDMAI attributes
* @id: DPDMAI object ID
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index c2db76bdf8..0ddcdb41ec 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -50,17 +50,6 @@ struct dpio_cfg {
};
-int dpio_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpio_cfg *cfg,
- uint32_t *obj_id);
-
-int dpio_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id);
-
__rte_internal
int dpio_enable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -71,11 +60,6 @@ int dpio_disable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
-int dpio_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
__rte_internal
int dpio_reset(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -87,11 +71,6 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
uint16_t token,
uint8_t sdest);
-int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t *sdest);
-
__rte_internal
int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -135,9 +114,4 @@ int dpio_get_attributes(struct fsl_mc_io *mc_io,
uint16_t token,
struct dpio_attr *attr);
-int dpio_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver);
-
#endif /* __FSL_DPIO_H */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index d9619848d8..06b3e81f26 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -109,13 +109,6 @@ void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp)
}
}
-int dpaa2_dpbp_supported(void)
-{
- if (TAILQ_EMPTY(&dpbp_dev_list))
- return -1;
- return 0;
-}
-
static struct rte_dpaa2_object rte_dpaa2_dpbp_obj = {
.dev_type = DPAA2_BPOOL,
.create = dpaa2_create_dpbp_device,
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ac24f01451..b72017bd32 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -454,9 +454,6 @@ struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void);
__rte_internal
void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp);
-__rte_internal
-int dpaa2_dpbp_supported(void);
-
__rte_internal
struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void);
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 54096e8774..12beb148fb 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -36,6 +36,4 @@ int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
__rte_internal
uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
-uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
-
#endif /* !_FSL_QBMAN_DEBUG_H */
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index eb68c9cab5..b24c809fa1 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -50,14 +50,6 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
*/
int qbman_swp_update(struct qbman_swp *p, int stash_off);
-/**
- * qbman_swp_finish() - Create and destroy a functional object representing
- * the given QBMan portal descriptor.
- * @p: the qbman_swp object to be destroyed.
- *
- */
-void qbman_swp_finish(struct qbman_swp *p);
-
/**
* qbman_swp_invalidate() - Invalidate the cache enabled area of the QBMan
* portal. This is required to be called if a portal moved to another core
@@ -67,14 +59,6 @@ void qbman_swp_finish(struct qbman_swp *p);
*/
void qbman_swp_invalidate(struct qbman_swp *p);
-/**
- * qbman_swp_get_desc() - Get the descriptor of the given portal object.
- * @p: the given portal object.
- *
- * Return the descriptor for this portal.
- */
-const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p);
-
/**************/
/* Interrupts */
/**************/
@@ -92,32 +76,6 @@ const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p);
/* Volatile dequeue command interrupt */
#define QBMAN_SWP_INTERRUPT_VDCI ((uint32_t)0x00000020)
-/**
- * qbman_swp_interrupt_get_vanish() - Get the data in software portal
- * interrupt status disable register.
- * @p: the given software portal object.
- *
- * Return the settings in SWP_ISDR register.
- */
-uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p);
-
-/**
- * qbman_swp_interrupt_set_vanish() - Set the data in software portal
- * interrupt status disable register.
- * @p: the given software portal object.
- * @mask: The value to set in SWP_ISDR register.
- */
-void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask);
-
-/**
- * qbman_swp_interrupt_read_status() - Get the data in software portal
- * interrupt status register.
- * @p: the given software portal object.
- *
- * Return the settings in SWP_ISR register.
- */
-uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
-
/**
* qbman_swp_interrupt_clear_status() - Set the data in software portal
* interrupt status register.
@@ -127,13 +85,6 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
__rte_internal
void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask);
-/**
- * qbman_swp_dqrr_thrshld_read_status() - Get the data in software portal
- * DQRR interrupt threshold register.
- * @p: the given software portal object.
- */
-uint32_t qbman_swp_dqrr_thrshld_read_status(struct qbman_swp *p);
-
/**
* qbman_swp_dqrr_thrshld_write() - Set the data in software portal
* DQRR interrupt threshold register.
@@ -142,13 +93,6 @@ uint32_t qbman_swp_dqrr_thrshld_read_status(struct qbman_swp *p);
*/
void qbman_swp_dqrr_thrshld_write(struct qbman_swp *p, uint32_t mask);
-/**
- * qbman_swp_intr_timeout_read_status() - Get the data in software portal
- * Interrupt Time-Out period register.
- * @p: the given software portal object.
- */
-uint32_t qbman_swp_intr_timeout_read_status(struct qbman_swp *p);
-
/**
* qbman_swp_intr_timeout_write() - Set the data in software portal
* Interrupt Time-Out period register.
@@ -157,15 +101,6 @@ uint32_t qbman_swp_intr_timeout_read_status(struct qbman_swp *p);
*/
void qbman_swp_intr_timeout_write(struct qbman_swp *p, uint32_t mask);
-/**
- * qbman_swp_interrupt_get_trigger() - Get the data in software portal
- * interrupt enable register.
- * @p: the given software portal object.
- *
- * Return the settings in SWP_IER register.
- */
-uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p);
-
/**
* qbman_swp_interrupt_set_trigger() - Set the data in software portal
* interrupt enable register.
@@ -174,15 +109,6 @@ uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p);
*/
void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask);
-/**
- * qbman_swp_interrupt_get_inhibit() - Get the data in software portal
- * interrupt inhibit register.
- * @p: the given software portal object.
- *
- * Return the settings in SWP_IIR register.
- */
-int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p);
-
/**
* qbman_swp_interrupt_set_inhibit() - Set the data in software portal
* interrupt inhibit register.
@@ -268,21 +194,6 @@ int qbman_swp_dequeue_get_timeout(struct qbman_swp *s, unsigned int *timeout);
/* Push-mode dequeuing */
/* ------------------- */
-/* The user of a portal can enable and disable push-mode dequeuing of up to 16
- * channels independently. Toggling is specified not by channel ID, but by the
- * index (from 0 to 15) that has been mapped to the desired channel.
- */
-
-/**
- * qbman_swp_push_get() - Get the push dequeue setup.
- * @s: the software portal object.
- * @channel_idx: the channel index to query.
- * @enabled: returned boolean to show whether the push dequeue is enabled for
- * the given channel.
- */
-void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled);
-
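
The index-based toggle described above comes down to one call per direction through the retained qbman_swp_push_set(). A sketch, assuming a valid portal whose index 0 has already been mapped to the desired channel:

static void toggle_push(struct qbman_swp *swp)
{
	qbman_swp_push_set(swp, 0, 1);	/* enable push dequeue on index 0 */
	/* ... consume results from the DQRR here ... */
	qbman_swp_push_set(swp, 0, 0);	/* disable it again */
}
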
/**
* qbman_swp_push_set() - Enable or disable push dequeue.
* @s: the software portal object.
@@ -363,17 +274,6 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
__rte_internal
void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
uint8_t numframes);
-/**
- * qbman_pull_desc_set_token() - Set dequeue token for pull command
- * @d: the dequeue descriptor
- * @token: the token to be set
- *
- * token is the value that shows up in the dequeue response that can be used to
- * detect when the results have been published. The easiest technique is to zero
- * result "storage" before issuing a dequeue, and use any non-zero 'token' value
- */
-void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token);
-
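
Spelled out, the token technique from the comment above is: zero the result storage, tag the pull with a non-zero token, poll until QBMan publishes a result carrying it. A sketch with minimal error handling; the portal and DMA-able storage are assumed valid, and qbman_check_new_result() is assumed available for the completion check:

static int pull_one(struct qbman_swp *swp, uint32_t fqid,
		    struct qbman_result *storage, uint64_t storage_iova)
{
	struct qbman_pull_desc pd;

	memset(storage, 0, sizeof(*storage));	/* token byte starts as zero */
	qbman_pull_desc_clear(&pd);
	qbman_pull_desc_set_numframes(&pd, 1);
	qbman_pull_desc_set_fq(&pd, fqid);
	qbman_pull_desc_set_storage(&pd, storage, storage_iova, 0);
	qbman_pull_desc_set_token(&pd, 1);	/* any non-zero value works */

	if (qbman_swp_pull(swp, &pd))
		return -EBUSY;	/* portal busy with a previous pull */

	while (!qbman_check_new_result(storage))
		;	/* result with our token not yet published */

	return 0;
}
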
/* Exactly one of the following descriptor "actions" should be set. (Calling any
* one of these will replace the effect of any prior call to one of these.)
* - pull dequeue from the given frame queue (FQ)
@@ -387,30 +287,6 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token);
__rte_internal
void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid);
-/**
- * qbman_pull_desc_set_wq() - Set wqid from which the dequeue command dequeues.
- * @wqid: composed of channel id and wqid within the channel.
- * @dct: the dequeue command type.
- */
-void qbman_pull_desc_set_wq(struct qbman_pull_desc *d, uint32_t wqid,
- enum qbman_pull_type_e dct);
-
-/* qbman_pull_desc_set_channel() - Set channelid from which the dequeue command
- * dequeues.
- * @chid: the channel id to be dequeued.
- * @dct: the dequeue command type.
- */
-void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
- enum qbman_pull_type_e dct);
-
-/**
- * qbman_pull_desc_set_rad() - Decide whether to reschedule the FQ after dequeue
- *
- * @rad: 1 = Reschedule the FQ after dequeue.
- * 0 = Allow the FQ to remain active after dequeue.
- */
-void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad);
-
/**
* qbman_swp_pull() - Issue the pull dequeue command
* @s: the software portal object.
@@ -471,17 +347,6 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
__rte_internal
uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
-/**
- * qbman_get_dqrr_from_idx() - Use index to get the dqrr entry from the
- * given portal
- * @s: the given portal.
- * @idx: the dqrr index.
- *
- * Return dqrr entry object.
- */
-__rte_internal
-struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
-
/* ------------------------------------------------- */
/* Polling user-provided storage for dequeue results */
/* ------------------------------------------------- */
@@ -549,78 +414,6 @@ static inline int qbman_result_is_SCN(const struct qbman_result *dq)
return !qbman_result_is_DQ(dq);
}
-/* Recognise different notification types; only required if the user allows
- * for these to occur and cares about them when they do.
- */
-
-/**
- * qbman_result_is_FQDAN() - Check for FQ Data Availability
- * @dq: the qbman_result object.
- *
- * Return 1 if this is FQDAN.
- */
-int qbman_result_is_FQDAN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_CDAN() - Check for Channel Data Availability
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is CDAN.
- */
-int qbman_result_is_CDAN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_CSCN() - Check for Congestion State Change
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is CSCN.
- */
-int qbman_result_is_CSCN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_BPSCN() - Check for Buffer Pool State Change.
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is BPSCN.
- */
-int qbman_result_is_BPSCN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_CGCU() - Check for Congestion Group Count Update.
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is CGCU.
- */
-int qbman_result_is_CGCU(const struct qbman_result *dq);
-
-/* Frame queue state change notifications; (FQDAN in theory counts too as it
- * leaves an FQ parked, but it is primarily a data availability notification)
- */
-
-/**
- * qbman_result_is_FQRN() - Check for FQ Retirement Notification.
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is FQRN.
- */
-int qbman_result_is_FQRN(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_FQRNI() - Check for FQ Retirement Immediate
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is FQRNI.
- */
-int qbman_result_is_FQRNI(const struct qbman_result *dq);
-
-/**
- * qbman_result_is_FQPN() - Check for FQ Park Notification
- * @dq: the qbman_result object to check.
- *
- * Return 1 if this is FQPN.
- */
-int qbman_result_is_FQPN(const struct qbman_result *dq);
-
/* Parsing frame dequeue results (qbman_result_is_DQ() must be TRUE)
*/
/* FQ empty */
@@ -695,30 +488,6 @@ uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
__rte_internal
uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq);
-/**
- * qbman_result_DQ_fqid() - Get the fqid in dequeue response
- * @dq: the dequeue result.
- *
- * Return fqid.
- */
-uint32_t qbman_result_DQ_fqid(const struct qbman_result *dq);
-
-/**
- * qbman_result_DQ_byte_count() - Get the byte count in dequeue response
- * @dq: the dequeue result.
- *
- * Return the byte count remaining in the FQ.
- */
-uint32_t qbman_result_DQ_byte_count(const struct qbman_result *dq);
-
-/**
- * qbman_result_DQ_frame_count() - Get the frame count in dequeue response
- * @dq: the dequeue result.
- *
- * Return the frame count remaining in the FQ.
- */
-uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq);
-
/**
* qbman_result_DQ_fqd_ctx() - Get the frame queue context in dequeue response
* @dq: the dequeue result.
@@ -780,66 +549,6 @@ uint64_t qbman_result_SCN_ctx(const struct qbman_result *scn);
/* Get the CGID from the CSCN */
#define qbman_result_CSCN_cgid(dq) ((uint16_t)qbman_result_SCN_rid(dq))
-/**
- * qbman_result_bpscn_bpid() - Get the bpid from BPSCN
- * @scn: the state change notification.
- *
- * Return the buffer pool id.
- */
-uint16_t qbman_result_bpscn_bpid(const struct qbman_result *scn);
-
-/**
- * qbman_result_bpscn_has_free_bufs() - Check whether there are free
- * buffers in the pool from BPSCN.
- * @scn: the state change notification.
- *
- * Return the number of free buffers.
- */
-int qbman_result_bpscn_has_free_bufs(const struct qbman_result *scn);
-
-/**
- * qbman_result_bpscn_is_depleted() - Check BPSCN to see whether the
- * buffer pool is depleted.
- * @scn: the state change notification.
- *
- * Return the status of buffer pool depletion.
- */
-int qbman_result_bpscn_is_depleted(const struct qbman_result *scn);
-
-/**
- * qbman_result_bpscn_is_surplus() - Check BPSCN to see whether the buffer
- * pool is surplus or not.
- * @scn: the state change notification.
- *
- * Return the status of buffer pool surplus.
- */
-int qbman_result_bpscn_is_surplus(const struct qbman_result *scn);
-
-/**
- * qbman_result_bpscn_ctx() - Get the BPSCN CTX from BPSCN message
- * @scn: the state change notification.
- *
- * Return the BPSCN context.
- */
-uint64_t qbman_result_bpscn_ctx(const struct qbman_result *scn);
-
-/* Parsing CGCU */
-/**
- * qbman_result_cgcu_cgid() - Check CGCU resource id, i.e. cgid
- * @scn: the state change notification.
- *
- * Return the CGCU resource id.
- */
-uint16_t qbman_result_cgcu_cgid(const struct qbman_result *scn);
-
-/**
- * qbman_result_cgcu_icnt() - Get the I_CNT from CGCU
- * @scn: the state change notification.
- *
- * Return instantaneous count in the CGCU notification.
- */
-uint64_t qbman_result_cgcu_icnt(const struct qbman_result *scn);
-
/************/
/* Enqueues */
/************/
@@ -916,25 +625,6 @@ __rte_internal
void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
uint16_t opr_id, uint16_t seqnum, int incomplete);
-/**
- * qbman_eq_desc_set_orp_hole() - fill a hole in the order-restoration sequence
- * without any enqueue
- * @d: the enqueue descriptor.
- * @opr_id: the order point record id.
- * @seqnum: the order restoration sequence number.
- */
-void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint16_t opr_id,
- uint16_t seqnum);
-
-/**
- * qbman_eq_desc_set_orp_nesn() - advance NESN (Next Expected Sequence Number)
- * without any enqueue
- * @d: the enqueue descriptor.
- * @opr_id: the order point record id.
- * @seqnum: the order restoration sequence number.
- */
-void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
- uint16_t seqnum);
/**
* qbman_eq_desc_set_response() - Set the enqueue response info.
* @d: the enqueue descriptor
@@ -981,27 +671,6 @@ void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
__rte_internal
void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
-/**
- * qbman_eq_desc_set_qd() - Set Queuing Destination for the enqueue command.
- * @d: the enqueue descriptor
- * @qdid: the id of the queuing destination to be enqueued.
- * @qd_bin: the queuing destination bin
- * @qd_prio: the queuing destination priority.
- */
-__rte_internal
-void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
- uint16_t qd_bin, uint8_t qd_prio);
-
-/**
- * qbman_eq_desc_set_eqdi() - enable/disable EQDI interrupt
- * @d: the enqueue descriptor
- * @enable: boolean to enable/disable EQDI
- *
- * Determines whether or not the portal's EQDI interrupt source should be
- * asserted after the enqueue command is completed.
- */
-void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable);
-
/**
* qbman_eq_desc_set_dca() - Set DCA mode in the enqueue command.
* @d: the enqueue descriptor.
@@ -1060,19 +729,6 @@ uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
__rte_internal
uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp);
-/**
- * qbman_swp_enqueue() - Issue an enqueue command.
- * @s: the software portal used for enqueue.
- * @d: the enqueue descriptor.
- * @fd: the frame descriptor to be enqueued.
- *
- * Please note that 'fd' should only be NULL if the "action" of the
- * descriptor is "orp_hole" or "orp_nesn".
- *
- * Return 0 for a successful enqueue, -EBUSY if the EQCR is not ready.
- */
-int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
- const struct qbman_fd *fd);
/**
* qbman_swp_enqueue_multiple() - Enqueue multiple frames with same
eq descriptor
@@ -1171,13 +827,6 @@ void qbman_release_desc_clear(struct qbman_release_desc *d);
__rte_internal
void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid);
-/**
- * qbman_release_desc_set_rcdi() - Determines whether or not the portal's RCDI
- * interrupt source should be asserted after the release command is completed.
- * @d: the qbman release descriptor.
- */
-void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable);
-
/**
* qbman_swp_release() - Issue a buffer release command.
* @s: the software portal object.
@@ -1217,116 +866,4 @@ __rte_internal
int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
unsigned int num_buffers);
- /*****************/
- /* FQ management */
- /*****************/
-/**
- * qbman_swp_fq_schedule() - Move the fq to the scheduled state.
- * @s: the software portal object.
- * @fqid: the index of frame queue to be scheduled.
- *
- * There are a couple of different ways that a FQ can end up in the
- * parked state; this schedules it.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid);
-
-/**
- * qbman_swp_fq_force() - Force the FQ to fully scheduled state.
- * @s: the software portal object.
- * @fqid: the index of frame queue to be forced.
- *
- * Force eligible will force a tentatively-scheduled FQ to be fully-scheduled
- * and thus be available for selection by any channel-dequeuing behaviour (push
- * or pull). If the FQ is subsequently "dequeued" from the channel and is still
- * empty at the time this happens, the resulting dq_entry will have no FD.
- * (qbman_result_DQ_fd() will return NULL.)
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid);
-
-/**
- * These functions change the FQ flow-control stuff between XON/XOFF. (The
- * default is XON.) This setting doesn't affect enqueues to the FQ, just
- * dequeues. XOFF FQs will remain in the tentatively-scheduled state, even when
- * non-empty, meaning they won't be selected for scheduled dequeuing. If a FQ is
- * changed to XOFF after it had already become truly-scheduled to a channel, and
- * a pull dequeue of that channel occurs that selects that FQ for dequeuing,
- * then the resulting dq_entry will have no FD. (qbman_result_DQ_fd() will
- * return NULL.)
- */
-/**
- * qbman_swp_fq_xon() - XON the frame queue.
- * @s: the software portal object.
- * @fqid: the index of frame queue.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid);
-/**
- * qbman_swp_fq_xoff() - XOFF the frame queue.
- * @s: the software portal object.
- * @fqid: the index of frame queue.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid);
-
- /**********************/
- /* Channel management */
- /**********************/
-
-/**
- * If the user has been allocated a channel object that is going to generate
- * CDANs to another channel, then these functions will be necessary.
- * CDAN-enabled channels only generate a single CDAN notification, after which
- * they need to be reenabled before they'll generate another. (The idea is
- * that pull dequeuing will occur in reaction to the CDAN, followed by a
- * reenable step.) Each function generates a distinct command to hardware, so a
- * combination function is provided if the user wishes to modify the "context"
- * (which shows up in each CDAN message) each time they reenable, as a single
- * command to hardware.
- */
-
-/**
- * qbman_swp_CDAN_set_context() - Set CDAN context
- * @s: the software portal object.
- * @channelid: the channel index.
- * @ctx: the context to be set in CDAN.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_CDAN_set_context(struct qbman_swp *s, uint16_t channelid,
- uint64_t ctx);
-
-/**
- * qbman_swp_CDAN_enable() - Enable CDAN for the channel.
- * @s: the software portal object.
- * @channelid: the index of the channel to generate CDAN.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_CDAN_enable(struct qbman_swp *s, uint16_t channelid);
-
-/**
- * qbman_swp_CDAN_disable() - disable CDAN for the channel.
- * @s: the software portal object.
- * @channelid: the index of the channel to generate CDAN.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_CDAN_disable(struct qbman_swp *s, uint16_t channelid);
-
-/**
- * qbman_swp_CDAN_set_context_enable() - Set CDAN context and enable CDAN
- * @s: the software portal object.
- * @channelid: the index of the channel to generate CDAN.
- * @ctx: the context set in CDAN.
- *
- * Return 0 for success, or negative error code for failure.
- */
-int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
- uint64_t ctx);
#endif /* !_FSL_QBMAN_PORTAL_H */
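For reference, the CDAN helpers deleted from this header were meant to be
driven in a notify/re-arm cycle, as the removed comment describes: a CDAN
fires once and the channel stays silent until it is re-enabled. A minimal
sketch of such a consumer, assuming the pre-removal API;
handle_channel_cdan is a hypothetical name and the pull-dequeue plumbing
is elided:

#include <fsl_qbman_portal.h>

/* Hypothetical consumer of the (now removed) CDAN API: a CDAN fires
 * once, so after servicing it the channel must be re-armed before the
 * next notification can arrive.
 */
static void handle_channel_cdan(struct qbman_swp *swp,
				const struct qbman_result *dq,
				uint16_t channelid)
{
	if (!qbman_result_is_CDAN(dq))
		return;

	/* ... pull-dequeue the channel's frame queues here ... */

	/* Re-arm, refreshing the context in the same hardware command. */
	qbman_swp_CDAN_set_context_enable(swp, channelid,
					  qbman_result_SCN_ctx(dq));
}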
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 34374ae4b6..2c6a7dcd16 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -59,8 +59,3 @@ uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r)
{
return (r->frm_cnt & 0x00FFFFFF);
}
-
-uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r)
-{
- return r->byte_cnt;
-}
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 77c9d508c4..b8bcfb7189 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -82,10 +82,6 @@ qbman_swp_enqueue_ring_mode_cinh_read_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
static int
-qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
- const struct qbman_eq_desc *d,
- const struct qbman_fd *fd);
-static int
qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
@@ -377,80 +373,30 @@ int qbman_swp_update(struct qbman_swp *p, int stash_off)
return 0;
}
-void qbman_swp_finish(struct qbman_swp *p)
-{
-#ifdef QBMAN_CHECKING
- QBMAN_BUG_ON(p->mc.check != swp_mc_can_start);
-#endif
- qbman_swp_sys_finish(&p->sys);
- portal_idx_map[p->desc.idx] = NULL;
- free(p);
-}
-
-const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p)
-{
- return &p->desc;
-}
-
/**************/
/* Interrupts */
/**************/
-uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p)
-{
- return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISDR);
-}
-
-void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask)
-{
- qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISDR, mask);
-}
-
-uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p)
-{
- return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISR);
-}
-
void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask)
{
qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISR, mask);
}
-uint32_t qbman_swp_dqrr_thrshld_read_status(struct qbman_swp *p)
-{
- return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_DQRR_ITR);
-}
-
void qbman_swp_dqrr_thrshld_write(struct qbman_swp *p, uint32_t mask)
{
qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_DQRR_ITR, mask);
}
-uint32_t qbman_swp_intr_timeout_read_status(struct qbman_swp *p)
-{
- return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ITPR);
-}
-
void qbman_swp_intr_timeout_write(struct qbman_swp *p, uint32_t mask)
{
qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ITPR, mask);
}
-uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p)
-{
- return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IER);
-}
-
void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask)
{
qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IER, mask);
}
-int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p)
-{
- return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IIR);
-}
-
void qbman_swp_interrupt_set_inhibit(struct qbman_swp *p, int inhibit)
{
qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IIR,
@@ -643,28 +589,6 @@ void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
d->eq.seqnum &= ~(1 << QB_ENQUEUE_CMD_NLIS_SHIFT);
}
-void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint16_t opr_id,
- uint16_t seqnum)
-{
- d->eq.verb |= 1 << QB_ENQUEUE_CMD_ORP_ENABLE_SHIFT;
- d->eq.verb &= ~QB_ENQUEUE_CMD_EC_OPTION_MASK;
- d->eq.orpid = opr_id;
- d->eq.seqnum = seqnum;
- d->eq.seqnum &= ~(1 << QB_ENQUEUE_CMD_NLIS_SHIFT);
- d->eq.seqnum &= ~(1 << QB_ENQUEUE_CMD_IS_NESN_SHIFT);
-}
-
-void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
- uint16_t seqnum)
-{
- d->eq.verb |= 1 << QB_ENQUEUE_CMD_ORP_ENABLE_SHIFT;
- d->eq.verb &= ~QB_ENQUEUE_CMD_EC_OPTION_MASK;
- d->eq.orpid = opr_id;
- d->eq.seqnum = seqnum;
- d->eq.seqnum &= ~(1 << QB_ENQUEUE_CMD_NLIS_SHIFT);
- d->eq.seqnum |= 1 << QB_ENQUEUE_CMD_IS_NESN_SHIFT;
-}
-
void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
dma_addr_t storage_phys,
int stash)
@@ -684,23 +608,6 @@ void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid)
d->eq.tgtid = fqid;
}
-void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
- uint16_t qd_bin, uint8_t qd_prio)
-{
- d->eq.verb |= 1 << QB_ENQUEUE_CMD_TARGET_TYPE_SHIFT;
- d->eq.tgtid = qdid;
- d->eq.qdbin = qd_bin;
- d->eq.qpri = qd_prio;
-}
-
-void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable)
-{
- if (enable)
- d->eq.verb |= 1 << QB_ENQUEUE_CMD_IRQ_ON_DISPATCH_SHIFT;
- else
- d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_IRQ_ON_DISPATCH_SHIFT);
-}
-
void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
uint8_t dqrr_idx, int park)
{
@@ -789,13 +696,6 @@ static int qbman_swp_enqueue_array_mode_mem_back(struct qbman_swp *s,
return 0;
}
-static inline int qbman_swp_enqueue_array_mode(struct qbman_swp *s,
- const struct qbman_eq_desc *d,
- const struct qbman_fd *fd)
-{
- return qbman_swp_enqueue_array_mode_ptr(s, d, fd);
-}
-
static int qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd)
@@ -873,44 +773,6 @@ static int qbman_swp_enqueue_ring_mode_cinh_read_direct(
return 0;
}
-static int qbman_swp_enqueue_ring_mode_cinh_direct(
- struct qbman_swp *s,
- const struct qbman_eq_desc *d,
- const struct qbman_fd *fd)
-{
- uint32_t *p;
- const uint32_t *cl = qb_cl(d);
- uint32_t eqcr_ci, full_mask, half_mask;
-
- half_mask = (s->eqcr.pi_ci_mask>>1);
- full_mask = s->eqcr.pi_ci_mask;
- if (!s->eqcr.available) {
- eqcr_ci = s->eqcr.ci;
- s->eqcr.ci = qbman_cinh_read(&s->sys,
- QBMAN_CINH_SWP_EQCR_CI) & full_mask;
- s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
- eqcr_ci, s->eqcr.ci);
- if (!s->eqcr.available)
- return -EBUSY;
- }
-
- p = qbman_cinh_write_start_wo_shadow(&s->sys,
- QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
- memcpy_byte_by_byte(&p[1], &cl[1], 28);
- memcpy_byte_by_byte(&p[8], fd, sizeof(*fd));
- lwsync();
-
- /* Set the verb byte, have to substitute in the valid-bit */
- p[0] = cl[0] | s->eqcr.pi_vb;
- s->eqcr.pi++;
- s->eqcr.pi &= full_mask;
- s->eqcr.available--;
- if (!(s->eqcr.pi & half_mask))
- s->eqcr.pi_vb ^= QB_VALID_BIT;
-
- return 0;
-}
-
static int qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd)
@@ -949,25 +811,6 @@ static int qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
return 0;
}
-static int qbman_swp_enqueue_ring_mode(struct qbman_swp *s,
- const struct qbman_eq_desc *d,
- const struct qbman_fd *fd)
-{
- if (!s->stash_off)
- return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
- else
- return qbman_swp_enqueue_ring_mode_cinh_direct(s, d, fd);
-}
-
-int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
- const struct qbman_fd *fd)
-{
- if (s->sys.eqcr_mode == qman_eqcr_vb_array)
- return qbman_swp_enqueue_array_mode(s, d, fd);
- else /* Use ring mode by default */
- return qbman_swp_enqueue_ring_mode(s, d, fd);
-}
-
static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -1769,14 +1612,6 @@ int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
/* Static (push) dequeue */
/*************************/
-void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled)
-{
- uint16_t src = (s->sdq >> QB_SDQCR_SRC_SHIFT) & QB_SDQCR_SRC_MASK;
-
- QBMAN_BUG_ON(channel_idx > 15);
- *enabled = src | (1 << channel_idx);
-}
-
void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable)
{
uint16_t dqsrc;
@@ -1845,11 +1680,6 @@ void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
d->pull.numf = numframes - 1;
}
-void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token)
-{
- d->pull.tok = token;
-}
-
void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid)
{
d->pull.verb |= 1 << QB_VDQCR_VERB_DCT_SHIFT;
@@ -1857,34 +1687,6 @@ void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid)
d->pull.dq_src = fqid;
}
-void qbman_pull_desc_set_wq(struct qbman_pull_desc *d, uint32_t wqid,
- enum qbman_pull_type_e dct)
-{
- d->pull.verb |= dct << QB_VDQCR_VERB_DCT_SHIFT;
- d->pull.verb |= qb_pull_dt_workqueue << QB_VDQCR_VERB_DT_SHIFT;
- d->pull.dq_src = wqid;
-}
-
-void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
- enum qbman_pull_type_e dct)
-{
- d->pull.verb |= dct << QB_VDQCR_VERB_DCT_SHIFT;
- d->pull.verb |= qb_pull_dt_channel << QB_VDQCR_VERB_DT_SHIFT;
- d->pull.dq_src = chid;
-}
-
-void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad)
-{
- if (d->pull.verb & (1 << QB_VDQCR_VERB_RLS_SHIFT)) {
- if (rad)
- d->pull.verb |= 1 << QB_VDQCR_VERB_RAD_SHIFT;
- else
- d->pull.verb &= ~(1 << QB_VDQCR_VERB_RAD_SHIFT);
- } else {
- printf("The RAD feature is not valid when RLS = 0\n");
- }
-}
-
static int qbman_swp_pull_direct(struct qbman_swp *s,
struct qbman_pull_desc *d)
{
@@ -2303,47 +2105,6 @@ int qbman_result_is_DQ(const struct qbman_result *dq)
return __qbman_result_is_x(dq, QBMAN_RESULT_DQ);
}
-int qbman_result_is_FQDAN(const struct qbman_result *dq)
-{
- return __qbman_result_is_x(dq, QBMAN_RESULT_FQDAN);
-}
-
-int qbman_result_is_CDAN(const struct qbman_result *dq)
-{
- return __qbman_result_is_x(dq, QBMAN_RESULT_CDAN);
-}
-
-int qbman_result_is_CSCN(const struct qbman_result *dq)
-{
- return __qbman_result_is_x(dq, QBMAN_RESULT_CSCN_MEM) ||
- __qbman_result_is_x(dq, QBMAN_RESULT_CSCN_WQ);
-}
-
-int qbman_result_is_BPSCN(const struct qbman_result *dq)
-{
- return __qbman_result_is_x(dq, QBMAN_RESULT_BPSCN);
-}
-
-int qbman_result_is_CGCU(const struct qbman_result *dq)
-{
- return __qbman_result_is_x(dq, QBMAN_RESULT_CGCU);
-}
-
-int qbman_result_is_FQRN(const struct qbman_result *dq)
-{
- return __qbman_result_is_x(dq, QBMAN_RESULT_FQRN);
-}
-
-int qbman_result_is_FQRNI(const struct qbman_result *dq)
-{
- return __qbman_result_is_x(dq, QBMAN_RESULT_FQRNI);
-}
-
-int qbman_result_is_FQPN(const struct qbman_result *dq)
-{
- return __qbman_result_is_x(dq, QBMAN_RESULT_FQPN);
-}
-
/*********************************/
/* Parsing frame dequeue results */
/*********************************/
@@ -2365,21 +2126,6 @@ uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq)
return dq->dq.oprid;
}
-uint32_t qbman_result_DQ_fqid(const struct qbman_result *dq)
-{
- return dq->dq.fqid;
-}
-
-uint32_t qbman_result_DQ_byte_count(const struct qbman_result *dq)
-{
- return dq->dq.fq_byte_cnt;
-}
-
-uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq)
-{
- return dq->dq.fq_frm_cnt;
-}
-
uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq)
{
return dq->dq.fqd_ctx;
@@ -2408,47 +2154,6 @@ uint64_t qbman_result_SCN_ctx(const struct qbman_result *scn)
return scn->scn.ctx;
}
-/*****************/
-/* Parsing BPSCN */
-/*****************/
-uint16_t qbman_result_bpscn_bpid(const struct qbman_result *scn)
-{
- return (uint16_t)qbman_result_SCN_rid(scn) & 0x3FFF;
-}
-
-int qbman_result_bpscn_has_free_bufs(const struct qbman_result *scn)
-{
- return !(int)(qbman_result_SCN_state(scn) & 0x1);
-}
-
-int qbman_result_bpscn_is_depleted(const struct qbman_result *scn)
-{
- return (int)(qbman_result_SCN_state(scn) & 0x2);
-}
-
-int qbman_result_bpscn_is_surplus(const struct qbman_result *scn)
-{
- return (int)(qbman_result_SCN_state(scn) & 0x4);
-}
-
-uint64_t qbman_result_bpscn_ctx(const struct qbman_result *scn)
-{
- return qbman_result_SCN_ctx(scn);
-}
-
-/*****************/
-/* Parsing CGCU */
-/*****************/
-uint16_t qbman_result_cgcu_cgid(const struct qbman_result *scn)
-{
- return (uint16_t)qbman_result_SCN_rid(scn) & 0xFFFF;
-}
-
-uint64_t qbman_result_cgcu_icnt(const struct qbman_result *scn)
-{
- return qbman_result_SCN_ctx(scn);
-}
-
/********************/
/* Parsing EQ RESP */
/********************/
@@ -2492,14 +2197,6 @@ void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid)
d->br.bpid = bpid;
}
-void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable)
-{
- if (enable)
- d->br.verb |= 1 << QB_BR_RCDI_SHIFT;
- else
- d->br.verb &= ~(1 << QB_BR_RCDI_SHIFT);
-}
-
#define RAR_IDX(rar) ((rar) & 0x7)
#define RAR_VB(rar) ((rar) & 0x80)
#define RAR_SUCCESS(rar) ((rar) & 0x100)
@@ -2751,60 +2448,6 @@ struct qbman_alt_fq_state_rslt {
#define ALT_FQ_FQID_MASK 0x00FFFFFF
-static int qbman_swp_alt_fq_state(struct qbman_swp *s, uint32_t fqid,
- uint8_t alt_fq_verb)
-{
- struct qbman_alt_fq_state_desc *p;
- struct qbman_alt_fq_state_rslt *r;
-
- /* Start the management command */
- p = qbman_swp_mc_start(s);
- if (!p)
- return -EBUSY;
-
- p->fqid = fqid & ALT_FQ_FQID_MASK;
-
- /* Complete the management command */
- r = qbman_swp_mc_complete(s, p, alt_fq_verb);
- if (!r) {
- pr_err("qbman: mgmt cmd failed, no response (verb=0x%x)\n",
- alt_fq_verb);
- return -EIO;
- }
-
- /* Decode the outcome */
- QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != alt_fq_verb);
-
- /* Determine success or failure */
- if (r->rslt != QBMAN_MC_RSLT_OK) {
- pr_err("ALT FQID %d failed: verb = 0x%08x, code = 0x%02x\n",
- fqid, alt_fq_verb, r->rslt);
- return -EIO;
- }
-
- return 0;
-}
-
-int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid)
-{
- return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_SCHEDULE);
-}
-
-int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid)
-{
- return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_FORCE);
-}
-
-int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid)
-{
- return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XON);
-}
-
-int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid)
-{
- return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XOFF);
-}
-
/**********************/
/* Channel management */
/**********************/
@@ -2834,87 +2477,7 @@ struct qbman_cdan_ctrl_rslt {
#define CODE_CDAN_WE_EN 0x1
#define CODE_CDAN_WE_CTX 0x4
-static int qbman_swp_CDAN_set(struct qbman_swp *s, uint16_t channelid,
- uint8_t we_mask, uint8_t cdan_en,
- uint64_t ctx)
-{
- struct qbman_cdan_ctrl_desc *p;
- struct qbman_cdan_ctrl_rslt *r;
-
- /* Start the management command */
- p = qbman_swp_mc_start(s);
- if (!p)
- return -EBUSY;
-
- /* Encode the caller-provided attributes */
- p->ch = channelid;
- p->we = we_mask;
- if (cdan_en)
- p->ctrl = 1;
- else
- p->ctrl = 0;
- p->cdan_ctx = ctx;
-
- /* Complete the management command */
- r = qbman_swp_mc_complete(s, p, QBMAN_WQCHAN_CONFIGURE);
- if (!r) {
- pr_err("qbman: wqchan config failed, no response\n");
- return -EIO;
- }
-
- /* Decode the outcome */
- QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK)
- != QBMAN_WQCHAN_CONFIGURE);
-
- /* Determine success or failure */
- if (r->rslt != QBMAN_MC_RSLT_OK) {
- pr_err("CDAN cQID %d failed: code = 0x%02x\n",
- channelid, r->rslt);
- return -EIO;
- }
-
- return 0;
-}
-
-int qbman_swp_CDAN_set_context(struct qbman_swp *s, uint16_t channelid,
- uint64_t ctx)
-{
- return qbman_swp_CDAN_set(s, channelid,
- CODE_CDAN_WE_CTX,
- 0, ctx);
-}
-
-int qbman_swp_CDAN_enable(struct qbman_swp *s, uint16_t channelid)
-{
- return qbman_swp_CDAN_set(s, channelid,
- CODE_CDAN_WE_EN,
- 1, 0);
-}
-
-int qbman_swp_CDAN_disable(struct qbman_swp *s, uint16_t channelid)
-{
- return qbman_swp_CDAN_set(s, channelid,
- CODE_CDAN_WE_EN,
- 0, 0);
-}
-
-int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
- uint64_t ctx)
-{
- return qbman_swp_CDAN_set(s, channelid,
- CODE_CDAN_WE_EN | CODE_CDAN_WE_CTX,
- 1, ctx);
-}
-
uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr)
{
return QBMAN_IDX_FROM_DQRR(dqrr);
}
-
-struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx)
-{
- struct qbman_result *dq;
-
- dq = qbman_cena_read(&s->sys, QBMAN_CENA_SWP_DQRR(idx));
- return dq;
-}
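One detail worth keeping in mind from the ring-mode enqueue paths
consolidated above is the EQCR credit/valid-bit protocol. A reduced
sketch of that protocol follows; read_hw_eqcr_ci() and
write_hw_eqcr_entry() are placeholders for the real portal accessors,
while qm_cyc_diff() and QB_VALID_BIT are the driver's own:

static int eqcr_enqueue_sketch(struct qbman_swp *s, const uint32_t *cmd)
{
	uint32_t full_mask = s->eqcr.pi_ci_mask;
	uint32_t half_mask = full_mask >> 1;

	if (!s->eqcr.available) {
		/* Out of local credits: re-read the hardware consumer
		 * index and recompute how many EQCR slots are free.
		 */
		uint32_t old_ci = s->eqcr.ci;

		s->eqcr.ci = read_hw_eqcr_ci(s) & full_mask; /* placeholder */
		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
						old_ci, s->eqcr.ci);
		if (!s->eqcr.available)
			return -EBUSY;
	}

	write_hw_eqcr_entry(s, s->eqcr.pi & half_mask, cmd); /* placeholder */

	s->eqcr.pi = (s->eqcr.pi + 1) & full_mask;
	s->eqcr.available--;
	/* The index space is twice the ring depth; the valid bit flips
	 * each time the producer wraps the ring, which is how hardware
	 * tells fresh entries from stale ones.
	 */
	if (!(s->eqcr.pi & half_mask))
		s->eqcr.pi_vb ^= QB_VALID_BIT;

	return 0;
}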
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 37d45dffe5..f6ded1717e 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -170,16 +170,6 @@ struct rte_fslmc_bus {
__rte_internal
void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
-/**
- * Unregister a DPAA2 driver.
- *
- * @param driver
- * A pointer to a rte_dpaa2_driver structure describing the driver
- * to be unregistered.
- */
-__rte_internal
-void rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver);
-
/** Helper for DPAA2 device registration from driver (eth, crypto) instance */
#define RTE_PMD_REGISTER_DPAA2(nm, dpaa2_drv) \
RTE_INIT(dpaa2initfn_ ##nm) \
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index f44c1a7988..a95c0faa00 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -11,7 +11,6 @@ INTERNAL {
dpaa2_affine_qbman_swp;
dpaa2_alloc_dpbp_dev;
dpaa2_alloc_dq_storage;
- dpaa2_dpbp_supported;
dpaa2_dqrr_size;
dpaa2_eqcr_size;
dpaa2_free_dpbp_dev;
@@ -28,8 +27,6 @@ INTERNAL {
dpbp_get_num_free_bufs;
dpbp_open;
dpbp_reset;
- dpci_get_opr;
- dpci_set_opr;
dpci_set_rx_queue;
dpcon_get_attributes;
dpcon_open;
@@ -61,12 +58,10 @@ INTERNAL {
qbman_eq_desc_set_fq;
qbman_eq_desc_set_no_orp;
qbman_eq_desc_set_orp;
- qbman_eq_desc_set_qd;
qbman_eq_desc_set_response;
qbman_eq_desc_set_token;
qbman_fq_query_state;
qbman_fq_state_frame_count;
- qbman_get_dqrr_from_idx;
qbman_get_dqrr_idx;
qbman_pull_desc_clear;
qbman_pull_desc_set_fq;
@@ -103,7 +98,6 @@ INTERNAL {
rte_dpaa2_intr_disable;
rte_dpaa2_intr_enable;
rte_fslmc_driver_register;
- rte_fslmc_driver_unregister;
rte_fslmc_get_device_count;
rte_fslmc_object_register;
rte_global_active_dqs_list;
diff --git a/drivers/bus/ifpga/ifpga_common.c b/drivers/bus/ifpga/ifpga_common.c
index 78e2eaee4e..7281b169d0 100644
--- a/drivers/bus/ifpga/ifpga_common.c
+++ b/drivers/bus/ifpga/ifpga_common.c
@@ -52,29 +52,6 @@ int rte_ifpga_get_integer32_arg(const char *key __rte_unused,
return 0;
}
-int ifpga_get_integer64_arg(const char *key __rte_unused,
- const char *value, void *extra_args)
-{
- if (!value || !extra_args)
- return -EINVAL;
-
- *(uint64_t *)extra_args = strtoull(value, NULL, 0);
-
- return 0;
-}
-int ifpga_get_unsigned_long(const char *str, int base)
-{
- unsigned long num;
- char *end = NULL;
-
- errno = 0;
-
- num = strtoul(str, &end, base);
- if ((str[0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0))
- return -1;
-
- return num;
-}
int ifpga_afu_id_cmp(const struct rte_afu_id *afu_id0,
const struct rte_afu_id *afu_id1)
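The removed ifpga_get_unsigned_long() was a textbook strict-strtoul()
helper. For reference, the same validation idiom as a self-contained
sketch (parse_ulong_strict is an illustrative name):

#include <errno.h>
#include <stdlib.h>

/* Reject empty input, trailing garbage, and out-of-range values in one
 * pass, exactly as the removed helper did.
 */
static long parse_ulong_strict(const char *str, int base)
{
	char *end = NULL;
	unsigned long num;

	errno = 0;
	num = strtoul(str, &end, base);
	if (str[0] == '\0' || end == NULL || *end != '\0' || errno != 0)
		return -1;	/* invalid input or overflow */

	return num;
}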
diff --git a/drivers/bus/ifpga/ifpga_common.h b/drivers/bus/ifpga/ifpga_common.h
index f9254b9d5d..44381eb78d 100644
--- a/drivers/bus/ifpga/ifpga_common.h
+++ b/drivers/bus/ifpga/ifpga_common.h
@@ -9,9 +9,6 @@ int rte_ifpga_get_string_arg(const char *key __rte_unused,
const char *value, void *extra_args);
int rte_ifpga_get_integer32_arg(const char *key __rte_unused,
const char *value, void *extra_args);
-int ifpga_get_integer64_arg(const char *key __rte_unused,
- const char *value, void *extra_args);
-int ifpga_get_unsigned_long(const char *str, int base);
int ifpga_afu_id_cmp(const struct rte_afu_id *afu_id0,
const struct rte_afu_id *afu_id1);
diff --git a/drivers/common/dpaax/dpaa_of.c b/drivers/common/dpaax/dpaa_of.c
index bb2c8fc66b..ad96eb0b3d 100644
--- a/drivers/common/dpaax/dpaa_of.c
+++ b/drivers/common/dpaax/dpaa_of.c
@@ -242,33 +242,6 @@ of_init_path(const char *dt_path)
return 0;
}
-static void
-destroy_dir(struct dt_dir *d)
-{
- struct dt_file *f, *tmpf;
- struct dt_dir *dd, *tmpd;
-
- list_for_each_entry_safe(f, tmpf, &d->files, node.list) {
- list_del(&f->node.list);
- free(f);
- }
- list_for_each_entry_safe(dd, tmpd, &d->subdirs, node.list) {
- destroy_dir(dd);
- list_del(&dd->node.list);
- free(dd);
- }
-}
-
-void
-of_finish(void)
-{
- DPAAX_HWWARN(!alive, "Double-finish of device-tree driver!");
-
- destroy_dir(&root_dir);
- INIT_LIST_HEAD(&linear);
- alive = 0;
-}
-
static const struct dt_dir *
next_linear(const struct dt_dir *f)
{
diff --git a/drivers/common/dpaax/dpaa_of.h b/drivers/common/dpaax/dpaa_of.h
index aed6bf98b0..0ba3794e9b 100644
--- a/drivers/common/dpaax/dpaa_of.h
+++ b/drivers/common/dpaax/dpaa_of.h
@@ -161,11 +161,6 @@ bool of_device_is_compatible(const struct device_node *dev_node,
__rte_internal
int of_init_path(const char *dt_path);
-/* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
- * full reload is desired without a process exit.
- */
-void of_finish(void);
-
/* Use of this wrapper is recommended. */
static inline int of_init(void)
{
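The destroy_dir()/of_finish() teardown removed above leans on the safe
list-iteration idiom: the *_safe variant caches the successor before the
current node is freed, so the walk survives the free. A minimal sketch,
assuming the driver's list_for_each_entry_safe()/list_del() helpers and
an illustrative node type:

#include <stdlib.h>

struct my_node {
	struct list_head list;
	/* ... payload ... */
};

static void destroy_all(struct list_head *head)
{
	struct my_node *n, *tmp;

	/* 'tmp' holds the successor, so freeing 'n' is safe mid-walk. */
	list_for_each_entry_safe(n, tmp, head, list) {
		list_del(&n->list);	/* unlink before freeing */
		free(n);
	}
}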
diff --git a/drivers/common/dpaax/dpaax_iova_table.c b/drivers/common/dpaax/dpaax_iova_table.c
index 91bee65e7b..357e62c164 100644
--- a/drivers/common/dpaax/dpaax_iova_table.c
+++ b/drivers/common/dpaax/dpaax_iova_table.c
@@ -346,45 +346,6 @@ dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length)
return 0;
}
-/* dpaax_iova_table_dump
- * Dump the table, with its entries, on screen. Only works in Debug Mode
- * Not for weak hearted - the tables can get quite large
- */
-void
-dpaax_iova_table_dump(void)
-{
- unsigned int i, j;
- struct dpaax_iovat_element *entry;
-
- /* In case DEBUG is not enabled, some 'if' conditions might misbehave
- * as they have nothing else in them except a DPAAX_DEBUG() which, if
- * compiled out, would leave the 'if' body empty.
- */
- if (rte_log_get_global_level() < RTE_LOG_DEBUG) {
- DPAAX_ERR("Set log level to Debug for PA->Table dump!");
- return;
- }
-
- DPAAX_DEBUG(" === Start of PA->VA Translation Table ===");
- if (dpaax_iova_table_p == NULL)
- DPAAX_DEBUG("\tNULL");
-
- entry = dpaax_iova_table_p->entries;
- for (i = 0; i < dpaax_iova_table_p->count; i++) {
- DPAAX_DEBUG("\t(%16i),(%16"PRIu64"),(%16zu),(%16p)",
- i, entry[i].start, entry[i].len, entry[i].pages);
- DPAAX_DEBUG("\t\t (PA), (VA)");
- for (j = 0; j < (entry->len/DPAAX_MEM_SPLIT); j++) {
- if (entry[i].pages[j] == 0)
- continue;
- DPAAX_DEBUG("\t\t(%16"PRIx64"),(%16"PRIx64")",
- (entry[i].start + (j * sizeof(uint64_t))),
- entry[i].pages[j]);
- }
- }
- DPAAX_DEBUG(" === End of PA->VA Translation Table ===");
-}
-
static void
dpaax_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
void *arg __rte_unused)
diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
index 230fba8ba0..8c3ce45f6a 100644
--- a/drivers/common/dpaax/dpaax_iova_table.h
+++ b/drivers/common/dpaax/dpaax_iova_table.h
@@ -67,8 +67,6 @@ __rte_internal
void dpaax_iova_table_depopulate(void);
__rte_internal
int dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length);
-__rte_internal
-void dpaax_iova_table_dump(void);
static inline void *dpaax_iova_table_get_va(phys_addr_t paddr) __rte_hot;
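dpaax_iova_table_dump() (removed above) gated its output on the runtime
log level rather than a compile-time flag, so release builds keep the
symbol but skip the large dump. The pattern in isolation, with the table
walk elided (table_dump_sketch is an illustrative name):

static void table_dump_sketch(void)
{
	/* Skip the expensive dump entirely unless DEBUG logging is on. */
	if (rte_log_get_global_level() < RTE_LOG_DEBUG)
		return;

	/* ... walk dpaax_iova_table_p->entries and log each PA->VA
	 * window, as the removed function did ...
	 */
}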
diff --git a/drivers/common/dpaax/version.map b/drivers/common/dpaax/version.map
index ee1ca6801c..7390954793 100644
--- a/drivers/common/dpaax/version.map
+++ b/drivers/common/dpaax/version.map
@@ -2,7 +2,6 @@ INTERNAL {
global:
dpaax_iova_table_depopulate;
- dpaax_iova_table_dump;
dpaax_iova_table_p;
dpaax_iova_table_populate;
dpaax_iova_table_update;
diff --git a/drivers/common/iavf/iavf_common.c b/drivers/common/iavf/iavf_common.c
index c951b7d787..025c9e9ece 100644
--- a/drivers/common/iavf/iavf_common.c
+++ b/drivers/common/iavf/iavf_common.c
@@ -43,214 +43,6 @@ enum iavf_status iavf_set_mac_type(struct iavf_hw *hw)
return status;
}
-/**
- * iavf_aq_str - convert AQ err code to a string
- * @hw: pointer to the HW structure
- * @aq_err: the AQ error code to convert
- **/
-const char *iavf_aq_str(struct iavf_hw *hw, enum iavf_admin_queue_err aq_err)
-{
- switch (aq_err) {
- case IAVF_AQ_RC_OK:
- return "OK";
- case IAVF_AQ_RC_EPERM:
- return "IAVF_AQ_RC_EPERM";
- case IAVF_AQ_RC_ENOENT:
- return "IAVF_AQ_RC_ENOENT";
- case IAVF_AQ_RC_ESRCH:
- return "IAVF_AQ_RC_ESRCH";
- case IAVF_AQ_RC_EINTR:
- return "IAVF_AQ_RC_EINTR";
- case IAVF_AQ_RC_EIO:
- return "IAVF_AQ_RC_EIO";
- case IAVF_AQ_RC_ENXIO:
- return "IAVF_AQ_RC_ENXIO";
- case IAVF_AQ_RC_E2BIG:
- return "IAVF_AQ_RC_E2BIG";
- case IAVF_AQ_RC_EAGAIN:
- return "IAVF_AQ_RC_EAGAIN";
- case IAVF_AQ_RC_ENOMEM:
- return "IAVF_AQ_RC_ENOMEM";
- case IAVF_AQ_RC_EACCES:
- return "IAVF_AQ_RC_EACCES";
- case IAVF_AQ_RC_EFAULT:
- return "IAVF_AQ_RC_EFAULT";
- case IAVF_AQ_RC_EBUSY:
- return "IAVF_AQ_RC_EBUSY";
- case IAVF_AQ_RC_EEXIST:
- return "IAVF_AQ_RC_EEXIST";
- case IAVF_AQ_RC_EINVAL:
- return "IAVF_AQ_RC_EINVAL";
- case IAVF_AQ_RC_ENOTTY:
- return "IAVF_AQ_RC_ENOTTY";
- case IAVF_AQ_RC_ENOSPC:
- return "IAVF_AQ_RC_ENOSPC";
- case IAVF_AQ_RC_ENOSYS:
- return "IAVF_AQ_RC_ENOSYS";
- case IAVF_AQ_RC_ERANGE:
- return "IAVF_AQ_RC_ERANGE";
- case IAVF_AQ_RC_EFLUSHED:
- return "IAVF_AQ_RC_EFLUSHED";
- case IAVF_AQ_RC_BAD_ADDR:
- return "IAVF_AQ_RC_BAD_ADDR";
- case IAVF_AQ_RC_EMODE:
- return "IAVF_AQ_RC_EMODE";
- case IAVF_AQ_RC_EFBIG:
- return "IAVF_AQ_RC_EFBIG";
- }
-
- snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
- return hw->err_str;
-}
-
-/**
- * iavf_stat_str - convert status err code to a string
- * @hw: pointer to the HW structure
- * @stat_err: the status error code to convert
- **/
-const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status stat_err)
-{
- switch (stat_err) {
- case IAVF_SUCCESS:
- return "OK";
- case IAVF_ERR_NVM:
- return "IAVF_ERR_NVM";
- case IAVF_ERR_NVM_CHECKSUM:
- return "IAVF_ERR_NVM_CHECKSUM";
- case IAVF_ERR_PHY:
- return "IAVF_ERR_PHY";
- case IAVF_ERR_CONFIG:
- return "IAVF_ERR_CONFIG";
- case IAVF_ERR_PARAM:
- return "IAVF_ERR_PARAM";
- case IAVF_ERR_MAC_TYPE:
- return "IAVF_ERR_MAC_TYPE";
- case IAVF_ERR_UNKNOWN_PHY:
- return "IAVF_ERR_UNKNOWN_PHY";
- case IAVF_ERR_LINK_SETUP:
- return "IAVF_ERR_LINK_SETUP";
- case IAVF_ERR_ADAPTER_STOPPED:
- return "IAVF_ERR_ADAPTER_STOPPED";
- case IAVF_ERR_INVALID_MAC_ADDR:
- return "IAVF_ERR_INVALID_MAC_ADDR";
- case IAVF_ERR_DEVICE_NOT_SUPPORTED:
- return "IAVF_ERR_DEVICE_NOT_SUPPORTED";
- case IAVF_ERR_MASTER_REQUESTS_PENDING:
- return "IAVF_ERR_MASTER_REQUESTS_PENDING";
- case IAVF_ERR_INVALID_LINK_SETTINGS:
- return "IAVF_ERR_INVALID_LINK_SETTINGS";
- case IAVF_ERR_AUTONEG_NOT_COMPLETE:
- return "IAVF_ERR_AUTONEG_NOT_COMPLETE";
- case IAVF_ERR_RESET_FAILED:
- return "IAVF_ERR_RESET_FAILED";
- case IAVF_ERR_SWFW_SYNC:
- return "IAVF_ERR_SWFW_SYNC";
- case IAVF_ERR_NO_AVAILABLE_VSI:
- return "IAVF_ERR_NO_AVAILABLE_VSI";
- case IAVF_ERR_NO_MEMORY:
- return "IAVF_ERR_NO_MEMORY";
- case IAVF_ERR_BAD_PTR:
- return "IAVF_ERR_BAD_PTR";
- case IAVF_ERR_RING_FULL:
- return "IAVF_ERR_RING_FULL";
- case IAVF_ERR_INVALID_PD_ID:
- return "IAVF_ERR_INVALID_PD_ID";
- case IAVF_ERR_INVALID_QP_ID:
- return "IAVF_ERR_INVALID_QP_ID";
- case IAVF_ERR_INVALID_CQ_ID:
- return "IAVF_ERR_INVALID_CQ_ID";
- case IAVF_ERR_INVALID_CEQ_ID:
- return "IAVF_ERR_INVALID_CEQ_ID";
- case IAVF_ERR_INVALID_AEQ_ID:
- return "IAVF_ERR_INVALID_AEQ_ID";
- case IAVF_ERR_INVALID_SIZE:
- return "IAVF_ERR_INVALID_SIZE";
- case IAVF_ERR_INVALID_ARP_INDEX:
- return "IAVF_ERR_INVALID_ARP_INDEX";
- case IAVF_ERR_INVALID_FPM_FUNC_ID:
- return "IAVF_ERR_INVALID_FPM_FUNC_ID";
- case IAVF_ERR_QP_INVALID_MSG_SIZE:
- return "IAVF_ERR_QP_INVALID_MSG_SIZE";
- case IAVF_ERR_QP_TOOMANY_WRS_POSTED:
- return "IAVF_ERR_QP_TOOMANY_WRS_POSTED";
- case IAVF_ERR_INVALID_FRAG_COUNT:
- return "IAVF_ERR_INVALID_FRAG_COUNT";
- case IAVF_ERR_QUEUE_EMPTY:
- return "IAVF_ERR_QUEUE_EMPTY";
- case IAVF_ERR_INVALID_ALIGNMENT:
- return "IAVF_ERR_INVALID_ALIGNMENT";
- case IAVF_ERR_FLUSHED_QUEUE:
- return "IAVF_ERR_FLUSHED_QUEUE";
- case IAVF_ERR_INVALID_PUSH_PAGE_INDEX:
- return "IAVF_ERR_INVALID_PUSH_PAGE_INDEX";
- case IAVF_ERR_INVALID_IMM_DATA_SIZE:
- return "IAVF_ERR_INVALID_IMM_DATA_SIZE";
- case IAVF_ERR_TIMEOUT:
- return "IAVF_ERR_TIMEOUT";
- case IAVF_ERR_OPCODE_MISMATCH:
- return "IAVF_ERR_OPCODE_MISMATCH";
- case IAVF_ERR_CQP_COMPL_ERROR:
- return "IAVF_ERR_CQP_COMPL_ERROR";
- case IAVF_ERR_INVALID_VF_ID:
- return "IAVF_ERR_INVALID_VF_ID";
- case IAVF_ERR_INVALID_HMCFN_ID:
- return "IAVF_ERR_INVALID_HMCFN_ID";
- case IAVF_ERR_BACKING_PAGE_ERROR:
- return "IAVF_ERR_BACKING_PAGE_ERROR";
- case IAVF_ERR_NO_PBLCHUNKS_AVAILABLE:
- return "IAVF_ERR_NO_PBLCHUNKS_AVAILABLE";
- case IAVF_ERR_INVALID_PBLE_INDEX:
- return "IAVF_ERR_INVALID_PBLE_INDEX";
- case IAVF_ERR_INVALID_SD_INDEX:
- return "IAVF_ERR_INVALID_SD_INDEX";
- case IAVF_ERR_INVALID_PAGE_DESC_INDEX:
- return "IAVF_ERR_INVALID_PAGE_DESC_INDEX";
- case IAVF_ERR_INVALID_SD_TYPE:
- return "IAVF_ERR_INVALID_SD_TYPE";
- case IAVF_ERR_MEMCPY_FAILED:
- return "IAVF_ERR_MEMCPY_FAILED";
- case IAVF_ERR_INVALID_HMC_OBJ_INDEX:
- return "IAVF_ERR_INVALID_HMC_OBJ_INDEX";
- case IAVF_ERR_INVALID_HMC_OBJ_COUNT:
- return "IAVF_ERR_INVALID_HMC_OBJ_COUNT";
- case IAVF_ERR_INVALID_SRQ_ARM_LIMIT:
- return "IAVF_ERR_INVALID_SRQ_ARM_LIMIT";
- case IAVF_ERR_SRQ_ENABLED:
- return "IAVF_ERR_SRQ_ENABLED";
- case IAVF_ERR_ADMIN_QUEUE_ERROR:
- return "IAVF_ERR_ADMIN_QUEUE_ERROR";
- case IAVF_ERR_ADMIN_QUEUE_TIMEOUT:
- return "IAVF_ERR_ADMIN_QUEUE_TIMEOUT";
- case IAVF_ERR_BUF_TOO_SHORT:
- return "IAVF_ERR_BUF_TOO_SHORT";
- case IAVF_ERR_ADMIN_QUEUE_FULL:
- return "IAVF_ERR_ADMIN_QUEUE_FULL";
- case IAVF_ERR_ADMIN_QUEUE_NO_WORK:
- return "IAVF_ERR_ADMIN_QUEUE_NO_WORK";
- case IAVF_ERR_BAD_IWARP_CQE:
- return "IAVF_ERR_BAD_IWARP_CQE";
- case IAVF_ERR_NVM_BLANK_MODE:
- return "IAVF_ERR_NVM_BLANK_MODE";
- case IAVF_ERR_NOT_IMPLEMENTED:
- return "IAVF_ERR_NOT_IMPLEMENTED";
- case IAVF_ERR_PE_DOORBELL_NOT_ENABLED:
- return "IAVF_ERR_PE_DOORBELL_NOT_ENABLED";
- case IAVF_ERR_DIAG_TEST_FAILED:
- return "IAVF_ERR_DIAG_TEST_FAILED";
- case IAVF_ERR_NOT_READY:
- return "IAVF_ERR_NOT_READY";
- case IAVF_NOT_SUPPORTED:
- return "IAVF_NOT_SUPPORTED";
- case IAVF_ERR_FIRMWARE_API_VERSION:
- return "IAVF_ERR_FIRMWARE_API_VERSION";
- case IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
- return "IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
- }
-
- snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
- return hw->err_str;
-}
-
/**
* iavf_debug_aq
* @hw: debug mask related to admin queue
@@ -362,164 +154,6 @@ enum iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw,
return status;
}
-/**
- * iavf_aq_get_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- * @set: set true to set the table, false to get the table
- *
- * Internal function to get or set RSS look up table
- **/
-STATIC enum iavf_status iavf_aq_get_set_rss_lut(struct iavf_hw *hw,
- u16 vsi_id, bool pf_lut,
- u8 *lut, u16 lut_size,
- bool set)
-{
- enum iavf_status status;
- struct iavf_aq_desc desc;
- struct iavf_aqc_get_set_rss_lut *cmd_resp =
- (struct iavf_aqc_get_set_rss_lut *)&desc.params.raw;
-
- if (set)
- iavf_fill_default_direct_cmd_desc(&desc,
- iavf_aqc_opc_set_rss_lut);
- else
- iavf_fill_default_direct_cmd_desc(&desc,
- iavf_aqc_opc_get_rss_lut);
-
- /* Indirect command */
- desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF);
- desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD);
-
- cmd_resp->vsi_id =
- CPU_TO_LE16((u16)((vsi_id <<
- IAVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
- IAVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
- cmd_resp->vsi_id |= CPU_TO_LE16((u16)IAVF_AQC_SET_RSS_LUT_VSI_VALID);
-
- if (pf_lut)
- cmd_resp->flags |= CPU_TO_LE16((u16)
- ((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
- IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
- IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
- else
- cmd_resp->flags |= CPU_TO_LE16((u16)
- ((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
- IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
- IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
-
- status = iavf_asq_send_command(hw, &desc, lut, lut_size, NULL);
-
- return status;
-}
-
-/**
- * iavf_aq_get_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * get the RSS lookup table, PF or VSI type
- **/
-enum iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 vsi_id,
- bool pf_lut, u8 *lut, u16 lut_size)
-{
- return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
- false);
-}
-
-/**
- * iavf_aq_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * set the RSS lookup table, PF or VSI type
- **/
-enum iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 vsi_id,
- bool pf_lut, u8 *lut, u16 lut_size)
-{
- return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
-}
-
-/**
- * iavf_aq_get_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- * @set: set true to set the key, false to get the key
- *
- * get the RSS key per VSI
- **/
-STATIC enum iavf_status iavf_aq_get_set_rss_key(struct iavf_hw *hw,
- u16 vsi_id,
- struct iavf_aqc_get_set_rss_key_data *key,
- bool set)
-{
- enum iavf_status status;
- struct iavf_aq_desc desc;
- struct iavf_aqc_get_set_rss_key *cmd_resp =
- (struct iavf_aqc_get_set_rss_key *)&desc.params.raw;
- u16 key_size = sizeof(struct iavf_aqc_get_set_rss_key_data);
-
- if (set)
- iavf_fill_default_direct_cmd_desc(&desc,
- iavf_aqc_opc_set_rss_key);
- else
- iavf_fill_default_direct_cmd_desc(&desc,
- iavf_aqc_opc_get_rss_key);
-
- /* Indirect command */
- desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF);
- desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD);
-
- cmd_resp->vsi_id =
- CPU_TO_LE16((u16)((vsi_id <<
- IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
- IAVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
- cmd_resp->vsi_id |= CPU_TO_LE16((u16)IAVF_AQC_SET_RSS_KEY_VSI_VALID);
-
- status = iavf_asq_send_command(hw, &desc, key, key_size, NULL);
-
- return status;
-}
-
-/**
- * iavf_aq_get_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- **/
-enum iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw,
- u16 vsi_id,
- struct iavf_aqc_get_set_rss_key_data *key)
-{
- return iavf_aq_get_set_rss_key(hw, vsi_id, key, false);
-}
-
-/**
- * iavf_aq_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- * set the RSS key per VSI
- **/
-enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw,
- u16 vsi_id,
- struct iavf_aqc_get_set_rss_key_data *key)
-{
- return iavf_aq_get_set_rss_key(hw, vsi_id, key, true);
-}
-
/* The iavf_ptype_lookup table is used to convert from the 8-bit ptype in the
* hardware to a bit-field that can be used by SW to more easily determine the
* packet type.
@@ -885,30 +519,6 @@ struct iavf_rx_ptype_decoded iavf_ptype_lookup[] = {
IAVF_PTT_UNUSED_ENTRY(255)
};
-/**
- * iavf_validate_mac_addr - Validate unicast MAC address
- * @mac_addr: pointer to MAC address
- *
- * Tests a MAC address to ensure it is a valid Individual Address
- **/
-enum iavf_status iavf_validate_mac_addr(u8 *mac_addr)
-{
- enum iavf_status status = IAVF_SUCCESS;
-
- DEBUGFUNC("iavf_validate_mac_addr");
-
- /* Broadcast addresses ARE multicast addresses
- * Make sure it is not a multicast address
- * Reject the zero address
- */
- if (IAVF_IS_MULTICAST(mac_addr) ||
- (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
- mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
- status = IAVF_ERR_INVALID_MAC_ADDR;
-
- return status;
-}
-
/**
* iavf_aq_send_msg_to_pf
* @hw: pointer to the hardware structure
@@ -989,38 +599,3 @@ void iavf_vf_parse_hw_config(struct iavf_hw *hw,
vsi_res++;
}
}
-
-/**
- * iavf_vf_reset
- * @hw: pointer to the hardware structure
- *
- * Send a VF_RESET message to the PF. Does not wait for response from PF
- * as none will be forthcoming. Immediately after calling this function,
- * the admin queue should be shut down and (optionally) reinitialized.
- **/
-enum iavf_status iavf_vf_reset(struct iavf_hw *hw)
-{
- return iavf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
- IAVF_SUCCESS, NULL, 0, NULL);
-}
-
-/**
-* iavf_aq_clear_all_wol_filters
-* @hw: pointer to the hw struct
-* @cmd_details: pointer to command details structure or NULL
-*
-* Get information about the reason for a Wake Up event
-**/
-enum iavf_status iavf_aq_clear_all_wol_filters(struct iavf_hw *hw,
- struct iavf_asq_cmd_details *cmd_details)
-{
- struct iavf_aq_desc desc;
- enum iavf_status status;
-
- iavf_fill_default_direct_cmd_desc(&desc,
- iavf_aqc_opc_clear_all_wol_filters);
-
- status = iavf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
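The RSS LUT/key accessors dropped above all followed one shape: a single
internal helper builds the admin-queue descriptor, a boolean selects the
get or set opcode, and thin public wrappers pin the flag. An abridged
sketch of that shape, using only the opcodes and flags visible in the
diff (the vsi_id encoding is elided):

static enum iavf_status
rss_lut_cmd_sketch(struct iavf_hw *hw, u16 vsi_id,
		   u8 *lut, u16 lut_size, bool set)
{
	struct iavf_aq_desc desc;

	iavf_fill_default_direct_cmd_desc(&desc,
		set ? iavf_aqc_opc_set_rss_lut : iavf_aqc_opc_get_rss_lut);

	/* Indirect command: the LUT travels in an attached buffer. */
	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF);
	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD);

	/* ... encode vsi_id and the PF/VSI table type as the removed
	 * iavf_aq_get_set_rss_lut() did ...
	 */

	return iavf_asq_send_command(hw, &desc, lut, lut_size, NULL);
}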
diff --git a/drivers/common/iavf/iavf_prototype.h b/drivers/common/iavf/iavf_prototype.h
index f34e77db0f..5d5deacfe2 100644
--- a/drivers/common/iavf/iavf_prototype.h
+++ b/drivers/common/iavf/iavf_prototype.h
@@ -30,7 +30,6 @@ enum iavf_status iavf_shutdown_arq(struct iavf_hw *hw);
u16 iavf_clean_asq(struct iavf_hw *hw);
void iavf_free_adminq_asq(struct iavf_hw *hw);
void iavf_free_adminq_arq(struct iavf_hw *hw);
-enum iavf_status iavf_validate_mac_addr(u8 *mac_addr);
void iavf_adminq_init_ring_data(struct iavf_hw *hw);
__rte_internal
enum iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
@@ -51,19 +50,6 @@ void iavf_idle_aq(struct iavf_hw *hw);
bool iavf_check_asq_alive(struct iavf_hw *hw);
enum iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading);
-enum iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 seid,
- bool pf_lut, u8 *lut, u16 lut_size);
-enum iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 seid,
- bool pf_lut, u8 *lut, u16 lut_size);
-enum iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw,
- u16 seid,
- struct iavf_aqc_get_set_rss_key_data *key);
-enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw,
- u16 seid,
- struct iavf_aqc_get_set_rss_key_data *key);
-const char *iavf_aq_str(struct iavf_hw *hw, enum iavf_admin_queue_err aq_err);
-const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status stat_err);
-
__rte_internal
enum iavf_status iavf_set_mac_type(struct iavf_hw *hw);
@@ -83,7 +69,6 @@ void iavf_destroy_spinlock(struct iavf_spinlock *sp);
__rte_internal
void iavf_vf_parse_hw_config(struct iavf_hw *hw,
struct virtchnl_vf_resource *msg);
-enum iavf_status iavf_vf_reset(struct iavf_hw *hw);
__rte_internal
enum iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
enum virtchnl_ops v_opcode,
@@ -95,6 +80,4 @@ enum iavf_status iavf_aq_debug_dump(struct iavf_hw *hw, u8 cluster_id,
void *buff, u16 *ret_buff_size,
u8 *ret_next_table, u32 *ret_next_index,
struct iavf_asq_cmd_details *cmd_details);
-enum iavf_status iavf_aq_clear_all_wol_filters(struct iavf_hw *hw,
- struct iavf_asq_cmd_details *cmd_details);
#endif /* _IAVF_PROTOTYPE_H_ */
diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c
index 6df1e8ea63..e65fe602f7 100644
--- a/drivers/common/octeontx2/otx2_mbox.c
+++ b/drivers/common/octeontx2/otx2_mbox.c
@@ -381,19 +381,6 @@ otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid)
return otx2_mbox_wait_for_rsp_tmo(mbox, devid, MBOX_RSP_TIMEOUT);
}
-int
-otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- int avail;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- avail = mbox->tx_size - mdev->msg_size - msgs_offset();
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return avail;
-}
-
int
otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pcifunc)
{
diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h
index f6d884c198..7d9c018597 100644
--- a/drivers/common/octeontx2/otx2_mbox.h
+++ b/drivers/common/octeontx2/otx2_mbox.h
@@ -1785,7 +1785,6 @@ int otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg);
__rte_internal
int otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
uint32_t tmo);
-int otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid);
__rte_internal
struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
int size, int size_rsp);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index aa7fad6d70..d23e58ff6d 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -399,25 +399,6 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
return 0;
}
-int
-bcmfs_sym_dev_destroy(struct bcmfs_device *fsdev)
-{
- struct rte_cryptodev *cryptodev;
-
- if (fsdev == NULL)
- return -ENODEV;
- if (fsdev->sym_dev == NULL)
- return 0;
-
- /* free crypto device */
- cryptodev = rte_cryptodev_pmd_get_dev(fsdev->sym_dev->sym_dev_id);
- rte_cryptodev_pmd_destroy(cryptodev);
- fsdev->sym_rte_dev.name = NULL;
- fsdev->sym_dev = NULL;
-
- return 0;
-}
-
static struct cryptodev_driver bcmfs_crypto_drv;
RTE_PMD_REGISTER_CRYPTO_DRIVER(bcmfs_crypto_drv,
cryptodev_bcmfs_sym_driver,
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
index 65d7046090..d9ddd024ff 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
@@ -32,7 +32,4 @@ struct bcmfs_sym_dev_private {
int
bcmfs_sym_dev_create(struct bcmfs_device *fdev);
-int
-bcmfs_sym_dev_destroy(struct bcmfs_device *fdev);
-
#endif /* _BCMFS_SYM_PMD_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c b/drivers/crypto/bcmfs/bcmfs_vfio.c
index dc2def580f..81994d9d56 100644
--- a/drivers/crypto/bcmfs/bcmfs_vfio.c
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
@@ -74,34 +74,10 @@ bcmfs_attach_vfio(struct bcmfs_device *dev)
return 0;
}
-
-void
-bcmfs_release_vfio(struct bcmfs_device *dev)
-{
- int ret;
-
- if (dev == NULL)
- return;
-
- /* unmap the addr */
- munmap(dev->mmap_addr, dev->mmap_size);
- /* release the device */
- ret = rte_vfio_release_device(dev->dirname, dev->name,
- dev->vfio_dev_fd);
- if (ret < 0) {
- BCMFS_LOG(ERR, "cannot release device");
- return;
- }
-}
#else
int
bcmfs_attach_vfio(struct bcmfs_device *dev __rte_unused)
{
return -1;
}
-
-void
-bcmfs_release_vfio(struct bcmfs_device *dev __rte_unused)
-{
-}
#endif
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.h b/drivers/crypto/bcmfs/bcmfs_vfio.h
index d0fdf6483f..4177bc1fee 100644
--- a/drivers/crypto/bcmfs/bcmfs_vfio.h
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.h
@@ -10,8 +10,4 @@
int
bcmfs_attach_vfio(struct bcmfs_device *dev);
-/* Release the bcmfs device from vfio */
-void
-bcmfs_release_vfio(struct bcmfs_device *dev);
-
#endif /* _BCMFS_VFIO_H_ */
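For symmetry, the removed bcmfs_release_vfio() undid bcmfs_attach_vfio()
in reverse order: unmap the window first, then hand the device back to
VFIO. Condensed from the deleted body, with field names as in the diff:

static void release_vfio_sketch(struct bcmfs_device *dev)
{
	if (dev == NULL)
		return;

	munmap(dev->mmap_addr, dev->mmap_size);	/* undo the mapping */
	if (rte_vfio_release_device(dev->dirname, dev->name,
				    dev->vfio_dev_fd) < 0)
		BCMFS_LOG(ERR, "cannot release device");
}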
diff --git a/drivers/crypto/caam_jr/caam_jr_pvt.h b/drivers/crypto/caam_jr/caam_jr_pvt.h
index 552d6b9b1b..60cf1fa45b 100644
--- a/drivers/crypto/caam_jr/caam_jr_pvt.h
+++ b/drivers/crypto/caam_jr/caam_jr_pvt.h
@@ -222,7 +222,6 @@ struct uio_job_ring {
int uio_minor_number;
};
-int sec_cleanup(void);
int sec_configure(void);
void sec_uio_job_rings_init(void);
struct uio_job_ring *config_job_ring(void);
diff --git a/drivers/crypto/caam_jr/caam_jr_uio.c b/drivers/crypto/caam_jr/caam_jr_uio.c
index e4ee102344..60c551e4f2 100644
--- a/drivers/crypto/caam_jr/caam_jr_uio.c
+++ b/drivers/crypto/caam_jr/caam_jr_uio.c
@@ -471,34 +471,6 @@ sec_configure(void)
return config_jr_no;
}
-int
-sec_cleanup(void)
-{
- int i;
- struct uio_job_ring *job_ring;
-
- for (i = 0; i < g_uio_jr_num; i++) {
- job_ring = &g_uio_job_ring[i];
- /* munmap SEC's register memory */
- if (job_ring->register_base_addr) {
- munmap(job_ring->register_base_addr,
- job_ring->map_size);
- job_ring->register_base_addr = NULL;
- }
- /* I need to close the fd after shutdown UIO commands need to be
- * sent using the fd
- */
- if (job_ring->uio_fd != -1) {
- CAAM_JR_INFO(
- "Closed device file for job ring %d , fd = %d",
- job_ring->jr_id, job_ring->uio_fd);
- close(job_ring->uio_fd);
- job_ring->uio_fd = -1;
- }
- }
- return 0;
-}
-
void
sec_uio_job_rings_init(void)
{
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index 664ddc1747..fc34b6a639 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -62,26 +62,6 @@ ccp_allot_queue(struct rte_cryptodev *cdev, int slot_req)
return NULL;
}
-int
-ccp_read_hwrng(uint32_t *value)
-{
- struct ccp_device *dev;
-
- TAILQ_FOREACH(dev, &ccp_list, next) {
- void *vaddr = (void *)(dev->pci.mem_resource[2].addr);
-
- while (dev->hwrng_retries++ < CCP_MAX_TRNG_RETRIES) {
- *value = CCP_READ_REG(vaddr, TRNG_OUT_REG);
- if (*value) {
- dev->hwrng_retries = 0;
- return 0;
- }
- }
- dev->hwrng_retries = 0;
- }
- return -1;
-}
-
static const struct rte_memzone *
ccp_queue_dma_zone_reserve(const char *queue_name,
uint32_t queue_size,
@@ -180,28 +160,6 @@ ccp_bitmap_set(unsigned long *map, unsigned int start, int len)
}
}
-static void
-ccp_bitmap_clear(unsigned long *map, unsigned int start, int len)
-{
- unsigned long *p = map + WORD_OFFSET(start);
- const unsigned int size = start + len;
- int bits_to_clear = BITS_PER_WORD - (start % BITS_PER_WORD);
- unsigned long mask_to_clear = CCP_BITMAP_FIRST_WORD_MASK(start);
-
- while (len - bits_to_clear >= 0) {
- *p &= ~mask_to_clear;
- len -= bits_to_clear;
- bits_to_clear = BITS_PER_WORD;
- mask_to_clear = ~0UL;
- p++;
- }
- if (len) {
- mask_to_clear &= CCP_BITMAP_LAST_WORD_MASK(size);
- *p &= ~mask_to_clear;
- }
-}
-
-
static unsigned long
_ccp_find_next_bit(const unsigned long *addr,
unsigned long nbits,
@@ -312,29 +270,6 @@ ccp_lsb_alloc(struct ccp_queue *cmd_q, unsigned int count)
return 0;
}
-static void __rte_unused
-ccp_lsb_free(struct ccp_queue *cmd_q,
- unsigned int start,
- unsigned int count)
-{
- int lsbno = start / LSB_SIZE;
-
- if (!start)
- return;
-
- if (cmd_q->lsb == lsbno) {
- /* An entry from the private LSB */
- ccp_bitmap_clear(cmd_q->lsbmap, start % LSB_SIZE, count);
- } else {
- /* From the shared LSBs */
- struct ccp_device *ccp = cmd_q->dev;
-
- rte_spinlock_lock(&ccp->lsb_lock);
- ccp_bitmap_clear(ccp->lsbmap, start, count);
- rte_spinlock_unlock(&ccp->lsb_lock);
- }
-}
-
static int
ccp_find_lsb_regions(struct ccp_queue *cmd_q, uint64_t status)
{
diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h
index 37e04218ce..8bfce5d9fb 100644
--- a/drivers/crypto/ccp/ccp_dev.h
+++ b/drivers/crypto/ccp/ccp_dev.h
@@ -484,12 +484,4 @@ int ccp_probe_devices(const struct rte_pci_id *ccp_id);
*/
struct ccp_queue *ccp_allot_queue(struct rte_cryptodev *dev, int slot_req);
-/**
- * read hwrng value
- *
- * @param trng_value data pointer to write RNG value
- * @return 0 on success otherwise -1
- */
-int ccp_read_hwrng(uint32_t *trng_value);
-
#endif /* _CCP_DEV_H_ */
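The boundary-mask arithmetic in the removed ccp_bitmap_clear() is easy to
misread. A standalone worked example, assuming BITS_PER_WORD is 64 and
the usual Linux-style definitions behind the driver's
CCP_BITMAP_FIRST_WORD_MASK/CCP_BITMAP_LAST_WORD_MASK macros:

#include <stdio.h>

#define BITS_PER_WORD 64
/* Assumed restatements of the driver's CCP_BITMAP_*_WORD_MASK macros */
#define FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_WORD - 1)))
#define LAST_WORD_MASK(nbits)  (~0UL >> (-(nbits) & (BITS_PER_WORD - 1)))

int main(void)
{
	unsigned int start = 70, len = 10;	/* clear bits 70..79 */
	/* Both edges land in the same word (map[1]), so the first and
	 * last masks combine into one: bits 6..15, i.e. 0xffc0.
	 */
	unsigned long mask = FIRST_WORD_MASK(start) &
			     LAST_WORD_MASK(start + len);

	printf("word %u clear mask: %#lx\n", start / BITS_PER_WORD, mask);
	return 0;
}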
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
index 87e0defdc6..52bfd72f50 100644
--- a/drivers/crypto/dpaa2_sec/mc/dpseci.c
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -80,96 +80,6 @@ int dpseci_close(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpseci_create() - Create the DPSECI object
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg: Configuration structure
- * @obj_id: Returned object id
- *
- * Create the DPSECI object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpseci_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpseci_cfg *cfg,
- uint32_t *obj_id)
-{
- struct dpseci_cmd_create *cmd_params;
- struct mc_command cmd = { 0 };
- int err, i;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPSECI_CMDID_CREATE,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpseci_cmd_create *)cmd.params;
- for (i = 0; i < 8; i++)
- cmd_params->priorities[i] = cfg->priorities[i];
- for (i = 0; i < 8; i++)
- cmd_params->priorities2[i] = cfg->priorities[8 + i];
- cmd_params->num_tx_queues = cfg->num_tx_queues;
- cmd_params->num_rx_queues = cfg->num_rx_queues;
- cmd_params->options = cpu_to_le32(cfg->options);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- *obj_id = mc_cmd_read_object_id(&cmd);
-
- return 0;
-}
-
-/**
- * dpseci_destroy() - Destroy the DPSECI object and release all its resources.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id: The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpseci_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id)
-{
- struct dpseci_cmd_destroy *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPSECI_CMDID_DESTROY,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpseci_cmd_destroy *)cmd.params;
- cmd_params->dpseci_id = cpu_to_le32(object_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpseci_enable() - Enable the DPSECI, allow sending and receiving frames.
* @mc_io: Pointer to MC portal's I/O object
@@ -216,41 +126,6 @@ int dpseci_disable(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpseci_is_enabled() - Check if the DPSECI is enabled.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSECI object
- * @en: Returns '1' if object is enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpseci_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct dpseci_rsp_is_enabled *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPSECI_CMDID_IS_ENABLED,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpseci_rsp_is_enabled *)cmd.params;
- *en = dpseci_get_field(rsp_params->en, ENABLE);
-
- return 0;
-}
-
/**
* dpseci_reset() - Reset the DPSECI, returns the object to initial state.
* @mc_io: Pointer to MC portal's I/O object
@@ -446,59 +321,6 @@ int dpseci_get_tx_queue(struct fsl_mc_io *mc_io,
return 0;
}
-/**
- * dpseci_get_sec_attr() - Retrieve SEC accelerator attributes.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSECI object
- * @attr: Returned SEC attributes
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- struct dpseci_sec_attr *attr)
-{
- struct dpseci_rsp_get_sec_attr *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_SEC_ATTR,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpseci_rsp_get_sec_attr *)cmd.params;
- attr->ip_id = le16_to_cpu(rsp_params->ip_id);
- attr->major_rev = rsp_params->major_rev;
- attr->minor_rev = rsp_params->minor_rev;
- attr->era = rsp_params->era;
- attr->deco_num = rsp_params->deco_num;
- attr->zuc_auth_acc_num = rsp_params->zuc_auth_acc_num;
- attr->zuc_enc_acc_num = rsp_params->zuc_enc_acc_num;
- attr->snow_f8_acc_num = rsp_params->snow_f8_acc_num;
- attr->snow_f9_acc_num = rsp_params->snow_f9_acc_num;
- attr->crc_acc_num = rsp_params->crc_acc_num;
- attr->pk_acc_num = rsp_params->pk_acc_num;
- attr->kasumi_acc_num = rsp_params->kasumi_acc_num;
- attr->rng_acc_num = rsp_params->rng_acc_num;
- attr->md_acc_num = rsp_params->md_acc_num;
- attr->arc4_acc_num = rsp_params->arc4_acc_num;
- attr->des_acc_num = rsp_params->des_acc_num;
- attr->aes_acc_num = rsp_params->aes_acc_num;
- attr->ccha_acc_num = rsp_params->ccha_acc_num;
- attr->ptha_acc_num = rsp_params->ptha_acc_num;
-
- return 0;
-}
-
/**
* dpseci_get_sec_counters() - Retrieve SEC accelerator counters.
* @mc_io: Pointer to MC portal's I/O object
@@ -540,226 +362,3 @@ int dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
return 0;
}
-
-/**
- * dpseci_get_api_version() - Get Data Path SEC Interface API version
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver: Major version of data path sec API
- * @minor_ver: Minor version of data path sec API
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpseci_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver)
-{
- struct dpseci_rsp_get_api_version *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_API_VERSION,
- cmd_flags,
- 0);
-
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- rsp_params = (struct dpseci_rsp_get_api_version *)cmd.params;
- *major_ver = le16_to_cpu(rsp_params->major);
- *minor_ver = le16_to_cpu(rsp_params->minor);
-
- return 0;
-}
-
-/**
- * dpseci_set_opr() - Set Order Restoration configuration.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSECI object
- * @index: The queue index
- * @options: Configuration mode options
- * can be OPR_OPT_CREATE or OPR_OPT_RETIRE
- * @cfg: Configuration options for the OPR
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpseci_set_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t index,
- uint8_t options,
- struct opr_cfg *cfg)
-{
- struct dpseci_cmd_set_opr *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPSECI_CMDID_SET_OPR,
- cmd_flags,
- token);
- cmd_params = (struct dpseci_cmd_set_opr *)cmd.params;
- cmd_params->index = index;
- cmd_params->options = options;
- cmd_params->oloe = cfg->oloe;
- cmd_params->oeane = cfg->oeane;
- cmd_params->olws = cfg->olws;
- cmd_params->oa = cfg->oa;
- cmd_params->oprrws = cfg->oprrws;
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpseci_get_opr() - Retrieve Order Restoration config and query.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSECI object
- * @index: The queue index
- * @cfg: Returned OPR configuration
- * @qry: Returned OPR query
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpseci_get_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t index,
- struct opr_cfg *cfg,
- struct opr_qry *qry)
-{
- struct dpseci_rsp_get_opr *rsp_params;
- struct dpseci_cmd_get_opr *cmd_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_OPR,
- cmd_flags,
- token);
- cmd_params = (struct dpseci_cmd_get_opr *)cmd.params;
- cmd_params->index = index;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpseci_rsp_get_opr *)cmd.params;
- cfg->oloe = rsp_params->oloe;
- cfg->oeane = rsp_params->oeane;
- cfg->olws = rsp_params->olws;
- cfg->oa = rsp_params->oa;
- cfg->oprrws = rsp_params->oprrws;
- qry->rip = dpseci_get_field(rsp_params->flags, RIP);
- qry->enable = dpseci_get_field(rsp_params->flags, OPR_ENABLE);
- qry->nesn = le16_to_cpu(rsp_params->nesn);
- qry->ndsn = le16_to_cpu(rsp_params->ndsn);
- qry->ea_tseq = le16_to_cpu(rsp_params->ea_tseq);
- qry->tseq_nlis = dpseci_get_field(rsp_params->tseq_nlis, TSEQ_NLIS);
- qry->ea_hseq = le16_to_cpu(rsp_params->ea_hseq);
- qry->hseq_nlis = dpseci_get_field(rsp_params->hseq_nlis, HSEQ_NLIS);
- qry->ea_hptr = le16_to_cpu(rsp_params->ea_hptr);
- qry->ea_tptr = le16_to_cpu(rsp_params->ea_tptr);
- qry->opr_vid = le16_to_cpu(rsp_params->opr_vid);
- qry->opr_id = le16_to_cpu(rsp_params->opr_id);
-
- return 0;
-}
-
-/**
- * dpseci_set_congestion_notification() - Set congestion group
- * notification configuration
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSECI object
- * @cfg: congestion notification configuration
- *
- * Return: '0' on success, error code otherwise
- */
-int dpseci_set_congestion_notification(
- struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- const struct dpseci_congestion_notification_cfg *cfg)
-{
- struct dpseci_cmd_set_congestion_notification *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(
- DPSECI_CMDID_SET_CONGESTION_NOTIFICATION,
- cmd_flags,
- token);
-
- cmd_params =
- (struct dpseci_cmd_set_congestion_notification *)cmd.params;
- cmd_params->dest_id = cfg->dest_cfg.dest_id;
- cmd_params->dest_priority = cfg->dest_cfg.priority;
- cmd_params->message_ctx = cfg->message_ctx;
- cmd_params->message_iova = cfg->message_iova;
- cmd_params->notification_mode = cfg->notification_mode;
- cmd_params->threshold_entry = cfg->threshold_entry;
- cmd_params->threshold_exit = cfg->threshold_exit;
- dpseci_set_field(cmd_params->type_units,
- DEST_TYPE,
- cfg->dest_cfg.dest_type);
- dpseci_set_field(cmd_params->type_units,
- CG_UNITS,
- cfg->units);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpseci_get_congestion_notification() - Get congestion group
- * notification configuration
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSECI object
- * @cfg: congestion notification configuration
- *
- * Return: '0' on success, error code otherwise
- */
-int dpseci_get_congestion_notification(
- struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- struct dpseci_congestion_notification_cfg *cfg)
-{
- struct dpseci_cmd_set_congestion_notification *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(
- DPSECI_CMDID_GET_CONGESTION_NOTIFICATION,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- rsp_params =
- (struct dpseci_cmd_set_congestion_notification *)cmd.params;
-
- cfg->dest_cfg.dest_id = le32_to_cpu(rsp_params->dest_id);
- cfg->dest_cfg.priority = rsp_params->dest_priority;
- cfg->notification_mode = le16_to_cpu(rsp_params->notification_mode);
- cfg->message_ctx = le64_to_cpu(rsp_params->message_ctx);
- cfg->message_iova = le64_to_cpu(rsp_params->message_iova);
- cfg->threshold_entry = le32_to_cpu(rsp_params->threshold_entry);
- cfg->threshold_exit = le32_to_cpu(rsp_params->threshold_exit);
- cfg->units = dpseci_get_field(rsp_params->type_units, CG_UNITS);
- cfg->dest_cfg.dest_type = dpseci_get_field(rsp_params->type_units,
- DEST_TYPE);
-
- return 0;
-}
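Every dpseci_*() routine removed above is an instance of the same three-step MC portal shape: encode a command header, send it through the portal, and (for getters) decode the response back out of cmd.params. A hedged, self-contained sketch of that shape, with hypothetical my_* names standing in for the real dpseci command layouts and portal I/O:

#include <stdint.h>

#define MC_CMD_NUM_OF_PARAMS	7	/* matches the real portal width */

struct mc_command {
	uint64_t header;
	uint64_t params[MC_CMD_NUM_OF_PARAMS];
};

struct my_rsp_get_thing {	/* hypothetical response layout */
	uint32_t value;
};

/* Stand-in for mc_send_command(); a real driver does portal MMIO here. */
static int
portal_send(struct mc_command *cmd)
{
	(void)cmd;
	return 0;
}

/* Illustrative header packing; the real field offsets differ. */
static uint64_t
encode_header(uint16_t cmdid, uint32_t flags, uint16_t token)
{
	return ((uint64_t)cmdid << 48) | ((uint64_t)flags << 16) | token;
}

static int
my_get_thing(uint16_t token, uint32_t *value)
{
	struct my_rsp_get_thing *rsp;
	struct mc_command cmd = { 0 };
	int err;

	/* 1. prepare command */
	cmd.header = encode_header(0x123, 0, token);

	/* 2. send command to MC */
	err = portal_send(&cmd);
	if (err)
		return err;

	/* 3. retrieve response parameters */
	rsp = (struct my_rsp_get_thing *)cmd.params;
	*value = rsp->value;

	return 0;
}

int main(void)
{
	uint32_t v = 0;

	return my_get_thing(0x1001, &v);
}

Setters stop after step 2; getters reinterpret cmd.params as a response struct, exactly as the removed dpseci_get_sec_attr() and dpseci_get_opr() did.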
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
index 279e8f4d4a..fbbfd40815 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -61,17 +61,6 @@ struct dpseci_cfg {
uint8_t priorities[DPSECI_MAX_QUEUE_NUM];
};
-int dpseci_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpseci_cfg *cfg,
- uint32_t *obj_id);
-
-int dpseci_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id);
-
int dpseci_enable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
@@ -80,11 +69,6 @@ int dpseci_disable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
-int dpseci_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
int dpseci_reset(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
@@ -287,11 +271,6 @@ struct dpseci_sec_attr {
uint8_t ptha_acc_num;
};
-int dpseci_get_sec_attr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- struct dpseci_sec_attr *attr);
-
/**
* struct dpseci_sec_counters - Structure representing global SEC counters and
* not per dpseci counters
@@ -318,25 +297,6 @@ int dpseci_get_sec_counters(struct fsl_mc_io *mc_io,
uint16_t token,
struct dpseci_sec_counters *counters);
-int dpseci_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver);
-
-int dpseci_set_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t index,
- uint8_t options,
- struct opr_cfg *cfg);
-
-int dpseci_get_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t index,
- struct opr_cfg *cfg,
- struct opr_qry *qry);
-
/**
* enum dpseci_congestion_unit - DPSECI congestion units
* @DPSECI_CONGESTION_UNIT_BYTES: bytes units
@@ -405,16 +365,4 @@ struct dpseci_congestion_notification_cfg {
uint16_t notification_mode;
};
-int dpseci_set_congestion_notification(
- struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- const struct dpseci_congestion_notification_cfg *cfg);
-
-int dpseci_get_congestion_notification(
- struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- struct dpseci_congestion_notification_cfg *cfg);
-
#endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/virtio/virtio_pci.c b/drivers/crypto/virtio/virtio_pci.c
index ae069794a6..40bd748094 100644
--- a/drivers/crypto/virtio/virtio_pci.c
+++ b/drivers/crypto/virtio/virtio_pci.c
@@ -246,13 +246,6 @@ vtpci_read_cryptodev_config(struct virtio_crypto_hw *hw, size_t offset,
VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length);
}
-void
-vtpci_write_cryptodev_config(struct virtio_crypto_hw *hw, size_t offset,
- const void *src, int length)
-{
- VTPCI_OPS(hw)->write_dev_cfg(hw, offset, src, length);
-}
-
uint64_t
vtpci_cryptodev_negotiate_features(struct virtio_crypto_hw *hw,
uint64_t host_features)
@@ -298,12 +291,6 @@ vtpci_cryptodev_get_status(struct virtio_crypto_hw *hw)
return VTPCI_OPS(hw)->get_status(hw);
}
-uint8_t
-vtpci_cryptodev_isr(struct virtio_crypto_hw *hw)
-{
- return VTPCI_OPS(hw)->get_isr(hw);
-}
-
static void *
get_cfg_addr(struct rte_pci_device *dev, struct virtio_pci_cap *cap)
{
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index d9a214dfd0..3092b56952 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -242,12 +242,7 @@ void vtpci_cryptodev_set_status(struct virtio_crypto_hw *hw, uint8_t status);
uint64_t vtpci_cryptodev_negotiate_features(struct virtio_crypto_hw *hw,
uint64_t host_features);
-void vtpci_write_cryptodev_config(struct virtio_crypto_hw *hw, size_t offset,
- const void *src, int length);
-
void vtpci_read_cryptodev_config(struct virtio_crypto_hw *hw, size_t offset,
void *dst, int length);
-uint8_t vtpci_cryptodev_isr(struct virtio_crypto_hw *hw);
-
#endif /* _VIRTIO_PCI_H_ */
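The two vtpci wrappers removed here are one-line indirections through the per-device VTPCI_OPS table, which is how the same crypto code drives both legacy and modern virtio register layouts. A minimal sketch of that dispatch pattern, with hypothetical names (the real ops structs carry more hooks):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct dev;

struct dev_ops {
	void (*read_cfg)(struct dev *d, size_t off, void *dst, int len);
	uint8_t (*get_status)(struct dev *d);
};

struct dev {
	const struct dev_ops *ops;	/* picked at probe time */
	uint8_t status;
	uint8_t cfg[64];		/* stands in for BAR config space */
};

static void
modern_read_cfg(struct dev *d, size_t off, void *dst, int len)
{
	memcpy(dst, d->cfg + off, len);	/* a real device reads BAR space */
}

static uint8_t
modern_get_status(struct dev *d)
{
	return d->status;
}

static const struct dev_ops modern_ops = {
	.read_cfg = modern_read_cfg,
	.get_status = modern_get_status,
};

/* The wrapper style used by vtpci_read_cryptodev_config() and friends. */
static void
dev_read_cfg(struct dev *d, size_t off, void *dst, int len)
{
	d->ops->read_cfg(d, off, dst, len);
}

int main(void)
{
	struct dev d = { .ops = &modern_ops, .status = 0x0f };
	uint32_t v = 0;

	d.cfg[0] = 0x2a;
	dev_read_cfg(&d, 0, &v, sizeof(v));
	printf("status=%#x cfg0=%#x\n",
	       (unsigned int)d.ops->get_status(&d), (unsigned int)(v & 0xff));
	return 0;
}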
diff --git a/drivers/event/dlb/dlb_priv.h b/drivers/event/dlb/dlb_priv.h
index 58ff4287df..deaf467090 100644
--- a/drivers/event/dlb/dlb_priv.h
+++ b/drivers/event/dlb/dlb_priv.h
@@ -470,8 +470,6 @@ void dlb_eventdev_dump(struct rte_eventdev *dev, FILE *f);
int dlb_xstats_init(struct dlb_eventdev *dlb);
-void dlb_xstats_uninit(struct dlb_eventdev *dlb);
-
int dlb_eventdev_xstats_get(const struct rte_eventdev *dev,
enum rte_event_dev_xstats_mode mode,
uint8_t queue_port_id, const unsigned int ids[],
diff --git a/drivers/event/dlb/dlb_xstats.c b/drivers/event/dlb/dlb_xstats.c
index 5f4c590307..6678a8b322 100644
--- a/drivers/event/dlb/dlb_xstats.c
+++ b/drivers/event/dlb/dlb_xstats.c
@@ -578,13 +578,6 @@ dlb_xstats_init(struct dlb_eventdev *dlb)
return 0;
}
-void
-dlb_xstats_uninit(struct dlb_eventdev *dlb)
-{
- rte_free(dlb->xstats);
- dlb->xstats_count = 0;
-}
-
int
dlb_eventdev_xstats_get_names(const struct rte_eventdev *dev,
enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index b73cf3ff14..56bd4ebe1b 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -536,8 +536,6 @@ void dlb2_eventdev_dump(struct rte_eventdev *dev, FILE *f);
int dlb2_xstats_init(struct dlb2_eventdev *dlb2);
-void dlb2_xstats_uninit(struct dlb2_eventdev *dlb2);
-
int dlb2_eventdev_xstats_get(const struct rte_eventdev *dev,
enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
const unsigned int ids[], uint64_t values[], unsigned int n);
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index 8c3c3cda94..574fca89e8 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -634,13 +634,6 @@ dlb2_xstats_init(struct dlb2_eventdev *dlb2)
return 0;
}
-void
-dlb2_xstats_uninit(struct dlb2_eventdev *dlb2)
-{
- rte_free(dlb2->xstats);
- dlb2->xstats_count = 0;
-}
-
int
dlb2_eventdev_xstats_get_names(const struct rte_eventdev *dev,
enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
diff --git a/drivers/event/opdl/opdl_ring.c b/drivers/event/opdl/opdl_ring.c
index 69392b56bb..3ddfcaf67c 100644
--- a/drivers/event/opdl/opdl_ring.c
+++ b/drivers/event/opdl/opdl_ring.c
@@ -586,52 +586,6 @@ opdl_stage_claim_multithread(struct opdl_stage *s, void *entries,
return i;
}
-/* Claim and copy slot pointers, optimised for single-thread operation */
-static __rte_always_inline uint32_t
-opdl_stage_claim_copy_singlethread(struct opdl_stage *s, void *entries,
- uint32_t num_entries, uint32_t *seq, bool block)
-{
- num_entries = num_to_process(s, num_entries, block);
- if (num_entries == 0)
- return 0;
- copy_entries_out(s->t, s->head, entries, num_entries);
- if (seq != NULL)
- *seq = s->head;
- s->head += num_entries;
- return num_entries;
-}
-
-/* Thread-safe version of function to claim and copy pointers to slots */
-static __rte_always_inline uint32_t
-opdl_stage_claim_copy_multithread(struct opdl_stage *s, void *entries,
- uint32_t num_entries, uint32_t *seq, bool block)
-{
- uint32_t old_head;
-
- move_head_atomically(s, &num_entries, &old_head, block, true);
- if (num_entries == 0)
- return 0;
- copy_entries_out(s->t, old_head, entries, num_entries);
- if (seq != NULL)
- *seq = old_head;
- return num_entries;
-}
-
-static __rte_always_inline void
-opdl_stage_disclaim_singlethread_n(struct opdl_stage *s,
- uint32_t num_entries)
-{
- uint32_t old_tail = s->shared.tail;
-
- if (unlikely(num_entries > (s->head - old_tail))) {
- PMD_DRV_LOG(WARNING, "Attempt to disclaim (%u) more than claimed (%u)",
- num_entries, s->head - old_tail);
- num_entries = s->head - old_tail;
- }
- __atomic_store_n(&s->shared.tail, num_entries + old_tail,
- __ATOMIC_RELEASE);
-}
-
uint32_t
opdl_ring_input(struct opdl_ring *t, const void *entries, uint32_t num_entries,
bool block)
@@ -644,26 +598,6 @@ opdl_ring_input(struct opdl_ring *t, const void *entries, uint32_t num_entries,
block);
}
-uint32_t
-opdl_ring_copy_from_burst(struct opdl_ring *t, struct opdl_stage *s,
- const void *entries, uint32_t num_entries, bool block)
-{
- uint32_t head = s->head;
-
- num_entries = num_to_process(s, num_entries, block);
-
- if (num_entries == 0)
- return 0;
-
- copy_entries_in(t, head, entries, num_entries);
-
- s->head += num_entries;
- __atomic_store_n(&s->shared.tail, s->head, __ATOMIC_RELEASE);
-
- return num_entries;
-
-}
-
uint32_t
opdl_ring_copy_to_burst(struct opdl_ring *t, struct opdl_stage *s,
void *entries, uint32_t num_entries, bool block)
@@ -682,25 +616,6 @@ opdl_ring_copy_to_burst(struct opdl_ring *t, struct opdl_stage *s,
return num_entries;
}
-uint32_t
-opdl_stage_find_num_available(struct opdl_stage *s, uint32_t num_entries)
-{
- /* return (num_to_process(s, num_entries, false)); */
-
- if (available(s) >= num_entries)
- return num_entries;
-
- update_available_seq(s);
-
- uint32_t avail = available(s);
-
- if (avail == 0) {
- rte_pause();
- return 0;
- }
- return (avail <= num_entries) ? avail : num_entries;
-}
-
uint32_t
opdl_stage_claim(struct opdl_stage *s, void *entries,
uint32_t num_entries, uint32_t *seq, bool block, bool atomic)
@@ -713,41 +628,6 @@ opdl_stage_claim(struct opdl_stage *s, void *entries,
seq, block);
}
-uint32_t
-opdl_stage_claim_copy(struct opdl_stage *s, void *entries,
- uint32_t num_entries, uint32_t *seq, bool block)
-{
- if (s->threadsafe == false)
- return opdl_stage_claim_copy_singlethread(s, entries,
- num_entries, seq, block);
- else
- return opdl_stage_claim_copy_multithread(s, entries,
- num_entries, seq, block);
-}
-
-void
-opdl_stage_disclaim_n(struct opdl_stage *s, uint32_t num_entries,
- bool block)
-{
-
- if (s->threadsafe == false) {
- opdl_stage_disclaim_singlethread_n(s, s->num_claimed);
- } else {
- struct claim_manager *disclaims =
- &s->pending_disclaims[rte_lcore_id()];
-
- if (unlikely(num_entries > s->num_slots)) {
- PMD_DRV_LOG(WARNING, "Attempt to disclaim (%u) more than claimed (%u)",
- num_entries, disclaims->num_claimed);
- num_entries = disclaims->num_claimed;
- }
-
- num_entries = RTE_MIN(num_entries + disclaims->num_to_disclaim,
- disclaims->num_claimed);
- opdl_stage_disclaim_multithread_n(s, num_entries, block);
- }
-}
-
int
opdl_stage_disclaim(struct opdl_stage *s, uint32_t num_entries, bool block)
{
@@ -769,12 +649,6 @@ opdl_stage_disclaim(struct opdl_stage *s, uint32_t num_entries, bool block)
return num_entries;
}
-uint32_t
-opdl_ring_available(struct opdl_ring *t)
-{
- return opdl_stage_available(&t->stages[0]);
-}
-
uint32_t
opdl_stage_available(struct opdl_stage *s)
{
@@ -782,14 +656,6 @@ opdl_stage_available(struct opdl_stage *s)
return available(s);
}
-void
-opdl_ring_flush(struct opdl_ring *t)
-{
- struct opdl_stage *s = input_stage(t);
-
- wait_for_available(s, s->num_slots);
-}
-
/******************** Non performance sensitive functions ********************/
/* Initial setup of a new stage's context */
@@ -962,12 +828,6 @@ opdl_ring_create(const char *name, uint32_t num_slots, uint32_t slot_size,
return NULL;
}
-void *
-opdl_ring_get_slot(const struct opdl_ring *t, uint32_t index)
-{
- return get_slot(t, index);
-}
-
bool
opdl_ring_cas_slot(struct opdl_stage *s, const struct rte_event *ev,
uint32_t index, bool atomic)
@@ -1046,24 +906,6 @@ opdl_ring_cas_slot(struct opdl_stage *s, const struct rte_event *ev,
return ev_updated;
}
-int
-opdl_ring_get_socket(const struct opdl_ring *t)
-{
- return t->socket;
-}
-
-uint32_t
-opdl_ring_get_num_slots(const struct opdl_ring *t)
-{
- return t->num_slots;
-}
-
-const char *
-opdl_ring_get_name(const struct opdl_ring *t)
-{
- return t->name;
-}
-
/* Check dependency list is valid for a given opdl_ring */
static int
check_deps(struct opdl_ring *t, struct opdl_stage *deps[],
@@ -1146,36 +988,6 @@ opdl_stage_deps_add(struct opdl_ring *t, struct opdl_stage *s,
return ret;
}
-struct opdl_stage *
-opdl_ring_get_input_stage(const struct opdl_ring *t)
-{
- return input_stage(t);
-}
-
-int
-opdl_stage_set_deps(struct opdl_stage *s, struct opdl_stage *deps[],
- uint32_t num_deps)
-{
- unsigned int i;
- int ret;
-
- if ((num_deps == 0) || (!deps)) {
- PMD_DRV_LOG(ERR, "cannot set NULL dependencies");
- return -EINVAL;
- }
-
- ret = check_deps(s->t, deps, num_deps);
- if (ret < 0)
- return ret;
-
- /* Update deps */
- for (i = 0; i < num_deps; i++)
- s->deps[i] = &deps[i]->shared;
- s->num_deps = num_deps;
-
- return 0;
-}
-
struct opdl_ring *
opdl_stage_get_opdl_ring(const struct opdl_stage *s)
{
@@ -1245,25 +1057,3 @@ opdl_ring_free(struct opdl_ring *t)
if (rte_memzone_free(mz) != 0)
PMD_DRV_LOG(ERR, "Cannot free memzone for %s", t->name);
}
-
-/* search a opdl_ring from its name */
-struct opdl_ring *
-opdl_ring_lookup(const char *name)
-{
- const struct rte_memzone *mz;
- char mz_name[RTE_MEMZONE_NAMESIZE];
-
- snprintf(mz_name, sizeof(mz_name), "%s%s", LIB_NAME, name);
-
- mz = rte_memzone_lookup(mz_name);
- if (mz == NULL)
- return NULL;
-
- return mz->addr;
-}
-
-void
-opdl_ring_set_stage_threadsafe(struct opdl_stage *s, bool threadsafe)
-{
- s->threadsafe = threadsafe;
-}
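Several of the opdl_ring routines removed above (opdl_ring_copy_from_burst() and the single-thread claim/disclaim helpers) share one idea: advance a private head, copy entries, and only then publish the new tail with a release store so a consumer can never observe a slot before its contents land. A stripped-down single-producer/single-consumer sketch using C11 atomics in place of the __atomic builtins (illustrative, not the opdl dependency machinery):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SLOTS 8	/* power of two keeps the masking cheap */

struct ring {
	uint32_t head;			/* producer-private write position */
	_Atomic uint32_t tail;		/* position published to the consumer */
	_Atomic uint32_t consumed;	/* position released by the consumer */
	uintptr_t slots[RING_SLOTS];
};

static uint32_t
ring_input(struct ring *r, const uintptr_t *entries, uint32_t num)
{
	uint32_t consumed = atomic_load_explicit(&r->consumed,
						 memory_order_acquire);
	uint32_t space = RING_SLOTS - (r->head - consumed);
	uint32_t i;

	if (num > space)
		num = space;
	for (i = 0; i < num; i++)
		r->slots[(r->head + i) % RING_SLOTS] = entries[i];
	r->head += num;
	/* Release: slot writes become visible before the new tail, as in
	 * __atomic_store_n(&s->shared.tail, s->head, __ATOMIC_RELEASE). */
	atomic_store_explicit(&r->tail, r->head, memory_order_release);
	return num;
}

static uint32_t
ring_consume(struct ring *r, uintptr_t *out, uint32_t max)
{
	uint32_t consumed = atomic_load_explicit(&r->consumed,
						 memory_order_relaxed);
	uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
	uint32_t avail = tail - consumed;
	uint32_t i;

	if (avail > max)
		avail = max;
	for (i = 0; i < avail; i++)
		out[i] = r->slots[(consumed + i) % RING_SLOTS];
	atomic_store_explicit(&r->consumed, consumed + avail,
			      memory_order_release);
	return avail;
}

int main(void)
{
	static struct ring r;
	uintptr_t in[3] = { 1, 2, 3 }, out[3];
	uint32_t n = ring_input(&r, in, 3);

	n = ring_consume(&r, out, n);
	printf("consumed %u entries, first=%lu\n", n, (unsigned long)out[0]);
	return 0;
}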
diff --git a/drivers/event/opdl/opdl_ring.h b/drivers/event/opdl/opdl_ring.h
index 14ababe0bb..c9e2ab6b1b 100644
--- a/drivers/event/opdl/opdl_ring.h
+++ b/drivers/event/opdl/opdl_ring.h
@@ -83,57 +83,6 @@ struct opdl_ring *
opdl_ring_create(const char *name, uint32_t num_slots, uint32_t slot_size,
uint32_t max_num_stages, int socket);
-/**
- * Get pointer to individual slot in a opdl_ring.
- *
- * @param t
- * The opdl_ring.
- * @param index
- * Index of slot. If greater than the number of slots it will be masked to be
- * within correct range.
- *
- * @return
- * A pointer to that slot.
- */
-void *
-opdl_ring_get_slot(const struct opdl_ring *t, uint32_t index);
-
-/**
- * Get NUMA socket used by a opdl_ring.
- *
- * @param t
- * The opdl_ring.
- *
- * @return
- * NUMA socket.
- */
-int
-opdl_ring_get_socket(const struct opdl_ring *t);
-
-/**
- * Get number of slots in a opdl_ring.
- *
- * @param t
- * The opdl_ring.
- *
- * @return
- * Number of slots.
- */
-uint32_t
-opdl_ring_get_num_slots(const struct opdl_ring *t);
-
-/**
- * Get name of a opdl_ring.
- *
- * @param t
- * The opdl_ring.
- *
- * @return
- * Name string.
- */
-const char *
-opdl_ring_get_name(const struct opdl_ring *t);
-
/**
* Adds a new processing stage to a specified opdl_ring instance. Adding a stage
* while there are entries in the opdl_ring being processed will cause undefined
@@ -160,38 +109,6 @@ opdl_ring_get_name(const struct opdl_ring *t);
struct opdl_stage *
opdl_stage_add(struct opdl_ring *t, bool threadsafe, bool is_input);
-/**
- * Returns the input stage of a opdl_ring to be used by other API functions.
- *
- * @param t
- * The opdl_ring.
- *
- * @return
- * A pointer to the input stage.
- */
-struct opdl_stage *
-opdl_ring_get_input_stage(const struct opdl_ring *t);
-
-/**
- * Sets the dependencies for a stage (clears all the previous deps!). Changing
- * dependencies while there are entries in the opdl_ring being processed will
- * cause undefined behaviour.
- *
- * @param s
- * The stage to set the dependencies for.
- * @param deps
- * An array of pointers to other stages that this stage will depends on. The
- * other stages must be part of the same opdl_ring!
- * @param num_deps
- * The size of the deps array. This must be > 0.
- *
- * @return
- * 0 on success, a negative value on error.
- */
-int
-opdl_stage_set_deps(struct opdl_stage *s, struct opdl_stage *deps[],
- uint32_t num_deps);
-
/**
* Returns the opdl_ring that a stage belongs to.
*
@@ -228,32 +145,6 @@ uint32_t
opdl_ring_input(struct opdl_ring *t, const void *entries, uint32_t num_entries,
bool block);
-/**
- * Inputs a new batch of entries into a opdl stage. This function is only
- * threadsafe (with the same opdl parameter) if the threadsafe parameter of
- * opdl_create() was true. For performance reasons, this function does not
- * check input parameters.
- *
- * @param t
- * The opdl ring to input entries in to.
- * @param s
- * The stage to copy entries to.
- * @param entries
- * An array of entries that will be copied in to the opdl ring.
- * @param num_entries
- * The size of the entries array.
- * @param block
- * If this is true, the function blocks until enough slots are available to
- * input all the requested entries. If false, then the function inputs as
- * many entries as currently possible.
- *
- * @return
- * The number of entries successfully input.
- */
-uint32_t
-opdl_ring_copy_from_burst(struct opdl_ring *t, struct opdl_stage *s,
- const void *entries, uint32_t num_entries, bool block);
-
/**
* Copy a batch of entries from the opdl ring. This function is only
* threadsafe (with the same opdl parameter) if the threadsafe parameter of
@@ -368,41 +259,6 @@ opdl_stage_claim_check(struct opdl_stage *s, void **entries,
uint32_t num_entries, uint32_t *seq, bool block,
opdl_ring_check_entries_t *check, void *arg);
-/**
- * Before processing a batch of entries, a stage must first claim them to get
- * access. This function is threadsafe using same opdl_stage parameter if
- * the stage was created with threadsafe set to true, otherwise it is only
- * threadsafe with a different opdl_stage per thread.
- *
- * The difference between this function and opdl_stage_claim() is that this
- * function copies the entries from the opdl_ring. Note that any changes made to
- * the copied entries will not be reflected back in to the entries in the
- * opdl_ring, so this function probably only makes sense if the entries are
- * pointers to other data. For performance reasons, this function does not check
- * input parameters.
- *
- * @param s
- * The opdl_ring stage to read entries in.
- * @param entries
- * An array of entries that will be filled in by this function.
- * @param num_entries
- * The number of entries to attempt to claim for processing (and the size of
- * the entries array).
- * @param seq
- * If not NULL, this is set to the value of the internal stage sequence number
- * associated with the first entry returned.
- * @param block
- * If this is true, the function blocks until num_entries slots are available
- * to process. If false, then the function claims as many entries as
- * currently possible.
- *
- * @return
- * The number of entries copied in to the entries array.
- */
-uint32_t
-opdl_stage_claim_copy(struct opdl_stage *s, void *entries,
- uint32_t num_entries, uint32_t *seq, bool block);
-
/**
* This function must be called when a stage has finished its processing of
* entries, to make them available to any dependent stages. All entries that are
@@ -433,48 +289,6 @@ int
opdl_stage_disclaim(struct opdl_stage *s, uint32_t num_entries,
bool block);
-/**
- * This function can be called when a stage has finished its processing of
- * entries, to make them available to any dependent stages. The difference
- * between this function and opdl_stage_disclaim() is that here only a
- * portion of entries are disclaimed, not all of them. For performance reasons,
- * this function does not check input parameters.
- *
- * @param s
- * The opdl_ring stage in which to disclaim entries.
- *
- * @param num_entries
- * The number of entries to disclaim.
- *
- * @param block
- * Entries are always made available to a stage in the same order that they
- * were input in the stage. If a stage is multithread safe, this may mean that
- * full disclaiming of a batch of entries can not be considered complete until
- * all earlier threads in the stage have disclaimed. If this parameter is true
- * then the function blocks until the specified number of entries has been
- * disclaimed (or there are no more entries to disclaim). Otherwise it
- * disclaims as many claims as currently possible and an attempt to disclaim
- * them is made the next time a claim or disclaim function for this stage on
- * this thread is called.
- *
- * In a single threaded stage, this parameter has no effect.
- */
-void
-opdl_stage_disclaim_n(struct opdl_stage *s, uint32_t num_entries,
- bool block);
-
-/**
- * Check how many entries can be input.
- *
- * @param t
- * The opdl_ring instance to check.
- *
- * @return
- * The number of new entries currently allowed to be input.
- */
-uint32_t
-opdl_ring_available(struct opdl_ring *t);
-
/**
* Check how many entries can be processed in a stage.
*
@@ -487,23 +301,6 @@ opdl_ring_available(struct opdl_ring *t);
uint32_t
opdl_stage_available(struct opdl_stage *s);
-/**
- * Check how many entries are available to be processed.
- *
- * NOTE : DOES NOT CHANGE ANY STATE WITHIN THE STAGE
- *
- * @param s
- * The stage to check.
- *
- * @param num_entries
- * The number of entries to check for availability.
- *
- * @return
- * The number of entries currently available to be processed in this stage.
- */
-uint32_t
-opdl_stage_find_num_available(struct opdl_stage *s, uint32_t num_entries);
-
/**
* Create empty stage instance and return the pointer.
*
@@ -543,15 +340,6 @@ opdl_stage_set_queue_id(struct opdl_stage *s,
void
opdl_ring_dump(const struct opdl_ring *t, FILE *f);
-/**
- * Blocks until all entries in a opdl_ring have been processed by all stages.
- *
- * @param t
- * The opdl_ring instance to flush.
- */
-void
-opdl_ring_flush(struct opdl_ring *t);
-
/**
* Deallocates all resources used by a opdl_ring instance
*
@@ -561,30 +349,6 @@ opdl_ring_flush(struct opdl_ring *t);
void
opdl_ring_free(struct opdl_ring *t);
-/**
- * Search for a opdl_ring by its name
- *
- * @param name
- * The name of the opdl_ring.
- * @return
- * The pointer to the opdl_ring matching the name, or NULL if not found.
- *
- */
-struct opdl_ring *
-opdl_ring_lookup(const char *name);
-
-/**
- * Set a opdl_stage to threadsafe variable.
- *
- * @param s
- * The opdl_stage.
- * @param threadsafe
- * Threadsafe value.
- */
-void
-opdl_ring_set_stage_threadsafe(struct opdl_stage *s, bool threadsafe);
-
-
/**
* Compare the event descriptor with original version in the ring.
* if key field event descriptor is changed by application, then
diff --git a/drivers/net/ark/ark_ddm.c b/drivers/net/ark/ark_ddm.c
index 91d1179d88..2a6aa93ffe 100644
--- a/drivers/net/ark/ark_ddm.c
+++ b/drivers/net/ark/ark_ddm.c
@@ -92,19 +92,6 @@ ark_ddm_dump(struct ark_ddm_t *ddm, const char *msg)
);
}
-void
-ark_ddm_dump_stats(struct ark_ddm_t *ddm, const char *msg)
-{
- struct ark_ddm_stats_t *stats = &ddm->stats;
-
- ARK_PMD_LOG(INFO, "DDM Stats: %s"
- ARK_SU64 ARK_SU64 ARK_SU64
- "\n", msg,
- "Bytes:", stats->tx_byte_count,
- "Packets:", stats->tx_pkt_count,
- "MBufs", stats->tx_mbuf_count);
-}
-
int
ark_ddm_is_stopped(struct ark_ddm_t *ddm)
{
diff --git a/drivers/net/ark/ark_ddm.h b/drivers/net/ark/ark_ddm.h
index 5456b4b5cc..5b722b6ede 100644
--- a/drivers/net/ark/ark_ddm.h
+++ b/drivers/net/ark/ark_ddm.h
@@ -141,7 +141,6 @@ void ark_ddm_reset(struct ark_ddm_t *ddm);
void ark_ddm_stats_reset(struct ark_ddm_t *ddm);
void ark_ddm_setup(struct ark_ddm_t *ddm, rte_iova_t cons_addr,
uint32_t interval);
-void ark_ddm_dump_stats(struct ark_ddm_t *ddm, const char *msg);
void ark_ddm_dump(struct ark_ddm_t *ddm, const char *msg);
int ark_ddm_is_stopped(struct ark_ddm_t *ddm);
uint64_t ark_ddm_queue_byte_count(struct ark_ddm_t *ddm);
diff --git a/drivers/net/ark/ark_pktchkr.c b/drivers/net/ark/ark_pktchkr.c
index b8fb69497d..5a7e686f0e 100644
--- a/drivers/net/ark/ark_pktchkr.c
+++ b/drivers/net/ark/ark_pktchkr.c
@@ -15,7 +15,6 @@
#include "ark_logs.h"
static int set_arg(char *arg, char *val);
-static int ark_pktchkr_is_gen_forever(ark_pkt_chkr_t handle);
#define ARK_MAX_STR_LEN 64
union OPTV {
@@ -136,15 +135,6 @@ ark_pktchkr_stop(ark_pkt_chkr_t handle)
ARK_PMD_LOG(DEBUG, "Pktchk %d stopped.\n", inst->ordinal);
}
-int
-ark_pktchkr_is_running(ark_pkt_chkr_t handle)
-{
- struct ark_pkt_chkr_inst *inst = (struct ark_pkt_chkr_inst *)handle;
- uint32_t r = inst->sregs->pkt_start_stop;
-
- return ((r & 1) == 1);
-}
-
static void
ark_pktchkr_set_pkt_ctrl(ark_pkt_chkr_t handle,
uint32_t gen_forever,
@@ -173,48 +163,6 @@ ark_pktchkr_set_pkt_ctrl(ark_pkt_chkr_t handle,
inst->cregs->pkt_ctrl = r;
}
-static
-int
-ark_pktchkr_is_gen_forever(ark_pkt_chkr_t handle)
-{
- struct ark_pkt_chkr_inst *inst = (struct ark_pkt_chkr_inst *)handle;
- uint32_t r = inst->cregs->pkt_ctrl;
-
- return (((r >> 24) & 1) == 1);
-}
-
-int
-ark_pktchkr_wait_done(ark_pkt_chkr_t handle)
-{
- struct ark_pkt_chkr_inst *inst = (struct ark_pkt_chkr_inst *)handle;
-
- if (ark_pktchkr_is_gen_forever(handle)) {
- ARK_PMD_LOG(NOTICE, "Pktchk wait_done will not terminate"
- " because gen_forever=1\n");
- return -1;
- }
- int wait_cycle = 10;
-
- while (!ark_pktchkr_stopped(handle) && (wait_cycle > 0)) {
- usleep(1000);
- wait_cycle--;
- ARK_PMD_LOG(DEBUG, "Waiting for packet checker %d's"
- " internal pktgen to finish sending...\n",
- inst->ordinal);
- ARK_PMD_LOG(DEBUG, "Pktchk %d's pktgen done.\n",
- inst->ordinal);
- }
- return 0;
-}
-
-int
-ark_pktchkr_get_pkts_sent(ark_pkt_chkr_t handle)
-{
- struct ark_pkt_chkr_inst *inst = (struct ark_pkt_chkr_inst *)handle;
-
- return inst->cregs->pkts_sent;
-}
-
void
ark_pktchkr_set_payload_byte(ark_pkt_chkr_t handle, uint32_t b)
{
diff --git a/drivers/net/ark/ark_pktchkr.h b/drivers/net/ark/ark_pktchkr.h
index b362281776..2b0ba17d90 100644
--- a/drivers/net/ark/ark_pktchkr.h
+++ b/drivers/net/ark/ark_pktchkr.h
@@ -69,8 +69,6 @@ void ark_pktchkr_uninit(ark_pkt_chkr_t handle);
void ark_pktchkr_run(ark_pkt_chkr_t handle);
int ark_pktchkr_stopped(ark_pkt_chkr_t handle);
void ark_pktchkr_stop(ark_pkt_chkr_t handle);
-int ark_pktchkr_is_running(ark_pkt_chkr_t handle);
-int ark_pktchkr_get_pkts_sent(ark_pkt_chkr_t handle);
void ark_pktchkr_set_payload_byte(ark_pkt_chkr_t handle, uint32_t b);
void ark_pktchkr_set_pkt_size_min(ark_pkt_chkr_t handle, uint32_t x);
void ark_pktchkr_set_pkt_size_max(ark_pkt_chkr_t handle, uint32_t x);
@@ -83,6 +81,5 @@ void ark_pktchkr_set_hdr_dW(ark_pkt_chkr_t handle, uint32_t *hdr);
void ark_pktchkr_parse(char *args);
void ark_pktchkr_setup(ark_pkt_chkr_t handle);
void ark_pktchkr_dump_stats(ark_pkt_chkr_t handle);
-int ark_pktchkr_wait_done(ark_pkt_chkr_t handle);
#endif
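The removed ark_pktchkr_wait_done() (and its pktgen twin later in this patch) is a bounded poll loop: give up immediately if the generator is configured to run forever, otherwise sleep and recheck a completion flag for a fixed number of cycles. The shape, as a hedged standalone sketch with a caller-supplied predicate instead of the hardware query:

#include <stdbool.h>
#include <stddef.h>
#include <unistd.h>	/* usleep */

/* Poll done(arg) every interval_us until it returns true or tries runs
 * out. Returns 0 on completion, -1 on timeout, like the removed helper. */
static int
wait_done(bool (*done)(void *), void *arg, int tries, unsigned int interval_us)
{
	while (!done(arg) && tries-- > 0)
		usleep(interval_us);
	return done(arg) ? 0 : -1;
}

static bool
always_done(void *arg)
{
	(void)arg;
	return true;
}

int main(void)
{
	return wait_done(always_done, NULL, 10, 1000);
}

The gen_forever early-out in the original matters because an unbounded generator would otherwise turn the loop into a guaranteed timeout.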
diff --git a/drivers/net/ark/ark_pktdir.c b/drivers/net/ark/ark_pktdir.c
index 25e1218310..00bf165bff 100644
--- a/drivers/net/ark/ark_pktdir.c
+++ b/drivers/net/ark/ark_pktdir.c
@@ -26,31 +26,9 @@ ark_pktdir_init(void *base)
return inst;
}
-void
-ark_pktdir_uninit(ark_pkt_dir_t handle)
-{
- struct ark_pkt_dir_inst *inst = (struct ark_pkt_dir_inst *)handle;
-
- rte_free(inst);
-}
-
void
ark_pktdir_setup(ark_pkt_dir_t handle, uint32_t v)
{
struct ark_pkt_dir_inst *inst = (struct ark_pkt_dir_inst *)handle;
inst->regs->ctrl = v;
}
-
-uint32_t
-ark_pktdir_status(ark_pkt_dir_t handle)
-{
- struct ark_pkt_dir_inst *inst = (struct ark_pkt_dir_inst *)handle;
- return inst->regs->ctrl;
-}
-
-uint32_t
-ark_pktdir_stall_cnt(ark_pkt_dir_t handle)
-{
- struct ark_pkt_dir_inst *inst = (struct ark_pkt_dir_inst *)handle;
- return inst->regs->stall_cnt;
-}
diff --git a/drivers/net/ark/ark_pktdir.h b/drivers/net/ark/ark_pktdir.h
index 4afd128f95..e7f2026a00 100644
--- a/drivers/net/ark/ark_pktdir.h
+++ b/drivers/net/ark/ark_pktdir.h
@@ -33,9 +33,6 @@ struct ark_pkt_dir_inst {
};
ark_pkt_dir_t ark_pktdir_init(void *base);
-void ark_pktdir_uninit(ark_pkt_dir_t handle);
void ark_pktdir_setup(ark_pkt_dir_t handle, uint32_t v);
-uint32_t ark_pktdir_stall_cnt(ark_pkt_dir_t handle);
-uint32_t ark_pktdir_status(ark_pkt_dir_t handle);
#endif
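ark_pktdir_status() and ark_pktdir_stall_cnt(), dropped above, read memory-mapped registers through a struct overlaid on BAR space; the ark_pkt_dir_t handle is a thin wrapper around that overlay. A minimal sketch of the overlay idiom with a hypothetical register layout (volatile keeps the compiler from caching MMIO reads):

#include <stdint.h>

/* Hypothetical register block as laid out in BAR space; real drivers
 * static-assert the struct size against the hardware map. */
struct pktdir_regs {
	volatile uint32_t ctrl;
	volatile uint32_t stall_cnt;
};

struct pktdir_inst {
	struct pktdir_regs *regs;	/* points into the mapped BAR */
};

static uint32_t
pktdir_status(struct pktdir_inst *inst)
{
	return inst->regs->ctrl;	/* each access is a real device read */
}

static void
pktdir_setup(struct pktdir_inst *inst, uint32_t v)
{
	inst->regs->ctrl = v;
}

int main(void)
{
	struct pktdir_regs fake = { 0, 0 };	/* stands in for mapped BAR */
	struct pktdir_inst inst = { .regs = &fake };

	pktdir_setup(&inst, 0x1);
	return pktdir_status(&inst) == 0x1 ? 0 : 1;
}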
diff --git a/drivers/net/ark/ark_pktgen.c b/drivers/net/ark/ark_pktgen.c
index 4a02662a46..9769c46b47 100644
--- a/drivers/net/ark/ark_pktgen.c
+++ b/drivers/net/ark/ark_pktgen.c
@@ -186,33 +186,6 @@ ark_pktgen_is_gen_forever(ark_pkt_gen_t handle)
return (((r >> 24) & 1) == 1);
}
-void
-ark_pktgen_wait_done(ark_pkt_gen_t handle)
-{
- struct ark_pkt_gen_inst *inst = (struct ark_pkt_gen_inst *)handle;
- int wait_cycle = 10;
-
- if (ark_pktgen_is_gen_forever(handle))
- ARK_PMD_LOG(NOTICE, "Pktgen wait_done will not terminate"
- " because gen_forever=1\n");
-
- while (!ark_pktgen_tx_done(handle) && (wait_cycle > 0)) {
- usleep(1000);
- wait_cycle--;
- ARK_PMD_LOG(DEBUG,
- "Waiting for pktgen %d to finish sending...\n",
- inst->ordinal);
- }
- ARK_PMD_LOG(DEBUG, "Pktgen %d done.\n", inst->ordinal);
-}
-
-uint32_t
-ark_pktgen_get_pkts_sent(ark_pkt_gen_t handle)
-{
- struct ark_pkt_gen_inst *inst = (struct ark_pkt_gen_inst *)handle;
- return inst->regs->pkts_sent;
-}
-
void
ark_pktgen_set_payload_byte(ark_pkt_gen_t handle, uint32_t b)
{
diff --git a/drivers/net/ark/ark_pktgen.h b/drivers/net/ark/ark_pktgen.h
index c61dfee6db..cc78577d3d 100644
--- a/drivers/net/ark/ark_pktgen.h
+++ b/drivers/net/ark/ark_pktgen.h
@@ -60,8 +60,6 @@ uint32_t ark_pktgen_is_gen_forever(ark_pkt_gen_t handle);
uint32_t ark_pktgen_is_running(ark_pkt_gen_t handle);
uint32_t ark_pktgen_tx_done(ark_pkt_gen_t handle);
void ark_pktgen_reset(ark_pkt_gen_t handle);
-void ark_pktgen_wait_done(ark_pkt_gen_t handle);
-uint32_t ark_pktgen_get_pkts_sent(ark_pkt_gen_t handle);
void ark_pktgen_set_payload_byte(ark_pkt_gen_t handle, uint32_t b);
void ark_pktgen_set_pkt_spacing(ark_pkt_gen_t handle, uint32_t x);
void ark_pktgen_set_pkt_size_min(ark_pkt_gen_t handle, uint32_t x);
diff --git a/drivers/net/ark/ark_udm.c b/drivers/net/ark/ark_udm.c
index a740d36d43..2132f4e972 100644
--- a/drivers/net/ark/ark_udm.c
+++ b/drivers/net/ark/ark_udm.c
@@ -135,21 +135,6 @@ ark_udm_dump_stats(struct ark_udm_t *udm, const char *msg)
"MBuf Count", udm->stats.rx_mbuf_count);
}
-void
-ark_udm_dump_queue_stats(struct ark_udm_t *udm, const char *msg, uint16_t qid)
-{
- ARK_PMD_LOG(INFO, "UDM Queue %3u Stats: %s"
- ARK_SU64 ARK_SU64
- ARK_SU64 ARK_SU64
- ARK_SU64 "\n",
- qid, msg,
- "Pkts Received", udm->qstats.q_packet_count,
- "Pkts Finalized", udm->qstats.q_ff_packet_count,
- "Pkts Dropped", udm->qstats.q_pkt_drop,
- "Bytes Count", udm->qstats.q_byte_count,
- "MBuf Count", udm->qstats.q_mbuf_count);
-}
-
void
ark_udm_dump(struct ark_udm_t *udm, const char *msg)
{
diff --git a/drivers/net/ark/ark_udm.h b/drivers/net/ark/ark_udm.h
index 5846c825b8..7f0d3c2a5e 100644
--- a/drivers/net/ark/ark_udm.h
+++ b/drivers/net/ark/ark_udm.h
@@ -145,8 +145,6 @@ void ark_udm_configure(struct ark_udm_t *udm,
void ark_udm_write_addr(struct ark_udm_t *udm, rte_iova_t addr);
void ark_udm_stats_reset(struct ark_udm_t *udm);
void ark_udm_dump_stats(struct ark_udm_t *udm, const char *msg);
-void ark_udm_dump_queue_stats(struct ark_udm_t *udm, const char *msg,
- uint16_t qid);
void ark_udm_dump(struct ark_udm_t *udm, const char *msg);
void ark_udm_dump_perf(struct ark_udm_t *udm, const char *msg);
void ark_udm_dump_setup(struct ark_udm_t *udm, uint16_t q_id);
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/atlantic/hw_atl/hw_atl_b0.c
index 7d0e724019..415099e04a 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_b0.c
+++ b/drivers/net/atlantic/hw_atl/hw_atl_b0.c
@@ -480,20 +480,6 @@ int hw_atl_b0_hw_ring_tx_init(struct aq_hw_s *self, uint64_t base_addr,
return aq_hw_err_from_flags(self);
}
-int hw_atl_b0_hw_irq_enable(struct aq_hw_s *self, u64 mask)
-{
- hw_atl_itr_irq_msk_setlsw_set(self, LODWORD(mask));
- return aq_hw_err_from_flags(self);
-}
-
-int hw_atl_b0_hw_irq_disable(struct aq_hw_s *self, u64 mask)
-{
- hw_atl_itr_irq_msk_clearlsw_set(self, LODWORD(mask));
- hw_atl_itr_irq_status_clearlsw_set(self, LODWORD(mask));
-
- return aq_hw_err_from_flags(self);
-}
-
int hw_atl_b0_hw_irq_read(struct aq_hw_s *self, u64 *mask)
{
*mask = hw_atl_itr_irq_statuslsw_get(self);
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_b0.h b/drivers/net/atlantic/hw_atl/hw_atl_b0.h
index d1ba2aceb3..4a155d2bc7 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_b0.h
+++ b/drivers/net/atlantic/hw_atl/hw_atl_b0.h
@@ -35,8 +35,6 @@ int hw_atl_b0_hw_rss_hash_set(struct aq_hw_s *self,
int hw_atl_b0_hw_rss_set(struct aq_hw_s *self,
struct aq_rss_parameters *rss_params);
-int hw_atl_b0_hw_irq_enable(struct aq_hw_s *self, u64 mask);
-int hw_atl_b0_hw_irq_disable(struct aq_hw_s *self, u64 mask);
int hw_atl_b0_hw_irq_read(struct aq_hw_s *self, u64 *mask);
#endif /* HW_ATL_B0_H */
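hw_atl_b0_hw_irq_enable()/_disable(), removed above, rely on the common set-register/clear-register pair: writing a mask to the set register enables exactly those bits and writing to the clear register disables them, so no read-modify-write (and no lock) is needed on the enable mask. A generic sketch against hypothetical offsets (the real driver goes through aq_hw_write_reg() at the HW_ATL_ITR_* addresses):

#include <stdint.h>

#define REG_IRQ_MASK_SET	0x2060	/* write 1s to enable */
#define REG_IRQ_MASK_CLEAR	0x2070	/* write 1s to disable */
#define REG_IRQ_STATUS_CLEAR	0x2050	/* write 1s to ack pending */

static void
reg_write(volatile uint8_t *bar, uint32_t off, uint32_t v)
{
	*(volatile uint32_t *)(bar + off) = v;
}

static void
irq_enable(volatile uint8_t *bar, uint64_t mask)
{
	/* Only the low dword is used, mirroring LODWORD(mask). */
	reg_write(bar, REG_IRQ_MASK_SET, (uint32_t)mask);
}

static void
irq_disable(volatile uint8_t *bar, uint64_t mask)
{
	reg_write(bar, REG_IRQ_MASK_CLEAR, (uint32_t)mask);
	reg_write(bar, REG_IRQ_STATUS_CLEAR, (uint32_t)mask);
}

int main(void)
{
	static uint8_t fake_bar[0x3000];	/* stands in for mapped BAR0 */

	irq_enable(fake_bar, 0x3);
	irq_disable(fake_bar, 0x1);
	return 0;
}

Because set and clear are separate write-only operations, concurrent enable and disable paths cannot clobber each other's bits.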
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_llh.c b/drivers/net/atlantic/hw_atl/hw_atl_llh.c
index 2dc5be2ff1..b29419bce3 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_llh.c
+++ b/drivers/net/atlantic/hw_atl/hw_atl_llh.c
@@ -22,28 +22,6 @@ u32 hw_atl_reg_glb_cpu_sem_get(struct aq_hw_s *aq_hw, u32 semaphore)
return aq_hw_read_reg(aq_hw, HW_ATL_GLB_CPU_SEM_ADR(semaphore));
}
-void hw_atl_glb_glb_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 glb_reg_res_dis)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_GLB_REG_RES_DIS_ADR,
- HW_ATL_GLB_REG_RES_DIS_MSK,
- HW_ATL_GLB_REG_RES_DIS_SHIFT,
- glb_reg_res_dis);
-}
-
-void hw_atl_glb_soft_res_set(struct aq_hw_s *aq_hw, u32 soft_res)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_GLB_SOFT_RES_ADR,
- HW_ATL_GLB_SOFT_RES_MSK,
- HW_ATL_GLB_SOFT_RES_SHIFT, soft_res);
-}
-
-u32 hw_atl_glb_soft_res_get(struct aq_hw_s *aq_hw)
-{
- return aq_hw_read_reg_bit(aq_hw, HW_ATL_GLB_SOFT_RES_ADR,
- HW_ATL_GLB_SOFT_RES_MSK,
- HW_ATL_GLB_SOFT_RES_SHIFT);
-}
-
u32 hw_atl_reg_glb_mif_id_get(struct aq_hw_s *aq_hw)
{
return aq_hw_read_reg(aq_hw, HW_ATL_GLB_MIF_ID_ADR);
@@ -275,13 +253,6 @@ void hw_atl_itr_irq_msk_setlsw_set(struct aq_hw_s *aq_hw, u32 irq_msk_setlsw)
aq_hw_write_reg(aq_hw, HW_ATL_ITR_IMSRLSW_ADR, irq_msk_setlsw);
}
-void hw_atl_itr_irq_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 irq_reg_res_dis)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_ITR_REG_RES_DSBL_ADR,
- HW_ATL_ITR_REG_RES_DSBL_MSK,
- HW_ATL_ITR_REG_RES_DSBL_SHIFT, irq_reg_res_dis);
-}
-
void hw_atl_itr_irq_status_clearlsw_set(struct aq_hw_s *aq_hw,
u32 irq_status_clearlsw)
{
@@ -293,18 +264,6 @@ u32 hw_atl_itr_irq_statuslsw_get(struct aq_hw_s *aq_hw)
return aq_hw_read_reg(aq_hw, HW_ATL_ITR_ISRLSW_ADR);
}
-u32 hw_atl_itr_res_irq_get(struct aq_hw_s *aq_hw)
-{
- return aq_hw_read_reg_bit(aq_hw, HW_ATL_ITR_RES_ADR, HW_ATL_ITR_RES_MSK,
- HW_ATL_ITR_RES_SHIFT);
-}
-
-void hw_atl_itr_res_irq_set(struct aq_hw_s *aq_hw, u32 res_irq)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_ITR_RES_ADR, HW_ATL_ITR_RES_MSK,
- HW_ATL_ITR_RES_SHIFT, res_irq);
-}
-
/* rdm */
void hw_atl_rdm_cpu_id_set(struct aq_hw_s *aq_hw, u32 cpuid, u32 dca)
{
@@ -374,13 +333,6 @@ void hw_atl_rdm_rx_desc_head_splitting_set(struct aq_hw_s *aq_hw,
rx_desc_head_splitting);
}
-u32 hw_atl_rdm_rx_desc_head_ptr_get(struct aq_hw_s *aq_hw, u32 descriptor)
-{
- return aq_hw_read_reg_bit(aq_hw, HW_ATL_RDM_DESCDHD_ADR(descriptor),
- HW_ATL_RDM_DESCDHD_MSK,
- HW_ATL_RDM_DESCDHD_SHIFT);
-}
-
void hw_atl_rdm_rx_desc_len_set(struct aq_hw_s *aq_hw, u32 rx_desc_len,
u32 descriptor)
{
@@ -389,15 +341,6 @@ void hw_atl_rdm_rx_desc_len_set(struct aq_hw_s *aq_hw, u32 rx_desc_len,
rx_desc_len);
}
-void hw_atl_rdm_rx_desc_res_set(struct aq_hw_s *aq_hw, u32 rx_desc_res,
- u32 descriptor)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RDM_DESCDRESET_ADR(descriptor),
- HW_ATL_RDM_DESCDRESET_MSK,
- HW_ATL_RDM_DESCDRESET_SHIFT,
- rx_desc_res);
-}
-
void hw_atl_rdm_rx_desc_wr_wb_irq_en_set(struct aq_hw_s *aq_hw,
u32 rx_desc_wr_wb_irq_en)
{
@@ -425,15 +368,6 @@ void hw_atl_rdm_rx_pld_dca_en_set(struct aq_hw_s *aq_hw, u32 rx_pld_dca_en,
rx_pld_dca_en);
}
-void hw_atl_rdm_rdm_intr_moder_en_set(struct aq_hw_s *aq_hw,
- u32 rdm_intr_moder_en)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RDM_INT_RIM_EN_ADR,
- HW_ATL_RDM_INT_RIM_EN_MSK,
- HW_ATL_RDM_INT_RIM_EN_SHIFT,
- rdm_intr_moder_en);
-}
-
/* reg */
void hw_atl_reg_gen_irq_map_set(struct aq_hw_s *aq_hw, u32 gen_intr_map,
u32 regidx)
@@ -441,21 +375,11 @@ void hw_atl_reg_gen_irq_map_set(struct aq_hw_s *aq_hw, u32 gen_intr_map,
aq_hw_write_reg(aq_hw, HW_ATL_GEN_INTR_MAP_ADR(regidx), gen_intr_map);
}
-u32 hw_atl_reg_gen_irq_status_get(struct aq_hw_s *aq_hw)
-{
- return aq_hw_read_reg(aq_hw, HW_ATL_GEN_INTR_STAT_ADR);
-}
-
void hw_atl_reg_irq_glb_ctl_set(struct aq_hw_s *aq_hw, u32 intr_glb_ctl)
{
aq_hw_write_reg(aq_hw, HW_ATL_INTR_GLB_CTL_ADR, intr_glb_ctl);
}
-void hw_atl_reg_irq_thr_set(struct aq_hw_s *aq_hw, u32 intr_thr, u32 throttle)
-{
- aq_hw_write_reg(aq_hw, HW_ATL_INTR_THR_ADR(throttle), intr_thr);
-}
-
void hw_atl_reg_rx_dma_desc_base_addresslswset(struct aq_hw_s *aq_hw,
u32 rx_dma_desc_base_addrlsw,
u32 descriptor)
@@ -472,11 +396,6 @@ void hw_atl_reg_rx_dma_desc_base_addressmswset(struct aq_hw_s *aq_hw,
rx_dma_desc_base_addrmsw);
}
-u32 hw_atl_reg_rx_dma_desc_status_get(struct aq_hw_s *aq_hw, u32 descriptor)
-{
- return aq_hw_read_reg(aq_hw, HW_ATL_RX_DMA_DESC_STAT_ADR(descriptor));
-}
-
void hw_atl_reg_rx_dma_desc_tail_ptr_set(struct aq_hw_s *aq_hw,
u32 rx_dma_desc_tail_ptr,
u32 descriptor)
@@ -506,26 +425,6 @@ void hw_atl_reg_rx_flr_rss_control1set(struct aq_hw_s *aq_hw,
rx_flr_rss_control1);
}
-void hw_atl_reg_rx_flr_control2_set(struct aq_hw_s *aq_hw,
- u32 rx_filter_control2)
-{
- aq_hw_write_reg(aq_hw, HW_ATL_RX_FLR_CONTROL2_ADR, rx_filter_control2);
-}
-
-void hw_atl_reg_rx_intr_moder_ctrl_set(struct aq_hw_s *aq_hw,
- u32 rx_intr_moderation_ctl,
- u32 queue)
-{
- aq_hw_write_reg(aq_hw, HW_ATL_RX_INTR_MODERATION_CTL_ADR(queue),
- rx_intr_moderation_ctl);
-}
-
-void hw_atl_reg_tx_dma_debug_ctl_set(struct aq_hw_s *aq_hw,
- u32 tx_dma_debug_ctl)
-{
- aq_hw_write_reg(aq_hw, HW_ATL_TX_DMA_DEBUG_CTL_ADR, tx_dma_debug_ctl);
-}
-
void hw_atl_reg_tx_dma_desc_base_addresslswset(struct aq_hw_s *aq_hw,
u32 tx_dma_desc_base_addrlsw,
u32 descriptor)
@@ -552,22 +451,7 @@ void hw_atl_reg_tx_dma_desc_tail_ptr_set(struct aq_hw_s *aq_hw,
tx_dma_desc_tail_ptr);
}
-void hw_atl_reg_tx_intr_moder_ctrl_set(struct aq_hw_s *aq_hw,
- u32 tx_intr_moderation_ctl,
- u32 queue)
-{
- aq_hw_write_reg(aq_hw, HW_ATL_TX_INTR_MODERATION_CTL_ADR(queue),
- tx_intr_moderation_ctl);
-}
-
/* RPB: rx packet buffer */
-void hw_atl_rpb_dma_sys_lbk_set(struct aq_hw_s *aq_hw, u32 dma_sys_lbk)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPB_DMA_SYS_LBK_ADR,
- HW_ATL_RPB_DMA_SYS_LBK_MSK,
- HW_ATL_RPB_DMA_SYS_LBK_SHIFT, dma_sys_lbk);
-}
-
void hw_atl_rpb_rpf_rx_traf_class_mode_set(struct aq_hw_s *aq_hw,
u32 rx_traf_class_mode)
{
@@ -577,13 +461,6 @@ void hw_atl_rpb_rpf_rx_traf_class_mode_set(struct aq_hw_s *aq_hw,
rx_traf_class_mode);
}
-u32 hw_atl_rpb_rpf_rx_traf_class_mode_get(struct aq_hw_s *aq_hw)
-{
- return aq_hw_read_reg_bit(aq_hw, HW_ATL_RPB_RPF_RX_TC_MODE_ADR,
- HW_ATL_RPB_RPF_RX_TC_MODE_MSK,
- HW_ATL_RPB_RPF_RX_TC_MODE_SHIFT);
-}
-
void hw_atl_rpb_rx_buff_en_set(struct aq_hw_s *aq_hw, u32 rx_buff_en)
{
aq_hw_write_reg_bit(aq_hw, HW_ATL_RPB_RX_BUF_EN_ADR,
@@ -664,15 +541,6 @@ void hw_atl_rpfl2broadcast_flr_act_set(struct aq_hw_s *aq_hw,
HW_ATL_RPFL2BC_ACT_SHIFT, l2broadcast_flr_act);
}
-void hw_atl_rpfl2multicast_flr_en_set(struct aq_hw_s *aq_hw,
- u32 l2multicast_flr_en,
- u32 filter)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPFL2MC_ENF_ADR(filter),
- HW_ATL_RPFL2MC_ENF_MSK,
- HW_ATL_RPFL2MC_ENF_SHIFT, l2multicast_flr_en);
-}
-
void hw_atl_rpfl2promiscuous_mode_en_set(struct aq_hw_s *aq_hw,
u32 l2promiscuous_mode_en)
{
@@ -813,15 +681,6 @@ void hw_atl_rpf_rss_redir_wr_en_set(struct aq_hw_s *aq_hw, u32 rss_redir_wr_en)
HW_ATL_RPF_RSS_REDIR_WR_ENI_SHIFT, rss_redir_wr_en);
}
-void hw_atl_rpf_tpo_to_rpf_sys_lbk_set(struct aq_hw_s *aq_hw,
- u32 tpo_to_rpf_sys_lbk)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_TPO_RPF_SYS_LBK_ADR,
- HW_ATL_RPF_TPO_RPF_SYS_LBK_MSK,
- HW_ATL_RPF_TPO_RPF_SYS_LBK_SHIFT,
- tpo_to_rpf_sys_lbk);
-}
-
void hw_atl_rpf_vlan_inner_etht_set(struct aq_hw_s *aq_hw, u32 vlan_inner_etht)
{
aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_VL_INNER_TPID_ADR,
@@ -847,24 +706,6 @@ void hw_atl_rpf_vlan_prom_mode_en_set(struct aq_hw_s *aq_hw,
vlan_prom_mode_en);
}
-void hw_atl_rpf_vlan_accept_untagged_packets_set(struct aq_hw_s *aq_hw,
- u32 vlan_acc_untagged_packets)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_VL_ACCEPT_UNTAGGED_MODE_ADR,
- HW_ATL_RPF_VL_ACCEPT_UNTAGGED_MODE_MSK,
- HW_ATL_RPF_VL_ACCEPT_UNTAGGED_MODE_SHIFT,
- vlan_acc_untagged_packets);
-}
-
-void hw_atl_rpf_vlan_untagged_act_set(struct aq_hw_s *aq_hw,
- u32 vlan_untagged_act)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_VL_UNTAGGED_ACT_ADR,
- HW_ATL_RPF_VL_UNTAGGED_ACT_MSK,
- HW_ATL_RPF_VL_UNTAGGED_ACT_SHIFT,
- vlan_untagged_act);
-}
-
void hw_atl_rpf_vlan_flr_en_set(struct aq_hw_s *aq_hw, u32 vlan_flr_en,
u32 filter)
{
@@ -892,73 +733,6 @@ void hw_atl_rpf_vlan_id_flr_set(struct aq_hw_s *aq_hw, u32 vlan_id_flr,
vlan_id_flr);
}
-void hw_atl_rpf_etht_flr_en_set(struct aq_hw_s *aq_hw, u32 etht_flr_en,
- u32 filter)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_ENF_ADR(filter),
- HW_ATL_RPF_ET_ENF_MSK,
- HW_ATL_RPF_ET_ENF_SHIFT, etht_flr_en);
-}
-
-void hw_atl_rpf_etht_user_priority_en_set(struct aq_hw_s *aq_hw,
- u32 etht_user_priority_en, u32 filter)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_UPFEN_ADR(filter),
- HW_ATL_RPF_ET_UPFEN_MSK, HW_ATL_RPF_ET_UPFEN_SHIFT,
- etht_user_priority_en);
-}
-
-void hw_atl_rpf_etht_rx_queue_en_set(struct aq_hw_s *aq_hw,
- u32 etht_rx_queue_en,
- u32 filter)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_RXQFEN_ADR(filter),
- HW_ATL_RPF_ET_RXQFEN_MSK,
- HW_ATL_RPF_ET_RXQFEN_SHIFT,
- etht_rx_queue_en);
-}
-
-void hw_atl_rpf_etht_user_priority_set(struct aq_hw_s *aq_hw,
- u32 etht_user_priority,
- u32 filter)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_UPF_ADR(filter),
- HW_ATL_RPF_ET_UPF_MSK,
- HW_ATL_RPF_ET_UPF_SHIFT, etht_user_priority);
-}
-
-void hw_atl_rpf_etht_rx_queue_set(struct aq_hw_s *aq_hw, u32 etht_rx_queue,
- u32 filter)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_RXQF_ADR(filter),
- HW_ATL_RPF_ET_RXQF_MSK,
- HW_ATL_RPF_ET_RXQF_SHIFT, etht_rx_queue);
-}
-
-void hw_atl_rpf_etht_mgt_queue_set(struct aq_hw_s *aq_hw, u32 etht_mgt_queue,
- u32 filter)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_MNG_RXQF_ADR(filter),
- HW_ATL_RPF_ET_MNG_RXQF_MSK,
- HW_ATL_RPF_ET_MNG_RXQF_SHIFT,
- etht_mgt_queue);
-}
-
-void hw_atl_rpf_etht_flr_act_set(struct aq_hw_s *aq_hw, u32 etht_flr_act,
- u32 filter)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_ACTF_ADR(filter),
- HW_ATL_RPF_ET_ACTF_MSK,
- HW_ATL_RPF_ET_ACTF_SHIFT, etht_flr_act);
-}
-
-void hw_atl_rpf_etht_flr_set(struct aq_hw_s *aq_hw, u32 etht_flr, u32 filter)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_ET_VALF_ADR(filter),
- HW_ATL_RPF_ET_VALF_MSK,
- HW_ATL_RPF_ET_VALF_SHIFT, etht_flr);
-}
-
/* RPO: rx packet offload */
void hw_atl_rpo_ipv4header_crc_offload_en_set(struct aq_hw_s *aq_hw,
u32 ipv4header_crc_offload_en)
@@ -1156,13 +930,6 @@ void hw_atl_tdm_tx_desc_en_set(struct aq_hw_s *aq_hw, u32 tx_desc_en,
tx_desc_en);
}
-u32 hw_atl_tdm_tx_desc_head_ptr_get(struct aq_hw_s *aq_hw, u32 descriptor)
-{
- return aq_hw_read_reg_bit(aq_hw, HW_ATL_TDM_DESCDHD_ADR(descriptor),
- HW_ATL_TDM_DESCDHD_MSK,
- HW_ATL_TDM_DESCDHD_SHIFT);
-}
-
void hw_atl_tdm_tx_desc_len_set(struct aq_hw_s *aq_hw, u32 tx_desc_len,
u32 descriptor)
{
@@ -1191,15 +958,6 @@ void hw_atl_tdm_tx_desc_wr_wb_threshold_set(struct aq_hw_s *aq_hw,
tx_desc_wr_wb_threshold);
}
-void hw_atl_tdm_tdm_intr_moder_en_set(struct aq_hw_s *aq_hw,
- u32 tdm_irq_moderation_en)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_TDM_INT_MOD_EN_ADR,
- HW_ATL_TDM_INT_MOD_EN_MSK,
- HW_ATL_TDM_INT_MOD_EN_SHIFT,
- tdm_irq_moderation_en);
-}
-
/* thm */
void hw_atl_thm_lso_tcp_flag_of_first_pkt_set(struct aq_hw_s *aq_hw,
u32 lso_tcp_flag_of_first_pkt)
@@ -1236,13 +994,6 @@ void hw_atl_tpb_tx_buff_en_set(struct aq_hw_s *aq_hw, u32 tx_buff_en)
HW_ATL_TPB_TX_BUF_EN_SHIFT, tx_buff_en);
}
-u32 hw_atl_rpb_tps_tx_tc_mode_get(struct aq_hw_s *aq_hw)
-{
- return aq_hw_read_reg_bit(aq_hw, HW_ATL_TPB_TX_TC_MODE_ADDR,
- HW_ATL_TPB_TX_TC_MODE_MSK,
- HW_ATL_TPB_TX_TC_MODE_SHIFT);
-}
-
void hw_atl_rpb_tps_tx_tc_mode_set(struct aq_hw_s *aq_hw,
u32 tx_traf_class_mode)
{
@@ -1272,15 +1023,6 @@ void hw_atl_tpb_tx_buff_lo_threshold_per_tc_set(struct aq_hw_s *aq_hw,
tx_buff_lo_threshold_per_tc);
}
-void hw_atl_tpb_tx_dma_sys_lbk_en_set(struct aq_hw_s *aq_hw,
- u32 tx_dma_sys_lbk_en)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_TPB_DMA_SYS_LBK_ADR,
- HW_ATL_TPB_DMA_SYS_LBK_MSK,
- HW_ATL_TPB_DMA_SYS_LBK_SHIFT,
- tx_dma_sys_lbk_en);
-}
-
void hw_atl_tpb_tx_pkt_buff_size_per_tc_set(struct aq_hw_s *aq_hw,
u32 tx_pkt_buff_size_per_tc,
u32 buffer)
@@ -1319,15 +1061,6 @@ void hw_atl_tpo_tcp_udp_crc_offload_en_set(struct aq_hw_s *aq_hw,
tcp_udp_crc_offload_en);
}
-void hw_atl_tpo_tx_pkt_sys_lbk_en_set(struct aq_hw_s *aq_hw,
- u32 tx_pkt_sys_lbk_en)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_TPO_PKT_SYS_LBK_ADR,
- HW_ATL_TPO_PKT_SYS_LBK_MSK,
- HW_ATL_TPO_PKT_SYS_LBK_SHIFT,
- tx_pkt_sys_lbk_en);
-}
-
/* TPS: tx packet scheduler */
void hw_atl_tps_tx_pkt_shed_data_arb_mode_set(struct aq_hw_s *aq_hw,
u32 tx_pkt_shed_data_arb_mode)
@@ -1422,58 +1155,7 @@ void hw_atl_tx_tx_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 tx_reg_res_dis)
HW_ATL_TX_REG_RES_DSBL_SHIFT, tx_reg_res_dis);
}
-/* msm */
-u32 hw_atl_msm_reg_access_status_get(struct aq_hw_s *aq_hw)
-{
- return aq_hw_read_reg_bit(aq_hw, HW_ATL_MSM_REG_ACCESS_BUSY_ADR,
- HW_ATL_MSM_REG_ACCESS_BUSY_MSK,
- HW_ATL_MSM_REG_ACCESS_BUSY_SHIFT);
-}
-
-void hw_atl_msm_reg_addr_for_indirect_addr_set(struct aq_hw_s *aq_hw,
- u32 reg_addr_for_indirect_addr)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_MSM_REG_ADDR_ADR,
- HW_ATL_MSM_REG_ADDR_MSK,
- HW_ATL_MSM_REG_ADDR_SHIFT,
- reg_addr_for_indirect_addr);
-}
-
-void hw_atl_msm_reg_rd_strobe_set(struct aq_hw_s *aq_hw, u32 reg_rd_strobe)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_MSM_REG_RD_STROBE_ADR,
- HW_ATL_MSM_REG_RD_STROBE_MSK,
- HW_ATL_MSM_REG_RD_STROBE_SHIFT,
- reg_rd_strobe);
-}
-
-u32 hw_atl_msm_reg_rd_data_get(struct aq_hw_s *aq_hw)
-{
- return aq_hw_read_reg(aq_hw, HW_ATL_MSM_REG_RD_DATA_ADR);
-}
-
-void hw_atl_msm_reg_wr_data_set(struct aq_hw_s *aq_hw, u32 reg_wr_data)
-{
- aq_hw_write_reg(aq_hw, HW_ATL_MSM_REG_WR_DATA_ADR, reg_wr_data);
-}
-
-void hw_atl_msm_reg_wr_strobe_set(struct aq_hw_s *aq_hw, u32 reg_wr_strobe)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_MSM_REG_WR_STROBE_ADR,
- HW_ATL_MSM_REG_WR_STROBE_MSK,
- HW_ATL_MSM_REG_WR_STROBE_SHIFT,
- reg_wr_strobe);
-}
-
/* pci */
-void hw_atl_pci_pci_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 pci_reg_res_dis)
-{
- aq_hw_write_reg_bit(aq_hw, HW_ATL_PCI_REG_RES_DSBL_ADR,
- HW_ATL_PCI_REG_RES_DSBL_MSK,
- HW_ATL_PCI_REG_RES_DSBL_SHIFT,
- pci_reg_res_dis);
-}
-
void hw_atl_reg_glb_cpu_scratch_scp_set(struct aq_hw_s *aq_hw,
u32 glb_cpu_scratch_scp,
u32 scratch_scp)
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_llh.h b/drivers/net/atlantic/hw_atl/hw_atl_llh.h
index e30083cea5..493fd88934 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_llh.h
+++ b/drivers/net/atlantic/hw_atl/hw_atl_llh.h
@@ -21,15 +21,6 @@ void hw_atl_reg_glb_cpu_sem_set(struct aq_hw_s *aq_hw, u32 glb_cpu_sem,
/* get global microprocessor semaphore */
u32 hw_atl_reg_glb_cpu_sem_get(struct aq_hw_s *aq_hw, u32 semaphore);
-/* set global register reset disable */
-void hw_atl_glb_glb_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 glb_reg_res_dis);
-
-/* set soft reset */
-void hw_atl_glb_soft_res_set(struct aq_hw_s *aq_hw, u32 soft_res);
-
-/* get soft reset */
-u32 hw_atl_glb_soft_res_get(struct aq_hw_s *aq_hw);
-
/* stats */
u32 hw_atl_rpb_rx_dma_drop_pkt_cnt_get(struct aq_hw_s *aq_hw);
@@ -130,9 +121,6 @@ void hw_atl_itr_irq_msk_clearlsw_set(struct aq_hw_s *aq_hw,
/* set interrupt mask set lsw */
void hw_atl_itr_irq_msk_setlsw_set(struct aq_hw_s *aq_hw, u32 irq_msk_setlsw);
-/* set interrupt register reset disable */
-void hw_atl_itr_irq_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 irq_reg_res_dis);
-
/* set interrupt status clear lsw */
void hw_atl_itr_irq_status_clearlsw_set(struct aq_hw_s *aq_hw,
u32 irq_status_clearlsw);
@@ -140,12 +128,6 @@ void hw_atl_itr_irq_status_clearlsw_set(struct aq_hw_s *aq_hw,
/* get interrupt status lsw */
u32 hw_atl_itr_irq_statuslsw_get(struct aq_hw_s *aq_hw);
-/* get reset interrupt */
-u32 hw_atl_itr_res_irq_get(struct aq_hw_s *aq_hw);
-
-/* set reset interrupt */
-void hw_atl_itr_res_irq_set(struct aq_hw_s *aq_hw, u32 res_irq);
-
/* rdm */
/* set cpu id */
@@ -175,9 +157,6 @@ void hw_atl_rdm_rx_desc_head_splitting_set(struct aq_hw_s *aq_hw,
u32 rx_desc_head_splitting,
u32 descriptor);
-/* get rx descriptor head pointer */
-u32 hw_atl_rdm_rx_desc_head_ptr_get(struct aq_hw_s *aq_hw, u32 descriptor);
-
/* set rx descriptor length */
void hw_atl_rdm_rx_desc_len_set(struct aq_hw_s *aq_hw, u32 rx_desc_len,
u32 descriptor);
@@ -199,29 +178,15 @@ void hw_atl_rdm_rx_desc_head_buff_size_set(struct aq_hw_s *aq_hw,
u32 rx_desc_head_buff_size,
u32 descriptor);
-/* set rx descriptor reset */
-void hw_atl_rdm_rx_desc_res_set(struct aq_hw_s *aq_hw, u32 rx_desc_res,
- u32 descriptor);
-
-/* Set RDM Interrupt Moderation Enable */
-void hw_atl_rdm_rdm_intr_moder_en_set(struct aq_hw_s *aq_hw,
- u32 rdm_intr_moder_en);
-
/* reg */
/* set general interrupt mapping register */
void hw_atl_reg_gen_irq_map_set(struct aq_hw_s *aq_hw, u32 gen_intr_map,
u32 regidx);
-/* get general interrupt status register */
-u32 hw_atl_reg_gen_irq_status_get(struct aq_hw_s *aq_hw);
-
/* set interrupt global control register */
void hw_atl_reg_irq_glb_ctl_set(struct aq_hw_s *aq_hw, u32 intr_glb_ctl);
-/* set interrupt throttle register */
-void hw_atl_reg_irq_thr_set(struct aq_hw_s *aq_hw, u32 intr_thr, u32 throttle);
-
/* set rx dma descriptor base address lsw */
void hw_atl_reg_rx_dma_desc_base_addresslswset(struct aq_hw_s *aq_hw,
u32 rx_dma_desc_base_addrlsw,
@@ -232,9 +197,6 @@ void hw_atl_reg_rx_dma_desc_base_addressmswset(struct aq_hw_s *aq_hw,
u32 rx_dma_desc_base_addrmsw,
u32 descriptor);
-/* get rx dma descriptor status register */
-u32 hw_atl_reg_rx_dma_desc_status_get(struct aq_hw_s *aq_hw, u32 descriptor);
-
/* set rx dma descriptor tail pointer register */
void hw_atl_reg_rx_dma_desc_tail_ptr_set(struct aq_hw_s *aq_hw,
u32 rx_dma_desc_tail_ptr,
@@ -252,18 +214,6 @@ void hw_atl_reg_rx_flr_mcst_flr_set(struct aq_hw_s *aq_hw, u32 rx_flr_mcst_flr,
void hw_atl_reg_rx_flr_rss_control1set(struct aq_hw_s *aq_hw,
u32 rx_flr_rss_control1);
-/* Set RX Filter Control Register 2 */
-void hw_atl_reg_rx_flr_control2_set(struct aq_hw_s *aq_hw, u32 rx_flr_control2);
-
-/* Set RX Interrupt Moderation Control Register */
-void hw_atl_reg_rx_intr_moder_ctrl_set(struct aq_hw_s *aq_hw,
- u32 rx_intr_moderation_ctl,
- u32 queue);
-
-/* set tx dma debug control */
-void hw_atl_reg_tx_dma_debug_ctl_set(struct aq_hw_s *aq_hw,
- u32 tx_dma_debug_ctl);
-
/* set tx dma descriptor base address lsw */
void hw_atl_reg_tx_dma_desc_base_addresslswset(struct aq_hw_s *aq_hw,
u32 tx_dma_desc_base_addrlsw,
@@ -279,11 +229,6 @@ void hw_atl_reg_tx_dma_desc_tail_ptr_set(struct aq_hw_s *aq_hw,
u32 tx_dma_desc_tail_ptr,
u32 descriptor);
-/* Set TX Interrupt Moderation Control Register */
-void hw_atl_reg_tx_intr_moder_ctrl_set(struct aq_hw_s *aq_hw,
- u32 tx_intr_moderation_ctl,
- u32 queue);
-
/* set global microprocessor scratch pad */
void hw_atl_reg_glb_cpu_scratch_scp_set(struct aq_hw_s *aq_hw,
u32 glb_cpu_scratch_scp,
@@ -291,16 +236,10 @@ void hw_atl_reg_glb_cpu_scratch_scp_set(struct aq_hw_s *aq_hw,
/* rpb */
-/* set dma system loopback */
-void hw_atl_rpb_dma_sys_lbk_set(struct aq_hw_s *aq_hw, u32 dma_sys_lbk);
-
/* set rx traffic class mode */
void hw_atl_rpb_rpf_rx_traf_class_mode_set(struct aq_hw_s *aq_hw,
u32 rx_traf_class_mode);
-/* get rx traffic class mode */
-u32 hw_atl_rpb_rpf_rx_traf_class_mode_get(struct aq_hw_s *aq_hw);
-
/* set rx buffer enable */
void hw_atl_rpb_rx_buff_en_set(struct aq_hw_s *aq_hw, u32 rx_buff_en);
@@ -341,11 +280,6 @@ void hw_atl_rpfl2broadcast_en_set(struct aq_hw_s *aq_hw, u32 l2broadcast_en);
void hw_atl_rpfl2broadcast_flr_act_set(struct aq_hw_s *aq_hw,
u32 l2broadcast_flr_act);
-/* set l2 multicast filter enable */
-void hw_atl_rpfl2multicast_flr_en_set(struct aq_hw_s *aq_hw,
- u32 l2multicast_flr_en,
- u32 filter);
-
/* set l2 promiscuous mode enable */
void hw_atl_rpfl2promiscuous_mode_en_set(struct aq_hw_s *aq_hw,
u32 l2promiscuous_mode_en);
@@ -403,10 +337,6 @@ u32 hw_atl_rpf_rss_redir_wr_en_get(struct aq_hw_s *aq_hw);
/* set rss redirection write enable */
void hw_atl_rpf_rss_redir_wr_en_set(struct aq_hw_s *aq_hw, u32 rss_redir_wr_en);
-/* set tpo to rpf system loopback */
-void hw_atl_rpf_tpo_to_rpf_sys_lbk_set(struct aq_hw_s *aq_hw,
- u32 tpo_to_rpf_sys_lbk);
-
/* set vlan inner ethertype */
void hw_atl_rpf_vlan_inner_etht_set(struct aq_hw_s *aq_hw, u32 vlan_inner_etht);
@@ -417,14 +347,6 @@ void hw_atl_rpf_vlan_outer_etht_set(struct aq_hw_s *aq_hw, u32 vlan_outer_etht);
void hw_atl_rpf_vlan_prom_mode_en_set(struct aq_hw_s *aq_hw,
u32 vlan_prom_mode_en);
-/* Set VLAN untagged action */
-void hw_atl_rpf_vlan_untagged_act_set(struct aq_hw_s *aq_hw,
- u32 vlan_untagged_act);
-
-/* Set VLAN accept untagged packets */
-void hw_atl_rpf_vlan_accept_untagged_packets_set(struct aq_hw_s *aq_hw,
- u32 vlan_acc_untagged_packets);
-
/* Set VLAN filter enable */
void hw_atl_rpf_vlan_flr_en_set(struct aq_hw_s *aq_hw, u32 vlan_flr_en,
u32 filter);
@@ -437,40 +359,6 @@ void hw_atl_rpf_vlan_flr_act_set(struct aq_hw_s *aq_hw, u32 vlan_filter_act,
void hw_atl_rpf_vlan_id_flr_set(struct aq_hw_s *aq_hw, u32 vlan_id_flr,
u32 filter);
-/* set ethertype filter enable */
-void hw_atl_rpf_etht_flr_en_set(struct aq_hw_s *aq_hw, u32 etht_flr_en,
- u32 filter);
-
-/* set ethertype user-priority enable */
-void hw_atl_rpf_etht_user_priority_en_set(struct aq_hw_s *aq_hw,
- u32 etht_user_priority_en,
- u32 filter);
-
-/* set ethertype rx queue enable */
-void hw_atl_rpf_etht_rx_queue_en_set(struct aq_hw_s *aq_hw,
- u32 etht_rx_queue_en,
- u32 filter);
-
-/* set ethertype rx queue */
-void hw_atl_rpf_etht_rx_queue_set(struct aq_hw_s *aq_hw, u32 etht_rx_queue,
- u32 filter);
-
-/* set ethertype user-priority */
-void hw_atl_rpf_etht_user_priority_set(struct aq_hw_s *aq_hw,
- u32 etht_user_priority,
- u32 filter);
-
-/* set ethertype management queue */
-void hw_atl_rpf_etht_mgt_queue_set(struct aq_hw_s *aq_hw, u32 etht_mgt_queue,
- u32 filter);
-
-/* set ethertype filter action */
-void hw_atl_rpf_etht_flr_act_set(struct aq_hw_s *aq_hw, u32 etht_flr_act,
- u32 filter);
-
-/* set ethertype filter */
-void hw_atl_rpf_etht_flr_set(struct aq_hw_s *aq_hw, u32 etht_flr, u32 filter);
-
/* rpo */
/* set ipv4 header checksum offload enable */
@@ -552,9 +440,6 @@ void hw_atl_tdm_tx_dca_mode_set(struct aq_hw_s *aq_hw, u32 tx_dca_mode);
void hw_atl_tdm_tx_desc_dca_en_set(struct aq_hw_s *aq_hw, u32 tx_desc_dca_en,
u32 dca);
-/* get tx descriptor head pointer */
-u32 hw_atl_tdm_tx_desc_head_ptr_get(struct aq_hw_s *aq_hw, u32 descriptor);
-
/* set tx descriptor length */
void hw_atl_tdm_tx_desc_len_set(struct aq_hw_s *aq_hw, u32 tx_desc_len,
u32 descriptor);
@@ -568,9 +453,6 @@ void hw_atl_tdm_tx_desc_wr_wb_threshold_set(struct aq_hw_s *aq_hw,
u32 tx_desc_wr_wb_threshold,
u32 descriptor);
-/* Set TDM Interrupt Moderation Enable */
-void hw_atl_tdm_tdm_intr_moder_en_set(struct aq_hw_s *aq_hw,
- u32 tdm_irq_moderation_en);
/* thm */
/* set lso tcp flag of first packet */
@@ -591,9 +473,6 @@ void hw_atl_thm_lso_tcp_flag_of_middle_pkt_set(struct aq_hw_s *aq_hw,
void hw_atl_rpb_tps_tx_tc_mode_set(struct aq_hw_s *aq_hw,
u32 tx_traf_class_mode);
-/* get TX Traffic Class Mode */
-u32 hw_atl_rpb_tps_tx_tc_mode_get(struct aq_hw_s *aq_hw);
-
/* set tx buffer enable */
void hw_atl_tpb_tx_buff_en_set(struct aq_hw_s *aq_hw, u32 tx_buff_en);
@@ -607,10 +486,6 @@ void hw_atl_tpb_tx_buff_lo_threshold_per_tc_set(struct aq_hw_s *aq_hw,
u32 tx_buff_lo_threshold_per_tc,
u32 buffer);
-/* set tx dma system loopback enable */
-void hw_atl_tpb_tx_dma_sys_lbk_en_set(struct aq_hw_s *aq_hw,
- u32 tx_dma_sys_lbk_en);
-
/* set tx packet buffer size (per tc) */
void hw_atl_tpb_tx_pkt_buff_size_per_tc_set(struct aq_hw_s *aq_hw,
u32 tx_pkt_buff_size_per_tc,
@@ -630,10 +505,6 @@ void hw_atl_tpo_ipv4header_crc_offload_en_set(struct aq_hw_s *aq_hw,
void hw_atl_tpo_tcp_udp_crc_offload_en_set(struct aq_hw_s *aq_hw,
u32 tcp_udp_crc_offload_en);
-/* set tx pkt system loopback enable */
-void hw_atl_tpo_tx_pkt_sys_lbk_en_set(struct aq_hw_s *aq_hw,
- u32 tx_pkt_sys_lbk_en);
-
/* tps */
/* set tx packet scheduler data arbitration mode */
@@ -681,32 +552,8 @@ void hw_atl_tps_tx_pkt_shed_tc_data_weight_set(struct aq_hw_s *aq_hw,
/* set tx register reset disable */
void hw_atl_tx_tx_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 tx_reg_res_dis);
-/* msm */
-
-/* get register access status */
-u32 hw_atl_msm_reg_access_status_get(struct aq_hw_s *aq_hw);
-
-/* set register address for indirect address */
-void hw_atl_msm_reg_addr_for_indirect_addr_set(struct aq_hw_s *aq_hw,
- u32 reg_addr_for_indirect_addr);
-
-/* set register read strobe */
-void hw_atl_msm_reg_rd_strobe_set(struct aq_hw_s *aq_hw, u32 reg_rd_strobe);
-
-/* get register read data */
-u32 hw_atl_msm_reg_rd_data_get(struct aq_hw_s *aq_hw);
-
-/* set register write data */
-void hw_atl_msm_reg_wr_data_set(struct aq_hw_s *aq_hw, u32 reg_wr_data);
-
-/* set register write strobe */
-void hw_atl_msm_reg_wr_strobe_set(struct aq_hw_s *aq_hw, u32 reg_wr_strobe);
-
/* pci */
-/* set pci register reset disable */
-void hw_atl_pci_pci_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 pci_reg_res_dis);
-
/* set uP Force Interrupt */
void hw_atl_mcp_up_force_intr_set(struct aq_hw_s *aq_hw, u32 up_force_intr);
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_utils.c b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
index 84d11ab3a5..c94f5112f1 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_utils.c
+++ b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
@@ -682,37 +682,6 @@ static int hw_atl_utils_get_mac_permanent(struct aq_hw_s *self,
return err;
}
-unsigned int hw_atl_utils_mbps_2_speed_index(unsigned int mbps)
-{
- unsigned int ret = 0U;
-
- switch (mbps) {
- case 100U:
- ret = 5U;
- break;
-
- case 1000U:
- ret = 4U;
- break;
-
- case 2500U:
- ret = 3U;
- break;
-
- case 5000U:
- ret = 1U;
- break;
-
- case 10000U:
- ret = 0U;
- break;
-
- default:
- break;
- }
- return ret;
-}
-
void hw_atl_utils_hw_chip_features_init(struct aq_hw_s *self, u32 *p)
{
u32 chip_features = 0U;
@@ -795,11 +764,6 @@ int hw_atl_utils_update_stats(struct aq_hw_s *self)
return 0;
}
-struct aq_stats_s *hw_atl_utils_get_hw_stats(struct aq_hw_s *self)
-{
- return &self->curr_stats;
-}
-
static const u32 hw_atl_utils_hw_mac_regs[] = {
0x00005580U, 0x00005590U, 0x000055B0U, 0x000055B4U,
0x000055C0U, 0x00005B00U, 0x00005B04U, 0x00005B08U,
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_utils.h b/drivers/net/atlantic/hw_atl/hw_atl_utils.h
index d8fab010cf..f5e2b472a9 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_utils.h
+++ b/drivers/net/atlantic/hw_atl/hw_atl_utils.h
@@ -617,8 +617,6 @@ void hw_atl_utils_mpi_set(struct aq_hw_s *self,
int hw_atl_utils_mpi_get_link_status(struct aq_hw_s *self);
-unsigned int hw_atl_utils_mbps_2_speed_index(unsigned int mbps);
-
unsigned int hw_atl_utils_hw_get_reg_length(void);
int hw_atl_utils_hw_get_regs(struct aq_hw_s *self,
@@ -633,8 +631,6 @@ int hw_atl_utils_get_fw_version(struct aq_hw_s *self, u32 *fw_version);
int hw_atl_utils_update_stats(struct aq_hw_s *self);
-struct aq_stats_s *hw_atl_utils_get_hw_stats(struct aq_hw_s *self);
-
int hw_atl_utils_fw_downld_dwords(struct aq_hw_s *self, u32 a,
u32 *p, u32 cnt);
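The removed hw_atl_utils_mbps_2_speed_index() above encoded the firmware's
speed-index mapping (100 -> 5, 1000 -> 4, 2500 -> 3, 5000 -> 1, 10000 -> 0)
as a switch. For reference, a minimal sketch of the same mapping expressed
as a lookup table; the names here are hypothetical, not part of the driver:

#include <rte_common.h>

struct mbps_speed_idx { unsigned int mbps; unsigned int idx; };

static const struct mbps_speed_idx mbps_to_idx[] = {
	{ 100U, 5U }, { 1000U, 4U }, { 2500U, 3U },
	{ 5000U, 1U }, { 10000U, 0U },
};

static unsigned int mbps_2_speed_index(unsigned int mbps)
{
	unsigned int i;

	for (i = 0; i < RTE_DIM(mbps_to_idx); i++)
		if (mbps_to_idx[i].mbps == mbps)
			return mbps_to_idx[i].idx;
	return 0U; /* default index, as in the removed switch */
}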
diff --git a/drivers/net/bnx2x/ecore_sp.c b/drivers/net/bnx2x/ecore_sp.c
index 61f99c6408..7ade8f42d3 100644
--- a/drivers/net/bnx2x/ecore_sp.c
+++ b/drivers/net/bnx2x/ecore_sp.c
@@ -456,23 +456,6 @@ static void __ecore_vlan_mac_h_write_unlock(struct bnx2x_softc *sc,
}
}
-/**
- * ecore_vlan_mac_h_write_unlock - unlock the vlan mac head list writer lock
- *
- * @sc: device handle
- * @o: vlan_mac object
- *
- * @details Note: if a pending execution exists, it will be performed,
- * possibly releasing and reclaiming the execution queue lock.
- */
-void ecore_vlan_mac_h_write_unlock(struct bnx2x_softc *sc,
- struct ecore_vlan_mac_obj *o)
-{
- ECORE_SPIN_LOCK_BH(&o->exe_queue.lock);
- __ecore_vlan_mac_h_write_unlock(sc, o);
- ECORE_SPIN_UNLOCK_BH(&o->exe_queue.lock);
-}
-
/**
* __ecore_vlan_mac_h_read_lock - lock the vlan mac head list reader lock
*
diff --git a/drivers/net/bnx2x/ecore_sp.h b/drivers/net/bnx2x/ecore_sp.h
index d58072dac0..bfb55e8d01 100644
--- a/drivers/net/bnx2x/ecore_sp.h
+++ b/drivers/net/bnx2x/ecore_sp.h
@@ -1871,8 +1871,6 @@ void ecore_vlan_mac_h_read_unlock(struct bnx2x_softc *sc,
struct ecore_vlan_mac_obj *o);
int ecore_vlan_mac_h_write_lock(struct bnx2x_softc *sc,
struct ecore_vlan_mac_obj *o);
-void ecore_vlan_mac_h_write_unlock(struct bnx2x_softc *sc,
- struct ecore_vlan_mac_obj *o);
int ecore_config_vlan_mac(struct bnx2x_softc *sc,
struct ecore_vlan_mac_ramrod_params *p);
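The ecore_vlan_mac_h_write_unlock() wrapper removed above follows the
driver's locking convention: the double-underscore helper assumes the
execution queue lock is already held, while the public wrapper acquires and
releases it around the call. A minimal sketch of that convention with
hypothetical names; pthread_mutex_* stands in for ECORE_SPIN_LOCK_BH() and
ECORE_SPIN_UNLOCK_BH():

#include <pthread.h>

struct obj {
	pthread_mutex_t lock;
	int pending;
};

/* Caller must hold o->lock, as with __ecore_vlan_mac_h_write_unlock(). */
static void __obj_h_write_unlock(struct obj *o)
{
	o->pending = 0;	/* perform the pending work under the lock */
}

/* Public wrapper: acquire the lock, call the locked helper, release. */
void obj_h_write_unlock(struct obj *o)
{
	pthread_mutex_lock(&o->lock);
	__obj_h_write_unlock(o);
	pthread_mutex_unlock(&o->lock);
}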
diff --git a/drivers/net/bnx2x/elink.c b/drivers/net/bnx2x/elink.c
index b65126d718..67ebdaaa44 100644
--- a/drivers/net/bnx2x/elink.c
+++ b/drivers/net/bnx2x/elink.c
@@ -1154,931 +1154,6 @@ static uint32_t elink_get_cfg_pin(struct bnx2x_softc *sc, uint32_t pin_cfg,
return ELINK_STATUS_OK;
}
-/******************************************************************/
-/* ETS section */
-/******************************************************************/
-static void elink_ets_e2e3a0_disabled(struct elink_params *params)
-{
- /* ETS disabled configuration*/
- struct bnx2x_softc *sc = params->sc;
-
- ELINK_DEBUG_P0(sc, "ETS E2E3 disabled configuration");
-
- /* mapping between entry priority to client number (0,1,2 - debug and
- * management clients, 3 - COS0 client, 4 - COS1 client)(HIGHEST)
- * 3bits client num.
- * PRI4 | PRI3 | PRI2 | PRI1 | PRI0
- * cos1-100 cos0-011 dbg1-010 dbg0-001 MCP-000
- */
-
- REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT, 0x4688);
- /* Bitmap of 5bits length. Each bit specifies whether the entry behaves
- * as strict. Bits 0,1,2 - debug and management entries, 3 -
- * COS0 entry, 4 - COS1 entry.
- * COS1 | COS0 | DEBUG1 | DEBUG0 | MGMT
- * bit4 bit3 bit2 bit1 bit0
- * MCP and debug are strict
- */
-
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, 0x7);
- /* defines which entries (clients) are subjected to WFQ arbitration */
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_SUBJECT2WFQ, 0);
- /* For strict priority entries defines the number of consecutive
- * slots for the highest priority.
- */
- REG_WR(sc, NIG_REG_P0_TX_ARB_NUM_STRICT_ARB_SLOTS, 0x100);
- /* mapping between the CREDIT_WEIGHT registers and actual client
- * numbers
- */
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_CREDIT_MAP, 0);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_0, 0);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_1, 0);
-
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_0, 0);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_1, 0);
- REG_WR(sc, PBF_REG_HIGH_PRIORITY_COS_NUM, 0);
- /* ETS mode disable */
- REG_WR(sc, PBF_REG_ETS_ENABLED, 0);
- /* If ETS mode is enabled (there is no strict priority) defines a WFQ
- * weight for COS0/COS1.
- */
- REG_WR(sc, PBF_REG_COS0_WEIGHT, 0x2710);
- REG_WR(sc, PBF_REG_COS1_WEIGHT, 0x2710);
- /* Upper bound that COS0_WEIGHT can reach in the WFQ arbiter */
- REG_WR(sc, PBF_REG_COS0_UPPER_BOUND, 0x989680);
- REG_WR(sc, PBF_REG_COS1_UPPER_BOUND, 0x989680);
- /* Defines the number of consecutive slots for the strict priority */
- REG_WR(sc, PBF_REG_NUM_STRICT_ARB_SLOTS, 0);
-}
-/******************************************************************************
- * Description:
- * Get min_w_val, which is set according to line speed.
- *.
- ******************************************************************************/
-static uint32_t elink_ets_get_min_w_val_nig(const struct elink_vars *vars)
-{
- uint32_t min_w_val = 0;
- /* Calculate min_w_val.*/
- if (vars->link_up) {
- if (vars->line_speed == ELINK_SPEED_20000)
- min_w_val = ELINK_ETS_E3B0_NIG_MIN_W_VAL_20GBPS;
- else
- min_w_val = ELINK_ETS_E3B0_NIG_MIN_W_VAL_UP_TO_10GBPS;
- } else {
- min_w_val = ELINK_ETS_E3B0_NIG_MIN_W_VAL_20GBPS;
- }
- /* If the link isn't up (static configuration, for example) the
- * link is treated as 20GBPS.
- */
- return min_w_val;
-}
-/******************************************************************************
- * Description:
- * Get the credit upper bound from min_w_val.
- *.
- ******************************************************************************/
-static uint32_t elink_ets_get_credit_upper_bound(const uint32_t min_w_val)
-{
- const uint32_t credit_upper_bound = (uint32_t)
- ELINK_MAXVAL((150 * min_w_val),
- ELINK_MAX_PACKET_SIZE);
- return credit_upper_bound;
-}
-/******************************************************************************
- * Description:
- * Set credit upper bound for NIG.
- *.
- ******************************************************************************/
-static void elink_ets_e3b0_set_credit_upper_bound_nig(
- const struct elink_params *params,
- const uint32_t min_w_val)
-{
- struct bnx2x_softc *sc = params->sc;
- const uint8_t port = params->port;
- const uint32_t credit_upper_bound =
- elink_ets_get_credit_upper_bound(min_w_val);
-
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_0 :
- NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_0, credit_upper_bound);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_1 :
- NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_1, credit_upper_bound);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_2 :
- NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_2, credit_upper_bound);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_3 :
- NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_3, credit_upper_bound);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_4 :
- NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_4, credit_upper_bound);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_UPPER_BOUND_5 :
- NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_5, credit_upper_bound);
-
- if (!port) {
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_6,
- credit_upper_bound);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_7,
- credit_upper_bound);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_8,
- credit_upper_bound);
- }
-}
-/******************************************************************************
- * Description:
- * Return the NIG ETS registers to init values, except credit_upper_bound,
- * which isn't used in this configuration (no WFQ is enabled) and will be
- * configured according to spec.
- *.
- ******************************************************************************/
-static void elink_ets_e3b0_nig_disabled(const struct elink_params *params,
- const struct elink_vars *vars)
-{
- struct bnx2x_softc *sc = params->sc;
- const uint8_t port = params->port;
- const uint32_t min_w_val = elink_ets_get_min_w_val_nig(vars);
- /* Mapping between entry priority to client number (0,1,2 - debug and
- * management clients, 3 - COS0 client, 4 - COS1, ... 8 -
- * COS5)(HIGHEST) 4bits client num. TODO_ETS - should be done by
- * reset value or init tool
- */
- if (port) {
- REG_WR(sc, NIG_REG_P1_TX_ARB_PRIORITY_CLIENT2_LSB, 0x543210);
- REG_WR(sc, NIG_REG_P1_TX_ARB_PRIORITY_CLIENT2_MSB, 0x0);
- } else {
- REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT2_LSB, 0x76543210);
- REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT2_MSB, 0x8);
- }
- /* For strict priority entries defines the number of consecutive
- * slots for the highest priority.
- */
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_NUM_STRICT_ARB_SLOTS :
- NIG_REG_P1_TX_ARB_NUM_STRICT_ARB_SLOTS, 0x100);
- /* Mapping between the CREDIT_WEIGHT registers and actual client
- * numbers
- */
- if (port) {
- /*Port 1 has 6 COS*/
- REG_WR(sc, NIG_REG_P1_TX_ARB_CLIENT_CREDIT_MAP2_LSB, 0x210543);
- REG_WR(sc, NIG_REG_P1_TX_ARB_CLIENT_CREDIT_MAP2_MSB, 0x0);
- } else {
- /*Port 0 has 9 COS*/
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_CREDIT_MAP2_LSB,
- 0x43210876);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_CREDIT_MAP2_MSB, 0x5);
- }
-
- /* Bitmap of 5bits length. Each bit specifies whether the entry behaves
- * as strict. Bits 0,1,2 - debug and management entries, 3 -
- * COS0 entry, 4 - COS1 entry.
- * COS1 | COS0 | DEBUG1 | DEBUG0 | MGMT
- * bit4 bit3 bit2 bit1 bit0
- * MCP and debug are strict
- */
- if (port)
- REG_WR(sc, NIG_REG_P1_TX_ARB_CLIENT_IS_STRICT, 0x3f);
- else
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, 0x1ff);
- /* defines which entries (clients) are subjected to WFQ arbitration */
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CLIENT_IS_SUBJECT2WFQ :
- NIG_REG_P0_TX_ARB_CLIENT_IS_SUBJECT2WFQ, 0);
-
- /* Please note the register addresses are not contiguous, so a
- * for-loop here is not appropriate. In 2 port mode port0 only COS0-5
- * can be used; DEBUG0,DEBUG1,MGMT are never used for WFQ. In 4
- * port mode port1 only COS0-2 can be used; DEBUG0,DEBUG1,MGMT
- * are never used for WFQ.
- */
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_0 :
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_0, 0x0);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_1 :
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_1, 0x0);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_2 :
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_2, 0x0);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_3 :
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_3, 0x0);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_4 :
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_4, 0x0);
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_5 :
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_5, 0x0);
- if (!port) {
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_6, 0x0);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_7, 0x0);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_8, 0x0);
- }
-
- elink_ets_e3b0_set_credit_upper_bound_nig(params, min_w_val);
-}
-/******************************************************************************
- * Description:
- * Set credit upper bound for PBF.
- *.
- ******************************************************************************/
-static void elink_ets_e3b0_set_credit_upper_bound_pbf(
- const struct elink_params *params,
- const uint32_t min_w_val)
-{
- struct bnx2x_softc *sc = params->sc;
- const uint32_t credit_upper_bound =
- elink_ets_get_credit_upper_bound(min_w_val);
- const uint8_t port = params->port;
- uint32_t base_upper_bound = 0;
- uint8_t max_cos = 0;
- uint8_t i = 0;
- /* In 2 port mode port0 has COS0-5 that can be used for WFQ.In 4
- * port mode port1 has COS0-2 that can be used for WFQ.
- */
- if (!port) {
- base_upper_bound = PBF_REG_COS0_UPPER_BOUND_P0;
- max_cos = ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
- } else {
- base_upper_bound = PBF_REG_COS0_UPPER_BOUND_P1;
- max_cos = ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1;
- }
-
- for (i = 0; i < max_cos; i++)
- REG_WR(sc, base_upper_bound + (i << 2), credit_upper_bound);
-}
-
-/******************************************************************************
- * Description:
- * Return the PBF ETS registers to init values, except credit_upper_bound,
- * which isn't used in this configuration (no WFQ is enabled) and will be
- * configured according to spec.
- *.
- ******************************************************************************/
-static void elink_ets_e3b0_pbf_disabled(const struct elink_params *params)
-{
- struct bnx2x_softc *sc = params->sc;
- const uint8_t port = params->port;
- const uint32_t min_w_val_pbf = ELINK_ETS_E3B0_PBF_MIN_W_VAL;
- uint8_t i = 0;
- uint32_t base_weight = 0;
- uint8_t max_cos = 0;
-
- /* Mapping between entry priority to client number 0 - COS0
- * client, 2 - COS1, ... 5 - COS5)(HIGHEST) 4bits client num.
- * TODO_ETS - Should be done by reset value or init tool
- */
- if (port)
- /* 0x688 (|011|0 10|00 1|000) */
- REG_WR(sc, PBF_REG_ETS_ARB_PRIORITY_CLIENT_P1, 0x688);
- else
- /* (10 1|100 |011|0 10|00 1|000) */
- REG_WR(sc, PBF_REG_ETS_ARB_PRIORITY_CLIENT_P0, 0x2C688);
-
- /* TODO_ETS - Should be done by reset value or init tool */
- if (port)
- /* 0x688 (|011|0 10|00 1|000)*/
- REG_WR(sc, PBF_REG_ETS_ARB_CLIENT_CREDIT_MAP_P1, 0x688);
- else
- /* 0x2C688 (10 1|100 |011|0 10|00 1|000) */
- REG_WR(sc, PBF_REG_ETS_ARB_CLIENT_CREDIT_MAP_P0, 0x2C688);
-
- REG_WR(sc, (port) ? PBF_REG_ETS_ARB_NUM_STRICT_ARB_SLOTS_P1 :
- PBF_REG_ETS_ARB_NUM_STRICT_ARB_SLOTS_P0, 0x100);
-
-
- REG_WR(sc, (port) ? PBF_REG_ETS_ARB_CLIENT_IS_STRICT_P1 :
- PBF_REG_ETS_ARB_CLIENT_IS_STRICT_P0, 0);
-
- REG_WR(sc, (port) ? PBF_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ_P1 :
- PBF_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ_P0, 0);
- /* In 2 port mode port0 has COS0-5 that can be used for WFQ.
- * In 4 port mode port1 has COS0-2 that can be used for WFQ.
- */
- if (!port) {
- base_weight = PBF_REG_COS0_WEIGHT_P0;
- max_cos = ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
- } else {
- base_weight = PBF_REG_COS0_WEIGHT_P1;
- max_cos = ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1;
- }
-
- for (i = 0; i < max_cos; i++)
- REG_WR(sc, base_weight + (0x4 * i), 0);
-
- elink_ets_e3b0_set_credit_upper_bound_pbf(params, min_w_val_pbf);
-}
-/******************************************************************************
- * Description:
- * E3B0 disable basically returns the values to init values.
- *.
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_disabled(const struct elink_params *params,
- const struct elink_vars *vars)
-{
- struct bnx2x_softc *sc = params->sc;
-
- if (!CHIP_IS_E3B0(sc)) {
- ELINK_DEBUG_P0(sc,
- "elink_ets_e3b0_disabled the chip isn't E3B0");
- return ELINK_STATUS_ERROR;
- }
-
- elink_ets_e3b0_nig_disabled(params, vars);
-
- elink_ets_e3b0_pbf_disabled(params);
-
- return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- * Disable basically returns the values to init values.
- *
- ******************************************************************************/
-elink_status_t elink_ets_disabled(struct elink_params *params,
- struct elink_vars *vars)
-{
- struct bnx2x_softc *sc = params->sc;
- elink_status_t elink_status = ELINK_STATUS_OK;
-
- if ((CHIP_IS_E2(sc)) || (CHIP_IS_E3A0(sc))) {
- elink_ets_e2e3a0_disabled(params);
- } else if (CHIP_IS_E3B0(sc)) {
- elink_status = elink_ets_e3b0_disabled(params, vars);
- } else {
- ELINK_DEBUG_P0(sc, "elink_ets_disabled - chip not supported");
- return ELINK_STATUS_ERROR;
- }
-
- return elink_status;
-}
-
-/******************************************************************************
- * Description
- * Set the COS mappimg to SP and BW until this point all the COS are not
- * set as SP or BW.
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_cli_map(const struct elink_params *params,
- __rte_unused const struct elink_ets_params *ets_params,
- const uint8_t cos_sp_bitmap,
- const uint8_t cos_bw_bitmap)
-{
- struct bnx2x_softc *sc = params->sc;
- const uint8_t port = params->port;
- const uint8_t nig_cli_sp_bitmap = 0x7 | (cos_sp_bitmap << 3);
- const uint8_t pbf_cli_sp_bitmap = cos_sp_bitmap;
- const uint8_t nig_cli_subject2wfq_bitmap = cos_bw_bitmap << 3;
- const uint8_t pbf_cli_subject2wfq_bitmap = cos_bw_bitmap;
-
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CLIENT_IS_STRICT :
- NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, nig_cli_sp_bitmap);
-
- REG_WR(sc, (port) ? PBF_REG_ETS_ARB_CLIENT_IS_STRICT_P1 :
- PBF_REG_ETS_ARB_CLIENT_IS_STRICT_P0, pbf_cli_sp_bitmap);
-
- REG_WR(sc, (port) ? NIG_REG_P1_TX_ARB_CLIENT_IS_SUBJECT2WFQ :
- NIG_REG_P0_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
- nig_cli_subject2wfq_bitmap);
-
- REG_WR(sc, (port) ? PBF_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ_P1 :
- PBF_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ_P0,
- pbf_cli_subject2wfq_bitmap);
-
- return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- * This function is needed because the NIG ARB_CREDIT_WEIGHT_X registers
- * are not contiguous, so ARB_CREDIT_WEIGHT_0 + offset is not suitable.
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_set_cos_bw(struct bnx2x_softc *sc,
- const uint8_t cos_entry,
- const uint32_t min_w_val_nig,
- const uint32_t min_w_val_pbf,
- const uint16_t total_bw,
- const uint8_t bw,
- const uint8_t port)
-{
- uint32_t nig_reg_address_crd_weight = 0;
- uint32_t pbf_reg_address_crd_weight = 0;
- /* Calculate and set BW for this COS - use 1 instead of 0 for BW */
- const uint32_t cos_bw_nig = ((bw ? bw : 1) * min_w_val_nig) / total_bw;
- const uint32_t cos_bw_pbf = ((bw ? bw : 1) * min_w_val_pbf) / total_bw;
-
- switch (cos_entry) {
- case 0:
- nig_reg_address_crd_weight =
- (port) ? NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_0 :
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_0;
- pbf_reg_address_crd_weight = (port) ?
- PBF_REG_COS0_WEIGHT_P1 : PBF_REG_COS0_WEIGHT_P0;
- break;
- case 1:
- nig_reg_address_crd_weight = (port) ?
- NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_1 :
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_1;
- pbf_reg_address_crd_weight = (port) ?
- PBF_REG_COS1_WEIGHT_P1 : PBF_REG_COS1_WEIGHT_P0;
- break;
- case 2:
- nig_reg_address_crd_weight = (port) ?
- NIG_REG_P1_TX_ARB_CREDIT_WEIGHT_2 :
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_2;
-
- pbf_reg_address_crd_weight = (port) ?
- PBF_REG_COS2_WEIGHT_P1 : PBF_REG_COS2_WEIGHT_P0;
- break;
- case 3:
- if (port)
- return ELINK_STATUS_ERROR;
- nig_reg_address_crd_weight =
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_3;
- pbf_reg_address_crd_weight =
- PBF_REG_COS3_WEIGHT_P0;
- break;
- case 4:
- if (port)
- return ELINK_STATUS_ERROR;
- nig_reg_address_crd_weight =
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_4;
- pbf_reg_address_crd_weight = PBF_REG_COS4_WEIGHT_P0;
- break;
- case 5:
- if (port)
- return ELINK_STATUS_ERROR;
- nig_reg_address_crd_weight =
- NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_5;
- pbf_reg_address_crd_weight = PBF_REG_COS5_WEIGHT_P0;
- break;
- }
-
- REG_WR(sc, nig_reg_address_crd_weight, cos_bw_nig);
-
- REG_WR(sc, pbf_reg_address_crd_weight, cos_bw_pbf);
-
- return ELINK_STATUS_OK;
-}
-/******************************************************************************
- * Description:
- * Calculate the total BW. A value of 0 isn't legal.
- *
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_get_total_bw(
- const struct elink_params *params,
- struct elink_ets_params *ets_params,
- uint16_t *total_bw)
-{
- struct bnx2x_softc *sc = params->sc;
- uint8_t cos_idx = 0;
- uint8_t is_bw_cos_exist = 0;
-
- *total_bw = 0;
- /* Calculate total BW requested */
- for (cos_idx = 0; cos_idx < ets_params->num_of_cos; cos_idx++) {
- if (ets_params->cos[cos_idx].state == elink_cos_state_bw) {
- is_bw_cos_exist = 1;
- if (!ets_params->cos[cos_idx].params.bw_params.bw) {
- ELINK_DEBUG_P0(sc, "elink_ets_E3B0_config BW"
- " was set to 0");
- /* This is to prevent a state when ramrods
- * can't be sent
- */
- ets_params->cos[cos_idx].params.bw_params.bw
- = 1;
- }
- *total_bw +=
- ets_params->cos[cos_idx].params.bw_params.bw;
- }
- }
-
- /* Check total BW is valid */
- if ((is_bw_cos_exist == 1) && (*total_bw != 100)) {
- if (*total_bw == 0) {
- ELINK_DEBUG_P0(sc,
- "elink_ets_E3B0_config total BW shouldn't be 0");
- return ELINK_STATUS_ERROR;
- }
- ELINK_DEBUG_P0(sc,
- "elink_ets_E3B0_config total BW should be 100");
- /* We can handle a case where the BW isn't 100; this can happen
- * if the TCs are joined.
- */
- }
- return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- * Invalidate all the sp_pri_to_cos.
- *
- ******************************************************************************/
-static void elink_ets_e3b0_sp_pri_to_cos_init(uint8_t *sp_pri_to_cos)
-{
- uint8_t pri = 0;
- for (pri = 0; pri < ELINK_DCBX_MAX_NUM_COS; pri++)
- sp_pri_to_cos[pri] = DCBX_INVALID_COS;
-}
-/******************************************************************************
- * Description:
- * Calculate and set the SP (ARB_PRIORITY_CLIENT) NIG and PBF registers
- * according to sp_pri_to_cos.
- *
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_sp_pri_to_cos_set(
- const struct elink_params *params,
- uint8_t *sp_pri_to_cos,
- const uint8_t pri,
- const uint8_t cos_entry)
-{
- struct bnx2x_softc *sc = params->sc;
- const uint8_t port = params->port;
- const uint8_t max_num_of_cos = (port) ?
- ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1 :
- ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
-
- if (pri >= max_num_of_cos) {
- ELINK_DEBUG_P0(sc, "elink_ets_e3b0_sp_pri_to_cos_set invalid "
- "parameter Illegal strict priority");
- return ELINK_STATUS_ERROR;
- }
-
- if (sp_pri_to_cos[pri] != DCBX_INVALID_COS) {
- ELINK_DEBUG_P0(sc, "elink_ets_e3b0_sp_pri_to_cos_set invalid "
- "parameter There can't be two COS's with "
- "the same strict pri");
- return ELINK_STATUS_ERROR;
- }
-
- sp_pri_to_cos[pri] = cos_entry;
- return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- * Returns the correct value according to COS and priority in
- * the sp_pri_cli register.
- *
- ******************************************************************************/
-static uint64_t elink_e3b0_sp_get_pri_cli_reg(const uint8_t cos,
- const uint8_t cos_offset,
- const uint8_t pri_set,
- const uint8_t pri_offset,
- const uint8_t entry_size)
-{
- uint64_t pri_cli_nig = 0;
- pri_cli_nig = ((uint64_t)(cos + cos_offset)) << (entry_size *
- (pri_set + pri_offset));
-
- return pri_cli_nig;
-}
-/******************************************************************************
- * Description:
- * Returns the correct value according to COS and priority in the
- * sp_pri_cli register for NIG.
- *
- ******************************************************************************/
-static uint64_t elink_e3b0_sp_get_pri_cli_reg_nig(const uint8_t cos,
- const uint8_t pri_set)
-{
- /* MCP Dbg0 and dbg1 are always with higher strict pri*/
- const uint8_t nig_cos_offset = 3;
- const uint8_t nig_pri_offset = 3;
-
- return elink_e3b0_sp_get_pri_cli_reg(cos, nig_cos_offset, pri_set,
- nig_pri_offset, 4);
-}
-
-/******************************************************************************
- * Description:
- * Returns the correct value according to COS and priority in the
- * sp_pri_cli register for PBF.
- *
- ******************************************************************************/
-static uint64_t elink_e3b0_sp_get_pri_cli_reg_pbf(const uint8_t cos,
- const uint8_t pri_set)
-{
- const uint8_t pbf_cos_offset = 0;
- const uint8_t pbf_pri_offset = 0;
-
- return elink_e3b0_sp_get_pri_cli_reg(cos, pbf_cos_offset, pri_set,
- pbf_pri_offset, 3);
-}
-
-/******************************************************************************
- * Description:
- * Calculate and set the SP (ARB_PRIORITY_CLIENT) NIG and PBF registers
- * according to sp_pri_to_cos.(which COS has higher priority)
- *
- ******************************************************************************/
-static elink_status_t elink_ets_e3b0_sp_set_pri_cli_reg(
- const struct elink_params *params,
- uint8_t *sp_pri_to_cos)
-{
- struct bnx2x_softc *sc = params->sc;
- uint8_t i = 0;
- const uint8_t port = params->port;
- /* MCP Dbg0 and dbg1 are always with higher strict pri*/
- uint64_t pri_cli_nig = 0x210;
- uint32_t pri_cli_pbf = 0x0;
- uint8_t pri_set = 0;
- uint8_t pri_bitmask = 0;
- const uint8_t max_num_of_cos = (port) ?
- ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1 :
- ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
-
- uint8_t cos_bit_to_set = (1 << max_num_of_cos) - 1;
-
- /* Set all the strict priority first */
- for (i = 0; i < max_num_of_cos; i++) {
- if (sp_pri_to_cos[i] != DCBX_INVALID_COS) {
- if (sp_pri_to_cos[i] >= ELINK_DCBX_MAX_NUM_COS) {
- ELINK_DEBUG_P0(sc,
- "elink_ets_e3b0_sp_set_pri_cli_reg "
- "invalid cos entry");
- return ELINK_STATUS_ERROR;
- }
-
- pri_cli_nig |= elink_e3b0_sp_get_pri_cli_reg_nig(
- sp_pri_to_cos[i], pri_set);
-
- pri_cli_pbf |= elink_e3b0_sp_get_pri_cli_reg_pbf(
- sp_pri_to_cos[i], pri_set);
- pri_bitmask = 1 << sp_pri_to_cos[i];
- /* COS is used, remove it from the bitmap. */
- if (!(pri_bitmask & cos_bit_to_set)) {
- ELINK_DEBUG_P0(sc,
- "elink_ets_e3b0_sp_set_pri_cli_reg "
- "invalid There can't be two COS's with"
- " the same strict pri");
- return ELINK_STATUS_ERROR;
- }
- cos_bit_to_set &= ~pri_bitmask;
- pri_set++;
- }
- }
-
- /* Set all the non-strict priority entries, i = COS */
- for (i = 0; i < max_num_of_cos; i++) {
- pri_bitmask = 1 << i;
- /* Check if COS was already used for SP */
- if (pri_bitmask & cos_bit_to_set) {
- /* COS wasn't used for SP */
- pri_cli_nig |= elink_e3b0_sp_get_pri_cli_reg_nig(
- i, pri_set);
-
- pri_cli_pbf |= elink_e3b0_sp_get_pri_cli_reg_pbf(
- i, pri_set);
- /* COS is used, remove it from the bitmap. */
- cos_bit_to_set &= ~pri_bitmask;
- pri_set++;
- }
- }
-
- if (pri_set != max_num_of_cos) {
- ELINK_DEBUG_P0(sc, "elink_ets_e3b0_sp_set_pri_cli_reg not all "
- "entries were set");
- return ELINK_STATUS_ERROR;
- }
-
- if (port) {
- /* Only 6 usable clients*/
- REG_WR(sc, NIG_REG_P1_TX_ARB_PRIORITY_CLIENT2_LSB,
- (uint32_t)pri_cli_nig);
-
- REG_WR(sc, PBF_REG_ETS_ARB_PRIORITY_CLIENT_P1, pri_cli_pbf);
- } else {
- /* Only 9 usable clients*/
- const uint32_t pri_cli_nig_lsb = (uint32_t)(pri_cli_nig);
- const uint32_t pri_cli_nig_msb = (uint32_t)
- ((pri_cli_nig >> 32) & 0xF);
-
- REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT2_LSB,
- pri_cli_nig_lsb);
- REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT2_MSB,
- pri_cli_nig_msb);
-
- REG_WR(sc, PBF_REG_ETS_ARB_PRIORITY_CLIENT_P0, pri_cli_pbf);
- }
- return ELINK_STATUS_OK;
-}
-
-/******************************************************************************
- * Description:
- * Configure the COS to ETS according to BW and SP settings.
- ******************************************************************************/
-elink_status_t elink_ets_e3b0_config(const struct elink_params *params,
- const struct elink_vars *vars,
- struct elink_ets_params *ets_params)
-{
- struct bnx2x_softc *sc = params->sc;
- elink_status_t elink_status = ELINK_STATUS_OK;
- const uint8_t port = params->port;
- uint16_t total_bw = 0;
- const uint32_t min_w_val_nig = elink_ets_get_min_w_val_nig(vars);
- const uint32_t min_w_val_pbf = ELINK_ETS_E3B0_PBF_MIN_W_VAL;
- uint8_t cos_bw_bitmap = 0;
- uint8_t cos_sp_bitmap = 0;
- uint8_t sp_pri_to_cos[ELINK_DCBX_MAX_NUM_COS] = {0};
- const uint8_t max_num_of_cos = (port) ?
- ELINK_DCBX_E3B0_MAX_NUM_COS_PORT1 :
- ELINK_DCBX_E3B0_MAX_NUM_COS_PORT0;
- uint8_t cos_entry = 0;
-
- if (!CHIP_IS_E3B0(sc)) {
- ELINK_DEBUG_P0(sc,
- "elink_ets_e3b0_disabled the chip isn't E3B0");
- return ELINK_STATUS_ERROR;
- }
-
- if (ets_params->num_of_cos > max_num_of_cos) {
- ELINK_DEBUG_P0(sc, "elink_ets_E3B0_config the number of COS "
- "isn't supported");
- return ELINK_STATUS_ERROR;
- }
-
- /* Prepare sp strict priority parameters*/
- elink_ets_e3b0_sp_pri_to_cos_init(sp_pri_to_cos);
-
- /* Prepare BW parameters*/
- elink_status = elink_ets_e3b0_get_total_bw(params, ets_params,
- &total_bw);
- if (elink_status != ELINK_STATUS_OK) {
- ELINK_DEBUG_P0(sc,
- "elink_ets_E3B0_config get_total_bw failed");
- return ELINK_STATUS_ERROR;
- }
-
- /* Upper bound is set according to current link speed (min_w_val
- * should be the same for upper bound and COS credit val).
- */
- elink_ets_e3b0_set_credit_upper_bound_nig(params, min_w_val_nig);
- elink_ets_e3b0_set_credit_upper_bound_pbf(params, min_w_val_pbf);
-
-
- for (cos_entry = 0; cos_entry < ets_params->num_of_cos; cos_entry++) {
- if (elink_cos_state_bw == ets_params->cos[cos_entry].state) {
- cos_bw_bitmap |= (1 << cos_entry);
- /* The function also sets the BW in HW (not the mapping
- * yet)
- */
- elink_status = elink_ets_e3b0_set_cos_bw(
- sc, cos_entry, min_w_val_nig, min_w_val_pbf,
- total_bw,
- ets_params->cos[cos_entry].params.bw_params.bw,
- port);
- } else if (elink_cos_state_strict ==
- ets_params->cos[cos_entry].state){
- cos_sp_bitmap |= (1 << cos_entry);
-
- elink_status = elink_ets_e3b0_sp_pri_to_cos_set(
- params,
- sp_pri_to_cos,
- ets_params->cos[cos_entry].params.sp_params.pri,
- cos_entry);
-
- } else {
- ELINK_DEBUG_P0(sc,
- "elink_ets_e3b0_config cos state not valid");
- return ELINK_STATUS_ERROR;
- }
- if (elink_status != ELINK_STATUS_OK) {
- ELINK_DEBUG_P0(sc,
- "elink_ets_e3b0_config set cos bw failed");
- return elink_status;
- }
- }
-
- /* Set SP register (which COS has higher priority) */
- elink_status = elink_ets_e3b0_sp_set_pri_cli_reg(params,
- sp_pri_to_cos);
-
- if (elink_status != ELINK_STATUS_OK) {
- ELINK_DEBUG_P0(sc,
- "elink_ets_E3B0_config set_pri_cli_reg failed");
- return elink_status;
- }
-
- /* Set client mapping of BW and strict */
- elink_status = elink_ets_e3b0_cli_map(params, ets_params,
- cos_sp_bitmap,
- cos_bw_bitmap);
-
- if (elink_status != ELINK_STATUS_OK) {
- ELINK_DEBUG_P0(sc, "elink_ets_E3B0_config SP failed");
- return elink_status;
- }
- return ELINK_STATUS_OK;
-}
-static void elink_ets_bw_limit_common(const struct elink_params *params)
-{
- /* ETS disabled configuration */
- struct bnx2x_softc *sc = params->sc;
- ELINK_DEBUG_P0(sc, "ETS enabled BW limit configuration");
- /* Defines which entries (clients) are subjected to WFQ arbitration
- * COS0 0x8
- * COS1 0x10
- */
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_SUBJECT2WFQ, 0x18);
- /* Mapping between the ARB_CREDIT_WEIGHT registers and actual
- * client numbers (WEIGHT_0 does not actually have to represent
- * client 0)
- * PRI4 | PRI3 | PRI2 | PRI1 | PRI0
- * cos1-001 cos0-000 dbg1-100 dbg0-011 MCP-010
- */
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_CREDIT_MAP, 0x111A);
-
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_0,
- ELINK_ETS_BW_LIMIT_CREDIT_UPPER_BOUND);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_UPPER_BOUND_1,
- ELINK_ETS_BW_LIMIT_CREDIT_UPPER_BOUND);
-
- /* ETS mode enabled*/
- REG_WR(sc, PBF_REG_ETS_ENABLED, 1);
-
- /* Defines the number of consecutive slots for the strict priority */
- REG_WR(sc, PBF_REG_NUM_STRICT_ARB_SLOTS, 0);
- /* Bitmap of 5bits length. Each bit specifies whether the entry behaves
- * as strict. Bits 0,1,2 - debug and management entries, 3 - COS0
- * entry, 4 - COS1 entry.
- * COS1 | COS0 | DEBUG1 | DEBUG0 | MGMT
- * bit4 bit3 bit2 bit1 bit0
- * MCP and debug are strict
- */
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, 0x7);
-
- /* Upper bound that COS0_WEIGHT can reach in the WFQ arbiter.*/
- REG_WR(sc, PBF_REG_COS0_UPPER_BOUND,
- ELINK_ETS_BW_LIMIT_CREDIT_UPPER_BOUND);
- REG_WR(sc, PBF_REG_COS1_UPPER_BOUND,
- ELINK_ETS_BW_LIMIT_CREDIT_UPPER_BOUND);
-}
-
-void elink_ets_bw_limit(const struct elink_params *params,
- const uint32_t cos0_bw,
- const uint32_t cos1_bw)
-{
- /* ETS disabled configuration*/
- struct bnx2x_softc *sc = params->sc;
- const uint32_t total_bw = cos0_bw + cos1_bw;
- uint32_t cos0_credit_weight = 0;
- uint32_t cos1_credit_weight = 0;
-
- ELINK_DEBUG_P0(sc, "ETS enabled BW limit configuration");
-
- if ((!total_bw) ||
- (!cos0_bw) ||
- (!cos1_bw)) {
- ELINK_DEBUG_P0(sc, "Total BW can't be zero");
- return;
- }
-
- cos0_credit_weight = (cos0_bw * ELINK_ETS_BW_LIMIT_CREDIT_WEIGHT) /
- total_bw;
- cos1_credit_weight = (cos1_bw * ELINK_ETS_BW_LIMIT_CREDIT_WEIGHT) /
- total_bw;
-
- elink_ets_bw_limit_common(params);
-
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_0, cos0_credit_weight);
- REG_WR(sc, NIG_REG_P0_TX_ARB_CREDIT_WEIGHT_1, cos1_credit_weight);
-
- REG_WR(sc, PBF_REG_COS0_WEIGHT, cos0_credit_weight);
- REG_WR(sc, PBF_REG_COS1_WEIGHT, cos1_credit_weight);
-}
-
-elink_status_t elink_ets_strict(const struct elink_params *params,
- const uint8_t strict_cos)
-{
- /* ETS disabled configuration*/
- struct bnx2x_softc *sc = params->sc;
- uint32_t val = 0;
-
- ELINK_DEBUG_P0(sc, "ETS enabled strict configuration");
- /* Bitmap of 5bits length. Each bit specifies whether the entry behaves
- * as strict. Bits 0,1,2 - debug and management entries,
- * 3 - COS0 entry, 4 - COS1 entry.
- * COS1 | COS0 | DEBUG1 | DEBUG0 | MGMT
- * bit4 bit3 bit2 bit1 bit0
- * MCP and debug are strict
- */
- REG_WR(sc, NIG_REG_P0_TX_ARB_CLIENT_IS_STRICT, 0x1F);
- /* For strict priority entries defines the number of consecutive slots
- * for the highest priority.
- */
- REG_WR(sc, NIG_REG_P0_TX_ARB_NUM_STRICT_ARB_SLOTS, 0x100);
- /* ETS mode disable */
- REG_WR(sc, PBF_REG_ETS_ENABLED, 0);
- /* Defines the number of consecutive slots for the strict priority */
- REG_WR(sc, PBF_REG_NUM_STRICT_ARB_SLOTS, 0x100);
-
- /* Defines the number of consecutive slots for the strict priority */
- REG_WR(sc, PBF_REG_HIGH_PRIORITY_COS_NUM, strict_cos);
-
- /* Mapping between entry priority to client number (0,1,2 - debug and
- * management clients, 3 - COS0 client, 4 - COS1 client)(HIGHEST)
- * 3bits client num.
- * PRI4 | PRI3 | PRI2 | PRI1 | PRI0
- * dbg0-010 dbg1-001 cos1-100 cos0-011 MCP-000
- * dbg0-010 dbg1-001 cos0-011 cos1-100 MCP-000
- */
- val = (!strict_cos) ? 0x2318 : 0x22E0;
- REG_WR(sc, NIG_REG_P0_TX_ARB_PRIORITY_CLIENT, val);
-
- return ELINK_STATUS_OK;
-}
-
/******************************************************************/
/* PFC section */
/******************************************************************/
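In the ETS code removed above, elink_ets_e3b0_set_cos_bw() derived each COS
credit weight as ((bw ? bw : 1) * min_w_val) / total_bw, clamping a zero BW
to 1 so ramrods can still be sent, with the total expected to be 100. A
minimal standalone sketch of that computation; the constant below is a
placeholder for the ELINK_ETS_E3B0_*_MIN_W_VAL values selected by link
speed, not the real HW value:

#include <stdint.h>
#include <stdio.h>

static uint32_t cos_credit_weight(uint8_t bw, uint16_t total_bw,
				  uint32_t min_w_val)
{
	/* A COS with bw == 0 is clamped to 1, as in the removed code. */
	return (uint32_t)(bw ? bw : 1) * min_w_val / total_bw;
}

int main(void)
{
	const uint32_t min_w_val = 32;	/* placeholder for the HW constant */

	/* Three BW COS entries summing to 100%: 50/30/20. */
	printf("%u %u %u\n",
	       cos_credit_weight(50, 100, min_w_val),
	       cos_credit_weight(30, 100, min_w_val),
	       cos_credit_weight(20, 100, min_w_val));
	return 0;
}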
@@ -2143,56 +1218,6 @@ static void elink_update_pfc_xmac(struct elink_params *params,
DELAY(30);
}
-static void elink_emac_get_pfc_stat(struct elink_params *params,
- uint32_t pfc_frames_sent[2],
- uint32_t pfc_frames_received[2])
-{
- /* Read pfc statistic */
- struct bnx2x_softc *sc = params->sc;
- uint32_t emac_base = params->port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
- uint32_t val_xon = 0;
- uint32_t val_xoff = 0;
-
- ELINK_DEBUG_P0(sc, "pfc statistic read from EMAC");
-
- /* PFC received frames */
- val_xoff = REG_RD(sc, emac_base +
- EMAC_REG_RX_PFC_STATS_XOFF_RCVD);
- val_xoff &= EMAC_REG_RX_PFC_STATS_XOFF_RCVD_COUNT;
- val_xon = REG_RD(sc, emac_base + EMAC_REG_RX_PFC_STATS_XON_RCVD);
- val_xon &= EMAC_REG_RX_PFC_STATS_XON_RCVD_COUNT;
-
- pfc_frames_received[0] = val_xon + val_xoff;
-
- /* PFC frames sent */
- val_xoff = REG_RD(sc, emac_base +
- EMAC_REG_RX_PFC_STATS_XOFF_SENT);
- val_xoff &= EMAC_REG_RX_PFC_STATS_XOFF_SENT_COUNT;
- val_xon = REG_RD(sc, emac_base + EMAC_REG_RX_PFC_STATS_XON_SENT);
- val_xon &= EMAC_REG_RX_PFC_STATS_XON_SENT_COUNT;
-
- pfc_frames_sent[0] = val_xon + val_xoff;
-}
-
-/* Read pfc statistic*/
-void elink_pfc_statistic(struct elink_params *params, struct elink_vars *vars,
- uint32_t pfc_frames_sent[2],
- uint32_t pfc_frames_received[2])
-{
- /* Read pfc statistic */
- struct bnx2x_softc *sc = params->sc;
-
- ELINK_DEBUG_P0(sc, "pfc statistic");
-
- if (!vars->link_up)
- return;
-
- if (vars->mac_type == ELINK_MAC_TYPE_EMAC) {
- ELINK_DEBUG_P0(sc, "About to read PFC stats from EMAC");
- elink_emac_get_pfc_stat(params, pfc_frames_sent,
- pfc_frames_received);
- }
-}
/******************************************************************/
/* MAC/PBF section */
/******************************************************************/
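The removed elink_pfc_statistic()/elink_emac_get_pfc_stat() pair above
reported PFC activity by summing the XON and XOFF event counters for each
direction. A minimal sketch of that aggregation; reg_rd() and the register
offsets are hypothetical stand-ins for REG_RD(sc, emac_base + off) and the
EMAC_REG_RX_PFC_STATS_* addresses:

#include <stdint.h>
#include <stdio.h>

#define RX_PFC_XOFF_RCVD 0x100	/* placeholder offsets */
#define RX_PFC_XON_RCVD  0x104

/* Stubbed MMIO read returning fake counter values for illustration. */
static uint32_t reg_rd(uint32_t base, uint32_t off)
{
	(void)base;
	return off == RX_PFC_XOFF_RCVD ? 3 : 5;
}

static uint32_t pfc_frames_received(uint32_t emac_base)
{
	uint32_t xoff = reg_rd(emac_base, RX_PFC_XOFF_RCVD);
	uint32_t xon = reg_rd(emac_base, RX_PFC_XON_RCVD);

	/* "PFC frames received" is the sum of XON and XOFF events. */
	return xon + xoff;
}

int main(void)
{
	printf("%u\n", pfc_frames_received(0));
	return 0;
}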
@@ -2877,54 +1902,6 @@ static void elink_update_pfc_bmac2(struct elink_params *params,
REG_WR_DMAE(sc, bmac_addr + BIGMAC2_REGISTER_BMAC_CONTROL, wb_data, 2);
}
-/******************************************************************************
- * Description:
- * This function is needed because the NIG ARB_CREDIT_WEIGHT_X registers
- * are not contiguous, so ARB_CREDIT_WEIGHT_0 + offset is not suitable.
- ******************************************************************************/
-static elink_status_t elink_pfc_nig_rx_priority_mask(struct bnx2x_softc *sc,
- uint8_t cos_entry,
- uint32_t priority_mask, uint8_t port)
-{
- uint32_t nig_reg_rx_priority_mask_add = 0;
-
- switch (cos_entry) {
- case 0:
- nig_reg_rx_priority_mask_add = (port) ?
- NIG_REG_P1_RX_COS0_PRIORITY_MASK :
- NIG_REG_P0_RX_COS0_PRIORITY_MASK;
- break;
- case 1:
- nig_reg_rx_priority_mask_add = (port) ?
- NIG_REG_P1_RX_COS1_PRIORITY_MASK :
- NIG_REG_P0_RX_COS1_PRIORITY_MASK;
- break;
- case 2:
- nig_reg_rx_priority_mask_add = (port) ?
- NIG_REG_P1_RX_COS2_PRIORITY_MASK :
- NIG_REG_P0_RX_COS2_PRIORITY_MASK;
- break;
- case 3:
- if (port)
- return ELINK_STATUS_ERROR;
- nig_reg_rx_priority_mask_add = NIG_REG_P0_RX_COS3_PRIORITY_MASK;
- break;
- case 4:
- if (port)
- return ELINK_STATUS_ERROR;
- nig_reg_rx_priority_mask_add = NIG_REG_P0_RX_COS4_PRIORITY_MASK;
- break;
- case 5:
- if (port)
- return ELINK_STATUS_ERROR;
- nig_reg_rx_priority_mask_add = NIG_REG_P0_RX_COS5_PRIORITY_MASK;
- break;
- }
-
- REG_WR(sc, nig_reg_rx_priority_mask_add, priority_mask);
-
- return ELINK_STATUS_OK;
-}
static void elink_update_mng(struct elink_params *params, uint32_t link_status)
{
struct bnx2x_softc *sc = params->sc;
@@ -2934,157 +1911,6 @@ static void elink_update_mng(struct elink_params *params, uint32_t link_status)
port_mb[params->port].link_status), link_status);
}
-static void elink_update_pfc_nig(struct elink_params *params,
- __rte_unused struct elink_vars *vars,
- struct elink_nig_brb_pfc_port_params *nig_params)
-{
- uint32_t xcm_mask = 0, ppp_enable = 0, pause_enable = 0;
- uint32_t llfc_out_en = 0;
- uint32_t llfc_enable = 0, xcm_out_en = 0, hwpfc_enable = 0;
- uint32_t pkt_priority_to_cos = 0;
- struct bnx2x_softc *sc = params->sc;
- uint8_t port = params->port;
-
- int set_pfc = params->feature_config_flags &
- ELINK_FEATURE_CONFIG_PFC_ENABLED;
- ELINK_DEBUG_P0(sc, "updating pfc nig parameters");
-
- /* When NIG_LLH0_XCM_MASK_REG_LLHX_XCM_MASK_BCN bit is set
- * MAC control frames (that are not pause packets)
- * will be forwarded to the XCM.
- */
- xcm_mask = REG_RD(sc, port ? NIG_REG_LLH1_XCM_MASK :
- NIG_REG_LLH0_XCM_MASK);
- /* NIG params will override non PFC params, since it's possible to
- * do transition from PFC to SAFC
- */
- if (set_pfc) {
- pause_enable = 0;
- llfc_out_en = 0;
- llfc_enable = 0;
- if (CHIP_IS_E3(sc))
- ppp_enable = 0;
- else
- ppp_enable = 1;
- xcm_mask &= ~(port ? NIG_LLH1_XCM_MASK_REG_LLH1_XCM_MASK_BCN :
- NIG_LLH0_XCM_MASK_REG_LLH0_XCM_MASK_BCN);
- xcm_out_en = 0;
- hwpfc_enable = 1;
- } else {
- if (nig_params) {
- llfc_out_en = nig_params->llfc_out_en;
- llfc_enable = nig_params->llfc_enable;
- pause_enable = nig_params->pause_enable;
- } else /* Default non PFC mode - PAUSE */
- pause_enable = 1;
-
- xcm_mask |= (port ? NIG_LLH1_XCM_MASK_REG_LLH1_XCM_MASK_BCN :
- NIG_LLH0_XCM_MASK_REG_LLH0_XCM_MASK_BCN);
- xcm_out_en = 1;
- }
-
- if (CHIP_IS_E3(sc))
- REG_WR(sc, port ? NIG_REG_BRB1_PAUSE_IN_EN :
- NIG_REG_BRB0_PAUSE_IN_EN, pause_enable);
- REG_WR(sc, port ? NIG_REG_LLFC_OUT_EN_1 :
- NIG_REG_LLFC_OUT_EN_0, llfc_out_en);
- REG_WR(sc, port ? NIG_REG_LLFC_ENABLE_1 :
- NIG_REG_LLFC_ENABLE_0, llfc_enable);
- REG_WR(sc, port ? NIG_REG_PAUSE_ENABLE_1 :
- NIG_REG_PAUSE_ENABLE_0, pause_enable);
-
- REG_WR(sc, port ? NIG_REG_PPP_ENABLE_1 :
- NIG_REG_PPP_ENABLE_0, ppp_enable);
-
- REG_WR(sc, port ? NIG_REG_LLH1_XCM_MASK :
- NIG_REG_LLH0_XCM_MASK, xcm_mask);
-
- REG_WR(sc, port ? NIG_REG_LLFC_EGRESS_SRC_ENABLE_1 :
- NIG_REG_LLFC_EGRESS_SRC_ENABLE_0, 0x7);
-
- /* Output enable for RX_XCM # IF */
- REG_WR(sc, port ? NIG_REG_XCM1_OUT_EN :
- NIG_REG_XCM0_OUT_EN, xcm_out_en);
-
- /* HW PFC TX enable */
- REG_WR(sc, port ? NIG_REG_P1_HWPFC_ENABLE :
- NIG_REG_P0_HWPFC_ENABLE, hwpfc_enable);
-
- if (nig_params) {
- uint8_t i = 0;
- pkt_priority_to_cos = nig_params->pkt_priority_to_cos;
-
- for (i = 0; i < nig_params->num_of_rx_cos_priority_mask; i++)
- elink_pfc_nig_rx_priority_mask(sc, i,
- nig_params->rx_cos_priority_mask[i], port);
-
- REG_WR(sc, port ? NIG_REG_LLFC_HIGH_PRIORITY_CLASSES_1 :
- NIG_REG_LLFC_HIGH_PRIORITY_CLASSES_0,
- nig_params->llfc_high_priority_classes);
-
- REG_WR(sc, port ? NIG_REG_LLFC_LOW_PRIORITY_CLASSES_1 :
- NIG_REG_LLFC_LOW_PRIORITY_CLASSES_0,
- nig_params->llfc_low_priority_classes);
- }
- REG_WR(sc, port ? NIG_REG_P1_PKT_PRIORITY_TO_COS :
- NIG_REG_P0_PKT_PRIORITY_TO_COS,
- pkt_priority_to_cos);
-}
-
-elink_status_t elink_update_pfc(struct elink_params *params,
- struct elink_vars *vars,
- struct elink_nig_brb_pfc_port_params *pfc_params)
-{
- /* PFC and pause are orthogonal to one another, meaning when
- * PFC is enabled, pause is disabled, and when PFC is
- * disabled, pause is set according to the pause result.
- */
- uint32_t val;
- struct bnx2x_softc *sc = params->sc;
- uint8_t bmac_loopback = (params->loopback_mode == ELINK_LOOPBACK_BMAC);
-
- if (params->feature_config_flags & ELINK_FEATURE_CONFIG_PFC_ENABLED)
- vars->link_status |= LINK_STATUS_PFC_ENABLED;
- else
- vars->link_status &= ~LINK_STATUS_PFC_ENABLED;
-
- elink_update_mng(params, vars->link_status);
-
- /* Update NIG params */
- elink_update_pfc_nig(params, vars, pfc_params);
-
- if (!vars->link_up)
- return ELINK_STATUS_OK;
-
- ELINK_DEBUG_P0(sc, "About to update PFC in BMAC");
-
- if (CHIP_IS_E3(sc)) {
- if (vars->mac_type == ELINK_MAC_TYPE_XMAC)
- elink_update_pfc_xmac(params, vars, 0);
- } else {
- val = REG_RD(sc, MISC_REG_RESET_REG_2);
- if ((val &
- (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << params->port))
- == 0) {
- ELINK_DEBUG_P0(sc, "About to update PFC in EMAC");
- elink_emac_enable(params, vars, 0);
- return ELINK_STATUS_OK;
- }
- if (CHIP_IS_E2(sc))
- elink_update_pfc_bmac2(params, vars, bmac_loopback);
- else
- elink_update_pfc_bmac1(params, vars);
-
- val = 0;
- if ((params->feature_config_flags &
- ELINK_FEATURE_CONFIG_PFC_ENABLED) ||
- (vars->flow_ctrl & ELINK_FLOW_CTRL_TX))
- val = 1;
- REG_WR(sc, NIG_REG_BMAC0_PAUSE_OUT_EN + params->port * 4, val);
- }
- return ELINK_STATUS_OK;
-}
-
static elink_status_t elink_bmac1_enable(struct elink_params *params,
struct elink_vars *vars,
uint8_t is_lb)
@@ -4030,40 +2856,6 @@ static void elink_cl45_read_and_write(struct bnx2x_softc *sc,
elink_cl45_write(sc, phy, devad, reg, val & and_val);
}
-elink_status_t elink_phy_read(struct elink_params *params, uint8_t phy_addr,
- uint8_t devad, uint16_t reg, uint16_t *ret_val)
-{
- uint8_t phy_index;
- /* Probe for the phy according to the given phy_addr, and execute
- * the read request on it
- */
- for (phy_index = 0; phy_index < params->num_phys; phy_index++) {
- if (params->phy[phy_index].addr == phy_addr) {
- return elink_cl45_read(params->sc,
- &params->phy[phy_index], devad,
- reg, ret_val);
- }
- }
- return ELINK_STATUS_ERROR;
-}
-
-elink_status_t elink_phy_write(struct elink_params *params, uint8_t phy_addr,
- uint8_t devad, uint16_t reg, uint16_t val)
-{
- uint8_t phy_index;
- /* Probe for the phy according to the given phy_addr, and execute
- * the write request on it
- */
- for (phy_index = 0; phy_index < params->num_phys; phy_index++) {
- if (params->phy[phy_index].addr == phy_addr) {
- return elink_cl45_write(params->sc,
- &params->phy[phy_index], devad,
- reg, val);
- }
- }
- return ELINK_STATUS_ERROR;
-}
-
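Both removed accessors share one probe loop: scan params->phy[] for the entry whose MDIO address matches, then issue the clause-45 access against it. A standalone sketch of that lookup, with illustrative stand-in types:

#include <stdint.h>
#include <stddef.h>

struct phy_entry { uint8_t addr; };

static int phy_index_by_addr(const struct phy_entry *phys, size_t num,
                             uint8_t phy_addr)
{
        for (size_t i = 0; i < num; i++)
                if (phys[i].addr == phy_addr)
                        return (int)i;
        return -1; /* maps to ELINK_STATUS_ERROR in the removed code */
}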
static uint8_t elink_get_warpcore_lane(__rte_unused struct elink_phy *phy,
struct elink_params *params)
{
@@ -7108,47 +5900,6 @@ static elink_status_t elink_null_format_ver(__rte_unused uint32_t spirom_ver,
return ELINK_STATUS_OK;
}
-elink_status_t elink_get_ext_phy_fw_version(struct elink_params *params,
- uint8_t *version,
- uint16_t len)
-{
- struct bnx2x_softc *sc;
- uint32_t spirom_ver = 0;
- elink_status_t status = ELINK_STATUS_OK;
- uint8_t *ver_p = version;
- uint16_t remain_len = len;
- if (version == NULL || params == NULL)
- return ELINK_STATUS_ERROR;
- sc = params->sc;
-
- /* Extract first external phy*/
- version[0] = '\0';
- spirom_ver = REG_RD(sc, params->phy[ELINK_EXT_PHY1].ver_addr);
-
- if (params->phy[ELINK_EXT_PHY1].format_fw_ver) {
- status |= params->phy[ELINK_EXT_PHY1].format_fw_ver(spirom_ver,
- ver_p,
- &remain_len);
- ver_p += (len - remain_len);
- }
- if ((params->num_phys == ELINK_MAX_PHYS) &&
- (params->phy[ELINK_EXT_PHY2].ver_addr != 0)) {
- spirom_ver = REG_RD(sc, params->phy[ELINK_EXT_PHY2].ver_addr);
- if (params->phy[ELINK_EXT_PHY2].format_fw_ver) {
- *ver_p = '/';
- ver_p++;
- remain_len--;
- status |= params->phy[ELINK_EXT_PHY2].format_fw_ver(
- spirom_ver,
- ver_p,
- &remain_len);
- ver_p = version + (len - remain_len);
- }
- }
- *ver_p = '\0';
- return status;
-}
-
static void elink_set_xgxs_loopback(struct elink_phy *phy,
struct elink_params *params)
{
@@ -7360,99 +6111,6 @@ elink_status_t elink_set_led(struct elink_params *params,
}
-/* This function comes to reflect the actual link state read DIRECTLY from the
- * HW
- */
-elink_status_t elink_test_link(struct elink_params *params,
- __rte_unused struct elink_vars *vars,
- uint8_t is_serdes)
-{
- struct bnx2x_softc *sc = params->sc;
- uint16_t gp_status = 0, phy_index = 0;
- uint8_t ext_phy_link_up = 0, serdes_phy_type;
- struct elink_vars temp_vars;
- struct elink_phy *int_phy = &params->phy[ELINK_INT_PHY];
-#ifdef ELINK_INCLUDE_FPGA
- if (CHIP_REV_IS_FPGA(sc))
- return ELINK_STATUS_OK;
-#endif
-#ifdef ELINK_INCLUDE_EMUL
- if (CHIP_REV_IS_EMUL(sc))
- return ELINK_STATUS_OK;
-#endif
-
- if (CHIP_IS_E3(sc)) {
- uint16_t link_up;
- if (params->req_line_speed[ELINK_LINK_CONFIG_IDX(ELINK_INT_PHY)]
- > ELINK_SPEED_10000) {
- /* Check 20G link */
- elink_cl45_read(sc, int_phy, MDIO_WC_DEVAD,
- 1, &link_up);
- elink_cl45_read(sc, int_phy, MDIO_WC_DEVAD,
- 1, &link_up);
- link_up &= (1 << 2);
- } else {
- /* Check 10G link and below*/
- uint8_t lane = elink_get_warpcore_lane(int_phy, params);
- elink_cl45_read(sc, int_phy, MDIO_WC_DEVAD,
- MDIO_WC_REG_GP2_STATUS_GP_2_1,
- &gp_status);
- gp_status = ((gp_status >> 8) & 0xf) |
- ((gp_status >> 12) & 0xf);
- link_up = gp_status & (1 << lane);
- }
- if (!link_up)
- return ELINK_STATUS_NO_LINK;
- } else {
- CL22_RD_OVER_CL45(sc, int_phy,
- MDIO_REG_BANK_GP_STATUS,
- MDIO_GP_STATUS_TOP_AN_STATUS1,
- &gp_status);
- /* Link is up only if both local phy and external phy are up */
- if (!(gp_status & MDIO_GP_STATUS_TOP_AN_STATUS1_LINK_STATUS))
- return ELINK_STATUS_NO_LINK;
- }
- /* In XGXS loopback mode, do not check external PHY */
- if (params->loopback_mode == ELINK_LOOPBACK_XGXS)
- return ELINK_STATUS_OK;
-
- switch (params->num_phys) {
- case 1:
- /* No external PHY */
- return ELINK_STATUS_OK;
- case 2:
- ext_phy_link_up = params->phy[ELINK_EXT_PHY1].read_status(
- &params->phy[ELINK_EXT_PHY1],
- params, &temp_vars);
- break;
- case 3: /* Dual Media */
- for (phy_index = ELINK_EXT_PHY1; phy_index < params->num_phys;
- phy_index++) {
- serdes_phy_type = ((params->phy[phy_index].media_type ==
- ELINK_ETH_PHY_SFPP_10G_FIBER) ||
- (params->phy[phy_index].media_type ==
- ELINK_ETH_PHY_SFP_1G_FIBER) ||
- (params->phy[phy_index].media_type ==
- ELINK_ETH_PHY_XFP_FIBER) ||
- (params->phy[phy_index].media_type ==
- ELINK_ETH_PHY_DA_TWINAX));
-
- if (is_serdes != serdes_phy_type)
- continue;
- if (params->phy[phy_index].read_status) {
- ext_phy_link_up |=
- params->phy[phy_index].read_status(
- &params->phy[phy_index],
- params, &temp_vars);
- }
- }
- break;
- }
- if (ext_phy_link_up)
- return ELINK_STATUS_OK;
- return ELINK_STATUS_NO_LINK;
-}
-
static elink_status_t elink_link_initialize(struct elink_params *params,
struct elink_vars *vars)
{
@@ -12443,31 +11101,6 @@ static elink_status_t elink_7101_format_ver(uint32_t spirom_ver, uint8_t *str,
return ELINK_STATUS_OK;
}
-void elink_sfx7101_sp_sw_reset(struct bnx2x_softc *sc, struct elink_phy *phy)
-{
- uint16_t val, cnt;
-
- elink_cl45_read(sc, phy,
- MDIO_PMA_DEVAD,
- MDIO_PMA_REG_7101_RESET, &val);
-
- for (cnt = 0; cnt < 10; cnt++) {
- DELAY(1000 * 50);
- /* Writes a self-clearing reset */
- elink_cl45_write(sc, phy,
- MDIO_PMA_DEVAD,
- MDIO_PMA_REG_7101_RESET,
- (val | (1 << 15)));
- /* Wait for clear */
- elink_cl45_read(sc, phy,
- MDIO_PMA_DEVAD,
- MDIO_PMA_REG_7101_RESET, &val);
-
- if ((val & (1 << 15)) == 0)
- break;
- }
-}
-
static void elink_7101_hw_reset(__rte_unused struct elink_phy *phy,
struct elink_params *params) {
/* Low power mode is controlled by GPIO 2 */
diff --git a/drivers/net/bnx2x/elink.h b/drivers/net/bnx2x/elink.h
index dd70ac6c66..f5cdf7440b 100644
--- a/drivers/net/bnx2x/elink.h
+++ b/drivers/net/bnx2x/elink.h
@@ -515,26 +515,10 @@ elink_status_t elink_lfa_reset(struct elink_params *params, struct elink_vars *v
/* elink_link_update should be called upon link interrupt */
elink_status_t elink_link_update(struct elink_params *params, struct elink_vars *vars);
-/* use the following phy functions to read/write from external_phy
- * In order to use it to read/write internal phy registers, use
- * ELINK_DEFAULT_PHY_DEV_ADDR as devad, and (_bank + (_addr & 0xf)) as
- * the register
- */
-elink_status_t elink_phy_read(struct elink_params *params, uint8_t phy_addr,
- uint8_t devad, uint16_t reg, uint16_t *ret_val);
-
-elink_status_t elink_phy_write(struct elink_params *params, uint8_t phy_addr,
- uint8_t devad, uint16_t reg, uint16_t val);
-
/* Reads the link_status from the shmem,
and update the link vars accordingly */
void elink_link_status_update(struct elink_params *input,
struct elink_vars *output);
-/* returns string representing the fw_version of the external phy */
-elink_status_t elink_get_ext_phy_fw_version(struct elink_params *params,
- uint8_t *version,
- uint16_t len);
-
/* Set/Unset the led
Basically, the CLC takes care of the led for the link, but in case one needs
to set/unset the led unnaturally, set the "mode" to ELINK_LED_MODE_OPER to
@@ -551,14 +535,6 @@ elink_status_t elink_set_led(struct elink_params *params,
*/
void elink_handle_module_detect_int(struct elink_params *params);
-/* Get the actual link status. In case it returns ELINK_STATUS_OK, link is up,
- * otherwise link is down
- */
-elink_status_t elink_test_link(struct elink_params *params,
- struct elink_vars *vars,
- uint8_t is_serdes);
-
-
/* One-time initialization for external phy after power up */
elink_status_t elink_common_init_phy(struct bnx2x_softc *sc, uint32_t shmem_base_path[],
uint32_t shmem2_base_path[], uint32_t chip_id,
@@ -567,9 +543,6 @@ elink_status_t elink_common_init_phy(struct bnx2x_softc *sc, uint32_t shmem_base
/* Reset the external PHY using GPIO */
void elink_ext_phy_hw_reset(struct bnx2x_softc *sc, uint8_t port);
-/* Reset the external of SFX7101 */
-void elink_sfx7101_sp_sw_reset(struct bnx2x_softc *sc, struct elink_phy *phy);
-
/* Read "byte_cnt" bytes from address "addr" from the SFP+ EEPROM */
elink_status_t elink_read_sfp_module_eeprom(struct elink_phy *phy,
struct elink_params *params, uint8_t dev_addr,
@@ -650,36 +623,6 @@ struct elink_ets_params {
struct elink_ets_cos_params cos[ELINK_DCBX_MAX_NUM_COS];
};
-/* Used to update the PFC attributes in EMAC, BMAC, NIG and BRB
- * when link is already up
- */
-elink_status_t elink_update_pfc(struct elink_params *params,
- struct elink_vars *vars,
- struct elink_nig_brb_pfc_port_params *pfc_params);
-
-
-/* Used to configure the ETS to disable */
-elink_status_t elink_ets_disabled(struct elink_params *params,
- struct elink_vars *vars);
-
-/* Used to configure the ETS to BW limited */
-void elink_ets_bw_limit(const struct elink_params *params,
- const uint32_t cos0_bw,
- const uint32_t cos1_bw);
-
-/* Used to configure the ETS to strict */
-elink_status_t elink_ets_strict(const struct elink_params *params,
- const uint8_t strict_cos);
-
-
-/* Configure the COS to ETS according to BW and SP settings.*/
-elink_status_t elink_ets_e3b0_config(const struct elink_params *params,
- const struct elink_vars *vars,
- struct elink_ets_params *ets_params);
-/* Read pfc statistic*/
-void elink_pfc_statistic(struct elink_params *params, struct elink_vars *vars,
- uint32_t pfc_frames_sent[2],
- uint32_t pfc_frames_received[2]);
void elink_init_mod_abs_int(struct bnx2x_softc *sc, struct elink_vars *vars,
uint32_t chip_id, uint32_t shmem_base, uint32_t shmem2_base,
uint8_t port);
diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
index 918cabf19c..cdb13607d5 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.c
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -227,62 +227,6 @@ ba_alloc_reverse(struct bitalloc *pool)
return ba_alloc_reverse_helper(pool, 0, 1, 32, 0, &clear);
}
-static int
-ba_alloc_index_helper(struct bitalloc *pool,
- int offset,
- int words,
- unsigned int size,
- int *index,
- int *clear)
-{
- bitalloc_word_t *storage = &pool->storage[offset];
- int loc;
- int r;
-
- if (pool->size > size)
- r = ba_alloc_index_helper(pool,
- offset + words + 1,
- storage[words],
- size * 32,
- index,
- clear);
- else
- r = 1; /* Check if already allocated */
-
- loc = (*index % 32);
- *index = *index / 32;
-
- if (r == 1) {
- r = (storage[*index] & (1 << loc)) ? 0 : -1;
- if (r == 0) {
- *clear = 1;
- pool->free_count--;
- }
- }
-
- if (*clear) {
- storage[*index] &= ~(1 << loc);
- *clear = (storage[*index] == 0);
- }
-
- return r;
-}
-
-int
-ba_alloc_index(struct bitalloc *pool, int index)
-{
- int clear = 0;
- int index_copy = index;
-
- if (index < 0 || index >= (int)pool->size)
- return -1;
-
- if (ba_alloc_index_helper(pool, 0, 1, 32, &index_copy, &clear) >= 0)
- return index;
- else
- return -1;
-}
-
static int
ba_inuse_helper(struct bitalloc *pool,
int offset,
@@ -365,107 +309,7 @@ ba_free(struct bitalloc *pool, int index)
return ba_free_helper(pool, 0, 1, 32, &index);
}
-int
-ba_inuse_free(struct bitalloc *pool, int index)
-{
- if (index < 0 || index >= (int)pool->size)
- return -1;
-
- return ba_free_helper(pool, 0, 1, 32, &index) + 1;
-}
-
-int
-ba_free_count(struct bitalloc *pool)
-{
- return (int)pool->free_count;
-}
-
int ba_inuse_count(struct bitalloc *pool)
{
return (int)(pool->size) - (int)(pool->free_count);
}
-
-static int
-ba_find_next_helper(struct bitalloc *pool,
- int offset,
- int words,
- unsigned int size,
- int *index,
- int free)
-{
- bitalloc_word_t *storage = &pool->storage[offset];
- int loc, r, bottom = 0;
-
- if (pool->size > size)
- r = ba_find_next_helper(pool,
- offset + words + 1,
- storage[words],
- size * 32,
- index,
- free);
- else
- bottom = 1; /* Bottom of tree */
-
- loc = (*index % 32);
- *index = *index / 32;
-
- if (bottom) {
- int bit_index = *index * 32;
-
- loc = ba_ffs(~storage[*index] & ((bitalloc_word_t)-1 << loc));
- if (loc > 0) {
- loc--;
- r = (bit_index + loc);
- if (r >= (int)pool->size)
- r = -1;
- } else {
- /* Loop over array at bottom of tree */
- r = -1;
- bit_index += 32;
- *index = *index + 1;
- while ((int)pool->size > bit_index) {
- loc = ba_ffs(~storage[*index]);
-
- if (loc > 0) {
- loc--;
- r = (bit_index + loc);
- if (r >= (int)pool->size)
- r = -1;
- break;
- }
- bit_index += 32;
- *index = *index + 1;
- }
- }
- }
-
- if (r >= 0 && (free)) {
- if (bottom)
- pool->free_count++;
- storage[*index] |= (1 << loc);
- }
-
- return r;
-}
-
-int
-ba_find_next_inuse(struct bitalloc *pool, int index)
-{
- if (index < 0 ||
- index >= (int)pool->size ||
- pool->free_count == pool->size)
- return -1;
-
- return ba_find_next_helper(pool, 0, 1, 32, &index, 0);
-}
-
-int
-ba_find_next_inuse_free(struct bitalloc *pool, int index)
-{
- if (index < 0 ||
- index >= (int)pool->size ||
- pool->free_count == pool->size)
- return -1;
-
- return ba_find_next_helper(pool, 0, 1, 32, &index, 1);
-}
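The removed helpers walk a tree of 32-bit summary words in which a set bit means "free"; the recursion descends toward the leaf word holding the requested index. A flat, single-level sketch of the ba_alloc_index() semantics (illustrative names, pool assumed initialised to all-free):

#include <stdint.h>

#define POOL_SIZE 128
static uint32_t leaf_words[POOL_SIZE / 32]; /* 1 = free, 0 = in use */

static int flat_alloc_index(int index)
{
        if (index < 0 || index >= POOL_SIZE)
                return -1;
        uint32_t mask = 1u << (index % 32);
        if (!(leaf_words[index / 32] & mask))
                return -1;               /* already allocated */
        leaf_words[index / 32] &= ~mask; /* claim the entry */
        return index;
}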
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
index 2825bb37e5..9ac6eadd81 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.h
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -70,7 +70,6 @@ int ba_init(struct bitalloc *pool, int size);
* Returns -1 on failure, or index of allocated entry
*/
int ba_alloc(struct bitalloc *pool);
-int ba_alloc_index(struct bitalloc *pool, int index);
/**
* Returns -1 on failure, or index of allocated entry
@@ -85,37 +84,12 @@ int ba_alloc_reverse(struct bitalloc *pool);
*/
int ba_inuse(struct bitalloc *pool, int index);
-/**
- * Variant of ba_inuse that frees the index if it is allocated, same
- * return codes as ba_inuse
- */
-int ba_inuse_free(struct bitalloc *pool, int index);
-
-/**
- * Find next index that is in use, start checking at index 'idx'
- *
- * Returns next index that is in use on success, or
- * -1 if no in use index is found
- */
-int ba_find_next_inuse(struct bitalloc *pool, int idx);
-
-/**
- * Variant of ba_find_next_inuse that also frees the next in use index,
- * same return codes as ba_find_next_inuse
- */
-int ba_find_next_inuse_free(struct bitalloc *pool, int idx);
-
/**
* Multiple freeing of the same index has no negative side effects,
* but will return -1. returns -1 on failure, 0 on success.
*/
int ba_free(struct bitalloc *pool, int index);
-/**
- * Returns the pool's free count
- */
-int ba_free_count(struct bitalloc *pool);
-
/**
* Returns the pool's in use count
*/
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
index 954806377e..bda415e82e 100644
--- a/drivers/net/bnxt/tf_core/stack.c
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -88,28 +88,3 @@ stack_pop(struct stack *st, uint32_t *x)
return 0;
}
-
-/* Dump the stack
- */
-void stack_dump(struct stack *st)
-{
- int i, j;
-
- printf("top=%d\n", st->top);
- printf("max=%d\n", st->max);
-
- if (st->top == -1) {
- printf("stack is empty\n");
- return;
- }
-
- for (i = 0; i < st->max + 7 / 8; i++) {
- printf("item[%d] 0x%08x", i, st->items[i]);
-
- for (j = 0; j < 7; j++) {
- if (i++ < st->max - 1)
- printf(" 0x%08x", st->items[i]);
- }
- printf("\n");
- }
-}
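Worth noting for anyone resurrecting this: the removed dump's bound "st->max + 7 / 8" parses as "st->max + (7 / 8)", i.e. plain st->max, and the inner loop advances i a second time. A corrected sketch that prints eight items per row, with stand-in types:

#include <stdio.h>
#include <stdint.h>

struct u32_stack { int top; int max; uint32_t *items; };

static void stack_dump_fixed(const struct u32_stack *st)
{
        printf("top=%d\nmax=%d\n", st->top, st->max);
        if (st->top == -1) {
                printf("stack is empty\n");
                return;
        }
        for (int i = 0; i < st->max; i++) {
                if (i % 8 == 0)
                        printf("item[%d]", i);
                printf(" 0x%08x", st->items[i]);
                if (i % 8 == 7 || i == st->max - 1)
                        printf("\n");
        }
}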
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
index 6732e03132..7e2f5dfec6 100644
--- a/drivers/net/bnxt/tf_core/stack.h
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -102,16 +102,4 @@ int stack_push(struct stack *st, uint32_t x);
*/
int stack_pop(struct stack *st, uint32_t *x);
-/** Dump stack information
- *
- * Warning: Don't use for large stacks due to prints
- *
- * [in] st
- * pointer to the stack
- *
- * return
- * none
- */
-void stack_dump(struct stack *st);
-
#endif /* _STACK_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 0f49a00256..a4276d1bcc 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -90,69 +90,6 @@ tf_open_session(struct tf *tfp,
return 0;
}
-int
-tf_attach_session(struct tf *tfp,
- struct tf_attach_session_parms *parms)
-{
- int rc;
- unsigned int domain, bus, slot, device;
- struct tf_session_attach_session_parms aparms;
-
- TF_CHECK_PARMS2(tfp, parms);
-
- /* Verify control channel */
- rc = sscanf(parms->ctrl_chan_name,
- "%x:%x:%x.%d",
- &domain,
- &bus,
- &slot,
- &device);
- if (rc != 4) {
- TFP_DRV_LOG(ERR,
- "Failed to scan device ctrl_chan_name\n");
- return -EINVAL;
- }
-
- /* Verify 'attach' channel */
- rc = sscanf(parms->attach_chan_name,
- "%x:%x:%x.%d",
- &domain,
- &bus,
- &slot,
- &device);
- if (rc != 4) {
- TFP_DRV_LOG(ERR,
- "Failed to scan device attach_chan_name\n");
- return -EINVAL;
- }
-
- /* Prepare return value of session_id, using ctrl_chan_name
- * device values as it becomes the session id.
- */
- parms->session_id.internal.domain = domain;
- parms->session_id.internal.bus = bus;
- parms->session_id.internal.device = device;
- aparms.attach_cfg = parms;
- rc = tf_session_attach_session(tfp,
- &aparms);
- /* Logging handled by dev_bind */
- if (rc)
- return rc;
-
- TFP_DRV_LOG(INFO,
- "Attached to session, session_id:%d\n",
- parms->session_id.id);
-
- TFP_DRV_LOG(INFO,
- "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
- parms->session_id.internal.domain,
- parms->session_id.internal.bus,
- parms->session_id.internal.device,
- parms->session_id.internal.fw_session_id);
-
- return rc;
-}
-
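The removed attach path validated both channel names as PCI DBDF strings before deriving the session id from the control channel's fields. A self-contained sketch of that parse (illustrative, not the TruFlow API):

#include <stdio.h>

static int parse_dbdf(const char *name)
{
        unsigned int domain, bus, slot, func;

        /* domain:bus:slot in hex, function number in decimal; the
         * driver used %d, %u here matches the unsigned locals */
        if (sscanf(name, "%x:%x:%x.%u", &domain, &bus, &slot, &func) != 4)
                return -1;
        return 0;
}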
int
tf_close_session(struct tf *tfp)
{
@@ -792,14 +729,6 @@ tf_set_tcam_entry(struct tf *tfp,
return 0;
}
-int
-tf_get_tcam_entry(struct tf *tfp __rte_unused,
- struct tf_get_tcam_entry_parms *parms __rte_unused)
-{
- TF_CHECK_PARMS2(tfp, parms);
- return -EOPNOTSUPP;
-}
-
int
tf_free_tcam_entry(struct tf *tfp,
struct tf_free_tcam_entry_parms *parms)
@@ -1228,80 +1157,6 @@ tf_get_tbl_entry(struct tf *tfp,
return rc;
}
-int
-tf_bulk_get_tbl_entry(struct tf *tfp,
- struct tf_bulk_get_tbl_entry_parms *parms)
-{
- int rc = 0;
- struct tf_session *tfs;
- struct tf_dev_info *dev;
- struct tf_tbl_get_bulk_parms bparms;
-
- TF_CHECK_PARMS2(tfp, parms);
-
- /* Can't do static initialization due to UT enum check */
- memset(&bparms, 0, sizeof(struct tf_tbl_get_bulk_parms));
-
- /* Retrieve the session information */
- rc = tf_session_get_session(tfp, &tfs);
- if (rc) {
- TFP_DRV_LOG(ERR,
- "%s: Failed to lookup session, rc:%s\n",
- tf_dir_2_str(parms->dir),
- strerror(-rc));
- return rc;
- }
-
- /* Retrieve the device information */
- rc = tf_session_get_device(tfs, &dev);
- if (rc) {
- TFP_DRV_LOG(ERR,
- "%s: Failed to lookup device, rc:%s\n",
- tf_dir_2_str(parms->dir),
- strerror(-rc));
- return rc;
- }
-
- if (parms->type == TF_TBL_TYPE_EXT) {
- /* Not supported, yet */
- rc = -EOPNOTSUPP;
- TFP_DRV_LOG(ERR,
- "%s, External table type not supported, rc:%s\n",
- tf_dir_2_str(parms->dir),
- strerror(-rc));
-
- return rc;
- }
-
- /* Internal table type processing */
-
- if (dev->ops->tf_dev_get_bulk_tbl == NULL) {
- rc = -EOPNOTSUPP;
- TFP_DRV_LOG(ERR,
- "%s: Operation not supported, rc:%s\n",
- tf_dir_2_str(parms->dir),
- strerror(-rc));
- return -EOPNOTSUPP;
- }
-
- bparms.dir = parms->dir;
- bparms.type = parms->type;
- bparms.starting_idx = parms->starting_idx;
- bparms.num_entries = parms->num_entries;
- bparms.entry_sz_in_bytes = parms->entry_sz_in_bytes;
- bparms.physical_mem_addr = parms->physical_mem_addr;
- rc = dev->ops->tf_dev_get_bulk_tbl(tfp, &bparms);
- if (rc) {
- TFP_DRV_LOG(ERR,
- "%s: Table get bulk failed, rc:%s\n",
- tf_dir_2_str(parms->dir),
- strerror(-rc));
- return rc;
- }
-
- return rc;
-}
-
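Like most tf_core entry points, the removed bulk get follows one dispatch shape: look up the session, look up the device, verify the dev-ops hook exists, then call it. A condensed skeleton with stand-in types:

#include <errno.h>
#include <stddef.h>

struct dev_ops { int (*get_bulk)(void *ctx, void *arg); };
struct device  { const struct dev_ops *ops; };

static int dispatch_get_bulk(struct device *dev, void *ctx, void *arg)
{
        if (dev == NULL || dev->ops == NULL)
                return -EINVAL;
        if (dev->ops->get_bulk == NULL)
                return -EOPNOTSUPP; /* hook not implemented by device */
        return dev->ops->get_bulk(ctx, arg);
}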
int
tf_alloc_tbl_scope(struct tf *tfp,
struct tf_alloc_tbl_scope_parms *parms)
@@ -1340,44 +1195,6 @@ tf_alloc_tbl_scope(struct tf *tfp,
return rc;
}
-int
-tf_map_tbl_scope(struct tf *tfp,
- struct tf_map_tbl_scope_parms *parms)
-{
- struct tf_session *tfs;
- struct tf_dev_info *dev;
- int rc;
-
- TF_CHECK_PARMS2(tfp, parms);
-
- /* Retrieve the session information */
- rc = tf_session_get_session(tfp, &tfs);
- if (rc) {
- TFP_DRV_LOG(ERR,
- "Failed to lookup session, rc:%s\n",
- strerror(-rc));
- return rc;
- }
-
- /* Retrieve the device information */
- rc = tf_session_get_device(tfs, &dev);
- if (rc) {
- TFP_DRV_LOG(ERR,
- "Failed to lookup device, rc:%s\n",
- strerror(-rc));
- return rc;
- }
-
- if (dev->ops->tf_dev_map_tbl_scope != NULL) {
- rc = dev->ops->tf_dev_map_tbl_scope(tfp, parms);
- } else {
- TFP_DRV_LOG(ERR,
- "Map table scope not supported by device\n");
- return -EINVAL;
- }
-
- return rc;
-}
int
tf_free_tbl_scope(struct tf *tfp,
@@ -1475,61 +1292,3 @@ tf_set_if_tbl_entry(struct tf *tfp,
return 0;
}
-
-int
-tf_get_if_tbl_entry(struct tf *tfp,
- struct tf_get_if_tbl_entry_parms *parms)
-{
- int rc;
- struct tf_session *tfs;
- struct tf_dev_info *dev;
- struct tf_if_tbl_get_parms gparms = { 0 };
-
- TF_CHECK_PARMS2(tfp, parms);
-
- /* Retrieve the session information */
- rc = tf_session_get_session(tfp, &tfs);
- if (rc) {
- TFP_DRV_LOG(ERR,
- "%s: Failed to lookup session, rc:%s\n",
- tf_dir_2_str(parms->dir),
- strerror(-rc));
- return rc;
- }
-
- /* Retrieve the device information */
- rc = tf_session_get_device(tfs, &dev);
- if (rc) {
- TFP_DRV_LOG(ERR,
- "%s: Failed to lookup device, rc:%s\n",
- tf_dir_2_str(parms->dir),
- strerror(-rc));
- return rc;
- }
-
- if (dev->ops->tf_dev_get_if_tbl == NULL) {
- rc = -EOPNOTSUPP;
- TFP_DRV_LOG(ERR,
- "%s: Operation not supported, rc:%s\n",
- tf_dir_2_str(parms->dir),
- strerror(-rc));
- return rc;
- }
-
- gparms.dir = parms->dir;
- gparms.type = parms->type;
- gparms.idx = parms->idx;
- gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
- gparms.data = parms->data;
-
- rc = dev->ops->tf_dev_get_if_tbl(tfp, &gparms);
- if (rc) {
- TFP_DRV_LOG(ERR,
- "%s: If_tbl get failed, rc:%s\n",
- tf_dir_2_str(parms->dir),
- strerror(-rc));
- return rc;
- }
-
- return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index fa8ab52af1..2d556be752 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -657,27 +657,6 @@ struct tf_attach_session_parms {
union tf_session_id session_id;
};
-/**
- * Experimental
- *
- * Allows a 2nd application instance to attach to an existing
- * session. Used when a session is to be shared between two processes.
- *
- * Attach will increment a ref count as to manage the shared session data.
- *
- * [in] tfp
- * Pointer to TF handle
- *
- * [in] parms
- * Pointer to attach parameters
- *
- * Returns
- * - (0) if successful.
- * - (-EINVAL) on failure.
- */
-int tf_attach_session(struct tf *tfp,
- struct tf_attach_session_parms *parms);
-
/**
* Closes an existing session client or the session it self. The
* session client is default closed and if the session reference count
@@ -961,25 +940,6 @@ struct tf_map_tbl_scope_parms {
int tf_alloc_tbl_scope(struct tf *tfp,
struct tf_alloc_tbl_scope_parms *parms);
-/**
- * map a table scope (legacy device only Wh+/SR)
- *
- * Map a table scope to one or more partition interfaces (parifs).
- * The parif can be remapped in the L2 context lookup for legacy devices. This
- * API allows a number of parifs to be mapped to the same table scope. On
- * legacy devices a table scope identifies one of 16 sets of EEM table base
- * addresses and is associated with a PF communication channel. The associated
- * PF must be configured for the table scope to operate.
- *
- * An L2 context TCAM lookup returns a remapped parif value used to
- * index into the set of 16 parif_to_pf registers which are used to map to one
- * of the 16 table scopes. This API allows the user to map the parifs in the
- * mask to the previously allocated table scope (EEM table).
-
- * Returns success or failure code.
- */
-int tf_map_tbl_scope(struct tf *tfp,
- struct tf_map_tbl_scope_parms *parms);
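The removed comment describes a mask-driven mapping: each parif whose bit is set gets pointed at the allocated table scope, one of 16 on legacy devices. A sketch of that loop, with a layout assumed purely for illustration:

#include <stdint.h>

#define NUM_PARIF 16
static uint8_t parif_to_tscope[NUM_PARIF];

static void map_parifs(uint16_t parif_mask, uint8_t tscope_id)
{
        for (unsigned int parif = 0; parif < NUM_PARIF; parif++)
                if (parif_mask & (1u << parif))
                        parif_to_tscope[parif] = tscope_id;
}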
/**
* free a table scope
*
@@ -1256,18 +1216,6 @@ struct tf_get_tcam_entry_parms {
uint16_t result_sz_in_bits;
};
-/**
- * get TCAM entry
- *
- * Program a TCAM table entry for a TruFlow session.
- *
- * If the entry has not been allocated, an error will be returned.
- *
- * Returns success or failure code.
- */
-int tf_get_tcam_entry(struct tf *tfp,
- struct tf_get_tcam_entry_parms *parms);
-
/**
* tf_free_tcam_entry parameter definition
*/
@@ -1638,22 +1586,6 @@ struct tf_bulk_get_tbl_entry_parms {
uint64_t physical_mem_addr;
};
-/**
- * Bulk get index table entry
- *
- * Used to retrieve a set of index table entries.
- *
- * Entries within the range may not have been allocated using
- * tf_alloc_tbl_entry() at the time of access. But the range must
- * be within the bounds determined from tf_open_session() for the
- * given table type. Currently, this is only used for collecting statistics.
- *
- * Returns success or failure code. Failure will be returned if the
- * provided data buffer is too small for the data type requested.
- */
-int tf_bulk_get_tbl_entry(struct tf *tfp,
- struct tf_bulk_get_tbl_entry_parms *parms);
-
/**
* @page exact_match Exact Match Table
*
@@ -2066,17 +1998,4 @@ struct tf_get_if_tbl_entry_parms {
uint32_t idx;
};
-/**
- * get interface table entry
- *
- * Used to retrieve an interface table entry.
- *
- * Reads the interface table entry value
- *
- * Returns success or failure code. Failure will be returned if the
- * provided data buffer is too small for the data type requested.
- */
-int tf_get_if_tbl_entry(struct tf *tfp,
- struct tf_get_if_tbl_entry_parms *parms);
-
#endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 5615eedbbe..e4fe5fe055 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -148,14 +148,6 @@ tf_msg_session_open(struct tf *tfp,
return rc;
}
-int
-tf_msg_session_attach(struct tf *tfp __rte_unused,
- char *ctrl_chan_name __rte_unused,
- uint8_t tf_fw_session_id __rte_unused)
-{
- return -1;
-}
-
int
tf_msg_session_client_register(struct tf *tfp,
char *ctrl_channel_name,
@@ -266,38 +258,6 @@ tf_msg_session_close(struct tf *tfp)
return rc;
}
-int
-tf_msg_session_qcfg(struct tf *tfp)
-{
- int rc;
- struct hwrm_tf_session_qcfg_input req = { 0 };
- struct hwrm_tf_session_qcfg_output resp = { 0 };
- struct tfp_send_msg_parms parms = { 0 };
- uint8_t fw_session_id;
-
- rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
- if (rc) {
- TFP_DRV_LOG(ERR,
- "Unable to lookup FW id, rc:%s\n",
- strerror(-rc));
- return rc;
- }
-
- /* Populate the request */
- req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
-
- parms.tf_type = HWRM_TF_SESSION_QCFG,
- parms.req_data = (uint32_t *)&req;
- parms.req_size = sizeof(req);
- parms.resp_data = (uint32_t *)&resp;
- parms.resp_size = sizeof(resp);
- parms.mailbox = TF_KONG_MB;
-
- rc = tfp_send_msg_direct(tfp,
- &parms);
- return rc;
-}
-
int
tf_msg_session_resc_qcaps(struct tf *tfp,
enum tf_dir dir,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 72bf850487..4483017ada 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -38,26 +38,6 @@ int tf_msg_session_open(struct tf *tfp,
uint8_t *fw_session_id,
uint8_t *fw_session_client_id);
-/**
- * Sends session close request to Firmware
- *
- * [in] session
- * Pointer to session handle
- *
- * [in] ctrl_chan_name
- * PCI name of the control channel
- *
- * [in] fw_session_id
- * Pointer to the fw_session_id that is assigned to the session at
- * time of session open
- *
- * Returns:
- * 0 on Success else internal Truflow error
- */
-int tf_msg_session_attach(struct tf *tfp,
- char *ctrl_channel_name,
- uint8_t tf_fw_session_id);
-
/**
* Sends session client register request to Firmware
*
@@ -105,17 +85,6 @@ int tf_msg_session_client_unregister(struct tf *tfp,
*/
int tf_msg_session_close(struct tf *tfp);
-/**
- * Sends session query config request to TF Firmware
- *
- * [in] session
- * Pointer to session handle
- *
- * Returns:
- * 0 on Success else internal Truflow error
- */
-int tf_msg_session_qcfg(struct tf *tfp);
-
/**
* Sends session HW resource query capability request to TF Firmware
*
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index c95c4bdbd3..912b2837f9 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -749,36 +749,3 @@ tf_session_get_fw_session_id(struct tf *tfp,
return 0;
}
-
-int
-tf_session_get_session_id(struct tf *tfp,
- union tf_session_id *session_id)
-{
- int rc;
- struct tf_session *tfs = NULL;
-
- if (tfp->session == NULL) {
- rc = -EINVAL;
- TFP_DRV_LOG(ERR,
- "Session not created, rc:%s\n",
- strerror(-rc));
- return rc;
- }
-
- if (session_id == NULL) {
- rc = -EINVAL;
- TFP_DRV_LOG(ERR,
- "Invalid Argument(s), rc:%s\n",
- strerror(-rc));
- return rc;
- }
-
- /* Using internal version as session client may not exist yet */
- rc = tf_session_get_session_internal(tfp, &tfs);
- if (rc)
- return rc;
-
- *session_id = tfs->session_id;
-
- return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 6a5c894033..37d4703cc1 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -394,20 +394,4 @@ int tf_session_get_device(struct tf_session *tfs,
int tf_session_get_fw_session_id(struct tf *tfp,
uint8_t *fw_session_id);
-/**
- * Looks up the Session id the requested TF handle.
- *
- * [in] tfp
- * Pointer to TF handle
- *
- * [out] session_id
- * Pointer to the session_id
- *
- * Returns
- * - (0) if successful.
- * - (-EINVAL) on failure.
- */
-int tf_session_get_session_id(struct tf *tfp,
- union tf_session_id *session_id);
-
#endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
index a4207eb3ab..2caf4f8747 100644
--- a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
@@ -637,59 +637,6 @@ tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms)
return 0;
}
-int
-tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms)
-{
- uint16_t idx;
- struct tf_shadow_tbl_ctxt *ctxt;
- struct tf_tbl_set_parms *sparms;
- struct tf_shadow_tbl_db *shadow_db;
- struct tf_shadow_tbl_shadow_result_entry *sr_entry;
-
- if (!parms || !parms->sparms) {
- TFP_DRV_LOG(ERR, "Null parms\n");
- return -EINVAL;
- }
-
- sparms = parms->sparms;
- if (!sparms->data || !sparms->data_sz_in_bytes) {
- TFP_DRV_LOG(ERR, "%s:%s No result to set.\n",
- tf_dir_2_str(sparms->dir),
- tf_tbl_type_2_str(sparms->type));
- return -EINVAL;
- }
-
- shadow_db = (struct tf_shadow_tbl_db *)parms->shadow_db;
- ctxt = tf_shadow_tbl_ctxt_get(shadow_db, sparms->type);
- if (!ctxt) {
- /* We aren't tracking this table, so return success */
- TFP_DRV_LOG(DEBUG, "%s Unable to get tbl mgr context\n",
- tf_tbl_type_2_str(sparms->type));
- return 0;
- }
-
- idx = TF_SHADOW_IDX_TO_SHIDX(ctxt, sparms->idx);
- if (idx >= tf_shadow_tbl_sh_num_entries_get(ctxt)) {
- TFP_DRV_LOG(ERR, "%s:%s Invalid idx(0x%x)\n",
- tf_dir_2_str(sparms->dir),
- tf_tbl_type_2_str(sparms->type),
- sparms->idx);
- return -EINVAL;
- }
-
- /* Write the result table, the key/hash has been written already */
- sr_entry = &ctxt->shadow_ctxt.sh_res_tbl[idx];
-
- /*
- * If the handle is not valid, the bind was never called. We aren't
- * tracking this entry.
- */
- if (!TF_SHADOW_HB_HANDLE_IS_VALID(sr_entry->hb_handle))
- return 0;
-
- return 0;
-}
-
int
tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms)
{
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
index 96a34309b2..bbd8cfd3a9 100644
--- a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
@@ -225,20 +225,6 @@ int tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms);
*/
int tf_shadow_tbl_bind_index(struct tf_shadow_tbl_bind_index_parms *parms);
-/**
- * Inserts an element into the Shadow table DB. Will fail if the
- * elements ref_count is different from 0. Ref_count after insert will
- * be incremented.
- *
- * [in] parms
- * Pointer to insert parameters
- *
- * Returns
- * - (0) if successful.
- * - (-EINVAL) on failure.
- */
-int tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms);
-
/**
* Removes an element from the Shadow table DB. Will fail if the
* elements ref_count is 0. Ref_count after removal will be
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 7679d09eea..e3fec46926 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -683,10 +683,3 @@ tf_tcam_set(struct tf *tfp __rte_unused,
return 0;
}
-
-int
-tf_tcam_get(struct tf *tfp __rte_unused,
- struct tf_tcam_get_parms *parms __rte_unused)
-{
- return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 280f138dd3..9614cf52c7 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -355,21 +355,4 @@ int tf_tcam_alloc_search(struct tf *tfp,
int tf_tcam_set(struct tf *tfp,
struct tf_tcam_set_parms *parms);
-/**
- * Retrieves the requested element by sending a firmware request to get
- * the element.
- *
- * [in] tfp
- * Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- * Pointer to parameters
- *
- * Returns
- * - (0) if successful.
- * - (-EINVAL) on failure.
- */
-int tf_tcam_get(struct tf *tfp,
- struct tf_tcam_get_parms *parms);
-
#endif /* _TF_TCAM_H */
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 0f6d63cc00..49ca034241 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -135,33 +135,6 @@ tfp_memcpy(void *dest, void *src, size_t n)
rte_memcpy(dest, src, n);
}
-/**
- * Used to initialize portable spin lock
- */
-void
-tfp_spinlock_init(struct tfp_spinlock_parms *parms)
-{
- rte_spinlock_init(&parms->slock);
-}
-
-/**
- * Used to lock portable spin lock
- */
-void
-tfp_spinlock_lock(struct tfp_spinlock_parms *parms)
-{
- rte_spinlock_lock(&parms->slock);
-}
-
-/**
- * Used to unlock portable spin lock
- */
-void
-tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
-{
- rte_spinlock_unlock(&parms->slock);
-}
-
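The removed wrappers were one-line shims over DPDK's spinlock; callers can use the rte_spinlock API directly:

#include <rte_spinlock.h>

static rte_spinlock_t lock = RTE_SPINLOCK_INITIALIZER;

static void touch_shared_state(void)
{
        rte_spinlock_lock(&lock);
        /* ... critical section ... */
        rte_spinlock_unlock(&lock);
}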
int
tfp_get_fid(struct tf *tfp, uint16_t *fw_fid)
{
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 551b9c569f..fc2409371a 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -202,10 +202,6 @@ int tfp_calloc(struct tfp_calloc_parms *parms);
void tfp_memcpy(void *dest, void *src, size_t n);
void tfp_free(void *addr);
-void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
-void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
-void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
-
/**
* Lookup of the FID in the platform specific structure.
*
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
index 45025516f4..4a6105a05e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -214,74 +214,6 @@ void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
rte_eal_alarm_cancel(ulp_fc_mgr_alarm_cb, (void *)ctxt);
}
-/*
- * DMA-in the raw counter data from the HW and accumulate in the
- * local accumulator table using the TF-Core API
- *
- * tfp [in] The TF-Core context
- *
- * fc_info [in] The ULP Flow counter info ptr
- *
- * dir [in] The direction of the flow
- *
- * num_counters [in] The number of counters
- *
- */
-__rte_unused static int32_t
-ulp_bulk_get_flow_stats(struct tf *tfp,
- struct bnxt_ulp_fc_info *fc_info,
- enum tf_dir dir,
- struct bnxt_ulp_device_params *dparms)
-/* MARK AS UNUSED FOR NOW TO AVOID COMPILATION ERRORS TILL API is RESOLVED */
-{
- int rc = 0;
- struct tf_tbl_get_bulk_parms parms = { 0 };
- enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64; /* TBD: Template? */
- struct sw_acc_counter *sw_acc_tbl_entry = NULL;
- uint64_t *stats = NULL;
- uint16_t i = 0;
-
- parms.dir = dir;
- parms.type = stype;
- parms.starting_idx = fc_info->shadow_hw_tbl[dir].start_idx;
- parms.num_entries = dparms->flow_count_db_entries / 2; /* direction */
- /*
- * TODO:
- * Size of an entry needs to obtained from template
- */
- parms.entry_sz_in_bytes = sizeof(uint64_t);
- stats = (uint64_t *)fc_info->shadow_hw_tbl[dir].mem_va;
- parms.physical_mem_addr = (uintptr_t)fc_info->shadow_hw_tbl[dir].mem_pa;
-
- if (!stats) {
- PMD_DRV_LOG(ERR,
- "BULK: Memory not initialized id:0x%x dir:%d\n",
- parms.starting_idx, dir);
- return -EINVAL;
- }
-
- rc = tf_tbl_bulk_get(tfp, &parms);
- if (rc) {
- PMD_DRV_LOG(ERR,
- "BULK: Get failed for id:0x%x rc:%d\n",
- parms.starting_idx, rc);
- return rc;
- }
-
- for (i = 0; i < parms.num_entries; i++) {
- /* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
- sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][i];
- if (!sw_acc_tbl_entry->valid)
- continue;
- sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats[i],
- dparms);
- sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats[i],
- dparms);
- }
-
- return rc;
-}
-
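After the DMA completes, the removed path splits each raw 64-bit word into packet and byte counts and accumulates them per valid software entry. A sketch of that step; the shift/mask split is a placeholder, since the real values were still to come from the device template per the removed TBD comments:

#include <stdint.h>

struct sw_counter { uint64_t pkts, bytes; int valid; };

#define PKT_SHIFT 36                 /* placeholder packing, not the */
#define BYTE_MASK 0xFFFFFFFFFULL     /* real template-derived values */

static void accumulate_stats(struct sw_counter *tbl,
                             const uint64_t *raw, int n)
{
        for (int i = 0; i < n; i++) {
                if (!tbl[i].valid)
                        continue;
                tbl[i].pkts  += raw[i] >> PKT_SHIFT;
                tbl[i].bytes += raw[i] & BYTE_MASK;
        }
}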
static int ulp_get_single_flow_stat(struct bnxt_ulp_context *ctxt,
struct tf *tfp,
struct bnxt_ulp_fc_info *fc_info,
@@ -387,16 +319,6 @@ ulp_fc_mgr_alarm_cb(void *arg)
ulp_fc_mgr_thread_cancel(ctxt);
return;
}
- /*
- * Commented for now till GET_BULK is resolved, just get the first flow
- * stat for now
- for (i = 0; i < TF_DIR_MAX; i++) {
- rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
- dparms->flow_count_db_entries);
- if (rc)
- break;
- }
- */
/* reset the parent accumulation counters before accumulation if any */
ulp_flow_db_parent_flow_count_reset(ctxt);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 4b4eaeb126..2d1dbb7e6e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -226,37 +226,6 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
return 0;
}
-/*
- * Api to get the function id for a given ulp ifindex.
- *
- * ulp_ctxt [in] Ptr to ulp context
- * ifindex [in] ulp ifindex
- * func_id [out] the function id of the given ifindex.
- *
- * Returns 0 on success or negative number on failure.
- */
-int32_t
-ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
- uint32_t ifindex,
- uint32_t fid_type,
- uint16_t *func_id)
-{
- struct bnxt_ulp_port_db *port_db;
-
- port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
- if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
- BNXT_TF_DBG(ERR, "Invalid Arguments\n");
- return -EINVAL;
- }
-
- if (fid_type == BNXT_ULP_DRV_FUNC_FID)
- *func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
- else
- *func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
-
- return 0;
-}
-
/*
* Api to get the svif for a given ulp ifindex.
*
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 7b85987a0c..bd7032004f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -122,20 +122,6 @@ int32_t
ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
uint32_t port_id, uint32_t *ifindex);
-/*
- * Api to get the function id for a given ulp ifindex.
- *
- * ulp_ctxt [in] Ptr to ulp context
- * ifindex [in] ulp ifindex
- * func_id [out] the function id of the given ifindex.
- *
- * Returns 0 on success or negative number on failure.
- */
-int32_t
-ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
- uint32_t ifindex, uint32_t fid_type,
- uint16_t *func_id);
-
/*
* Api to get the svif for a given ulp ifindex.
*
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
index a13a3bbf65..b5a4f85fcf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_utils.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -803,17 +803,6 @@ int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size)
return buf[0] == 0 && !memcmp(buf, buf + 1, size - 1);
}
-/* Function to check if bitmap is zero.Return 1 on success */
-uint32_t ulp_bitmap_is_zero(uint8_t *bitmap, int32_t size)
-{
- while (size-- > 0) {
- if (*bitmap != 0)
- return 0;
- bitmap++;
- }
- return 1;
-}
-
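Had it stayed, the zero check could reuse the self-overlapping memcmp idiom that ulp_buffer_is_empty() above already applies: byte 0 is zero and every byte equals its successor. A minimal sketch:

#include <string.h>
#include <stdint.h>

static uint32_t bitmap_is_zero(const uint8_t *bitmap, int32_t size)
{
        if (size <= 0)
                return 0;
        return bitmap[0] == 0 && !memcmp(bitmap, bitmap + 1, size - 1);
}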
/* Function to check if bitmap is ones. Return 1 on success */
uint32_t ulp_bitmap_is_ones(uint8_t *bitmap, int32_t size)
{
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.h b/drivers/net/bnxt/tf_ulp/ulp_utils.h
index 749ac06d87..a45a2705da 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_utils.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.h
@@ -384,9 +384,6 @@ ulp_encap_buffer_copy(uint8_t *dst,
*/
int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size);
-/* Function to check if bitmap is zero.Return 1 on success */
-uint32_t ulp_bitmap_is_zero(uint8_t *bitmap, int32_t size);
-
/* Function to check if bitmap is ones. Return 1 on success */
uint32_t ulp_bitmap_is_ones(uint8_t *bitmap, int32_t size);
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index 8f198bd50e..e5645a10ab 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -224,10 +224,6 @@ int
mac_address_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *new_mac_addr);
-int
-mac_address_get(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *dst_mac_addr);
-
int
mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5f..23a4393f23 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -278,19 +278,6 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
int
rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
-/**
- * Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
- *
- * @param bonded_port_id Port ID of bonded device.
- *
- * @return
- * Monitoring interval on success, negative value otherwise.
- */
-int
-rte_eth_bond_link_monitoring_get(uint16_t bonded_port_id);
-
-
/**
* Set the period in milliseconds for delaying the disabling of a bonded link
* when the link down status has been detected
@@ -305,18 +292,6 @@ int
rte_eth_bond_link_down_prop_delay_set(uint16_t bonded_port_id,
uint32_t delay_ms);
-/**
- * Get the period in milliseconds set for delaying the disabling of a bonded
- * link when the link down status has been detected
- *
- * @param bonded_port_id Port ID of bonded device.
- *
- * @return
- * Delay period on success, negative value otherwise.
- */
-int
-rte_eth_bond_link_down_prop_delay_get(uint16_t bonded_port_id);
-
/**
* Set the period in milliseconds for delaying the enabling of a bonded link
* when the link up status has been detected
@@ -331,19 +306,6 @@ int
rte_eth_bond_link_up_prop_delay_set(uint16_t bonded_port_id,
uint32_t delay_ms);
-/**
- * Get the period in milliseconds set for delaying the enabling of a bonded
- * link when the link up status has been detected
- *
- * @param bonded_port_id Port ID of bonded device.
- *
- * @return
- * Delay period on success, negative value otherwise.
- */
-int
-rte_eth_bond_link_up_prop_delay_get(uint16_t bonded_port_id);
-
-
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 55c8e3167c..1c09d2e4ba 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -981,19 +981,6 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms)
return 0;
}
-int
-rte_eth_bond_link_monitoring_get(uint16_t bonded_port_id)
-{
- struct bond_dev_private *internals;
-
- if (valid_bonded_port_id(bonded_port_id) != 0)
- return -1;
-
- internals = rte_eth_devices[bonded_port_id].data->dev_private;
-
- return internals->link_status_polling_interval_ms;
-}
-
int
rte_eth_bond_link_down_prop_delay_set(uint16_t bonded_port_id,
uint32_t delay_ms)
@@ -1010,19 +997,6 @@ rte_eth_bond_link_down_prop_delay_set(uint16_t bonded_port_id,
return 0;
}
-int
-rte_eth_bond_link_down_prop_delay_get(uint16_t bonded_port_id)
-{
- struct bond_dev_private *internals;
-
- if (valid_bonded_port_id(bonded_port_id) != 0)
- return -1;
-
- internals = rte_eth_devices[bonded_port_id].data->dev_private;
-
- return internals->link_down_delay_ms;
-}
-
int
rte_eth_bond_link_up_prop_delay_set(uint16_t bonded_port_id, uint32_t delay_ms)
@@ -1037,16 +1011,3 @@ rte_eth_bond_link_up_prop_delay_set(uint16_t bonded_port_id, uint32_t delay_ms)
return 0;
}
-
-int
-rte_eth_bond_link_up_prop_delay_get(uint16_t bonded_port_id)
-{
- struct bond_dev_private *internals;
-
- if (valid_bonded_port_id(bonded_port_id) != 0)
- return -1;
-
- internals = rte_eth_devices[bonded_port_id].data->dev_private;
-
- return internals->link_up_delay_ms;
-}
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 057b1ada54..d9a0154de1 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1396,28 +1396,6 @@ link_properties_valid(struct rte_eth_dev *ethdev,
return 0;
}
-int
-mac_address_get(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *dst_mac_addr)
-{
- struct rte_ether_addr *mac_addr;
-
- if (eth_dev == NULL) {
- RTE_BOND_LOG(ERR, "NULL pointer eth_dev specified");
- return -1;
- }
-
- if (dst_mac_addr == NULL) {
- RTE_BOND_LOG(ERR, "NULL pointer MAC specified");
- return -1;
- }
-
- mac_addr = eth_dev->data->mac_addrs;
-
- rte_ether_addr_copy(mac_addr, dst_mac_addr);
- return 0;
-}
-
int
mac_address_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *new_mac_addr)
diff --git a/drivers/net/cxgbe/base/common.h b/drivers/net/cxgbe/base/common.h
index 8fe8e2a36b..6e360bc42d 100644
--- a/drivers/net/cxgbe/base/common.h
+++ b/drivers/net/cxgbe/base/common.h
@@ -363,8 +363,6 @@ int t4vf_get_vfres(struct adapter *adap);
int t4_fixup_host_params_compat(struct adapter *adap, unsigned int page_size,
unsigned int cache_line_size,
enum chip_type chip_compat);
-int t4_fixup_host_params(struct adapter *adap, unsigned int page_size,
- unsigned int cache_line_size);
int t4_fw_initialize(struct adapter *adap, unsigned int mbox);
int t4_query_params(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int nparams, const u32 *params,
@@ -485,9 +483,6 @@ static inline int t4vf_wr_mbox_ns(struct adapter *adapter, const void *cmd,
void t4_read_indirect(struct adapter *adap, unsigned int addr_reg,
unsigned int data_reg, u32 *vals, unsigned int nregs,
unsigned int start_idx);
-void t4_write_indirect(struct adapter *adap, unsigned int addr_reg,
- unsigned int data_reg, const u32 *vals,
- unsigned int nregs, unsigned int start_idx);
int t4_get_vpd_params(struct adapter *adapter, struct vpd_params *p);
int t4_get_pfres(struct adapter *adapter);
diff --git a/drivers/net/cxgbe/base/t4_hw.c b/drivers/net/cxgbe/base/t4_hw.c
index 9217956b42..d5b916ccf5 100644
--- a/drivers/net/cxgbe/base/t4_hw.c
+++ b/drivers/net/cxgbe/base/t4_hw.c
@@ -189,28 +189,6 @@ void t4_read_indirect(struct adapter *adap, unsigned int addr_reg,
}
}
-/**
- * t4_write_indirect - write indirectly addressed registers
- * @adap: the adapter
- * @addr_reg: register holding the indirect addresses
- * @data_reg: register holding the value for the indirect registers
- * @vals: values to write
- * @nregs: how many indirect registers to write
- * @start_idx: address of first indirect register to write
- *
- * Writes a sequential block of registers that are accessed indirectly
- * through an address/data register pair.
- */
-void t4_write_indirect(struct adapter *adap, unsigned int addr_reg,
- unsigned int data_reg, const u32 *vals,
- unsigned int nregs, unsigned int start_idx)
-{
- while (nregs--) {
- t4_write_reg(adap, addr_reg, start_idx++);
- t4_write_reg(adap, data_reg, *vals++);
- }
-}
-
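The removed helper is the classic address/data indirect access: write the index into the address register, then the value into the data register, repeating for a sequential block. A generic sketch with stand-in MMIO pointers:

#include <stdint.h>

static void write_indirect(volatile uint32_t *addr_reg,
                           volatile uint32_t *data_reg,
                           const uint32_t *vals, unsigned int nregs,
                           uint32_t start_idx)
{
        /* real drivers add ordering barriers and endianness handling */
        while (nregs--) {
                *addr_reg = start_idx++; /* select the target register */
                *data_reg = *vals++;     /* then write its value */
        }
}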
/**
* t4_report_fw_error - report firmware error
* @adap: the adapter
@@ -3860,25 +3838,6 @@ int t4_fixup_host_params_compat(struct adapter *adap,
return 0;
}
-/**
- * t4_fixup_host_params - fix up host-dependent parameters (T4 compatible)
- * @adap: the adapter
- * @page_size: the host's Base Page Size
- * @cache_line_size: the host's Cache Line Size
- *
- * Various registers in T4 contain values which are dependent on the
- * host's Base Page and Cache Line Sizes. This function will fix all of
- * those registers with the appropriate values as passed in ...
- *
- * This routine makes changes which are compatible with T4 chips.
- */
-int t4_fixup_host_params(struct adapter *adap, unsigned int page_size,
- unsigned int cache_line_size)
-{
- return t4_fixup_host_params_compat(adap, page_size, cache_line_size,
- T4_LAST_REV);
-}
-
/**
* t4_fw_initialize - ask FW to initialize the device
* @adap: the adapter
diff --git a/drivers/net/dpaa/fmlib/fm_vsp.c b/drivers/net/dpaa/fmlib/fm_vsp.c
index 78efd93f22..0e261e3d1a 100644
--- a/drivers/net/dpaa/fmlib/fm_vsp.c
+++ b/drivers/net/dpaa/fmlib/fm_vsp.c
@@ -19,25 +19,6 @@
#include "fm_vsp_ext.h"
#include <dpaa_ethdev.h>
-uint32_t
-fm_port_vsp_alloc(t_handle h_fm_port,
- t_fm_port_vspalloc_params *p_params)
-{
- t_device *p_dev = (t_device *)h_fm_port;
- ioc_fm_port_vsp_alloc_params_t params;
-
- _fml_dbg("Calling...\n");
- memset(&params, 0, sizeof(ioc_fm_port_vsp_alloc_params_t));
- memcpy(&params.params, p_params, sizeof(t_fm_port_vspalloc_params));
-
- if (ioctl(p_dev->fd, FM_PORT_IOC_VSP_ALLOC, &params))
- RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
-
- _fml_dbg("Called.\n");
-
- return E_OK;
-}
-
t_handle
fm_vsp_config(t_fm_vsp_params *p_fm_vsp_params)
{
diff --git a/drivers/net/dpaa/fmlib/fm_vsp_ext.h b/drivers/net/dpaa/fmlib/fm_vsp_ext.h
index b51c46162d..97590ea4c0 100644
--- a/drivers/net/dpaa/fmlib/fm_vsp_ext.h
+++ b/drivers/net/dpaa/fmlib/fm_vsp_ext.h
@@ -99,9 +99,6 @@ typedef struct ioc_fm_buffer_prefix_content_params_t {
ioc_fm_buffer_prefix_content_t fm_buffer_prefix_content;
} ioc_fm_buffer_prefix_content_params_t;
-uint32_t fm_port_vsp_alloc(t_handle h_fm_port,
- t_fm_port_vspalloc_params *p_params);
-
t_handle fm_vsp_config(t_fm_vsp_params *p_fm_vsp_params);
uint32_t fm_vsp_init(t_handle h_fm_vsp);
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 63f1ec7d30..dce9c55a9a 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -57,227 +57,6 @@ int dpdmux_open(struct fsl_mc_io *mc_io,
return 0;
}
-/**
- * dpdmux_close() - Close the control session of the object
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- *
- * After this function is called, no further operations are
- * allowed on the object without opening a new control session.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_close(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_CLOSE,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_create() - Create the DPDMUX object
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg: Configuration structure
- * @obj_id: returned object id
- *
- * Create the DPDMUX object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpdmux_cfg *cfg,
- uint32_t *obj_id)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_create *cmd_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_CREATE,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpdmux_cmd_create *)cmd.params;
- cmd_params->method = cfg->method;
- cmd_params->manip = cfg->manip;
- cmd_params->num_ifs = cpu_to_le16(cfg->num_ifs);
- cmd_params->adv_max_dmat_entries =
- cpu_to_le16(cfg->adv.max_dmat_entries);
- cmd_params->adv_max_mc_groups = cpu_to_le16(cfg->adv.max_mc_groups);
- cmd_params->adv_max_vlan_ids = cpu_to_le16(cfg->adv.max_vlan_ids);
- cmd_params->options = cpu_to_le64(cfg->adv.options);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- *obj_id = mc_cmd_read_object_id(&cmd);
-
- return 0;
-}
-
-/**
- * dpdmux_destroy() - Destroy the DPDMUX object and release all its resources.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id: The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpdmux_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_destroy *cmd_params;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_DESTROY,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpdmux_cmd_destroy *)cmd.params;
- cmd_params->dpdmux_id = cpu_to_le32(object_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_enable() - Enable DPDMUX functionality
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_ENABLE,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_disable() - Disable DPDMUX functionality
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_DISABLE,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_is_enabled() - Check if the DPDMUX is enabled.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @en: Returns '1' if object is enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_rsp_is_enabled *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IS_ENABLED,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpdmux_rsp_is_enabled *)cmd.params;
- *en = dpdmux_get_field(rsp_params->en, ENABLE);
-
- return 0;
-}
-
-/**
- * dpdmux_reset() - Reset the DPDMUX, returns the object to initial state.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_RESET,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
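Every removed dpdmux call above reduces to the same MC exchange: encode a header from (command id, flags, token), fill the inline parameter words, send, and optionally decode a response. A condensed skeleton with stand-in types; the field packing is illustrative, not the firmware ABI:

#include <stdint.h>

struct mc_cmd { uint64_t header; uint64_t params[7]; };

static uint64_t encode_header(uint16_t cmdid, uint32_t flags, uint16_t token)
{
        return ((uint64_t)cmdid << 48) |
               ((uint64_t)flags << 16) | token; /* assumed packing */
}

static int send_simple_cmd(int (*send)(struct mc_cmd *),
                           uint16_t cmdid, uint32_t flags, uint16_t token)
{
        struct mc_cmd cmd = { 0 };

        cmd.header = encode_header(cmdid, flags, token);
        return send(&cmd); /* enable/disable/reset carry no extra params */
}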
/**
* dpdmux_get_attributes() - Retrieve DPDMUX attributes
* @mc_io: Pointer to MC portal's I/O object
@@ -318,407 +97,6 @@ int dpdmux_get_attributes(struct fsl_mc_io *mc_io,
return 0;
}
-/**
- * dpdmux_if_enable() - Enable Interface
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @if_id: Interface Identifier
- *
- * Return: Completion status. '0' on Success; Error code otherwise.
- */
-int dpdmux_if_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id)
-{
- struct dpdmux_cmd_if *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_ENABLE,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_if *)cmd.params;
- cmd_params->if_id = cpu_to_le16(if_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_disable() - Disable Interface
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @if_id: Interface Identifier
- *
- * Return: Completion status. '0' on Success; Error code otherwise.
- */
-int dpdmux_if_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id)
-{
- struct dpdmux_cmd_if *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_DISABLE,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_if *)cmd.params;
- cmd_params->if_id = cpu_to_le16(if_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_set_max_frame_length() - Set the maximum frame length in DPDMUX
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @max_frame_length: The required maximum frame length
- *
- * Update the maximum frame length on all DMUX interfaces.
- * In case of VEPA, the maximum frame length on all dmux interfaces
- * will be updated with the minimum value of the mfls of the connected
- * dpnis and the actual value of dmux mfl.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t max_frame_length)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_set_max_frame_length *cmd_params;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SET_MAX_FRAME_LENGTH,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_set_max_frame_length *)cmd.params;
- cmd_params->max_frame_length = cpu_to_le16(max_frame_length);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_ul_reset_counters() - Function resets the uplink counter
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_ul_reset_counters(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_UL_RESET_COUNTERS,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_set_accepted_frames() - Set the accepted frame types
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @if_id: Interface ID (0 for uplink, or 1-num_ifs);
- * @cfg: Frame types configuration
- *
- * if 'DPDMUX_ADMIT_ONLY_VLAN_TAGGED' is set - untagged frames or
- * priority-tagged frames are discarded.
- * if 'DPDMUX_ADMIT_ONLY_UNTAGGED' is set - untagged frames or
- * priority-tagged frames are accepted.
- * if 'DPDMUX_ADMIT_ALL' is set (default mode) - all VLAN tagged,
- * untagged and priority-tagged frame are accepted;
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_if_set_accepted_frames(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- const struct dpdmux_accepted_frames *cfg)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_if_set_accepted_frames *cmd_params;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_SET_ACCEPTED_FRAMES,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_if_set_accepted_frames *)cmd.params;
- cmd_params->if_id = cpu_to_le16(if_id);
- dpdmux_set_field(cmd_params->frames_options,
- ACCEPTED_FRAMES_TYPE,
- cfg->type);
- dpdmux_set_field(cmd_params->frames_options,
- UNACCEPTED_FRAMES_ACTION,
- cfg->unaccept_act);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_get_attributes() - Obtain DPDMUX interface attributes
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @if_id: Interface ID (0 for uplink, or 1-num_ifs);
- * @attr: Interface attributes
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_if_get_attributes(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- struct dpdmux_if_attr *attr)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_if *cmd_params;
- struct dpdmux_rsp_if_get_attr *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_ATTR,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_if *)cmd.params;
- cmd_params->if_id = cpu_to_le16(if_id);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpdmux_rsp_if_get_attr *)cmd.params;
- attr->rate = le32_to_cpu(rsp_params->rate);
- attr->enabled = dpdmux_get_field(rsp_params->enabled, ENABLE);
- attr->is_default = dpdmux_get_field(rsp_params->enabled, IS_DEFAULT);
- attr->accept_frame_type = dpdmux_get_field(
- rsp_params->accepted_frames_type,
- ACCEPTED_FRAMES_TYPE);
-
- return 0;
-}
-
-/**
- * dpdmux_if_remove_l2_rule() - Remove L2 rule from DPDMUX table
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @if_id: Destination interface ID
- * @rule: L2 rule
- *
- * Function removes a L2 rule from DPDMUX table
- * or adds an interface to an existing multicast address
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_if_remove_l2_rule(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- const struct dpdmux_l2_rule *rule)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_if_l2_rule *cmd_params;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_REMOVE_L2_RULE,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_if_l2_rule *)cmd.params;
- cmd_params->if_id = cpu_to_le16(if_id);
- cmd_params->vlan_id = cpu_to_le16(rule->vlan_id);
- cmd_params->mac_addr5 = rule->mac_addr[5];
- cmd_params->mac_addr4 = rule->mac_addr[4];
- cmd_params->mac_addr3 = rule->mac_addr[3];
- cmd_params->mac_addr2 = rule->mac_addr[2];
- cmd_params->mac_addr1 = rule->mac_addr[1];
- cmd_params->mac_addr0 = rule->mac_addr[0];
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_add_l2_rule() - Add L2 rule into DPDMUX table
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @if_id: Destination interface ID
- * @rule: L2 rule
- *
- * Function adds a L2 rule into DPDMUX table
- * or adds an interface to an existing multicast address
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_if_add_l2_rule(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- const struct dpdmux_l2_rule *rule)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_if_l2_rule *cmd_params;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_ADD_L2_RULE,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_if_l2_rule *)cmd.params;
- cmd_params->if_id = cpu_to_le16(if_id);
- cmd_params->vlan_id = cpu_to_le16(rule->vlan_id);
- cmd_params->mac_addr5 = rule->mac_addr[5];
- cmd_params->mac_addr4 = rule->mac_addr[4];
- cmd_params->mac_addr3 = rule->mac_addr[3];
- cmd_params->mac_addr2 = rule->mac_addr[2];
- cmd_params->mac_addr1 = rule->mac_addr[1];
- cmd_params->mac_addr0 = rule->mac_addr[0];
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_get_counter() - Functions obtains specific counter of an interface
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPDMUX object
- * @if_id: Interface Id
- * @counter_type: counter type
- * @counter: Returned specific counter information
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_if_get_counter(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- enum dpdmux_counter_type counter_type,
- uint64_t *counter)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_if_get_counter *cmd_params;
- struct dpdmux_rsp_if_get_counter *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_COUNTER,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_if_get_counter *)cmd.params;
- cmd_params->if_id = cpu_to_le16(if_id);
- cmd_params->counter_type = counter_type;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpdmux_rsp_if_get_counter *)cmd.params;
- *counter = le64_to_cpu(rsp_params->counter);
-
- return 0;
-}
-
-/**
- * dpdmux_if_set_link_cfg() - set the link configuration.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSW object
- * @if_id: interface id
- * @cfg: Link configuration
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_if_set_link_cfg(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- struct dpdmux_link_cfg *cfg)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_if_set_link_cfg *cmd_params;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_SET_LINK_CFG,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_if_set_link_cfg *)cmd.params;
- cmd_params->if_id = cpu_to_le16(if_id);
- cmd_params->rate = cpu_to_le32(cfg->rate);
- cmd_params->options = cpu_to_le64(cfg->options);
- cmd_params->advertising = cpu_to_le64(cfg->advertising);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_if_get_link_state - Return the link state
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSW object
- * @if_id: interface id
- * @state: link state
- *
- * @returns '0' on Success; Error code otherwise.
- */
-int dpdmux_if_get_link_state(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- struct dpdmux_link_state *state)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_cmd_if_get_link_state *cmd_params;
- struct dpdmux_rsp_if_get_link_state *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_LINK_STATE,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_if_get_link_state *)cmd.params;
- cmd_params->if_id = cpu_to_le16(if_id);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpdmux_rsp_if_get_link_state *)cmd.params;
- state->rate = le32_to_cpu(rsp_params->rate);
- state->options = le64_to_cpu(rsp_params->options);
- state->up = dpdmux_get_field(rsp_params->up, UP);
- state->state_valid = dpdmux_get_field(rsp_params->up, STATE_VALID);
- state->supported = le64_to_cpu(rsp_params->supported);
- state->advertising = le64_to_cpu(rsp_params->advertising);
-
- return 0;
-}
-
/**
* dpdmux_if_set_default - Set default interface
* @mc_io: Pointer to MC portal's I/O object
@@ -747,41 +125,6 @@ int dpdmux_if_set_default(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpdmux_if_get_default - Get default interface
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSW object
- * @if_id: interface id
- *
- * @returns '0' on Success; Error code otherwise.
- */
-int dpdmux_if_get_default(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t *if_id)
-{
- struct dpdmux_cmd_if *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_DEFAULT,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpdmux_cmd_if *)cmd.params;
- *if_id = le16_to_cpu(rsp_params->if_id);
-
- return 0;
-}
-
/**
* dpdmux_set_custom_key - Set a custom classification key.
*
@@ -859,71 +202,3 @@ int dpdmux_add_custom_cls_entry(struct fsl_mc_io *mc_io,
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
}
-
-/**
- * dpdmux_remove_custom_cls_entry - Removes a custom classification entry.
- *
- * This API is only available for DPDMUX instances created with
- * DPDMUX_METHOD_CUSTOM. The API can be used to remove classification
- * entries previously inserted using dpdmux_add_custom_cls_entry.
- *
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPSW object
- * @rule: Classification rule to remove
- *
- * @returns '0' on Success; Error code otherwise.
- */
-int dpdmux_remove_custom_cls_entry(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- struct dpdmux_rule_cfg *rule)
-{
- struct dpdmux_cmd_remove_custom_cls_entry *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_REMOVE_CUSTOM_CLS_ENTRY,
- cmd_flags,
- token);
- cmd_params = (struct dpdmux_cmd_remove_custom_cls_entry *)cmd.params;
- cmd_params->key_size = rule->key_size;
- cmd_params->key_iova = cpu_to_le64(rule->key_iova);
- cmd_params->mask_iova = cpu_to_le64(rule->mask_iova);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpdmux_get_api_version() - Get Data Path Demux API version
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver: Major version of data path demux API
- * @minor_ver: Minor version of data path demux API
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver)
-{
- struct mc_command cmd = { 0 };
- struct dpdmux_rsp_get_api_version *rsp_params;
- int err;
-
- cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_GET_API_VERSION,
- cmd_flags,
- 0);
-
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- rsp_params = (struct dpdmux_rsp_get_api_version *)cmd.params;
- *major_ver = le16_to_cpu(rsp_params->major);
- *minor_ver = le16_to_cpu(rsp_params->minor);
-
- return 0;
-}
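dpdmux_get_api_version(), removed just above, was the usual way to gate
optional MC features at runtime. A caller-side sketch (the wrapper name is
invented, and the threshold passed in would be whatever feature boundary
the caller cares about):

        static int demux_api_at_least(struct fsl_mc_io *mc_io,
                                      uint16_t maj, uint16_t min)
        {
                uint16_t major, minor;
                int err;

                err = dpdmux_get_api_version(mc_io, 0, &major, &minor);
                if (err)
                        return err;

                /* non-zero when the MC firmware API is new enough */
                return major > maj || (major == maj && minor >= min);
        }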
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index 683d7bcc17..ad4df05dfc 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -80,99 +80,6 @@ int dpni_close(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_create() - Create the DPNI object
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg: Configuration structure
- * @obj_id: Returned object id
- *
- * Create the DPNI object, allocate required resources and
- * perform required initialization.
- *
- * The object can be created either by declaring it in the
- * DPL file, or by calling this function.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpni_cfg *cfg,
- uint32_t *obj_id)
-{
- struct dpni_cmd_create *cmd_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_CREATE,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dpni_cmd_create *)cmd.params;
- cmd_params->options = cpu_to_le32(cfg->options);
- cmd_params->num_queues = cfg->num_queues;
- cmd_params->num_tcs = cfg->num_tcs;
- cmd_params->mac_filter_entries = cfg->mac_filter_entries;
- cmd_params->num_rx_tcs = cfg->num_rx_tcs;
- cmd_params->vlan_filter_entries = cfg->vlan_filter_entries;
- cmd_params->qos_entries = cfg->qos_entries;
- cmd_params->fs_entries = cpu_to_le16(cfg->fs_entries);
- cmd_params->num_cgs = cfg->num_cgs;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- *obj_id = mc_cmd_read_object_id(&cmd);
-
- return 0;
-}
-
-/**
- * dpni_destroy() - Destroy the DPNI object and release all its resources.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id: The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpni_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id)
-{
- struct dpni_cmd_destroy *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_DESTROY,
- cmd_flags,
- dprc_token);
- /* set object id to destroy */
- cmd_params = (struct dpni_cmd_destroy *)cmd.params;
- cmd_params->dpsw_id = cpu_to_le32(object_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpni_set_pools() - Set buffer pools configuration
* @mc_io: Pointer to MC portal's I/O object
@@ -356,47 +263,6 @@ int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_get_irq_enable() - Get overall interrupt state
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @irq_index: The interrupt index to configure
- * @en: Returned interrupt state - enable = 1, disable = 0
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t irq_index,
- uint8_t *en)
-{
- struct mc_command cmd = { 0 };
- struct dpni_cmd_get_irq_enable *cmd_params;
- struct dpni_rsp_get_irq_enable *rsp_params;
-
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_ENABLE,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_get_irq_enable *)cmd.params;
- cmd_params->irq_index = irq_index;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_irq_enable *)cmd.params;
- *en = dpni_get_field(rsp_params->enabled, ENABLE);
-
- return 0;
-}
-
/**
* dpni_set_irq_mask() - Set interrupt mask.
* @mc_io: Pointer to MC portal's I/O object
@@ -434,49 +300,6 @@ int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_get_irq_mask() - Get interrupt mask.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @irq_index: The interrupt index to configure
- * @mask: Returned event mask to trigger interrupt
- *
- * Every interrupt can have up to 32 causes and the interrupt model supports
- * masking/unmasking each cause independently
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t irq_index,
- uint32_t *mask)
-{
- struct mc_command cmd = { 0 };
- struct dpni_cmd_get_irq_mask *cmd_params;
- struct dpni_rsp_get_irq_mask *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_MASK,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_get_irq_mask *)cmd.params;
- cmd_params->irq_index = irq_index;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_irq_mask *)cmd.params;
- *mask = le32_to_cpu(rsp_params->mask);
-
- return 0;
-}
-
/**
* dpni_get_irq_status() - Get the current status of any pending interrupts.
* @mc_io: Pointer to MC portal's I/O object
@@ -633,57 +456,6 @@ int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_get_buffer_layout() - Retrieve buffer layout attributes.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @qtype: Type of queue to retrieve configuration for
- * @layout: Returns buffer layout attributes
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_buffer_layout(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_queue_type qtype,
- struct dpni_buffer_layout *layout)
-{
- struct mc_command cmd = { 0 };
- struct dpni_cmd_get_buffer_layout *cmd_params;
- struct dpni_rsp_get_buffer_layout *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_BUFFER_LAYOUT,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_get_buffer_layout *)cmd.params;
- cmd_params->qtype = qtype;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_buffer_layout *)cmd.params;
- layout->pass_timestamp =
- (int)dpni_get_field(rsp_params->flags, PASS_TS);
- layout->pass_parser_result =
- (int)dpni_get_field(rsp_params->flags, PASS_PR);
- layout->pass_frame_status =
- (int)dpni_get_field(rsp_params->flags, PASS_FS);
- layout->pass_sw_opaque =
- (int)dpni_get_field(rsp_params->flags, PASS_SWO);
- layout->private_data_size = le16_to_cpu(rsp_params->private_data_size);
- layout->data_align = le16_to_cpu(rsp_params->data_align);
- layout->data_head_room = le16_to_cpu(rsp_params->head_room);
- layout->data_tail_room = le16_to_cpu(rsp_params->tail_room);
-
- return 0;
-}
-
/**
* dpni_set_buffer_layout() - Set buffer layout configuration.
* @mc_io: Pointer to MC portal's I/O object
@@ -758,50 +530,6 @@ int dpni_set_offload(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_get_offload() - Get DPNI offload configuration.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @type: Type of DPNI offload
- * @config: Offload configuration.
- * For checksum offloads, a value of 1 indicates that the
- * offload is enabled.
- *
- * Return: '0' on Success; Error code otherwise.
- *
- * @warning Allowed only when DPNI is disabled
- */
-int dpni_get_offload(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_offload type,
- uint32_t *config)
-{
- struct mc_command cmd = { 0 };
- struct dpni_cmd_get_offload *cmd_params;
- struct dpni_rsp_get_offload *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_OFFLOAD,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_get_offload *)cmd.params;
- cmd_params->dpni_offload = type;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_offload *)cmd.params;
- *config = le32_to_cpu(rsp_params->config);
-
- return 0;
-}
-
/**
* dpni_get_qdid() - Get the Queuing Destination ID (QDID) that should be used
* for enqueue operations
@@ -844,41 +572,6 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
return 0;
}
-/**
- * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @data_offset: Tx data offset (from start of buffer)
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t *data_offset)
-{
- struct mc_command cmd = { 0 };
- struct dpni_rsp_get_tx_data_offset *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_DATA_OFFSET,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_tx_data_offset *)cmd.params;
- *data_offset = le16_to_cpu(rsp_params->data_offset);
-
- return 0;
-}
-
/**
* dpni_set_link_cfg() - set the link configuration.
* @mc_io: Pointer to MC portal's I/O object
@@ -978,42 +671,6 @@ int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_get_max_frame_length() - Get the maximum received frame length.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @max_frame_length: Maximum received frame length (in bytes);
- * frame is discarded if its length exceeds this value
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t *max_frame_length)
-{
- struct mc_command cmd = { 0 };
- struct dpni_rsp_get_max_frame_length *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MAX_FRAME_LENGTH,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_max_frame_length *)cmd.params;
- *max_frame_length = le16_to_cpu(rsp_params->max_frame_length);
-
- return 0;
-}
-
/**
* dpni_set_multicast_promisc() - Enable/disable multicast promiscuous mode
* @mc_io: Pointer to MC portal's I/O object
@@ -1042,41 +699,6 @@ int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_get_multicast_promisc() - Get multicast promiscuous mode
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @en: Returns '1' if enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct mc_command cmd = { 0 };
- struct dpni_rsp_get_multicast_promisc *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MCAST_PROMISC,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_multicast_promisc *)cmd.params;
- *en = dpni_get_field(rsp_params->enabled, ENABLE);
-
- return 0;
-}
-
/**
* dpni_set_unicast_promisc() - Enable/disable unicast promiscuous mode
* @mc_io: Pointer to MC portal's I/O object
@@ -1096,48 +718,13 @@ int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
/* prepare command */
cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_UNICAST_PROMISC,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_set_unicast_promisc *)cmd.params;
- dpni_set_field(cmd_params->enable, ENABLE, en);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_unicast_promisc() - Get unicast promiscuous mode
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @en: Returns '1' if enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct mc_command cmd = { 0 };
- struct dpni_rsp_get_unicast_promisc *rsp_params;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_UNICAST_PROMISC,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_unicast_promisc *)cmd.params;
- *en = dpni_get_field(rsp_params->enabled, ENABLE);
+ cmd_flags,
+ token);
+ cmd_params = (struct dpni_cmd_set_unicast_promisc *)cmd.params;
+ dpni_set_field(cmd_params->enable, ENABLE, en);
- return 0;
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
}
/**
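The dpni_set_field()/dpni_get_field() helpers used throughout this file are
plain shift-and-mask macros over per-field SHIFT/SIZE constants defined in
fsl_dpni_cmd.h. An illustrative reconstruction of that pattern (the EX_*
names are invented for the example; consult the header for the real
definitions):

        #define EX_ENABLE_SHIFT 0
        #define EX_ENABLE_SIZE  1
        #define EX_MASK(f)      ((uint64_t)((1ULL << EX_##f##_SIZE) - 1) \
                                 << EX_##f##_SHIFT)
        #define ex_set_field(var, f, val) \
                ((var) |= (((uint64_t)(val) << EX_##f##_SHIFT) & EX_MASK(f)))
        #define ex_get_field(var, f) \
                (((var) & EX_MASK(f)) >> EX_##f##_SHIFT)

        /* ex_set_field(flags, ENABLE, 1) sets bit 0;
         * ex_get_field(flags, ENABLE) reads it back. */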
@@ -1281,39 +868,6 @@ int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_clear_mac_filters() - Clear all unicast and/or multicast MAC filters
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @unicast: Set to '1' to clear unicast addresses
- * @multicast: Set to '1' to clear multicast addresses
- *
- * The primary MAC address is not cleared by this operation.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int unicast,
- int multicast)
-{
- struct mc_command cmd = { 0 };
- struct dpni_cmd_clear_mac_filters *cmd_params;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_MAC_FILTERS,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_clear_mac_filters *)cmd.params;
- dpni_set_field(cmd_params->flags, UNICAST_FILTERS, unicast);
- dpni_set_field(cmd_params->flags, MULTICAST_FILTERS, multicast);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpni_get_port_mac_addr() - Retrieve MAC address associated to the physical
* port the DPNI is attached to
@@ -1453,29 +1007,6 @@ int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_clear_vlan_filters() - Clear all VLAN filters
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_VLAN_FILTERS,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpni_set_rx_tc_dist() - Set Rx traffic class distribution configuration
* @mc_io: Pointer to MC portal's I/O object
@@ -1675,32 +1206,6 @@ int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_clear_qos_table() - Clear all QoS mapping entries
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- *
- * Following this function call, all frames are directed to
- * the default traffic class (0)
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_QOS_TBL,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpni_add_fs_entry() - Add Flow Steering entry for a specific traffic class
* (to select a flow ID)
@@ -1779,35 +1284,6 @@ int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_clear_fs_entries() - Clear all Flow Steering entries of a specific
- * traffic class
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @tc_id: Traffic class selection (0-7)
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t tc_id)
-{
- struct dpni_cmd_clear_fs_entries *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_FS_ENT,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_clear_fs_entries *)cmd.params;
- cmd_params->tc_id = tc_id;
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dpni_set_congestion_notification() - Set traffic class congestion
* notification configuration
@@ -1858,94 +1334,6 @@ int dpni_set_congestion_notification(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_get_congestion_notification() - Get traffic class congestion
- * notification configuration
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @qtype: Type of queue - Rx, Tx and Tx confirm types are supported
- * @tc_id: Traffic class selection (0-7)
- * @cfg: congestion notification configuration
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpni_get_congestion_notification(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_queue_type qtype,
- uint8_t tc_id,
- struct dpni_congestion_notification_cfg *cfg)
-{
- struct dpni_rsp_get_congestion_notification *rsp_params;
- struct dpni_cmd_get_congestion_notification *cmd_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(
- DPNI_CMDID_GET_CONGESTION_NOTIFICATION,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_get_congestion_notification *)cmd.params;
- cmd_params->qtype = qtype;
- cmd_params->tc = tc_id;
- cmd_params->congestion_point = cfg->cg_point;
- cmd_params->cgid = cfg->cgid;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- rsp_params = (struct dpni_rsp_get_congestion_notification *)cmd.params;
- cfg->units = dpni_get_field(rsp_params->type_units, CONG_UNITS);
- cfg->threshold_entry = le32_to_cpu(rsp_params->threshold_entry);
- cfg->threshold_exit = le32_to_cpu(rsp_params->threshold_exit);
- cfg->message_ctx = le64_to_cpu(rsp_params->message_ctx);
- cfg->message_iova = le64_to_cpu(rsp_params->message_iova);
- cfg->notification_mode = le16_to_cpu(rsp_params->notification_mode);
- cfg->dest_cfg.dest_id = le32_to_cpu(rsp_params->dest_id);
- cfg->dest_cfg.priority = rsp_params->dest_priority;
- cfg->dest_cfg.dest_type = dpni_get_field(rsp_params->type_units,
- DEST_TYPE);
-
- return 0;
-}
-
-/**
- * dpni_get_api_version() - Get Data Path Network Interface API version
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver: Major version of data path network interface API
- * @minor_ver: Minor version of data path network interface API
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver)
-{
- struct dpni_rsp_get_api_version *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_API_VERSION,
- cmd_flags,
- 0);
-
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- rsp_params = (struct dpni_rsp_get_api_version *)cmd.params;
- *major_ver = le16_to_cpu(rsp_params->major);
- *minor_ver = le16_to_cpu(rsp_params->minor);
-
- return 0;
-}
-
/**
* dpni_set_queue() - Set queue parameters
* @mc_io: Pointer to MC portal's I/O object
@@ -2184,67 +1572,6 @@ int dpni_set_taildrop(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_get_taildrop() - Get taildrop information
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @cg_point: Congestion point
- * @q_type: Queue type on which the taildrop is configured.
- * Only Rx queues are supported for now
- * @tc: Traffic class to apply this taildrop to
- * @q_index: Index of the queue if the DPNI supports multiple queues for
- * traffic distribution. Ignored if CONGESTION_POINT is not 0.
- * @taildrop: Taildrop structure
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_taildrop(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_congestion_point cg_point,
- enum dpni_queue_type qtype,
- uint8_t tc,
- uint8_t index,
- struct dpni_taildrop *taildrop)
-{
- struct mc_command cmd = { 0 };
- struct dpni_cmd_get_taildrop *cmd_params;
- struct dpni_rsp_get_taildrop *rsp_params;
- uint8_t oal_lo, oal_hi;
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TAILDROP,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_get_taildrop *)cmd.params;
- cmd_params->congestion_point = cg_point;
- cmd_params->qtype = qtype;
- cmd_params->tc = tc;
- cmd_params->index = index;
-
- /* send command to mc */
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_taildrop *)cmd.params;
- taildrop->enable = dpni_get_field(rsp_params->enable_oal_lo, ENABLE);
- taildrop->units = rsp_params->units;
- taildrop->threshold = le32_to_cpu(rsp_params->threshold);
- oal_lo = dpni_get_field(rsp_params->enable_oal_lo, OAL_LO);
- oal_hi = dpni_get_field(rsp_params->oal_hi, OAL_HI);
- taildrop->oal = oal_hi << DPNI_OAL_LO_SIZE | oal_lo;
-
- /* Fill the first 4 bits, 'oal' is a 2's complement value of 12 bits */
- if (taildrop->oal >= 0x0800)
- taildrop->oal |= 0xF000;
-
- return 0;
-}
-
/**
* dpni_set_opr() - Set Order Restoration configuration.
* @mc_io: Pointer to MC portal's I/O object
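The OAL handling in dpni_get_taildrop() above reassembles a 12-bit
two's-complement value from the OAL_LO/OAL_HI bit-fields and then
sign-extends it by hand into 16 bits. A worked example of that last step:

        uint16_t oal = 0x0FF4;  /* OAL_HI:OAL_LO recombined; -12 in 12 bits */

        if (oal >= 0x0800)      /* bit 11 set means the value is negative */
                oal |= 0xF000;  /* replicate the sign into bits 15..12 */

        /* oal is now 0xFFF4, i.e. -12 when read as int16_t */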
@@ -2290,69 +1617,6 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
-/**
- * dpni_get_opr() - Retrieve Order Restoration config and query.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @tc: Traffic class, in range 0 to NUM_TCS - 1
- * @index: Selects the specific queue out of the set allocated
- * for the same TC. Value must be in range 0 to
- * NUM_QUEUES - 1
- * @cfg: Returned OPR configuration
- * @qry: Returned OPR query
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dpni_get_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t tc,
- uint8_t index,
- struct opr_cfg *cfg,
- struct opr_qry *qry)
-{
- struct dpni_rsp_get_opr *rsp_params;
- struct dpni_cmd_get_opr *cmd_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_OPR,
- cmd_flags,
- token);
- cmd_params = (struct dpni_cmd_get_opr *)cmd.params;
- cmd_params->index = index;
- cmd_params->tc_id = tc;
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dpni_rsp_get_opr *)cmd.params;
- cfg->oloe = rsp_params->oloe;
- cfg->oeane = rsp_params->oeane;
- cfg->olws = rsp_params->olws;
- cfg->oa = rsp_params->oa;
- cfg->oprrws = rsp_params->oprrws;
- qry->rip = dpni_get_field(rsp_params->flags, RIP);
- qry->enable = dpni_get_field(rsp_params->flags, OPR_ENABLE);
- qry->nesn = le16_to_cpu(rsp_params->nesn);
- qry->ndsn = le16_to_cpu(rsp_params->ndsn);
- qry->ea_tseq = le16_to_cpu(rsp_params->ea_tseq);
- qry->tseq_nlis = dpni_get_field(rsp_params->tseq_nlis, TSEQ_NLIS);
- qry->ea_hseq = le16_to_cpu(rsp_params->ea_hseq);
- qry->hseq_nlis = dpni_get_field(rsp_params->hseq_nlis, HSEQ_NLIS);
- qry->ea_hptr = le16_to_cpu(rsp_params->ea_hptr);
- qry->ea_tptr = le16_to_cpu(rsp_params->ea_tptr);
- qry->opr_vid = le16_to_cpu(rsp_params->opr_vid);
- qry->opr_id = le16_to_cpu(rsp_params->opr_id);
-
- return 0;
-}
-
/**
* dpni_set_rx_fs_dist() - Set Rx traffic class FS distribution
* @mc_io: Pointer to MC portal's I/O object
@@ -2567,73 +1831,3 @@ int dpni_enable_sw_sequence(struct fsl_mc_io *mc_io,
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
}
-
-/**
- * dpni_get_sw_sequence_layout() - Get the soft sequence layout
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @src: Source of the layout (WRIOP Rx or Tx)
- * @ss_layout_iova: I/O virtual address of 264 bytes DMA-able memory
- *
- * warning: After calling this function, call dpni_extract_sw_sequence_layout()
- * to get the layout.
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpni_get_sw_sequence_layout(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_soft_sequence_dest src,
- uint64_t ss_layout_iova)
-{
- struct dpni_get_sw_sequence_layout *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SW_SEQUENCE_LAYOUT,
- cmd_flags,
- token);
-
- cmd_params = (struct dpni_get_sw_sequence_layout *)cmd.params;
- cmd_params->src = src;
- cmd_params->layout_iova = cpu_to_le64(ss_layout_iova);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_extract_sw_sequence_layout() - extract the software sequence layout
- * @layout: software sequence layout
- * @sw_sequence_layout_buf: Zeroed 264 bytes of memory before mapping it
- * to DMA
- *
- * This function has to be called after dpni_get_sw_sequence_layout
- *
- */
-void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
- const uint8_t *sw_sequence_layout_buf)
-{
- const struct dpni_sw_sequence_layout_entry *ext_params;
- int i;
- uint16_t ss_size, ss_offset;
-
- ext_params = (const struct dpni_sw_sequence_layout_entry *)
- sw_sequence_layout_buf;
-
- for (i = 0; i < DPNI_SW_SEQUENCE_LAYOUT_SIZE; i++) {
- ss_offset = le16_to_cpu(ext_params[i].ss_offset);
- ss_size = le16_to_cpu(ext_params[i].ss_size);
-
- if (ss_offset == 0 && ss_size == 0) {
- layout->num_ss = i;
- return;
- }
-
- layout->ss[i].ss_offset = ss_offset;
- layout->ss[i].ss_size = ss_size;
- layout->ss[i].param_offset = ext_params[i].param_offset;
- layout->ss[i].param_size = ext_params[i].param_size;
- }
-}
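The two functions at the end of this file were designed as a pair:
dpni_get_sw_sequence_layout() asks the MC to DMA the 264-byte layout table
into caller-provided memory, and dpni_extract_sw_sequence_layout() then
parses the little-endian entries, setting num_ss when it reaches the first
all-zero terminator. A usage sketch (iova_of() stands in for whatever IOVA
mapping the caller uses, and real code must hand the MC a DMA-able buffer,
not a plain stack array):

        static int read_ss_layout(struct fsl_mc_io *mc_io, uint16_t token,
                                  enum dpni_soft_sequence_dest src,
                                  struct dpni_sw_sequence_layout *layout)
        {
                uint8_t buf[264] = { 0 };       /* zeroed before mapping */
                int err;

                err = dpni_get_sw_sequence_layout(mc_io, 0, token, src,
                                                  iova_of(buf));
                if (err)
                        return err;

                /* valid entries are now in layout->ss[0..num_ss-1] */
                dpni_extract_sw_sequence_layout(layout, buf);
                return 0;
        }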
diff --git a/drivers/net/dpaa2/mc/dprtc.c b/drivers/net/dpaa2/mc/dprtc.c
index 42ac89150e..96e20bce81 100644
--- a/drivers/net/dpaa2/mc/dprtc.c
+++ b/drivers/net/dpaa2/mc/dprtc.c
@@ -54,213 +54,6 @@ int dprtc_open(struct fsl_mc_io *mc_io,
return err;
}
-/**
- * dprtc_close() - Close the control session of the object
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- *
- * After this function is called, no further operations are
- * allowed on the object without opening a new control session.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_close(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_CLOSE, cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_create() - Create the DPRTC object.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @cfg: Configuration structure
- * @obj_id: Returned object id
- *
- * Create the DPRTC object, allocate required resources and
- * perform required initialization.
- *
- * The function accepts an authentication token of a parent
- * container that this object should be assigned to. The token
- * can be '0' so the object will be assigned to the default container.
- * The newly created object can be opened with the returned
- * object id and using the container's associated tokens and MC portals.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dprtc_cfg *cfg,
- uint32_t *obj_id)
-{
- struct mc_command cmd = { 0 };
- int err;
-
- (void)(cfg); /* unused */
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_CREATE,
- cmd_flags,
- dprc_token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- *obj_id = mc_cmd_read_object_id(&cmd);
-
- return 0;
-}
-
-/**
- * dprtc_destroy() - Destroy the DPRTC object and release all its resources.
- * @mc_io: Pointer to MC portal's I/O object
- * @dprc_token: Parent container token; '0' for default container
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @object_id: The object id; it must be a valid id within the container that
- * created this object;
- *
- * The function accepts the authentication token of the parent container that
- * created the object (not the one that currently owns the object). The object
- * is searched within parent using the provided 'object_id'.
- * All tokens to the object must be closed before calling destroy.
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dprtc_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id)
-{
- struct dprtc_cmd_destroy *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_DESTROY,
- cmd_flags,
- dprc_token);
- cmd_params = (struct dprtc_cmd_destroy *)cmd.params;
- cmd_params->object_id = cpu_to_le32(object_id);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_enable() - Enable the DPRTC.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_ENABLE, cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_disable() - Disable the DPRTC.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_DISABLE,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_is_enabled() - Check if the DPRTC is enabled.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- * @en: Returns '1' if object is enabled; '0' otherwise
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en)
-{
- struct dprtc_rsp_is_enabled *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_IS_ENABLED, cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dprtc_rsp_is_enabled *)cmd.params;
- *en = dprtc_get_field(rsp_params->en, ENABLE);
-
- return 0;
-}
-
-/**
- * dprtc_reset() - Reset the DPRTC, returns the object to initial state.
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token)
-{
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_RESET,
- cmd_flags,
- token);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
/**
* dprtc_get_attributes - Retrieve DPRTC attributes.
*
@@ -299,101 +92,6 @@ int dprtc_get_attributes(struct fsl_mc_io *mc_io,
return 0;
}
-/**
- * dprtc_set_clock_offset() - Sets the clock's offset
- * (usually relative to another clock).
- *
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- * @offset: New clock offset (in nanoseconds).
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_set_clock_offset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int64_t offset)
-{
- struct dprtc_cmd_set_clock_offset *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_CLOCK_OFFSET,
- cmd_flags,
- token);
- cmd_params = (struct dprtc_cmd_set_clock_offset *)cmd.params;
- cmd_params->offset = cpu_to_le64(offset);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_set_freq_compensation() - Sets a new frequency compensation value.
- *
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- * @freq_compensation: The new frequency compensation value to set.
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_set_freq_compensation(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint32_t freq_compensation)
-{
- struct dprtc_get_freq_compensation *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_FREQ_COMPENSATION,
- cmd_flags,
- token);
- cmd_params = (struct dprtc_get_freq_compensation *)cmd.params;
- cmd_params->freq_compensation = cpu_to_le32(freq_compensation);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_get_freq_compensation() - Retrieves the frequency compensation value
- *
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- * @freq_compensation: Frequency compensation value
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_get_freq_compensation(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint32_t *freq_compensation)
-{
- struct dprtc_get_freq_compensation *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_FREQ_COMPENSATION,
- cmd_flags,
- token);
-
- /* send command to mc*/
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- /* retrieve response parameters */
- rsp_params = (struct dprtc_get_freq_compensation *)cmd.params;
- *freq_compensation = le32_to_cpu(rsp_params->freq_compensation);
-
- return 0;
-}
-
/**
* dprtc_get_time() - Returns the current RTC time.
*
@@ -458,66 +156,3 @@ int dprtc_set_time(struct fsl_mc_io *mc_io,
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
}
-
-/**
- * dprtc_set_alarm() - Defines and sets alarm.
- *
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- * @time: In nanoseconds, the time when the alarm
- * should go off - must be a multiple of
- * 1 microsecond
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_set_alarm(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token, uint64_t time)
-{
- struct dprtc_time *cmd_params;
- struct mc_command cmd = { 0 };
-
- /* prepare command */
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_ALARM,
- cmd_flags,
- token);
- cmd_params = (struct dprtc_time *)cmd.params;
- cmd_params->time = cpu_to_le64(time);
-
- /* send command to mc*/
- return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dprtc_get_api_version() - Get Data Path Real Time Counter API version
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver: Major version of data path real time counter API
- * @minor_ver: Minor version of data path real time counter API
- *
- * Return: '0' on Success; Error code otherwise.
- */
-int dprtc_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver)
-{
- struct dprtc_rsp_get_api_version *rsp_params;
- struct mc_command cmd = { 0 };
- int err;
-
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_API_VERSION,
- cmd_flags,
- 0);
-
- err = mc_send_command(mc_io, &cmd);
- if (err)
- return err;
-
- rsp_params = (struct dprtc_rsp_get_api_version *)cmd.params;
- *major_ver = le16_to_cpu(rsp_params->major);
- *minor_ver = le16_to_cpu(rsp_params->minor);
-
- return 0;
-}
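Per its comment, dprtc_set_alarm() above only accepts alarm times that are
a multiple of 1 microsecond. A caller-side guard enforcing that before the
portal round-trip (sketch; the wrapper name is invented and -EINVAL assumes
the usual errno constants are available):

        static int rtc_set_alarm_checked(struct fsl_mc_io *mc_io,
                                         uint16_t token, uint64_t time_ns)
        {
                if (time_ns % 1000ULL)  /* must be a whole microsecond */
                        return -EINVAL;

                return dprtc_set_alarm(mc_io, 0, token, time_ns);
        }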
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index accd1ef5c1..eb768fafbb 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -21,10 +21,6 @@ int dpdmux_open(struct fsl_mc_io *mc_io,
int dpdmux_id,
uint16_t *token);
-int dpdmux_close(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* DPDMUX general options
*/
@@ -102,34 +98,6 @@ struct dpdmux_cfg {
} adv;
};
-int dpdmux_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpdmux_cfg *cfg,
- uint32_t *obj_id);
-
-int dpdmux_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id);
-
-int dpdmux_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
-int dpdmux_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
-int dpdmux_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
-int dpdmux_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* struct dpdmux_attr - Structure representing DPDMUX attributes
* @id: DPDMUX object ID
@@ -153,11 +121,6 @@ int dpdmux_get_attributes(struct fsl_mc_io *mc_io,
uint16_t token,
struct dpdmux_attr *attr);
-int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t max_frame_length);
-
/**
* enum dpdmux_counter_type - Counter types
* @DPDMUX_CNT_ING_FRAME: Counts ingress frames
@@ -223,12 +186,6 @@ struct dpdmux_accepted_frames {
enum dpdmux_action unaccept_act;
};
-int dpdmux_if_set_accepted_frames(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- const struct dpdmux_accepted_frames *cfg);
-
/**
* struct dpdmux_if_attr - Structure representing frame types configuration
* @rate: Configured interface rate (in bits per second)
@@ -242,22 +199,6 @@ struct dpdmux_if_attr {
enum dpdmux_accepted_frames_type accept_frame_type;
};
-int dpdmux_if_get_attributes(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- struct dpdmux_if_attr *attr);
-
-int dpdmux_if_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id);
-
-int dpdmux_if_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id);
-
/**
* struct dpdmux_l2_rule - Structure representing L2 rule
* @mac_addr: MAC address
@@ -268,29 +209,6 @@ struct dpdmux_l2_rule {
uint16_t vlan_id;
};
-int dpdmux_if_remove_l2_rule(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- const struct dpdmux_l2_rule *rule);
-
-int dpdmux_if_add_l2_rule(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- const struct dpdmux_l2_rule *rule);
-
-int dpdmux_if_get_counter(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- enum dpdmux_counter_type counter_type,
- uint64_t *counter);
-
-int dpdmux_ul_reset_counters(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* Enable auto-negotiation
*/
@@ -319,11 +237,6 @@ struct dpdmux_link_cfg {
uint64_t advertising;
};
-int dpdmux_if_set_link_cfg(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- struct dpdmux_link_cfg *cfg);
/**
* struct dpdmux_link_state - Structure representing DPDMUX link state
* @rate: Rate
@@ -342,22 +255,11 @@ struct dpdmux_link_state {
uint64_t advertising;
};
-int dpdmux_if_get_link_state(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t if_id,
- struct dpdmux_link_state *state);
-
int dpdmux_if_set_default(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
uint16_t if_id);
-int dpdmux_if_get_default(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t *if_id);
-
int dpdmux_set_custom_key(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -397,14 +299,4 @@ int dpdmux_add_custom_cls_entry(struct fsl_mc_io *mc_io,
struct dpdmux_rule_cfg *rule,
struct dpdmux_cls_action *action);
-int dpdmux_remove_custom_cls_entry(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- struct dpdmux_rule_cfg *rule);
-
-int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver);
-
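For reference, dpdmux_if_get_counter() was the only accessor in this header
for per-interface statistics. A minimal call sketch, assuming a token already
obtained from dpdmux_open() and the usual CMD_PRI_LOW priority flag from the
MC command helpers (this is a fragment, not a complete program):

	uint64_t frames = 0;

	/* read the ingress frame count for interface 0; the enum value
	 * comes from enum dpdmux_counter_type above
	 */
	if (dpdmux_if_get_counter(mc_io, CMD_PRI_LOW, token, 0,
				  DPDMUX_CNT_ING_FRAME, &frames) == 0)
		printf("ingress frames: %" PRIu64 "\n", frames);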
#endif /* __FSL_DPDMUX_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index 598911ddd1..2e2012d0bf 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -185,17 +185,6 @@ struct dpni_cfg {
uint8_t num_cgs;
};
-int dpni_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dpni_cfg *cfg,
- uint32_t *obj_id);
-
-int dpni_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id);
-
/**
* struct dpni_pools_cfg - Structure representing buffer pools configuration
* @num_dpbp: Number of DPBPs
@@ -265,24 +254,12 @@ int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
uint8_t irq_index,
uint8_t en);
-int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t irq_index,
- uint8_t *en);
-
int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
uint8_t irq_index,
uint32_t mask);
-int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t irq_index,
- uint32_t *mask);
-
int dpni_get_irq_status(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -495,12 +472,6 @@ enum dpni_queue_type {
DPNI_QUEUE_RX_ERR,
};
-int dpni_get_buffer_layout(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_queue_type qtype,
- struct dpni_buffer_layout *layout);
-
int dpni_set_buffer_layout(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -530,23 +501,12 @@ int dpni_set_offload(struct fsl_mc_io *mc_io,
enum dpni_offload type,
uint32_t config);
-int dpni_get_offload(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_offload type,
- uint32_t *config);
-
int dpni_get_qdid(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
enum dpni_queue_type qtype,
uint16_t *qdid);
-int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t *data_offset);
-
#define DPNI_STATISTICS_CNT 7
/**
@@ -736,11 +696,6 @@ int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
uint16_t token,
uint16_t max_frame_length);
-int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint16_t *max_frame_length);
-
int dpni_set_mtu(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -756,21 +711,11 @@ int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
uint16_t token,
int en);
-int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
int en);
-int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -794,12 +739,6 @@ int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
uint16_t token,
const uint8_t mac_addr[6]);
-int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int unicast,
- int multicast);
-
int dpni_get_port_mac_addr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -828,10 +767,6 @@ int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
uint16_t token,
uint16_t vlan_id);
-int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* enum dpni_dist_mode - DPNI distribution mode
* @DPNI_DIST_MODE_NONE: No distribution
@@ -1042,13 +977,6 @@ int dpni_set_congestion_notification(struct fsl_mc_io *mc_io,
uint8_t tc_id,
const struct dpni_congestion_notification_cfg *cfg);
-int dpni_get_congestion_notification(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_queue_type qtype,
- uint8_t tc_id,
- struct dpni_congestion_notification_cfg *cfg);
-
/* DPNI FLC stash options */
/**
@@ -1212,10 +1140,6 @@ int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
uint16_t token,
const struct dpni_rule_cfg *cfg);
-int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* Discard matching traffic. If set, this takes precedence over any other
* configuration and matching traffic is always discarded.
@@ -1273,16 +1197,6 @@ int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
uint8_t tc_id,
const struct dpni_rule_cfg *cfg);
-int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t tc_id);
-
-int dpni_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver);
-
/**
* Set User Context
*/
@@ -1372,15 +1286,6 @@ int dpni_set_taildrop(struct fsl_mc_io *mc_io,
uint8_t q_index,
struct dpni_taildrop *taildrop);
-int dpni_get_taildrop(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_congestion_point cg_point,
- enum dpni_queue_type q_type,
- uint8_t tc,
- uint8_t q_index,
- struct dpni_taildrop *taildrop);
-
int dpni_set_opr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -1389,14 +1294,6 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
uint8_t options,
struct opr_cfg *cfg);
-int dpni_get_opr(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint8_t tc,
- uint8_t index,
- struct opr_cfg *cfg,
- struct opr_qry *qry);
-
/**
* When used for queue_idx in function dpni_set_rx_dist_default_queue will
* signal to dpni to drop all unclassified frames
@@ -1550,35 +1447,4 @@ struct dpni_sw_sequence_layout {
} ss[DPNI_SW_SEQUENCE_LAYOUT_SIZE];
};
-/**
- * dpni_get_sw_sequence_layout() - Get the soft sequence layout
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPNI object
- * @src: Source of the layout (WRIOP Rx or Tx)
- * @ss_layout_iova: I/O virtual address of 264 bytes DMA-able memory
- *
- * warning: After calling this function, call dpni_extract_sw_sequence_layout()
- * to get the layout
- *
- * Return: '0' on Success; error code otherwise.
- */
-int dpni_get_sw_sequence_layout(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- enum dpni_soft_sequence_dest src,
- uint64_t ss_layout_iova);
-
-/**
- * dpni_extract_sw_sequence_layout() - extract the software sequence layout
- * @layout: software sequence layout
- * @sw_sequence_layout_buf: Zeroed 264 bytes of memory before mapping it
- * to DMA
- *
- * This function has to be called after dpni_get_sw_sequence_layout
- *
- */
-void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
- const uint8_t *sw_sequence_layout_buf);
-
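The two soft-sequence helpers removed above were meant to be used as a pair:
dpni_get_sw_sequence_layout() DMAs the raw 264-byte layout into
caller-supplied memory, and dpni_extract_sw_sequence_layout() then parses
that buffer into struct dpni_sw_sequence_layout. A hedged usage sketch (the
allocation/IOVA helpers and the DPNI_SS_INGRESS enum value are placeholders,
not real DPAA2 APIs):

	struct dpni_sw_sequence_layout layout;
	/* 264 bytes of zeroed, DMA-able memory, per the doc above */
	uint8_t *buf = alloc_dma_zeroed(264);	/* placeholder helper */
	uint64_t iova = virt_to_iova(buf);	/* placeholder helper */

	if (dpni_get_sw_sequence_layout(mc_io, cmd_flags, token,
					DPNI_SS_INGRESS, iova) == 0)
		dpni_extract_sw_sequence_layout(&layout, buf);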
#endif /* __FSL_DPNI_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc.h b/drivers/net/dpaa2/mc/fsl_dprtc.h
index 49edb5a050..d8be107ef1 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc.h
@@ -16,10 +16,6 @@ int dprtc_open(struct fsl_mc_io *mc_io,
int dprtc_id,
uint16_t *token);
-int dprtc_close(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
/**
* struct dprtc_cfg - Structure representing DPRTC configuration
* @options: place holder
@@ -28,49 +24,6 @@ struct dprtc_cfg {
uint32_t options;
};
-int dprtc_create(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- const struct dprtc_cfg *cfg,
- uint32_t *obj_id);
-
-int dprtc_destroy(struct fsl_mc_io *mc_io,
- uint16_t dprc_token,
- uint32_t cmd_flags,
- uint32_t object_id);
-
-int dprtc_enable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
-int dprtc_disable(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
-int dprtc_is_enabled(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int *en);
-
-int dprtc_reset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token);
-
-int dprtc_set_clock_offset(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- int64_t offset);
-
-int dprtc_set_freq_compensation(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint32_t freq_compensation);
-
-int dprtc_get_freq_compensation(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint32_t *freq_compensation);
-
int dprtc_get_time(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -81,11 +34,6 @@ int dprtc_set_time(struct fsl_mc_io *mc_io,
uint16_t token,
uint64_t time);
-int dprtc_set_alarm(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t token,
- uint64_t time);
-
/**
* struct dprtc_attr - Structure representing DPRTC attributes
* @id: DPRTC object ID
@@ -101,9 +49,4 @@ int dprtc_get_attributes(struct fsl_mc_io *mc_io,
uint16_t token,
struct dprtc_attr *attr);
-int dprtc_get_api_version(struct fsl_mc_io *mc_io,
- uint32_t cmd_flags,
- uint16_t *major_ver,
- uint16_t *minor_ver);
-
#endif /* __FSL_DPRTC_H */
diff --git a/drivers/net/e1000/base/e1000_82542.c b/drivers/net/e1000/base/e1000_82542.c
index fd473c1c6f..e14e9e9e58 100644
--- a/drivers/net/e1000/base/e1000_82542.c
+++ b/drivers/net/e1000/base/e1000_82542.c
@@ -406,103 +406,6 @@ STATIC int e1000_rar_set_82542(struct e1000_hw *hw, u8 *addr, u32 index)
return E1000_SUCCESS;
}
-/**
- * e1000_translate_register_82542 - Translate the proper register offset
- * @reg: e1000 register to be read
- *
- * Registers in 82542 are located in different offsets than other adapters
- * even though they function in the same manner. This function takes in
- * the name of the register to read and returns the correct offset for
- * 82542 silicon.
- **/
-u32 e1000_translate_register_82542(u32 reg)
-{
- /*
- * Some of the 82542 registers are located at different
- * offsets than they are in newer adapters.
- * Despite the difference in location, the registers
- * function in the same manner.
- */
- switch (reg) {
- case E1000_RA:
- reg = 0x00040;
- break;
- case E1000_RDTR:
- reg = 0x00108;
- break;
- case E1000_RDBAL(0):
- reg = 0x00110;
- break;
- case E1000_RDBAH(0):
- reg = 0x00114;
- break;
- case E1000_RDLEN(0):
- reg = 0x00118;
- break;
- case E1000_RDH(0):
- reg = 0x00120;
- break;
- case E1000_RDT(0):
- reg = 0x00128;
- break;
- case E1000_RDBAL(1):
- reg = 0x00138;
- break;
- case E1000_RDBAH(1):
- reg = 0x0013C;
- break;
- case E1000_RDLEN(1):
- reg = 0x00140;
- break;
- case E1000_RDH(1):
- reg = 0x00148;
- break;
- case E1000_RDT(1):
- reg = 0x00150;
- break;
- case E1000_FCRTH:
- reg = 0x00160;
- break;
- case E1000_FCRTL:
- reg = 0x00168;
- break;
- case E1000_MTA:
- reg = 0x00200;
- break;
- case E1000_TDBAL(0):
- reg = 0x00420;
- break;
- case E1000_TDBAH(0):
- reg = 0x00424;
- break;
- case E1000_TDLEN(0):
- reg = 0x00428;
- break;
- case E1000_TDH(0):
- reg = 0x00430;
- break;
- case E1000_TDT(0):
- reg = 0x00438;
- break;
- case E1000_TIDV:
- reg = 0x00440;
- break;
- case E1000_VFTA:
- reg = 0x00600;
- break;
- case E1000_TDFH:
- reg = 0x08010;
- break;
- case E1000_TDFT:
- reg = 0x08018;
- break;
- default:
- break;
- }
-
- return reg;
-}
-
/**
* e1000_clear_hw_cntrs_82542 - Clear device specific hardware counters
* @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_82543.c b/drivers/net/e1000/base/e1000_82543.c
index ca273b4368..992dffe1ff 100644
--- a/drivers/net/e1000/base/e1000_82543.c
+++ b/drivers/net/e1000/base/e1000_82543.c
@@ -364,84 +364,6 @@ STATIC bool e1000_init_phy_disabled_82543(struct e1000_hw *hw)
return ret_val;
}
-/**
- * e1000_tbi_adjust_stats_82543 - Adjust stats when TBI enabled
- * @hw: pointer to the HW structure
- * @stats: Struct containing statistic register values
- * @frame_len: The length of the frame in question
- * @mac_addr: The Ethernet destination address of the frame in question
- * @max_frame_size: The maximum frame size
- *
- * Adjusts the statistic counters when a frame is accepted by TBI_ACCEPT
- **/
-void e1000_tbi_adjust_stats_82543(struct e1000_hw *hw,
- struct e1000_hw_stats *stats, u32 frame_len,
- u8 *mac_addr, u32 max_frame_size)
-{
- if (!(e1000_tbi_sbp_enabled_82543(hw)))
- goto out;
-
- /* First adjust the frame length. */
- frame_len--;
- /*
- * We need to adjust the statistics counters, since the hardware
- * counters overcount this packet as a CRC error and undercount
- * the packet as a good packet
- */
- /* This packet should not be counted as a CRC error. */
- stats->crcerrs--;
- /* This packet does count as a Good Packet Received. */
- stats->gprc++;
-
- /* Adjust the Good Octets received counters */
- stats->gorc += frame_len;
-
- /*
- * Is this a broadcast or multicast? Check broadcast first,
- * since the test for a multicast frame will test positive on
- * a broadcast frame.
- */
- if ((mac_addr[0] == 0xff) && (mac_addr[1] == 0xff))
- /* Broadcast packet */
- stats->bprc++;
- else if (*mac_addr & 0x01)
- /* Multicast packet */
- stats->mprc++;
-
- /*
- * In this case, the hardware has over counted the number of
- * oversize frames.
- */
- if ((frame_len == max_frame_size) && (stats->roc > 0))
- stats->roc--;
-
- /*
- * Adjust the bin counters when the extra byte put the frame in the
- * wrong bin. Remember that the frame_len was adjusted above.
- */
- if (frame_len == 64) {
- stats->prc64++;
- stats->prc127--;
- } else if (frame_len == 127) {
- stats->prc127++;
- stats->prc255--;
- } else if (frame_len == 255) {
- stats->prc255++;
- stats->prc511--;
- } else if (frame_len == 511) {
- stats->prc511++;
- stats->prc1023--;
- } else if (frame_len == 1023) {
- stats->prc1023++;
- stats->prc1522--;
- } else if (frame_len == 1522) {
- stats->prc1522++;
- }
-
-out:
- return;
-}
-
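To make the bin shuffling above concrete: a 64-byte frame received under TBI
is counted by hardware, with its extra byte, as a 65-byte CRC-errored frame,
so the workaround moves it from prc127 back to prc64 and from crcerrs to
gprc. The same cascade can also be written table-driven; an illustrative
rework (the edge/bin tables are mine, not the driver's):

	static const u32 bin_edge[] = { 64, 127, 255, 511, 1023, 1522 };
	u64 *bin[] = { &stats->prc64, &stats->prc127, &stats->prc255,
		       &stats->prc511, &stats->prc1023, &stats->prc1522 };
	unsigned int i;

	for (i = 0; i < 6; i++) {
		if (frame_len == bin_edge[i]) {
			(*bin[i])++;		 /* frame belongs in this bin */
			if (i < 5)
				(*bin[i + 1])--; /* hw over-counted next bin */
			break;
		}
	}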
/**
* e1000_read_phy_reg_82543 - Read PHY register
* @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_82543.h b/drivers/net/e1000/base/e1000_82543.h
index cf81e4e848..8af412bc77 100644
--- a/drivers/net/e1000/base/e1000_82543.h
+++ b/drivers/net/e1000/base/e1000_82543.h
@@ -16,10 +16,6 @@
/* If TBI_COMPAT_ENABLED, then this is the current state (on/off) */
#define TBI_SBP_ENABLED 0x2
-void e1000_tbi_adjust_stats_82543(struct e1000_hw *hw,
- struct e1000_hw_stats *stats,
- u32 frame_len, u8 *mac_addr,
- u32 max_frame_size);
void e1000_set_tbi_compatibility_82543(struct e1000_hw *hw,
bool state);
bool e1000_tbi_sbp_enabled_82543(struct e1000_hw *hw);
diff --git a/drivers/net/e1000/base/e1000_82571.c b/drivers/net/e1000/base/e1000_82571.c
index 9dc7f6025c..9da1fbf856 100644
--- a/drivers/net/e1000/base/e1000_82571.c
+++ b/drivers/net/e1000/base/e1000_82571.c
@@ -1467,41 +1467,6 @@ STATIC s32 e1000_led_on_82574(struct e1000_hw *hw)
return E1000_SUCCESS;
}
-/**
- * e1000_check_phy_82574 - check 82574 phy hung state
- * @hw: pointer to the HW structure
- *
- * Returns whether phy is hung or not
- **/
-bool e1000_check_phy_82574(struct e1000_hw *hw)
-{
- u16 status_1kbt = 0;
- u16 receive_errors = 0;
- s32 ret_val;
-
- DEBUGFUNC("e1000_check_phy_82574");
-
-	/* Read the PHY Receive Error counter first; if it is max (all F's),
-	 * read the Base1000T status register. If both are max, the PHY is hung.
- */
- ret_val = hw->phy.ops.read_reg(hw, E1000_RECEIVE_ERROR_COUNTER,
- &receive_errors);
- if (ret_val)
- return false;
- if (receive_errors == E1000_RECEIVE_ERROR_MAX) {
- ret_val = hw->phy.ops.read_reg(hw, E1000_BASE1000T_STATUS,
- &status_1kbt);
- if (ret_val)
- return false;
- if ((status_1kbt & E1000_IDLE_ERROR_COUNT_MASK) ==
- E1000_IDLE_ERROR_COUNT_MASK)
- return true;
- }
-
- return false;
-}
-
-
/**
* e1000_setup_link_82571 - Setup flow control and link settings
* @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_82571.h b/drivers/net/e1000/base/e1000_82571.h
index 0d8412678d..3c1840d0e8 100644
--- a/drivers/net/e1000/base/e1000_82571.h
+++ b/drivers/net/e1000/base/e1000_82571.h
@@ -29,7 +29,6 @@
#define E1000_IDLE_ERROR_COUNT_MASK 0xFF
#define E1000_RECEIVE_ERROR_COUNTER 21
#define E1000_RECEIVE_ERROR_MAX 0xFFFF
-bool e1000_check_phy_82574(struct e1000_hw *hw);
bool e1000_get_laa_state_82571(struct e1000_hw *hw);
void e1000_set_laa_state_82571(struct e1000_hw *hw, bool state);
diff --git a/drivers/net/e1000/base/e1000_82575.c b/drivers/net/e1000/base/e1000_82575.c
index 7c78649393..074bd34f11 100644
--- a/drivers/net/e1000/base/e1000_82575.c
+++ b/drivers/net/e1000/base/e1000_82575.c
@@ -2119,62 +2119,6 @@ void e1000_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf)
E1000_WRITE_REG(hw, reg_offset, reg_val);
}
-/**
- * e1000_vmdq_set_loopback_pf - enable or disable vmdq loopback
- * @hw: pointer to the hardware struct
- * @enable: state to enter, either enabled or disabled
- *
- * enables/disables L2 switch loopback functionality.
- **/
-void e1000_vmdq_set_loopback_pf(struct e1000_hw *hw, bool enable)
-{
- u32 dtxswc;
-
- switch (hw->mac.type) {
- case e1000_82576:
- dtxswc = E1000_READ_REG(hw, E1000_DTXSWC);
- if (enable)
- dtxswc |= E1000_DTXSWC_VMDQ_LOOPBACK_EN;
- else
- dtxswc &= ~E1000_DTXSWC_VMDQ_LOOPBACK_EN;
- E1000_WRITE_REG(hw, E1000_DTXSWC, dtxswc);
- break;
- case e1000_i350:
- case e1000_i354:
- dtxswc = E1000_READ_REG(hw, E1000_TXSWC);
- if (enable)
- dtxswc |= E1000_DTXSWC_VMDQ_LOOPBACK_EN;
- else
- dtxswc &= ~E1000_DTXSWC_VMDQ_LOOPBACK_EN;
- E1000_WRITE_REG(hw, E1000_TXSWC, dtxswc);
- break;
- default:
- /* Currently no other hardware supports loopback */
- break;
- }
-
-
-}
-
-/**
- * e1000_vmdq_set_replication_pf - enable or disable vmdq replication
- * @hw: pointer to the hardware struct
- * @enable: state to enter, either enabled or disabled
- *
- * enables/disables replication of packets across multiple pools.
- **/
-void e1000_vmdq_set_replication_pf(struct e1000_hw *hw, bool enable)
-{
- u32 vt_ctl = E1000_READ_REG(hw, E1000_VT_CTL);
-
- if (enable)
- vt_ctl |= E1000_VT_CTL_VM_REPL_EN;
- else
- vt_ctl &= ~E1000_VT_CTL_VM_REPL_EN;
-
- E1000_WRITE_REG(hw, E1000_VT_CTL, vt_ctl);
-}
-
/**
* e1000_read_phy_reg_82580 - Read 82580 MDI control register
* @hw: pointer to the HW structure
@@ -2596,45 +2540,6 @@ STATIC s32 e1000_update_nvm_checksum_i350(struct e1000_hw *hw)
return ret_val;
}
-/**
- * __e1000_access_emi_reg - Read/write EMI register
- * @hw: pointer to the HW structure
- * @address: EMI address to program
- * @data: pointer to value to read/write from/to the EMI address
- * @read: boolean flag to indicate read or write
- **/
-STATIC s32 __e1000_access_emi_reg(struct e1000_hw *hw, u16 address,
- u16 *data, bool read)
-{
- s32 ret_val;
-
- DEBUGFUNC("__e1000_access_emi_reg");
-
- ret_val = hw->phy.ops.write_reg(hw, E1000_EMIADD, address);
- if (ret_val)
- return ret_val;
-
- if (read)
- ret_val = hw->phy.ops.read_reg(hw, E1000_EMIDATA, data);
- else
- ret_val = hw->phy.ops.write_reg(hw, E1000_EMIDATA, *data);
-
- return ret_val;
-}
-
-/**
- * e1000_read_emi_reg - Read Extended Management Interface register
- * @hw: pointer to the HW structure
- * @addr: EMI address to program
- * @data: value to be read from the EMI address
- **/
-s32 e1000_read_emi_reg(struct e1000_hw *hw, u16 addr, u16 *data)
-{
- DEBUGFUNC("e1000_read_emi_reg");
-
- return __e1000_access_emi_reg(hw, addr, data, true);
-}
-
/**
* e1000_initialize_M88E1512_phy - Initialize M88E1512 PHY
* @hw: pointer to the HW structure
@@ -2823,179 +2728,6 @@ s32 e1000_initialize_M88E1543_phy(struct e1000_hw *hw)
return ret_val;
}
-/**
- * e1000_set_eee_i350 - Enable/disable EEE support
- * @hw: pointer to the HW structure
- * @adv1G: boolean flag enabling 1G EEE advertisement
- * @adv100M: boolean flag enabling 100M EEE advertisement
- *
- * Enable/disable EEE based on setting in dev_spec structure.
- *
- **/
-s32 e1000_set_eee_i350(struct e1000_hw *hw, bool adv1G, bool adv100M)
-{
- u32 ipcnfg, eeer;
-
- DEBUGFUNC("e1000_set_eee_i350");
-
- if ((hw->mac.type < e1000_i350) ||
- (hw->phy.media_type != e1000_media_type_copper))
- goto out;
- ipcnfg = E1000_READ_REG(hw, E1000_IPCNFG);
- eeer = E1000_READ_REG(hw, E1000_EEER);
-
- /* enable or disable per user setting */
- if (!(hw->dev_spec._82575.eee_disable)) {
- u32 eee_su = E1000_READ_REG(hw, E1000_EEE_SU);
-
- if (adv100M)
- ipcnfg |= E1000_IPCNFG_EEE_100M_AN;
- else
- ipcnfg &= ~E1000_IPCNFG_EEE_100M_AN;
-
- if (adv1G)
- ipcnfg |= E1000_IPCNFG_EEE_1G_AN;
- else
- ipcnfg &= ~E1000_IPCNFG_EEE_1G_AN;
-
- eeer |= (E1000_EEER_TX_LPI_EN | E1000_EEER_RX_LPI_EN |
- E1000_EEER_LPI_FC);
-
- /* This bit should not be set in normal operation. */
- if (eee_su & E1000_EEE_SU_LPI_CLK_STP)
- DEBUGOUT("LPI Clock Stop Bit should not be set!\n");
- } else {
- ipcnfg &= ~(E1000_IPCNFG_EEE_1G_AN | E1000_IPCNFG_EEE_100M_AN);
- eeer &= ~(E1000_EEER_TX_LPI_EN | E1000_EEER_RX_LPI_EN |
- E1000_EEER_LPI_FC);
- }
- E1000_WRITE_REG(hw, E1000_IPCNFG, ipcnfg);
- E1000_WRITE_REG(hw, E1000_EEER, eeer);
- E1000_READ_REG(hw, E1000_IPCNFG);
- E1000_READ_REG(hw, E1000_EEER);
-out:
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_set_eee_i354 - Enable/disable EEE support
- * @hw: pointer to the HW structure
- * @adv1G: boolean flag enabling 1G EEE advertisement
- * @adv100M: boolean flag enabling 100M EEE advertisement
- *
- * Enable/disable EEE legacy mode based on setting in dev_spec structure.
- *
- **/
-s32 e1000_set_eee_i354(struct e1000_hw *hw, bool adv1G, bool adv100M)
-{
- struct e1000_phy_info *phy = &hw->phy;
- s32 ret_val = E1000_SUCCESS;
- u16 phy_data;
-
- DEBUGFUNC("e1000_set_eee_i354");
-
- if ((hw->phy.media_type != e1000_media_type_copper) ||
- ((phy->id != M88E1543_E_PHY_ID) &&
- (phy->id != M88E1512_E_PHY_ID)))
- goto out;
-
- if (!hw->dev_spec._82575.eee_disable) {
- /* Switch to PHY page 18. */
- ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 18);
- if (ret_val)
- goto out;
-
- ret_val = phy->ops.read_reg(hw, E1000_M88E1543_EEE_CTRL_1,
- &phy_data);
- if (ret_val)
- goto out;
-
- phy_data |= E1000_M88E1543_EEE_CTRL_1_MS;
- ret_val = phy->ops.write_reg(hw, E1000_M88E1543_EEE_CTRL_1,
- phy_data);
- if (ret_val)
- goto out;
-
- /* Return the PHY to page 0. */
- ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0);
- if (ret_val)
- goto out;
-
- /* Turn on EEE advertisement. */
- ret_val = e1000_read_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
- E1000_EEE_ADV_DEV_I354,
- &phy_data);
- if (ret_val)
- goto out;
-
- if (adv100M)
- phy_data |= E1000_EEE_ADV_100_SUPPORTED;
- else
- phy_data &= ~E1000_EEE_ADV_100_SUPPORTED;
-
- if (adv1G)
- phy_data |= E1000_EEE_ADV_1000_SUPPORTED;
- else
- phy_data &= ~E1000_EEE_ADV_1000_SUPPORTED;
-
- ret_val = e1000_write_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
- E1000_EEE_ADV_DEV_I354,
- phy_data);
- } else {
- /* Turn off EEE advertisement. */
- ret_val = e1000_read_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
- E1000_EEE_ADV_DEV_I354,
- &phy_data);
- if (ret_val)
- goto out;
-
- phy_data &= ~(E1000_EEE_ADV_100_SUPPORTED |
- E1000_EEE_ADV_1000_SUPPORTED);
- ret_val = e1000_write_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
- E1000_EEE_ADV_DEV_I354,
- phy_data);
- }
-
-out:
- return ret_val;
-}
-
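e1000_set_eee_i354() above is essentially a read-modify-write over the
XMDIO-mapped EEE advertisement register. Condensed to its core, using the
same e1000_read_xmdio_reg()/e1000_write_xmdio_reg() calls as the removed
code (enabling 100M and dropping 1G advertisement, as an example):

	u16 adv;
	s32 ret = e1000_read_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
				       E1000_EEE_ADV_DEV_I354, &adv);
	if (!ret) {
		adv |= E1000_EEE_ADV_100_SUPPORTED;	/* advertise 100M EEE */
		adv &= ~E1000_EEE_ADV_1000_SUPPORTED;	/* drop 1G EEE */
		ret = e1000_write_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
					    E1000_EEE_ADV_DEV_I354, adv);
	}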
-/**
- * e1000_get_eee_status_i354 - Get EEE status
- * @hw: pointer to the HW structure
- * @status: EEE status
- *
- * Get EEE status by guessing based on whether Tx or Rx LPI indications have
- * been received.
- **/
-s32 e1000_get_eee_status_i354(struct e1000_hw *hw, bool *status)
-{
- struct e1000_phy_info *phy = &hw->phy;
- s32 ret_val = E1000_SUCCESS;
- u16 phy_data;
-
- DEBUGFUNC("e1000_get_eee_status_i354");
-
- /* Check if EEE is supported on this device. */
- if ((hw->phy.media_type != e1000_media_type_copper) ||
- ((phy->id != M88E1543_E_PHY_ID) &&
- (phy->id != M88E1512_E_PHY_ID)))
- goto out;
-
- ret_val = e1000_read_xmdio_reg(hw, E1000_PCS_STATUS_ADDR_I354,
- E1000_PCS_STATUS_DEV_I354,
- &phy_data);
- if (ret_val)
- goto out;
-
- *status = phy_data & (E1000_PCS_STATUS_TX_LPI_RCVD |
- E1000_PCS_STATUS_RX_LPI_RCVD) ? true : false;
-
-out:
- return ret_val;
-}
-
/* Due to a hw errata, if the host tries to configure the VFTA register
* while performing queries from the BMC or DMA, then the VFTA in some
* cases won't be written.
@@ -3044,36 +2776,6 @@ void e1000_write_vfta_i350(struct e1000_hw *hw, u32 offset, u32 value)
E1000_WRITE_FLUSH(hw);
}
-
-/**
- * e1000_set_i2c_bb - Enable I2C bit-bang
- * @hw: pointer to the HW structure
- *
- * Enable I2C bit-bang interface
- *
- **/
-s32 e1000_set_i2c_bb(struct e1000_hw *hw)
-{
- s32 ret_val = E1000_SUCCESS;
- u32 ctrl_ext, i2cparams;
-
- DEBUGFUNC("e1000_set_i2c_bb");
-
- ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT);
- ctrl_ext |= E1000_CTRL_I2C_ENA;
- E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext);
- E1000_WRITE_FLUSH(hw);
-
- i2cparams = E1000_READ_REG(hw, E1000_I2CPARAMS);
- i2cparams |= E1000_I2CBB_EN;
- i2cparams |= E1000_I2C_DATA_OE_N;
- i2cparams |= E1000_I2C_CLK_OE_N;
- E1000_WRITE_REG(hw, E1000_I2CPARAMS, i2cparams);
- E1000_WRITE_FLUSH(hw);
-
- return ret_val;
-}
-
/**
* e1000_read_i2c_byte_generic - Reads 8 bit word over I2C
* @hw: pointer to hardware structure
diff --git a/drivers/net/e1000/base/e1000_82575.h b/drivers/net/e1000/base/e1000_82575.h
index 006b37ae98..03284ca946 100644
--- a/drivers/net/e1000/base/e1000_82575.h
+++ b/drivers/net/e1000/base/e1000_82575.h
@@ -361,9 +361,7 @@ s32 e1000_init_nvm_params_82575(struct e1000_hw *hw);
/* Rx packet buffer size defines */
#define E1000_RXPBS_SIZE_MASK_82576 0x0000007F
-void e1000_vmdq_set_loopback_pf(struct e1000_hw *hw, bool enable);
void e1000_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf);
-void e1000_vmdq_set_replication_pf(struct e1000_hw *hw, bool enable);
enum e1000_promisc_type {
e1000_promisc_disabled = 0, /* all promisc modes disabled */
@@ -373,15 +371,10 @@ enum e1000_promisc_type {
e1000_num_promisc_types
};
-void e1000_vfta_set_vf(struct e1000_hw *, u16, bool);
void e1000_rlpml_set_vf(struct e1000_hw *, u16);
s32 e1000_promisc_set_vf(struct e1000_hw *, enum e1000_promisc_type type);
void e1000_write_vfta_i350(struct e1000_hw *hw, u32 offset, u32 value);
u16 e1000_rxpbs_adjust_82580(u32 data);
-s32 e1000_read_emi_reg(struct e1000_hw *hw, u16 addr, u16 *data);
-s32 e1000_set_eee_i350(struct e1000_hw *hw, bool adv1G, bool adv100M);
-s32 e1000_set_eee_i354(struct e1000_hw *hw, bool adv1G, bool adv100M);
-s32 e1000_get_eee_status_i354(struct e1000_hw *, bool *);
s32 e1000_initialize_M88E1512_phy(struct e1000_hw *hw);
s32 e1000_initialize_M88E1543_phy(struct e1000_hw *hw);
@@ -397,7 +390,6 @@ s32 e1000_initialize_M88E1543_phy(struct e1000_hw *hw);
#define E1000_I2C_T_SU_STO 4
#define E1000_I2C_T_BUF 5
-s32 e1000_set_i2c_bb(struct e1000_hw *hw);
s32 e1000_read_i2c_byte_generic(struct e1000_hw *hw, u8 byte_offset,
u8 dev_addr, u8 *data);
s32 e1000_write_i2c_byte_generic(struct e1000_hw *hw, u8 byte_offset,
diff --git a/drivers/net/e1000/base/e1000_api.c b/drivers/net/e1000/base/e1000_api.c
index 6a2376f40f..c3a8892c47 100644
--- a/drivers/net/e1000/base/e1000_api.c
+++ b/drivers/net/e1000/base/e1000_api.c
@@ -530,21 +530,6 @@ void e1000_clear_vfta(struct e1000_hw *hw)
hw->mac.ops.clear_vfta(hw);
}
-/**
- * e1000_write_vfta - Write value to VLAN filter table
- * @hw: pointer to the HW structure
- * @offset: the 32-bit offset in which to write the value to.
- * @value: the 32-bit value to write at location offset.
- *
- * This writes a 32-bit value to a 32-bit offset in the VLAN filter
- * table. This is a function pointer entry point called by drivers.
- **/
-void e1000_write_vfta(struct e1000_hw *hw, u32 offset, u32 value)
-{
- if (hw->mac.ops.write_vfta)
- hw->mac.ops.write_vfta(hw, offset, value);
-}
-
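Nearly every wrapper removed from e1000_api.c has the same shape: guard the
per-MAC ops pointer, dispatch to the silicon-specific implementation, and
fall back to a benign default when the family does not provide one. The
generic pattern, with a hypothetical op name:

	s32 e1000_do_something(struct e1000_hw *hw)
	{
		if (hw->mac.ops.do_something)	/* hypothetical op */
			return hw->mac.ops.do_something(hw);

		return E1000_SUCCESS;		/* harmless default */
	}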
/**
* e1000_update_mc_addr_list - Update Multicast addresses
* @hw: pointer to the HW structure
@@ -562,19 +547,6 @@ void e1000_update_mc_addr_list(struct e1000_hw *hw, u8 *mc_addr_list,
mc_addr_count);
}
-/**
- * e1000_force_mac_fc - Force MAC flow control
- * @hw: pointer to the HW structure
- *
- * Force the MAC's flow control settings. Currently no func pointer exists
- * and all implementations are handled in the generic version of this
- * function.
- **/
-s32 e1000_force_mac_fc(struct e1000_hw *hw)
-{
- return e1000_force_mac_fc_generic(hw);
-}
-
/**
* e1000_check_for_link - Check/Store link connection
* @hw: pointer to the HW structure
@@ -591,34 +563,6 @@ s32 e1000_check_for_link(struct e1000_hw *hw)
return -E1000_ERR_CONFIG;
}
-/**
- * e1000_check_mng_mode - Check management mode
- * @hw: pointer to the HW structure
- *
- * This checks if the adapter has manageability enabled.
- * This is a function pointer entry point called by drivers.
- **/
-bool e1000_check_mng_mode(struct e1000_hw *hw)
-{
- if (hw->mac.ops.check_mng_mode)
- return hw->mac.ops.check_mng_mode(hw);
-
- return false;
-}
-
-/**
- * e1000_mng_write_dhcp_info - Writes DHCP info to host interface
- * @hw: pointer to the HW structure
- * @buffer: pointer to the host interface
- * @length: size of the buffer
- *
- * Writes the DHCP information to the host interface.
- **/
-s32 e1000_mng_write_dhcp_info(struct e1000_hw *hw, u8 *buffer, u16 length)
-{
- return e1000_mng_write_dhcp_info_generic(hw, buffer, length);
-}
-
/**
* e1000_reset_hw - Reset hardware
* @hw: pointer to the HW structure
@@ -665,86 +609,6 @@ s32 e1000_setup_link(struct e1000_hw *hw)
return -E1000_ERR_CONFIG;
}
-/**
- * e1000_get_speed_and_duplex - Returns current speed and duplex
- * @hw: pointer to the HW structure
- * @speed: pointer to a 16-bit value to store the speed
- * @duplex: pointer to a 16-bit value to store the duplex.
- *
- * This returns the speed and duplex of the adapter in the two 'out'
- * variables passed in. This is a function pointer entry point called
- * by drivers.
- **/
-s32 e1000_get_speed_and_duplex(struct e1000_hw *hw, u16 *speed, u16 *duplex)
-{
- if (hw->mac.ops.get_link_up_info)
- return hw->mac.ops.get_link_up_info(hw, speed, duplex);
-
- return -E1000_ERR_CONFIG;
-}
-
-/**
- * e1000_setup_led - Configures SW controllable LED
- * @hw: pointer to the HW structure
- *
- * This prepares the SW controllable LED for use and saves the current state
- * of the LED so it can be later restored. This is a function pointer entry
- * point called by drivers.
- **/
-s32 e1000_setup_led(struct e1000_hw *hw)
-{
- if (hw->mac.ops.setup_led)
- return hw->mac.ops.setup_led(hw);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_cleanup_led - Restores SW controllable LED
- * @hw: pointer to the HW structure
- *
- * This restores the SW controllable LED to the value saved off by
- * e1000_setup_led. This is a function pointer entry point called by drivers.
- **/
-s32 e1000_cleanup_led(struct e1000_hw *hw)
-{
- if (hw->mac.ops.cleanup_led)
- return hw->mac.ops.cleanup_led(hw);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_blink_led - Blink SW controllable LED
- * @hw: pointer to the HW structure
- *
- * This starts the adapter LED blinking. Request the LED to be setup first
- * and cleaned up after. This is a function pointer entry point called by
- * drivers.
- **/
-s32 e1000_blink_led(struct e1000_hw *hw)
-{
- if (hw->mac.ops.blink_led)
- return hw->mac.ops.blink_led(hw);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_id_led_init - store LED configurations in SW
- * @hw: pointer to the HW structure
- *
- * Initializes the LED config in SW. This is a function pointer entry point
- * called by drivers.
- **/
-s32 e1000_id_led_init(struct e1000_hw *hw)
-{
- if (hw->mac.ops.id_led_init)
- return hw->mac.ops.id_led_init(hw);
-
- return E1000_SUCCESS;
-}
-
/**
* e1000_led_on - Turn on SW controllable LED
* @hw: pointer to the HW structure
@@ -775,43 +639,6 @@ s32 e1000_led_off(struct e1000_hw *hw)
return E1000_SUCCESS;
}
-/**
- * e1000_reset_adaptive - Reset adaptive IFS
- * @hw: pointer to the HW structure
- *
- * Resets the adaptive IFS. Currently no func pointer exists and all
- * implementations are handled in the generic version of this function.
- **/
-void e1000_reset_adaptive(struct e1000_hw *hw)
-{
- e1000_reset_adaptive_generic(hw);
-}
-
-/**
- * e1000_update_adaptive - Update adaptive IFS
- * @hw: pointer to the HW structure
- *
- * Updates adapter IFS. Currently no func pointer exists and all
- * implementations are handled in the generic version of this function.
- **/
-void e1000_update_adaptive(struct e1000_hw *hw)
-{
- e1000_update_adaptive_generic(hw);
-}
-
-/**
- * e1000_disable_pcie_master - Disable PCI-Express master access
- * @hw: pointer to the HW structure
- *
- * Disables PCI-Express master access and verifies there are no pending
- * requests. Currently no func pointer exists and all implementations are
- * handled in the generic version of this function.
- **/
-s32 e1000_disable_pcie_master(struct e1000_hw *hw)
-{
- return e1000_disable_pcie_master_generic(hw);
-}
-
/**
* e1000_config_collision_dist - Configure collision distance
* @hw: pointer to the HW structure
@@ -841,94 +668,6 @@ int e1000_rar_set(struct e1000_hw *hw, u8 *addr, u32 index)
return E1000_SUCCESS;
}
-/**
- * e1000_validate_mdi_setting - Ensures valid MDI/MDIX SW state
- * @hw: pointer to the HW structure
- *
- * Ensures that the MDI/MDIX SW state is valid.
- **/
-s32 e1000_validate_mdi_setting(struct e1000_hw *hw)
-{
- if (hw->mac.ops.validate_mdi_setting)
- return hw->mac.ops.validate_mdi_setting(hw);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_hash_mc_addr - Determines address location in multicast table
- * @hw: pointer to the HW structure
- * @mc_addr: Multicast address to hash.
- *
- * This hashes an address to determine its location in the multicast
- * table. Currently no func pointer exists and all implementations
- * are handled in the generic version of this function.
- **/
-u32 e1000_hash_mc_addr(struct e1000_hw *hw, u8 *mc_addr)
-{
- return e1000_hash_mc_addr_generic(hw, mc_addr);
-}
-
-/**
- * e1000_enable_tx_pkt_filtering - Enable packet filtering on TX
- * @hw: pointer to the HW structure
- *
- * Enables packet filtering on transmit packets if manageability is enabled
- * and host interface is enabled.
- * Currently no func pointer exists and all implementations are handled in the
- * generic version of this function.
- **/
-bool e1000_enable_tx_pkt_filtering(struct e1000_hw *hw)
-{
- return e1000_enable_tx_pkt_filtering_generic(hw);
-}
-
-/**
- * e1000_mng_host_if_write - Writes to the manageability host interface
- * @hw: pointer to the HW structure
- * @buffer: pointer to the host interface buffer
- * @length: size of the buffer
- * @offset: location in the buffer to write to
- * @sum: sum of the data (not checksum)
- *
- * This function writes the buffer content at the given offset on the host
- * interface. It also handles alignment so the writes are done in the most
- * efficient way, and accumulates the checksum of the data in *sum.
- **/
-s32 e1000_mng_host_if_write(struct e1000_hw *hw, u8 *buffer, u16 length,
- u16 offset, u8 *sum)
-{
- return e1000_mng_host_if_write_generic(hw, buffer, length, offset, sum);
-}
-
-/**
- * e1000_mng_write_cmd_header - Writes manageability command header
- * @hw: pointer to the HW structure
- * @hdr: pointer to the host interface command header
- *
- * Writes the command header after performing the checksum calculation.
- **/
-s32 e1000_mng_write_cmd_header(struct e1000_hw *hw,
- struct e1000_host_mng_command_header *hdr)
-{
- return e1000_mng_write_cmd_header_generic(hw, hdr);
-}
-
-/**
- * e1000_mng_enable_host_if - Checks host interface is enabled
- * @hw: pointer to the HW structure
- *
- * Returns E1000_SUCCESS upon success, else E1000_ERR_HOST_INTERFACE_COMMAND
- *
- * This function checks whether the HOST IF is enabled for command operation
- * and also checks whether the previous command is completed. It busy-waits
- * if the previous command has not completed.
- **/
-s32 e1000_mng_enable_host_if(struct e1000_hw *hw)
-{
- return e1000_mng_enable_host_if_generic(hw);
-}
-
/**
* e1000_check_reset_block - Verifies PHY can be reset
* @hw: pointer to the HW structure
@@ -944,126 +683,6 @@ s32 e1000_check_reset_block(struct e1000_hw *hw)
return E1000_SUCCESS;
}
-/**
- * e1000_read_phy_reg - Reads PHY register
- * @hw: pointer to the HW structure
- * @offset: the register to read
- * @data: the buffer to store the 16-bit read.
- *
- * Reads the PHY register and returns the value in data.
- * This is a function pointer entry point called by drivers.
- **/
-s32 e1000_read_phy_reg(struct e1000_hw *hw, u32 offset, u16 *data)
-{
- if (hw->phy.ops.read_reg)
- return hw->phy.ops.read_reg(hw, offset, data);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_write_phy_reg - Writes PHY register
- * @hw: pointer to the HW structure
- * @offset: the register to write
- * @data: the value to write.
- *
- * Writes the PHY register at offset with the value in data.
- * This is a function pointer entry point called by drivers.
- **/
-s32 e1000_write_phy_reg(struct e1000_hw *hw, u32 offset, u16 data)
-{
- if (hw->phy.ops.write_reg)
- return hw->phy.ops.write_reg(hw, offset, data);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_release_phy - Generic release PHY
- * @hw: pointer to the HW structure
- *
- * Return if silicon family does not require a semaphore when accessing the
- * PHY.
- **/
-void e1000_release_phy(struct e1000_hw *hw)
-{
- if (hw->phy.ops.release)
- hw->phy.ops.release(hw);
-}
-
-/**
- * e1000_acquire_phy - Generic acquire PHY
- * @hw: pointer to the HW structure
- *
- * Return success if silicon family does not require a semaphore when
- * accessing the PHY.
- **/
-s32 e1000_acquire_phy(struct e1000_hw *hw)
-{
- if (hw->phy.ops.acquire)
- return hw->phy.ops.acquire(hw);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_cfg_on_link_up - Configure PHY upon link up
- * @hw: pointer to the HW structure
- **/
-s32 e1000_cfg_on_link_up(struct e1000_hw *hw)
-{
- if (hw->phy.ops.cfg_on_link_up)
- return hw->phy.ops.cfg_on_link_up(hw);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_read_kmrn_reg - Reads register using Kumeran interface
- * @hw: pointer to the HW structure
- * @offset: the register to read
- * @data: the location to store the 16-bit value read.
- *
- * Reads a register out of the Kumeran interface. Currently no func pointer
- * exists and all implementations are handled in the generic version of
- * this function.
- **/
-s32 e1000_read_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 *data)
-{
- return e1000_read_kmrn_reg_generic(hw, offset, data);
-}
-
-/**
- * e1000_write_kmrn_reg - Writes register using Kumeran interface
- * @hw: pointer to the HW structure
- * @offset: the register to write
- * @data: the value to write.
- *
- * Writes a register to the Kumeran interface. Currently no func pointer
- * exists and all implementations are handled in the generic version of
- * this function.
- **/
-s32 e1000_write_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 data)
-{
- return e1000_write_kmrn_reg_generic(hw, offset, data);
-}
-
-/**
- * e1000_get_cable_length - Retrieves cable length estimation
- * @hw: pointer to the HW structure
- *
- * This function estimates the cable length and stores the min/max estimates
- * in hw->phy.min_length and hw->phy.max_length. This is a function pointer
- * entry point called by drivers.
- **/
-s32 e1000_get_cable_length(struct e1000_hw *hw)
-{
- if (hw->phy.ops.get_cable_length)
- return hw->phy.ops.get_cable_length(hw);
-
- return E1000_SUCCESS;
-}
-
/**
* e1000_get_phy_info - Retrieves PHY information from registers
* @hw: pointer to the HW structure
@@ -1095,65 +714,6 @@ s32 e1000_phy_hw_reset(struct e1000_hw *hw)
return E1000_SUCCESS;
}
-/**
- * e1000_phy_commit - Soft PHY reset
- * @hw: pointer to the HW structure
- *
- * Performs a soft PHY reset on those that apply. This is a function pointer
- * entry point called by drivers.
- **/
-s32 e1000_phy_commit(struct e1000_hw *hw)
-{
- if (hw->phy.ops.commit)
- return hw->phy.ops.commit(hw);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_set_d0_lplu_state - Sets low power link up state for D0
- * @hw: pointer to the HW structure
- * @active: boolean used to enable/disable lplu
- *
- * Success returns 0, Failure returns 1
- *
- * The low power link up (lplu) state is set to the power management level D0
- * and SmartSpeed is disabled when active is true, else clear lplu for D0
- * and enable Smartspeed. LPLU and Smartspeed are mutually exclusive. LPLU
- * is used during Dx states where the power conservation is most important.
- * During driver activity, SmartSpeed should be enabled so performance is
- * maintained. This is a function pointer entry point called by drivers.
- **/
-s32 e1000_set_d0_lplu_state(struct e1000_hw *hw, bool active)
-{
- if (hw->phy.ops.set_d0_lplu_state)
- return hw->phy.ops.set_d0_lplu_state(hw, active);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_set_d3_lplu_state - Sets low power link up state for D3
- * @hw: pointer to the HW structure
- * @active: boolean used to enable/disable lplu
- *
- * Success returns 0, Failure returns 1
- *
- * The low power link up (lplu) state is set to the power management level D3
- * and SmartSpeed is disabled when active is true, else clear lplu for D3
- * and enable Smartspeed. LPLU and Smartspeed are mutually exclusive. LPLU
- * is used during Dx states where the power conservation is most important.
- * During driver activity, SmartSpeed should be enabled so performance is
- * maintained. This is a function pointer entry point called by drivers.
- **/
-s32 e1000_set_d3_lplu_state(struct e1000_hw *hw, bool active)
-{
- if (hw->phy.ops.set_d3_lplu_state)
- return hw->phy.ops.set_d3_lplu_state(hw, active);
-
- return E1000_SUCCESS;
-}
-
/**
* e1000_read_mac_addr - Reads MAC address
* @hw: pointer to the HW structure
@@ -1170,52 +730,6 @@ s32 e1000_read_mac_addr(struct e1000_hw *hw)
return e1000_read_mac_addr_generic(hw);
}
-/**
- * e1000_read_pba_string - Read device part number string
- * @hw: pointer to the HW structure
- * @pba_num: pointer to device part number
- * @pba_num_size: size of part number buffer
- *
- * Reads the product board assembly (PBA) number from the EEPROM and stores
- * the value in pba_num.
- * Currently no func pointer exists and all implementations are handled in the
- * generic version of this function.
- **/
-s32 e1000_read_pba_string(struct e1000_hw *hw, u8 *pba_num, u32 pba_num_size)
-{
- return e1000_read_pba_string_generic(hw, pba_num, pba_num_size);
-}
-
-/**
- * e1000_read_pba_length - Read device part number string length
- * @hw: pointer to the HW structure
- * @pba_num_size: size of part number buffer
- *
- * Reads the product board assembly (PBA) number length from the EEPROM and
- * stores the value in pba_num_size.
- * Currently no func pointer exists and all implementations are handled in the
- * generic version of this function.
- **/
-s32 e1000_read_pba_length(struct e1000_hw *hw, u32 *pba_num_size)
-{
- return e1000_read_pba_length_generic(hw, pba_num_size);
-}
-
-/**
- * e1000_read_pba_num - Read device part number
- * @hw: pointer to the HW structure
- * @pba_num: pointer to device part number
- *
- * Reads the product board assembly (PBA) number from the EEPROM and stores
- * the value in pba_num.
- * Currently no func pointer exists and all implementations are handled in the
- * generic version of this function.
- **/
-s32 e1000_read_pba_num(struct e1000_hw *hw, u32 *pba_num)
-{
- return e1000_read_pba_num_generic(hw, pba_num);
-}
-
/**
* e1000_validate_nvm_checksum - Verifies NVM (EEPROM) checksum
* @hw: pointer to the HW structure
@@ -1231,34 +745,6 @@ s32 e1000_validate_nvm_checksum(struct e1000_hw *hw)
return -E1000_ERR_CONFIG;
}
-/**
- * e1000_update_nvm_checksum - Updates NVM (EEPROM) checksum
- * @hw: pointer to the HW structure
- *
- * Updates the NVM checksum. Currently no func pointer exists and all
- * implementations are handled in the generic version of this function.
- **/
-s32 e1000_update_nvm_checksum(struct e1000_hw *hw)
-{
- if (hw->nvm.ops.update)
- return hw->nvm.ops.update(hw);
-
- return -E1000_ERR_CONFIG;
-}
-
-/**
- * e1000_reload_nvm - Reloads EEPROM
- * @hw: pointer to the HW structure
- *
- * Reloads the EEPROM by setting the "Reinitialize from EEPROM" bit in the
- * extended control register.
- **/
-void e1000_reload_nvm(struct e1000_hw *hw)
-{
- if (hw->nvm.ops.reload)
- hw->nvm.ops.reload(hw);
-}
-
/**
* e1000_read_nvm - Reads NVM (EEPROM)
* @hw: pointer to the HW structure
@@ -1295,22 +781,6 @@ s32 e1000_write_nvm(struct e1000_hw *hw, u16 offset, u16 words, u16 *data)
return E1000_SUCCESS;
}
-/**
- * e1000_write_8bit_ctrl_reg - Writes 8bit Control register
- * @hw: pointer to the HW structure
- * @reg: 32bit register offset
- * @offset: the register to write
- * @data: the value to write.
- *
- * Writes the PHY register at offset with the value in data.
- * This is a function pointer entry point called by drivers.
- **/
-s32 e1000_write_8bit_ctrl_reg(struct e1000_hw *hw, u32 reg, u32 offset,
- u8 data)
-{
- return e1000_write_8bit_ctrl_reg_generic(hw, reg, offset, data);
-}
-
/**
* e1000_power_up_phy - Restores link in case of PHY power down
* @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_api.h b/drivers/net/e1000/base/e1000_api.h
index 6b38e2b7bb..1c240dfcdf 100644
--- a/drivers/net/e1000/base/e1000_api.h
+++ b/drivers/net/e1000/base/e1000_api.h
@@ -29,65 +29,25 @@ s32 e1000_init_phy_params(struct e1000_hw *hw);
s32 e1000_init_mbx_params(struct e1000_hw *hw);
s32 e1000_get_bus_info(struct e1000_hw *hw);
void e1000_clear_vfta(struct e1000_hw *hw);
-void e1000_write_vfta(struct e1000_hw *hw, u32 offset, u32 value);
-s32 e1000_force_mac_fc(struct e1000_hw *hw);
s32 e1000_check_for_link(struct e1000_hw *hw);
s32 e1000_reset_hw(struct e1000_hw *hw);
s32 e1000_init_hw(struct e1000_hw *hw);
s32 e1000_setup_link(struct e1000_hw *hw);
-s32 e1000_get_speed_and_duplex(struct e1000_hw *hw, u16 *speed, u16 *duplex);
-s32 e1000_disable_pcie_master(struct e1000_hw *hw);
void e1000_config_collision_dist(struct e1000_hw *hw);
int e1000_rar_set(struct e1000_hw *hw, u8 *addr, u32 index);
-u32 e1000_hash_mc_addr(struct e1000_hw *hw, u8 *mc_addr);
void e1000_update_mc_addr_list(struct e1000_hw *hw, u8 *mc_addr_list,
u32 mc_addr_count);
-s32 e1000_setup_led(struct e1000_hw *hw);
-s32 e1000_cleanup_led(struct e1000_hw *hw);
s32 e1000_check_reset_block(struct e1000_hw *hw);
-s32 e1000_blink_led(struct e1000_hw *hw);
s32 e1000_led_on(struct e1000_hw *hw);
s32 e1000_led_off(struct e1000_hw *hw);
-s32 e1000_id_led_init(struct e1000_hw *hw);
-void e1000_reset_adaptive(struct e1000_hw *hw);
-void e1000_update_adaptive(struct e1000_hw *hw);
-s32 e1000_get_cable_length(struct e1000_hw *hw);
-s32 e1000_validate_mdi_setting(struct e1000_hw *hw);
-s32 e1000_read_phy_reg(struct e1000_hw *hw, u32 offset, u16 *data);
-s32 e1000_write_phy_reg(struct e1000_hw *hw, u32 offset, u16 data);
-s32 e1000_write_8bit_ctrl_reg(struct e1000_hw *hw, u32 reg, u32 offset,
- u8 data);
s32 e1000_get_phy_info(struct e1000_hw *hw);
-void e1000_release_phy(struct e1000_hw *hw);
-s32 e1000_acquire_phy(struct e1000_hw *hw);
-s32 e1000_cfg_on_link_up(struct e1000_hw *hw);
s32 e1000_phy_hw_reset(struct e1000_hw *hw);
-s32 e1000_phy_commit(struct e1000_hw *hw);
void e1000_power_up_phy(struct e1000_hw *hw);
void e1000_power_down_phy(struct e1000_hw *hw);
s32 e1000_read_mac_addr(struct e1000_hw *hw);
-s32 e1000_read_pba_num(struct e1000_hw *hw, u32 *part_num);
-s32 e1000_read_pba_string(struct e1000_hw *hw, u8 *pba_num, u32 pba_num_size);
-s32 e1000_read_pba_length(struct e1000_hw *hw, u32 *pba_num_size);
-void e1000_reload_nvm(struct e1000_hw *hw);
-s32 e1000_update_nvm_checksum(struct e1000_hw *hw);
s32 e1000_validate_nvm_checksum(struct e1000_hw *hw);
s32 e1000_read_nvm(struct e1000_hw *hw, u16 offset, u16 words, u16 *data);
-s32 e1000_read_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 *data);
-s32 e1000_write_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 data);
s32 e1000_write_nvm(struct e1000_hw *hw, u16 offset, u16 words, u16 *data);
-s32 e1000_set_d3_lplu_state(struct e1000_hw *hw, bool active);
-s32 e1000_set_d0_lplu_state(struct e1000_hw *hw, bool active);
-bool e1000_check_mng_mode(struct e1000_hw *hw);
-bool e1000_enable_tx_pkt_filtering(struct e1000_hw *hw);
-s32 e1000_mng_enable_host_if(struct e1000_hw *hw);
-s32 e1000_mng_host_if_write(struct e1000_hw *hw, u8 *buffer, u16 length,
- u16 offset, u8 *sum);
-s32 e1000_mng_write_cmd_header(struct e1000_hw *hw,
- struct e1000_host_mng_command_header *hdr);
-s32 e1000_mng_write_dhcp_info(struct e1000_hw *hw, u8 *buffer, u16 length);
-u32 e1000_translate_register_82542(u32 reg);
-
/*
diff --git a/drivers/net/e1000/base/e1000_base.c b/drivers/net/e1000/base/e1000_base.c
index ab73e1e59e..958aca14b2 100644
--- a/drivers/net/e1000/base/e1000_base.c
+++ b/drivers/net/e1000/base/e1000_base.c
@@ -110,81 +110,3 @@ void e1000_power_down_phy_copper_base(struct e1000_hw *hw)
if (phy->ops.check_reset_block(hw))
e1000_power_down_phy_copper(hw);
}
-
-/**
- * e1000_rx_fifo_flush_base - Clean Rx FIFO after Rx enable
- * @hw: pointer to the HW structure
- *
- * After Rx enable, if manageability is enabled then there is likely some
- * bad data at the start of the FIFO and possibly in the DMA FIFO. This
- * function clears the FIFOs and flushes any packets that came in as Rx was
- * being enabled.
- **/
-void e1000_rx_fifo_flush_base(struct e1000_hw *hw)
-{
- u32 rctl, rlpml, rxdctl[4], rfctl, temp_rctl, rx_enabled;
- int i, ms_wait;
-
- DEBUGFUNC("e1000_rx_fifo_flush_base");
-
- /* disable IPv6 options as per hardware errata */
- rfctl = E1000_READ_REG(hw, E1000_RFCTL);
- rfctl |= E1000_RFCTL_IPV6_EX_DIS;
- E1000_WRITE_REG(hw, E1000_RFCTL, rfctl);
-
- if (!(E1000_READ_REG(hw, E1000_MANC) & E1000_MANC_RCV_TCO_EN))
- return;
-
- /* Disable all Rx queues */
- for (i = 0; i < 4; i++) {
- rxdctl[i] = E1000_READ_REG(hw, E1000_RXDCTL(i));
- E1000_WRITE_REG(hw, E1000_RXDCTL(i),
- rxdctl[i] & ~E1000_RXDCTL_QUEUE_ENABLE);
- }
- /* Poll all queues to verify they have shut down */
- for (ms_wait = 0; ms_wait < 10; ms_wait++) {
- msec_delay(1);
- rx_enabled = 0;
- for (i = 0; i < 4; i++)
- rx_enabled |= E1000_READ_REG(hw, E1000_RXDCTL(i));
- if (!(rx_enabled & E1000_RXDCTL_QUEUE_ENABLE))
- break;
- }
-
- if (ms_wait == 10)
- DEBUGOUT("Queue disable timed out after 10ms\n");
-
- /* Clear RLPML, RCTL.SBP, RFCTL.LEF, and set RCTL.LPE so that all
- * incoming packets are rejected. Then set RCTL.EN and wait 2 ms so that
- * any packet that was arriving while RCTL.EN was set is flushed
- */
- E1000_WRITE_REG(hw, E1000_RFCTL, rfctl & ~E1000_RFCTL_LEF);
-
- rlpml = E1000_READ_REG(hw, E1000_RLPML);
- E1000_WRITE_REG(hw, E1000_RLPML, 0);
-
- rctl = E1000_READ_REG(hw, E1000_RCTL);
- temp_rctl = rctl & ~(E1000_RCTL_EN | E1000_RCTL_SBP);
- temp_rctl |= E1000_RCTL_LPE;
-
- E1000_WRITE_REG(hw, E1000_RCTL, temp_rctl);
- E1000_WRITE_REG(hw, E1000_RCTL, temp_rctl | E1000_RCTL_EN);
- E1000_WRITE_FLUSH(hw);
- msec_delay(2);
-
- /* Enable Rx queues that were previously enabled and restore our
- * previous state
- */
- for (i = 0; i < 4; i++)
- E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl[i]);
- E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- E1000_WRITE_FLUSH(hw);
-
- E1000_WRITE_REG(hw, E1000_RLPML, rlpml);
- E1000_WRITE_REG(hw, E1000_RFCTL, rfctl);
-
- /* Flush receive errors generated by workaround */
- E1000_READ_REG(hw, E1000_ROC);
- E1000_READ_REG(hw, E1000_RNBC);
- E1000_READ_REG(hw, E1000_MPC);
-}
diff --git a/drivers/net/e1000/base/e1000_base.h b/drivers/net/e1000/base/e1000_base.h
index 0d6172b6d8..16d7ca98a7 100644
--- a/drivers/net/e1000/base/e1000_base.h
+++ b/drivers/net/e1000/base/e1000_base.h
@@ -8,7 +8,6 @@
/* forward declaration */
s32 e1000_init_hw_base(struct e1000_hw *hw);
void e1000_power_down_phy_copper_base(struct e1000_hw *hw);
-extern void e1000_rx_fifo_flush_base(struct e1000_hw *hw);
s32 e1000_acquire_phy_base(struct e1000_hw *hw);
void e1000_release_phy_base(struct e1000_hw *hw);
diff --git a/drivers/net/e1000/base/e1000_ich8lan.c b/drivers/net/e1000/base/e1000_ich8lan.c
index 14f86b7bdc..4f9a7bc3f1 100644
--- a/drivers/net/e1000/base/e1000_ich8lan.c
+++ b/drivers/net/e1000/base/e1000_ich8lan.c
@@ -5467,60 +5467,6 @@ void e1000_set_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw,
return;
}
-/**
- * e1000_igp3_phy_powerdown_workaround_ich8lan - Power down workaround on D3
- * @hw: pointer to the HW structure
- *
- * Workaround for 82566 power-down on D3 entry:
- * 1) disable gigabit link
- * 2) write VR power-down enable
- * 3) read it back
- * Continue if successful, else issue LCD reset and repeat
- **/
-void e1000_igp3_phy_powerdown_workaround_ich8lan(struct e1000_hw *hw)
-{
- u32 reg;
- u16 data;
- u8 retry = 0;
-
- DEBUGFUNC("e1000_igp3_phy_powerdown_workaround_ich8lan");
-
- if (hw->phy.type != e1000_phy_igp_3)
- return;
-
- /* Try the workaround twice (if needed) */
- do {
- /* Disable link */
- reg = E1000_READ_REG(hw, E1000_PHY_CTRL);
- reg |= (E1000_PHY_CTRL_GBE_DISABLE |
- E1000_PHY_CTRL_NOND0A_GBE_DISABLE);
- E1000_WRITE_REG(hw, E1000_PHY_CTRL, reg);
-
- /* Call gig speed drop workaround on Gig disable before
- * accessing any PHY registers
- */
- if (hw->mac.type == e1000_ich8lan)
- e1000_gig_downshift_workaround_ich8lan(hw);
-
- /* Write VR power-down enable */
- hw->phy.ops.read_reg(hw, IGP3_VR_CTRL, &data);
- data &= ~IGP3_VR_CTRL_DEV_POWERDOWN_MODE_MASK;
- hw->phy.ops.write_reg(hw, IGP3_VR_CTRL,
- data | IGP3_VR_CTRL_MODE_SHUTDOWN);
-
- /* Read it back and test */
- hw->phy.ops.read_reg(hw, IGP3_VR_CTRL, &data);
- data &= IGP3_VR_CTRL_DEV_POWERDOWN_MODE_MASK;
- if ((data == IGP3_VR_CTRL_MODE_SHUTDOWN) || retry)
- break;
-
- /* Issue PHY reset and repeat at most one more time */
- reg = E1000_READ_REG(hw, E1000_CTRL);
- E1000_WRITE_REG(hw, E1000_CTRL, reg | E1000_CTRL_PHY_RST);
- retry++;
- } while (retry);
-}
-
/**
* e1000_gig_downshift_workaround_ich8lan - WoL from S5 stops working
* @hw: pointer to the HW structure
@@ -5557,218 +5503,6 @@ void e1000_gig_downshift_workaround_ich8lan(struct e1000_hw *hw)
reg_data);
}
-/**
- * e1000_suspend_workarounds_ich8lan - workarounds needed during S0->Sx
- * @hw: pointer to the HW structure
- *
- * During S0 to Sx transition, it is possible the link remains at gig
- * instead of negotiating to a lower speed. Before going to Sx, set
- * 'Gig Disable' to force link speed negotiation to a lower speed based on
- * the LPLU setting in the NVM or custom setting. For PCH and newer parts,
- * the OEM bits PHY register (LED, GbE disable and LPLU configurations) also
- * needs to be written.
- * Parts that support (and are linked to a partner which support) EEE in
- * 100Mbps should disable LPLU since 100Mbps w/ EEE requires less power
- * than 10Mbps w/o EEE.
- **/
-void e1000_suspend_workarounds_ich8lan(struct e1000_hw *hw)
-{
- struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan;
- u32 phy_ctrl;
- s32 ret_val;
-
- DEBUGFUNC("e1000_suspend_workarounds_ich8lan");
-
- phy_ctrl = E1000_READ_REG(hw, E1000_PHY_CTRL);
- phy_ctrl |= E1000_PHY_CTRL_GBE_DISABLE;
-
- if (hw->phy.type == e1000_phy_i217) {
- u16 phy_reg, device_id = hw->device_id;
-
- if ((device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) ||
- (device_id == E1000_DEV_ID_PCH_LPTLP_I218_V) ||
- (device_id == E1000_DEV_ID_PCH_I218_LM3) ||
- (device_id == E1000_DEV_ID_PCH_I218_V3) ||
- (hw->mac.type >= e1000_pch_spt)) {
- u32 fextnvm6 = E1000_READ_REG(hw, E1000_FEXTNVM6);
-
- E1000_WRITE_REG(hw, E1000_FEXTNVM6,
- fextnvm6 & ~E1000_FEXTNVM6_REQ_PLL_CLK);
- }
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- goto out;
-
- if (!dev_spec->eee_disable) {
- u16 eee_advert;
-
- ret_val =
- e1000_read_emi_reg_locked(hw,
- I217_EEE_ADVERTISEMENT,
- &eee_advert);
- if (ret_val)
- goto release;
-
- /* Disable LPLU if both link partners support 100BaseT
- * EEE and 100Full is advertised on both ends of the
- * link, and enable Auto Enable LPI since there will
- * be no driver to enable LPI while in Sx.
- */
- if ((eee_advert & I82579_EEE_100_SUPPORTED) &&
- (dev_spec->eee_lp_ability &
- I82579_EEE_100_SUPPORTED) &&
- (hw->phy.autoneg_advertised & ADVERTISE_100_FULL)) {
- phy_ctrl &= ~(E1000_PHY_CTRL_D0A_LPLU |
- E1000_PHY_CTRL_NOND0A_LPLU);
-
- /* Set Auto Enable LPI after link up */
- hw->phy.ops.read_reg_locked(hw,
- I217_LPI_GPIO_CTRL,
- &phy_reg);
- phy_reg |= I217_LPI_GPIO_CTRL_AUTO_EN_LPI;
- hw->phy.ops.write_reg_locked(hw,
- I217_LPI_GPIO_CTRL,
- phy_reg);
- }
- }
-
- /* For i217 Intel Rapid Start Technology support,
- * when the system is going into Sx and no manageability engine
- * is present, the driver must configure proxy to reset only on
- * power good. LPI (Low Power Idle) state must also reset only
- * on power good, as well as the MTA (Multicast table array).
- * The SMBus release must also be disabled on LCD reset.
- */
- if (!(E1000_READ_REG(hw, E1000_FWSM) &
- E1000_ICH_FWSM_FW_VALID)) {
- /* Enable proxy to reset only on power good. */
- hw->phy.ops.read_reg_locked(hw, I217_PROXY_CTRL,
- &phy_reg);
- phy_reg |= I217_PROXY_CTRL_AUTO_DISABLE;
- hw->phy.ops.write_reg_locked(hw, I217_PROXY_CTRL,
- phy_reg);
-
- /* Set bit enable LPI (EEE) to reset only on
- * power good.
- */
- hw->phy.ops.read_reg_locked(hw, I217_SxCTRL, &phy_reg);
- phy_reg |= I217_SxCTRL_ENABLE_LPI_RESET;
- hw->phy.ops.write_reg_locked(hw, I217_SxCTRL, phy_reg);
-
- /* Disable the SMB release on LCD reset. */
- hw->phy.ops.read_reg_locked(hw, I217_MEMPWR, &phy_reg);
- phy_reg &= ~I217_MEMPWR_DISABLE_SMB_RELEASE;
- hw->phy.ops.write_reg_locked(hw, I217_MEMPWR, phy_reg);
- }
-
- /* Enable MTA to reset for Intel Rapid Start Technology
- * Support
- */
- hw->phy.ops.read_reg_locked(hw, I217_CGFREG, &phy_reg);
- phy_reg |= I217_CGFREG_ENABLE_MTA_RESET;
- hw->phy.ops.write_reg_locked(hw, I217_CGFREG, phy_reg);
-
-release:
- hw->phy.ops.release(hw);
- }
-out:
- E1000_WRITE_REG(hw, E1000_PHY_CTRL, phy_ctrl);
-
- if (hw->mac.type == e1000_ich8lan)
- e1000_gig_downshift_workaround_ich8lan(hw);
-
- if (hw->mac.type >= e1000_pchlan) {
- e1000_oem_bits_config_ich8lan(hw, false);
-
- /* Reset PHY to activate OEM bits on 82577/8 */
- if (hw->mac.type == e1000_pchlan)
- e1000_phy_hw_reset_generic(hw);
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return;
- e1000_write_smbus_addr(hw);
- hw->phy.ops.release(hw);
- }
-
- return;
-}
-
-/**
- * e1000_resume_workarounds_pchlan - workarounds needed during Sx->S0
- * @hw: pointer to the HW structure
- *
- * During Sx to S0 transitions on non-managed devices or managed devices
- * on which PHY resets are not blocked, if the PHY registers cannot be
- * accessed properly by the s/w, toggle the LANPHYPC value to power cycle
- * the PHY.
- * On i217, setup Intel Rapid Start Technology.
- **/
-u32 e1000_resume_workarounds_pchlan(struct e1000_hw *hw)
-{
- s32 ret_val;
-
- DEBUGFUNC("e1000_resume_workarounds_pchlan");
- if (hw->mac.type < e1000_pch2lan)
- return E1000_SUCCESS;
-
- ret_val = e1000_init_phy_workarounds_pchlan(hw);
- if (ret_val) {
- DEBUGOUT1("Failed to init PHY flow ret_val=%d\n", ret_val);
- return ret_val;
- }
-
- /* For i217 Intel Rapid Start Technology support when the system
- * is transitioning from Sx and no manageability engine is present
- * configure SMBus to restore on reset, disable proxy, and enable
- * the reset on MTA (Multicast table array).
- */
- if (hw->phy.type == e1000_phy_i217) {
- u16 phy_reg;
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val) {
- DEBUGOUT("Failed to setup iRST\n");
- return ret_val;
- }
-
- /* Clear Auto Enable LPI after link up */
- hw->phy.ops.read_reg_locked(hw, I217_LPI_GPIO_CTRL, &phy_reg);
- phy_reg &= ~I217_LPI_GPIO_CTRL_AUTO_EN_LPI;
- hw->phy.ops.write_reg_locked(hw, I217_LPI_GPIO_CTRL, phy_reg);
-
- if (!(E1000_READ_REG(hw, E1000_FWSM) &
- E1000_ICH_FWSM_FW_VALID)) {
- /* Restore clear on SMB if no manageability engine
- * is present
- */
- ret_val = hw->phy.ops.read_reg_locked(hw, I217_MEMPWR,
- &phy_reg);
- if (ret_val)
- goto release;
- phy_reg |= I217_MEMPWR_DISABLE_SMB_RELEASE;
- hw->phy.ops.write_reg_locked(hw, I217_MEMPWR, phy_reg);
-
- /* Disable Proxy */
- hw->phy.ops.write_reg_locked(hw, I217_PROXY_CTRL, 0);
- }
- /* Enable reset on MTA */
- ret_val = hw->phy.ops.read_reg_locked(hw, I217_CGFREG,
- &phy_reg);
- if (ret_val)
- goto release;
- phy_reg &= ~I217_CGFREG_ENABLE_MTA_RESET;
- hw->phy.ops.write_reg_locked(hw, I217_CGFREG, phy_reg);
-release:
- if (ret_val)
- DEBUGOUT1("Error %d in resume workarounds\n", ret_val);
- hw->phy.ops.release(hw);
- return ret_val;
- }
- return E1000_SUCCESS;
-}
-
/**
* e1000_cleanup_led_ich8lan - Restore the default LED operation
* @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_ich8lan.h b/drivers/net/e1000/base/e1000_ich8lan.h
index e456e5132e..e28ebb55ba 100644
--- a/drivers/net/e1000/base/e1000_ich8lan.h
+++ b/drivers/net/e1000/base/e1000_ich8lan.h
@@ -281,10 +281,7 @@
#define E1000_PCI_REVISION_ID_REG 0x08
void e1000_set_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw,
bool state);
-void e1000_igp3_phy_powerdown_workaround_ich8lan(struct e1000_hw *hw);
void e1000_gig_downshift_workaround_ich8lan(struct e1000_hw *hw);
-void e1000_suspend_workarounds_ich8lan(struct e1000_hw *hw);
-u32 e1000_resume_workarounds_pchlan(struct e1000_hw *hw);
s32 e1000_configure_k1_ich8lan(struct e1000_hw *hw, bool k1_enable);
s32 e1000_configure_k0s_lpt(struct e1000_hw *hw, u8 entry_latency, u8 min_time);
void e1000_copy_rx_addrs_to_phy_ich8lan(struct e1000_hw *hw);
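
A side note on the IGP3 power-down workaround removed above: its core is a
bounded retry loop - disable gigabit, request VR shutdown, verify by
read-back, and issue one PHY reset before the single permitted retry. A
minimal standalone sketch of that control flow follows; the helpers are
hypothetical stand-ins, not the e1000 base API.

/* Sketch of the removed retry-once-after-reset pattern. The two
 * helpers below are hypothetical stand-ins, not the e1000 API. */
#include <stdbool.h>

static bool request_vr_shutdown_and_verify(void)
{
	/* The real code writes IGP3_VR_CTRL_MODE_SHUTDOWN and reads it
	 * back; success means the read-back matches the request. */
	return true;
}

static void issue_phy_reset(void)
{
	/* Stand-in for setting E1000_CTRL_PHY_RST. */
}

static void power_down_workaround(void)
{
	unsigned int retry = 0;

	do {
		/* Stop after a verified shutdown, or once the single
		 * permitted retry has been consumed. */
		if (request_vr_shutdown_and_verify() || retry)
			break;
		issue_phy_reset();
		retry++;
	} while (retry);
}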
diff --git a/drivers/net/e1000/base/e1000_mac.c b/drivers/net/e1000/base/e1000_mac.c
index d3b3a6bac9..fe1516bd92 100644
--- a/drivers/net/e1000/base/e1000_mac.c
+++ b/drivers/net/e1000/base/e1000_mac.c
@@ -124,20 +124,6 @@ void e1000_null_write_vfta(struct e1000_hw E1000_UNUSEDARG *hw,
return;
}
-/**
- * e1000_null_rar_set - No-op function, return 0
- * @hw: pointer to the HW structure
- * @h: dummy variable
- * @a: dummy variable
- **/
-int e1000_null_rar_set(struct e1000_hw E1000_UNUSEDARG *hw,
- u8 E1000_UNUSEDARG *h, u32 E1000_UNUSEDARG a)
-{
- DEBUGFUNC("e1000_null_rar_set");
- UNREFERENCED_3PARAMETER(hw, h, a);
- return E1000_SUCCESS;
-}
-
/**
* e1000_get_bus_info_pci_generic - Get PCI(x) bus information
* @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_mac.h b/drivers/net/e1000/base/e1000_mac.h
index 86fcad23bb..0abaf2f452 100644
--- a/drivers/net/e1000/base/e1000_mac.h
+++ b/drivers/net/e1000/base/e1000_mac.h
@@ -13,7 +13,6 @@ s32 e1000_null_link_info(struct e1000_hw *hw, u16 *s, u16 *d);
bool e1000_null_mng_mode(struct e1000_hw *hw);
void e1000_null_update_mc(struct e1000_hw *hw, u8 *h, u32 a);
void e1000_null_write_vfta(struct e1000_hw *hw, u32 a, u32 b);
-int e1000_null_rar_set(struct e1000_hw *hw, u8 *h, u32 a);
s32 e1000_blink_led_generic(struct e1000_hw *hw);
s32 e1000_check_for_copper_link_generic(struct e1000_hw *hw);
s32 e1000_check_for_fiber_link_generic(struct e1000_hw *hw);
diff --git a/drivers/net/e1000/base/e1000_manage.c b/drivers/net/e1000/base/e1000_manage.c
index 4b81028302..266bb9ec91 100644
--- a/drivers/net/e1000/base/e1000_manage.c
+++ b/drivers/net/e1000/base/e1000_manage.c
@@ -353,195 +353,3 @@ bool e1000_enable_mng_pass_thru(struct e1000_hw *hw)
return false;
}
-
-/**
- * e1000_host_interface_command - Writes buffer to host interface
- * @hw: pointer to the HW structure
- * @buffer: contains a command to write
- * @length: the byte length of the buffer, must be multiple of 4 bytes
- *
- * Writes a buffer to the Host Interface. Upon success, returns E1000_SUCCESS
- * else returns E1000_ERR_HOST_INTERFACE_COMMAND.
- **/
-s32 e1000_host_interface_command(struct e1000_hw *hw, u8 *buffer, u32 length)
-{
- u32 hicr, i;
-
- DEBUGFUNC("e1000_host_interface_command");
-
- if (!(hw->mac.arc_subsystem_valid)) {
- DEBUGOUT("Hardware doesn't support host interface command.\n");
- return E1000_SUCCESS;
- }
-
- if (!hw->mac.asf_firmware_present) {
- DEBUGOUT("Firmware is not present.\n");
- return E1000_SUCCESS;
- }
-
- if (length == 0 || length & 0x3 ||
- length > E1000_HI_MAX_BLOCK_BYTE_LENGTH) {
- DEBUGOUT("Buffer length failure.\n");
- return -E1000_ERR_HOST_INTERFACE_COMMAND;
- }
-
- /* Check that the host interface is enabled. */
- hicr = E1000_READ_REG(hw, E1000_HICR);
- if (!(hicr & E1000_HICR_EN)) {
- DEBUGOUT("E1000_HOST_EN bit disabled.\n");
- return -E1000_ERR_HOST_INTERFACE_COMMAND;
- }
-
- /* Calculate length in DWORDs */
- length >>= 2;
-
- /* The device driver writes the relevant command block
- * into the ram area.
- */
- for (i = 0; i < length; i++)
- E1000_WRITE_REG_ARRAY_DWORD(hw, E1000_HOST_IF, i,
- *((u32 *)buffer + i));
-
- /* Setting this bit tells the ARC that a new command is pending. */
- E1000_WRITE_REG(hw, E1000_HICR, hicr | E1000_HICR_C);
-
- for (i = 0; i < E1000_HI_COMMAND_TIMEOUT; i++) {
- hicr = E1000_READ_REG(hw, E1000_HICR);
- if (!(hicr & E1000_HICR_C))
- break;
- msec_delay(1);
- }
-
- /* Check command successful completion. */
- if (i == E1000_HI_COMMAND_TIMEOUT ||
- (!(E1000_READ_REG(hw, E1000_HICR) & E1000_HICR_SV))) {
- DEBUGOUT("Command has failed with no status valid.\n");
- return -E1000_ERR_HOST_INTERFACE_COMMAND;
- }
-
- for (i = 0; i < length; i++)
- *((u32 *)buffer + i) = E1000_READ_REG_ARRAY_DWORD(hw,
- E1000_HOST_IF,
- i);
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_load_firmware - Writes proxy FW code buffer to host interface
- * and executes it.
- * @hw: pointer to the HW structure
- * @buffer: contains a firmware to write
- * @length: the byte length of the buffer, must be multiple of 4 bytes
- *
- * Upon success returns E1000_SUCCESS, returns E1000_ERR_CONFIG if not enabled
- * in HW else returns E1000_ERR_HOST_INTERFACE_COMMAND.
- **/
-s32 e1000_load_firmware(struct e1000_hw *hw, u8 *buffer, u32 length)
-{
- u32 hicr, hibba, fwsm, icr, i;
-
- DEBUGFUNC("e1000_load_firmware");
-
- if (hw->mac.type < e1000_i210) {
- DEBUGOUT("Hardware doesn't support loading FW by the driver\n");
- return -E1000_ERR_CONFIG;
- }
-
- /* Check that the host interface is enabled. */
- hicr = E1000_READ_REG(hw, E1000_HICR);
- if (!(hicr & E1000_HICR_EN)) {
- DEBUGOUT("E1000_HOST_EN bit disabled.\n");
- return -E1000_ERR_CONFIG;
- }
- if (!(hicr & E1000_HICR_MEMORY_BASE_EN)) {
- DEBUGOUT("E1000_HICR_MEMORY_BASE_EN bit disabled.\n");
- return -E1000_ERR_CONFIG;
- }
-
- if (length == 0 || length & 0x3 || length > E1000_HI_FW_MAX_LENGTH) {
- DEBUGOUT("Buffer length failure.\n");
- return -E1000_ERR_INVALID_ARGUMENT;
- }
-
- /* Clear notification from ROM-FW by reading ICR register */
- icr = E1000_READ_REG(hw, E1000_ICR_V2);
-
- /* Reset ROM-FW */
- hicr = E1000_READ_REG(hw, E1000_HICR);
- hicr |= E1000_HICR_FW_RESET_ENABLE;
- E1000_WRITE_REG(hw, E1000_HICR, hicr);
- hicr |= E1000_HICR_FW_RESET;
- E1000_WRITE_REG(hw, E1000_HICR, hicr);
- E1000_WRITE_FLUSH(hw);
-
- /* Wait till MAC notifies about its readiness after ROM-FW reset */
- for (i = 0; i < (E1000_HI_COMMAND_TIMEOUT * 2); i++) {
- icr = E1000_READ_REG(hw, E1000_ICR_V2);
- if (icr & E1000_ICR_MNG)
- break;
- msec_delay(1);
- }
-
- /* Check for timeout */
- if (i == E1000_HI_COMMAND_TIMEOUT) {
- DEBUGOUT("FW reset failed.\n");
- return -E1000_ERR_HOST_INTERFACE_COMMAND;
- }
-
- /* Wait till MAC is ready to accept new FW code */
- for (i = 0; i < E1000_HI_COMMAND_TIMEOUT; i++) {
- fwsm = E1000_READ_REG(hw, E1000_FWSM);
- if ((fwsm & E1000_FWSM_FW_VALID) &&
- ((fwsm & E1000_FWSM_MODE_MASK) >> E1000_FWSM_MODE_SHIFT ==
- E1000_FWSM_HI_EN_ONLY_MODE))
- break;
- msec_delay(1);
- }
-
- /* Check for timeout */
- if (i == E1000_HI_COMMAND_TIMEOUT) {
- DEBUGOUT("FW reset failed.\n");
- return -E1000_ERR_HOST_INTERFACE_COMMAND;
- }
-
- /* Calculate length in DWORDs */
- length >>= 2;
-
- /* The device driver writes the relevant FW code block
- * into the ram area in DWORDs via 1kB ram addressing window.
- */
- for (i = 0; i < length; i++) {
- if (!(i % E1000_HI_FW_BLOCK_DWORD_LENGTH)) {
- /* Point to correct 1kB ram window */
- hibba = E1000_HI_FW_BASE_ADDRESS +
- ((E1000_HI_FW_BLOCK_DWORD_LENGTH << 2) *
- (i / E1000_HI_FW_BLOCK_DWORD_LENGTH));
-
- E1000_WRITE_REG(hw, E1000_HIBBA, hibba);
- }
-
- E1000_WRITE_REG_ARRAY_DWORD(hw, E1000_HOST_IF,
- i % E1000_HI_FW_BLOCK_DWORD_LENGTH,
- *((u32 *)buffer + i));
- }
-
- /* Setting this bit tells the ARC that a new FW is ready to execute. */
- hicr = E1000_READ_REG(hw, E1000_HICR);
- E1000_WRITE_REG(hw, E1000_HICR, hicr | E1000_HICR_C);
-
- for (i = 0; i < E1000_HI_COMMAND_TIMEOUT; i++) {
- hicr = E1000_READ_REG(hw, E1000_HICR);
- if (!(hicr & E1000_HICR_C))
- break;
- msec_delay(1);
- }
-
- /* Check for successful FW start. */
- if (i == E1000_HI_COMMAND_TIMEOUT) {
- DEBUGOUT("New FW did not start within timeout period.\n");
- return -E1000_ERR_HOST_INTERFACE_COMMAND;
- }
-
- return E1000_SUCCESS;
-}
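
Worth recording before it goes: e1000_host_interface_command and
e1000_load_firmware share one skeleton - copy the payload into the device
RAM window a DWORD at a time, set the HICR command-pending bit, then poll
that bit under a millisecond-granularity timeout. A compact sketch of the
poll half, with hypothetical register accessors standing in for the e1000
macros:

/* Sketch of the removed set-bit-then-poll completion handshake.
 * reg_read()/reg_write()/msleep() are hypothetical stand-ins. */
#include <stdint.h>

#define HICR_CMD_PENDING  (1u << 1)   /* assumed bit position */
#define CMD_TIMEOUT_MS    500u        /* assumed timeout */

extern uint32_t reg_read(unsigned int reg);
extern void reg_write(unsigned int reg, uint32_t val);
extern void msleep(unsigned int ms);

static int host_command_wait(unsigned int hicr)
{
	unsigned int i;

	/* Setting the pending bit tells the firmware a new command
	 * block is ready in the RAM window. */
	reg_write(hicr, reg_read(hicr) | HICR_CMD_PENDING);

	/* The firmware clears the bit on completion. */
	for (i = 0; i < CMD_TIMEOUT_MS; i++) {
		if (!(reg_read(hicr) & HICR_CMD_PENDING))
			return 0;
		msleep(1);
	}
	return -1;	/* timed out, command failed */
}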
diff --git a/drivers/net/e1000/base/e1000_manage.h b/drivers/net/e1000/base/e1000_manage.h
index 268a13381d..da0246b6a9 100644
--- a/drivers/net/e1000/base/e1000_manage.h
+++ b/drivers/net/e1000/base/e1000_manage.h
@@ -16,8 +16,6 @@ s32 e1000_mng_write_dhcp_info_generic(struct e1000_hw *hw,
u8 *buffer, u16 length);
bool e1000_enable_mng_pass_thru(struct e1000_hw *hw);
u8 e1000_calculate_checksum(u8 *buffer, u32 length);
-s32 e1000_host_interface_command(struct e1000_hw *hw, u8 *buffer, u32 length);
-s32 e1000_load_firmware(struct e1000_hw *hw, u8 *buffer, u32 length);
enum e1000_mng_mode {
e1000_mng_mode_none = 0,
diff --git a/drivers/net/e1000/base/e1000_nvm.c b/drivers/net/e1000/base/e1000_nvm.c
index 430fecaf6d..4b3ce7d634 100644
--- a/drivers/net/e1000/base/e1000_nvm.c
+++ b/drivers/net/e1000/base/e1000_nvm.c
@@ -947,135 +947,6 @@ s32 e1000_read_pba_num_generic(struct e1000_hw *hw, u32 *pba_num)
return E1000_SUCCESS;
}
-
-/**
- * e1000_read_pba_raw
- * @hw: pointer to the HW structure
- * @eeprom_buf: optional pointer to EEPROM image
- * @eeprom_buf_size: size of EEPROM image in words
- * @max_pba_block_size: PBA block size limit
- * @pba: pointer to output PBA structure
- *
- * Reads PBA from EEPROM image when eeprom_buf is not NULL.
- * Reads PBA from physical EEPROM device when eeprom_buf is NULL.
- *
- **/
-s32 e1000_read_pba_raw(struct e1000_hw *hw, u16 *eeprom_buf,
- u32 eeprom_buf_size, u16 max_pba_block_size,
- struct e1000_pba *pba)
-{
- s32 ret_val;
- u16 pba_block_size;
-
- if (pba == NULL)
- return -E1000_ERR_PARAM;
-
- if (eeprom_buf == NULL) {
- ret_val = e1000_read_nvm(hw, NVM_PBA_OFFSET_0, 2,
- &pba->word[0]);
- if (ret_val)
- return ret_val;
- } else {
- if (eeprom_buf_size > NVM_PBA_OFFSET_1) {
- pba->word[0] = eeprom_buf[NVM_PBA_OFFSET_0];
- pba->word[1] = eeprom_buf[NVM_PBA_OFFSET_1];
- } else {
- return -E1000_ERR_PARAM;
- }
- }
-
- if (pba->word[0] == NVM_PBA_PTR_GUARD) {
- if (pba->pba_block == NULL)
- return -E1000_ERR_PARAM;
-
- ret_val = e1000_get_pba_block_size(hw, eeprom_buf,
- eeprom_buf_size,
- &pba_block_size);
- if (ret_val)
- return ret_val;
-
- if (pba_block_size > max_pba_block_size)
- return -E1000_ERR_PARAM;
-
- if (eeprom_buf == NULL) {
- ret_val = e1000_read_nvm(hw, pba->word[1],
- pba_block_size,
- pba->pba_block);
- if (ret_val)
- return ret_val;
- } else {
- if (eeprom_buf_size > (u32)(pba->word[1] +
- pba_block_size)) {
- memcpy(pba->pba_block,
- &eeprom_buf[pba->word[1]],
- pba_block_size * sizeof(u16));
- } else {
- return -E1000_ERR_PARAM;
- }
- }
- }
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_write_pba_raw
- * @hw: pointer to the HW structure
- * @eeprom_buf: optional pointer to EEPROM image
- * @eeprom_buf_size: size of EEPROM image in words
- * @pba: pointer to PBA structure
- *
- * Writes PBA to EEPROM image when eeprom_buf is not NULL.
- * Writes PBA to physical EEPROM device when eeprom_buf is NULL.
- *
- **/
-s32 e1000_write_pba_raw(struct e1000_hw *hw, u16 *eeprom_buf,
- u32 eeprom_buf_size, struct e1000_pba *pba)
-{
- s32 ret_val;
-
- if (pba == NULL)
- return -E1000_ERR_PARAM;
-
- if (eeprom_buf == NULL) {
- ret_val = e1000_write_nvm(hw, NVM_PBA_OFFSET_0, 2,
- &pba->word[0]);
- if (ret_val)
- return ret_val;
- } else {
- if (eeprom_buf_size > NVM_PBA_OFFSET_1) {
- eeprom_buf[NVM_PBA_OFFSET_0] = pba->word[0];
- eeprom_buf[NVM_PBA_OFFSET_1] = pba->word[1];
- } else {
- return -E1000_ERR_PARAM;
- }
- }
-
- if (pba->word[0] == NVM_PBA_PTR_GUARD) {
- if (pba->pba_block == NULL)
- return -E1000_ERR_PARAM;
-
- if (eeprom_buf == NULL) {
- ret_val = e1000_write_nvm(hw, pba->word[1],
- pba->pba_block[0],
- pba->pba_block);
- if (ret_val)
- return ret_val;
- } else {
- if (eeprom_buf_size > (u32)(pba->word[1] +
- pba->pba_block[0])) {
- memcpy(&eeprom_buf[pba->word[1]],
- pba->pba_block,
- pba->pba_block[0] * sizeof(u16));
- } else {
- return -E1000_ERR_PARAM;
- }
- }
- }
-
- return E1000_SUCCESS;
-}
-
/**
* e1000_get_pba_block_size
* @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_nvm.h b/drivers/net/e1000/base/e1000_nvm.h
index 056f823537..e48d638795 100644
--- a/drivers/net/e1000/base/e1000_nvm.h
+++ b/drivers/net/e1000/base/e1000_nvm.h
@@ -40,11 +40,6 @@ s32 e1000_read_pba_num_generic(struct e1000_hw *hw, u32 *pba_num);
s32 e1000_read_pba_string_generic(struct e1000_hw *hw, u8 *pba_num,
u32 pba_num_size);
s32 e1000_read_pba_length_generic(struct e1000_hw *hw, u32 *pba_num_size);
-s32 e1000_read_pba_raw(struct e1000_hw *hw, u16 *eeprom_buf,
- u32 eeprom_buf_size, u16 max_pba_block_size,
- struct e1000_pba *pba);
-s32 e1000_write_pba_raw(struct e1000_hw *hw, u16 *eeprom_buf,
- u32 eeprom_buf_size, struct e1000_pba *pba);
s32 e1000_get_pba_block_size(struct e1000_hw *hw, u16 *eeprom_buf,
u32 eeprom_buf_size, u16 *pba_block_size);
s32 e1000_read_nvm_spi(struct e1000_hw *hw, u16 offset, u16 words, u16 *data);
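
One detail of the removed PBA helpers that is easy to miss: word 0 of the
PBA area is overloaded. If it equals the guard value, word 1 is a pointer
to a variable-length block whose first word is the block size (in words,
including itself); otherwise the two words are the PBA data themselves. A
sketch of that dispatch, with a hypothetical read_words() in place of
e1000_read_nvm():

/* Sketch of the removed guard-pointer dispatch; the offset, the guard
 * constant and read_words() are stand-ins for the e1000 NVM API. */
#include <stdint.h>

#define PBA_OFFSET     0x0008   /* assumed location of the two words */
#define PBA_PTR_GUARD  0xFAFA   /* word[0] value: "word[1] is a pointer" */

extern int read_words(uint16_t offset, uint16_t count, uint16_t *out);

static int read_pba(uint16_t *block, uint16_t max_words)
{
	uint16_t word[2], size;

	if (read_words(PBA_OFFSET, 2, word))
		return -1;

	if (word[0] != PBA_PTR_GUARD) {
		/* Legacy layout: the words hold the PBA directly. */
		block[0] = word[0];
		block[1] = word[1];
		return 0;
	}

	/* Pointer layout: the first word of the block is its length. */
	if (read_words(word[1], 1, &size) || size > max_words)
		return -1;
	return read_words(word[1], size, block);
}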
diff --git a/drivers/net/e1000/base/e1000_phy.c b/drivers/net/e1000/base/e1000_phy.c
index 62d0be5080..b3be39f7bd 100644
--- a/drivers/net/e1000/base/e1000_phy.c
+++ b/drivers/net/e1000/base/e1000_phy.c
@@ -545,79 +545,6 @@ s32 e1000_read_sfp_data_byte(struct e1000_hw *hw, u16 offset, u8 *data)
return E1000_SUCCESS;
}
-/**
- * e1000_write_sfp_data_byte - Writes SFP module data.
- * @hw: pointer to the HW structure
- * @offset: byte location offset to write to
- * @data: data to write
- *
- * Writes one byte to SFP module data stored
- * in SFP resided EEPROM memory or SFP diagnostic area.
- * Function should be called with
- * E1000_I2CCMD_SFP_DATA_ADDR(<byte offset>) for SFP module database access
- * E1000_I2CCMD_SFP_DIAG_ADDR(<byte offset>) for SFP diagnostics parameters
- * access
- **/
-s32 e1000_write_sfp_data_byte(struct e1000_hw *hw, u16 offset, u8 data)
-{
- u32 i = 0;
- u32 i2ccmd = 0;
- u32 data_local = 0;
-
- DEBUGFUNC("e1000_write_sfp_data_byte");
-
- if (offset > E1000_I2CCMD_SFP_DIAG_ADDR(255)) {
- DEBUGOUT("I2CCMD command address exceeds upper limit\n");
- return -E1000_ERR_PHY;
- }
- /* The programming interface is 16 bits wide
- * so we need to read the whole word first
- * then update appropriate byte lane and write
- * the updated word back.
- */
- /* Set up Op-code, EEPROM Address,in the I2CCMD
- * register. The MAC will take care of interfacing
- * with an EEPROM to write the data given.
- */
- i2ccmd = ((offset << E1000_I2CCMD_REG_ADDR_SHIFT) |
- E1000_I2CCMD_OPCODE_READ);
- /* Set a command to read single word */
- E1000_WRITE_REG(hw, E1000_I2CCMD, i2ccmd);
- for (i = 0; i < E1000_I2CCMD_PHY_TIMEOUT; i++) {
- usec_delay(50);
- /* Poll the ready bit to see if lastly
- * launched I2C operation completed
- */
- i2ccmd = E1000_READ_REG(hw, E1000_I2CCMD);
- if (i2ccmd & E1000_I2CCMD_READY) {
- /* Check if this is READ or WRITE phase */
- if ((i2ccmd & E1000_I2CCMD_OPCODE_READ) ==
- E1000_I2CCMD_OPCODE_READ) {
- /* Write the selected byte
- * lane and update whole word
- */
- data_local = i2ccmd & 0xFF00;
- data_local |= (u32)data;
- i2ccmd = ((offset <<
- E1000_I2CCMD_REG_ADDR_SHIFT) |
- E1000_I2CCMD_OPCODE_WRITE | data_local);
- E1000_WRITE_REG(hw, E1000_I2CCMD, i2ccmd);
- } else {
- break;
- }
- }
- }
- if (!(i2ccmd & E1000_I2CCMD_READY)) {
- DEBUGOUT("I2CCMD Write did not complete\n");
- return -E1000_ERR_PHY;
- }
- if (i2ccmd & E1000_I2CCMD_ERROR) {
- DEBUGOUT("I2CCMD Error bit set\n");
- return -E1000_ERR_PHY;
- }
- return E1000_SUCCESS;
-}
-
/**
* e1000_read_phy_reg_m88 - Read m88 PHY register
* @hw: pointer to the HW structure
@@ -4083,134 +4010,6 @@ s32 e1000_read_phy_reg_gs40g(struct e1000_hw *hw, u32 offset, u16 *data)
return ret_val;
}
-/**
- * e1000_read_phy_reg_mphy - Read mPHY control register
- * @hw: pointer to the HW structure
- * @address: address to be read
- * @data: pointer to the read data
- *
- * Reads the mPHY control register in the PHY at offset and stores the
- * information read to data.
- **/
-s32 e1000_read_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 *data)
-{
- u32 mphy_ctrl = 0;
- bool locked = false;
- bool ready;
-
- DEBUGFUNC("e1000_read_phy_reg_mphy");
-
- /* Check if mPHY is ready to read/write operations */
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
-
- /* Check if mPHY access is disabled and enable it if so */
- mphy_ctrl = E1000_READ_REG(hw, E1000_MPHY_ADDR_CTRL);
- if (mphy_ctrl & E1000_MPHY_DIS_ACCESS) {
- locked = true;
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
- mphy_ctrl |= E1000_MPHY_ENA_ACCESS;
- E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL, mphy_ctrl);
- }
-
- /* Set the address that we want to read */
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
-
- /* We mask address, because we want to use only current lane */
- mphy_ctrl = (mphy_ctrl & ~E1000_MPHY_ADDRESS_MASK &
- ~E1000_MPHY_ADDRESS_FNC_OVERRIDE) |
- (address & E1000_MPHY_ADDRESS_MASK);
- E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL, mphy_ctrl);
-
- /* Read data from the address */
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
- *data = E1000_READ_REG(hw, E1000_MPHY_DATA);
-
- /* Disable access to mPHY if it was originally disabled */
- if (locked) {
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
- E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL,
- E1000_MPHY_DIS_ACCESS);
- }
-
- return E1000_SUCCESS;
-}
-
-/**
- * e1000_write_phy_reg_mphy - Write mPHY control register
- * @hw: pointer to the HW structure
- * @address: address to write to
- * @data: data to write to register at offset
- * @line_override: used when we want to use different line than default one
- *
- * Writes data to mPHY control register.
- **/
-s32 e1000_write_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 data,
- bool line_override)
-{
- u32 mphy_ctrl = 0;
- bool locked = false;
- bool ready;
-
- DEBUGFUNC("e1000_write_phy_reg_mphy");
-
- /* Check if mPHY is ready to read/write operations */
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
-
- /* Check if mPHY access is disabled and enable it if so */
- mphy_ctrl = E1000_READ_REG(hw, E1000_MPHY_ADDR_CTRL);
- if (mphy_ctrl & E1000_MPHY_DIS_ACCESS) {
- locked = true;
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
- mphy_ctrl |= E1000_MPHY_ENA_ACCESS;
- E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL, mphy_ctrl);
- }
-
- /* Set the address that we want to read */
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
-
- /* We mask address, because we want to use only current lane */
- if (line_override)
- mphy_ctrl |= E1000_MPHY_ADDRESS_FNC_OVERRIDE;
- else
- mphy_ctrl &= ~E1000_MPHY_ADDRESS_FNC_OVERRIDE;
- mphy_ctrl = (mphy_ctrl & ~E1000_MPHY_ADDRESS_MASK) |
- (address & E1000_MPHY_ADDRESS_MASK);
- E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL, mphy_ctrl);
-
- /* Read data from the address */
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
- E1000_WRITE_REG(hw, E1000_MPHY_DATA, data);
-
- /* Disable access to mPHY if it was originally disabled */
- if (locked) {
- ready = e1000_is_mphy_ready(hw);
- if (!ready)
- return -E1000_ERR_PHY;
- E1000_WRITE_REG(hw, E1000_MPHY_ADDR_CTRL,
- E1000_MPHY_DIS_ACCESS);
- }
-
- return E1000_SUCCESS;
-}
-
/**
* e1000_is_mphy_ready - Check if mPHY control register is not busy
* @hw: pointer to the HW structure
diff --git a/drivers/net/e1000/base/e1000_phy.h b/drivers/net/e1000/base/e1000_phy.h
index 81c5308589..fcd1e09f42 100644
--- a/drivers/net/e1000/base/e1000_phy.h
+++ b/drivers/net/e1000/base/e1000_phy.h
@@ -71,7 +71,6 @@ s32 e1000_write_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 data);
s32 e1000_read_phy_reg_i2c(struct e1000_hw *hw, u32 offset, u16 *data);
s32 e1000_write_phy_reg_i2c(struct e1000_hw *hw, u32 offset, u16 data);
s32 e1000_read_sfp_data_byte(struct e1000_hw *hw, u16 offset, u8 *data);
-s32 e1000_write_sfp_data_byte(struct e1000_hw *hw, u16 offset, u8 data);
s32 e1000_read_phy_reg_hv(struct e1000_hw *hw, u32 offset, u16 *data);
s32 e1000_read_phy_reg_hv_locked(struct e1000_hw *hw, u32 offset, u16 *data);
s32 e1000_read_phy_reg_page_hv(struct e1000_hw *hw, u32 offset, u16 *data);
@@ -86,9 +85,6 @@ s32 e1000_phy_force_speed_duplex_82577(struct e1000_hw *hw);
s32 e1000_get_cable_length_82577(struct e1000_hw *hw);
s32 e1000_write_phy_reg_gs40g(struct e1000_hw *hw, u32 offset, u16 data);
s32 e1000_read_phy_reg_gs40g(struct e1000_hw *hw, u32 offset, u16 *data);
-s32 e1000_read_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 *data);
-s32 e1000_write_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 data,
- bool line_override);
bool e1000_is_mphy_ready(struct e1000_hw *hw);
s32 e1000_read_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr,
diff --git a/drivers/net/e1000/base/e1000_vf.c b/drivers/net/e1000/base/e1000_vf.c
index 44ebe07ee4..9b001f9c2e 100644
--- a/drivers/net/e1000/base/e1000_vf.c
+++ b/drivers/net/e1000/base/e1000_vf.c
@@ -411,25 +411,6 @@ void e1000_update_mc_addr_list_vf(struct e1000_hw *hw,
e1000_write_msg_read_ack(hw, msgbuf, E1000_VFMAILBOX_SIZE);
}
-/**
- * e1000_vfta_set_vf - Set/Unset vlan filter table address
- * @hw: pointer to the HW structure
- * @vid: determines the vfta register and bit to set/unset
- * @set: if true then set bit, else clear bit
- **/
-void e1000_vfta_set_vf(struct e1000_hw *hw, u16 vid, bool set)
-{
- u32 msgbuf[2];
-
- msgbuf[0] = E1000_VF_SET_VLAN;
- msgbuf[1] = vid;
- /* Setting the 8 bit field MSG INFO to TRUE indicates "add" */
- if (set)
- msgbuf[0] |= E1000_VF_SET_VLAN_ADD;
-
- e1000_write_msg_read_ack(hw, msgbuf, 2);
-}
-
/** e1000_rlpml_set_vf - Set the maximum receive packet length
* @hw: pointer to the HW structure
* @max_size: value to assign to max frame size
diff --git a/drivers/net/e1000/base/e1000_vf.h b/drivers/net/e1000/base/e1000_vf.h
index 4bec21c935..ff62970132 100644
--- a/drivers/net/e1000/base/e1000_vf.h
+++ b/drivers/net/e1000/base/e1000_vf.h
@@ -260,7 +260,6 @@ enum e1000_promisc_type {
/* These functions must be implemented by drivers */
s32 e1000_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value);
-void e1000_vfta_set_vf(struct e1000_hw *, u16, bool);
void e1000_rlpml_set_vf(struct e1000_hw *, u16);
s32 e1000_promisc_set_vf(struct e1000_hw *, enum e1000_promisc_type);
#endif /* _E1000_VF_H_ */
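
The removed e1000_vfta_set_vf is a good illustration of the VF mailbox
convention: a fixed-size message where word 0 carries the opcode plus flag
bits (here the "add" flag) and word 1 the argument, posted with a
write-then-read-ack. A sketch under assumed constant values, with
post_and_ack() standing in for e1000_write_msg_read_ack():

/* Sketch of the removed two-word mailbox message; the opcode and
 * flag values are assumed, and post_and_ack() is a stand-in. */
#include <stdbool.h>
#include <stdint.h>

#define VF_SET_VLAN      0x02u         /* assumed opcode */
#define VF_SET_VLAN_ADD  (1u << 16)    /* assumed "add" flag bit */

extern int post_and_ack(uint32_t *msg, unsigned int words);

static int vfta_set(uint16_t vid, bool add)
{
	uint32_t msg[2];

	msg[0] = VF_SET_VLAN;
	if (add)
		msg[0] |= VF_SET_VLAN_ADD;	/* set bit = add filter */
	msg[1] = vid;

	return post_and_ack(msg, 2);
}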
diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index aae68721fb..04fd15c998 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -1064,11 +1064,6 @@ static int ena_com_get_feature(struct ena_com_dev *ena_dev,
feature_ver);
}
-int ena_com_get_current_hash_function(struct ena_com_dev *ena_dev)
-{
- return ena_dev->rss.hash_func;
-}
-
static void ena_com_hash_key_fill_default_key(struct ena_com_dev *ena_dev)
{
struct ena_admin_feature_rss_flow_hash_control *hash_key =
@@ -1318,31 +1313,6 @@ static int ena_com_ind_tbl_convert_to_device(struct ena_com_dev *ena_dev)
return 0;
}
-static void ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev,
- u16 intr_delay_resolution)
-{
- u16 prev_intr_delay_resolution = ena_dev->intr_delay_resolution;
-
- if (unlikely(!intr_delay_resolution)) {
- ena_trc_err("Illegal intr_delay_resolution provided. Going to use default 1 usec resolution\n");
- intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION;
- }
-
- /* update Rx */
- ena_dev->intr_moder_rx_interval =
- ena_dev->intr_moder_rx_interval *
- prev_intr_delay_resolution /
- intr_delay_resolution;
-
- /* update Tx */
- ena_dev->intr_moder_tx_interval =
- ena_dev->intr_moder_tx_interval *
- prev_intr_delay_resolution /
- intr_delay_resolution;
-
- ena_dev->intr_delay_resolution = intr_delay_resolution;
-}
-
/*****************************************************************************/
/******************************* API ******************************/
/*****************************************************************************/
@@ -1703,17 +1673,6 @@ void ena_com_set_admin_polling_mode(struct ena_com_dev *ena_dev, bool polling)
ena_dev->admin_queue.polling = polling;
}
-bool ena_com_get_admin_polling_mode(struct ena_com_dev *ena_dev)
-{
- return ena_dev->admin_queue.polling;
-}
-
-void ena_com_set_admin_auto_polling_mode(struct ena_com_dev *ena_dev,
- bool polling)
-{
- ena_dev->admin_queue.auto_polling = polling;
-}
-
int ena_com_mmio_reg_read_request_init(struct ena_com_dev *ena_dev)
{
struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read;
@@ -1942,12 +1901,6 @@ void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid)
ena_com_io_queue_free(ena_dev, io_sq, io_cq);
}
-int ena_com_get_link_params(struct ena_com_dev *ena_dev,
- struct ena_admin_get_feat_resp *resp)
-{
- return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG, 0);
-}
-
int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
struct ena_com_dev_get_features_ctx *get_feat_ctx)
{
@@ -2277,24 +2230,6 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu)
return ret;
}
-int ena_com_get_offload_settings(struct ena_com_dev *ena_dev,
- struct ena_admin_feature_offload_desc *offload)
-{
- int ret;
- struct ena_admin_get_feat_resp resp;
-
- ret = ena_com_get_feature(ena_dev, &resp,
- ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0);
- if (unlikely(ret)) {
- ena_trc_err("Failed to get offload capabilities %d\n", ret);
- return ret;
- }
-
- memcpy(offload, &resp.u.offload, sizeof(resp.u.offload));
-
- return 0;
-}
-
int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
{
struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue;
@@ -2416,44 +2351,6 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
return rc;
}
-int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
- enum ena_admin_hash_functions *func)
-{
- struct ena_rss *rss = &ena_dev->rss;
- struct ena_admin_get_feat_resp get_resp;
- int rc;
-
- if (unlikely(!func))
- return ENA_COM_INVAL;
-
- rc = ena_com_get_feature_ex(ena_dev, &get_resp,
- ENA_ADMIN_RSS_HASH_FUNCTION,
- rss->hash_key_dma_addr,
- sizeof(*rss->hash_key), 0);
- if (unlikely(rc))
- return rc;
-
- /* ENA_FFS() returns 1 in case the lsb is set */
- rss->hash_func = ENA_FFS(get_resp.u.flow_hash_func.selected_func);
- if (rss->hash_func)
- rss->hash_func--;
-
- *func = rss->hash_func;
-
- return 0;
-}
-
-int ena_com_get_hash_key(struct ena_com_dev *ena_dev, u8 *key)
-{
- struct ena_admin_feature_rss_flow_hash_control *hash_key =
- ena_dev->rss.hash_key;
-
- if (key)
- memcpy(key, hash_key->key, (size_t)(hash_key->keys_num) << 2);
-
- return 0;
-}
-
int ena_com_get_hash_ctrl(struct ena_com_dev *ena_dev,
enum ena_admin_flow_hash_proto proto,
u16 *fields)
@@ -2582,43 +2479,6 @@ int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev)
return rc;
}
-int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev,
- enum ena_admin_flow_hash_proto proto,
- u16 hash_fields)
-{
- struct ena_rss *rss = &ena_dev->rss;
- struct ena_admin_feature_rss_hash_control *hash_ctrl = rss->hash_ctrl;
- u16 supported_fields;
- int rc;
-
- if (proto >= ENA_ADMIN_RSS_PROTO_NUM) {
- ena_trc_err("Invalid proto num (%u)\n", proto);
- return ENA_COM_INVAL;
- }
-
- /* Get the ctrl table */
- rc = ena_com_get_hash_ctrl(ena_dev, proto, NULL);
- if (unlikely(rc))
- return rc;
-
- /* Make sure all the fields are supported */
- supported_fields = hash_ctrl->supported_fields[proto].fields;
- if ((hash_fields & supported_fields) != hash_fields) {
- ena_trc_err("proto %d doesn't support the required fields %x. supports only: %x\n",
- proto, hash_fields, supported_fields);
- }
-
- hash_ctrl->selected_fields[proto].fields = hash_fields;
-
- rc = ena_com_set_hash_ctrl(ena_dev);
-
- /* In case of failure, restore the old hash ctrl */
- if (unlikely(rc))
- ena_com_get_hash_ctrl(ena_dev, 0, NULL);
-
- return 0;
-}
-
int ena_com_indirect_table_fill_entry(struct ena_com_dev *ena_dev,
u16 entry_idx, u16 entry_value)
{
@@ -2874,88 +2734,6 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev)
return ret;
}
-/* Interrupt moderation */
-bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev)
-{
- return ena_com_check_supported_feature_id(ena_dev,
- ENA_ADMIN_INTERRUPT_MODERATION);
-}
-
-static int ena_com_update_nonadaptive_moderation_interval(u32 coalesce_usecs,
- u32 intr_delay_resolution,
- u32 *intr_moder_interval)
-{
- if (!intr_delay_resolution) {
- ena_trc_err("Illegal interrupt delay granularity value\n");
- return ENA_COM_FAULT;
- }
-
- *intr_moder_interval = coalesce_usecs / intr_delay_resolution;
-
- return 0;
-}
-
-
-int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev,
- u32 tx_coalesce_usecs)
-{
- return ena_com_update_nonadaptive_moderation_interval(tx_coalesce_usecs,
- ena_dev->intr_delay_resolution,
- &ena_dev->intr_moder_tx_interval);
-}
-
-int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev,
- u32 rx_coalesce_usecs)
-{
- return ena_com_update_nonadaptive_moderation_interval(rx_coalesce_usecs,
- ena_dev->intr_delay_resolution,
- &ena_dev->intr_moder_rx_interval);
-}
-
-int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev)
-{
- struct ena_admin_get_feat_resp get_resp;
- u16 delay_resolution;
- int rc;
-
- rc = ena_com_get_feature(ena_dev, &get_resp,
- ENA_ADMIN_INTERRUPT_MODERATION, 0);
-
- if (rc) {
- if (rc == ENA_COM_UNSUPPORTED) {
- ena_trc_dbg("Feature %d isn't supported\n",
- ENA_ADMIN_INTERRUPT_MODERATION);
- rc = 0;
- } else {
- ena_trc_err("Failed to get interrupt moderation admin cmd. rc: %d\n",
- rc);
- }
-
- /* no moderation supported, disable adaptive support */
- ena_com_disable_adaptive_moderation(ena_dev);
- return rc;
- }
-
- /* if moderation is supported by device we set adaptive moderation */
- delay_resolution = get_resp.u.intr_moderation.intr_delay_resolution;
- ena_com_update_intr_delay_resolution(ena_dev, delay_resolution);
-
- /* Disable adaptive moderation by default - can be enabled later */
- ena_com_disable_adaptive_moderation(ena_dev);
-
- return 0;
-}
-
-unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev)
-{
- return ena_dev->intr_moder_tx_interval;
-}
-
-unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev)
-{
- return ena_dev->intr_moder_rx_interval;
-}
-
int ena_com_config_dev_mode(struct ena_com_dev *ena_dev,
struct ena_admin_feature_llq_desc *llq_features,
struct ena_llq_configurations *llq_default_cfg)
diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h
index 64d8f247cb..f82c9f1876 100644
--- a/drivers/net/ena/base/ena_com.h
+++ b/drivers/net/ena/base/ena_com.h
@@ -483,29 +483,6 @@ bool ena_com_get_admin_running_state(struct ena_com_dev *ena_dev);
*/
void ena_com_set_admin_polling_mode(struct ena_com_dev *ena_dev, bool polling);
-/* ena_com_get_admin_polling_mode - Get the admin completion queue polling mode
- * @ena_dev: ENA communication layer struct
- *
- * Get the admin completion mode.
- * If polling mode is on, ena_com_execute_admin_command will perform a
- * polling on the admin completion queue for the commands completion,
- * otherwise it will wait on wait event.
- *
- * @return state
- */
-bool ena_com_get_admin_polling_mode(struct ena_com_dev *ena_dev);
-
-/* ena_com_set_admin_auto_polling_mode - Enable autoswitch to polling mode
- * @ena_dev: ENA communication layer struct
- * @polling: Enable/Disable polling mode
- *
- * Set the autopolling mode.
- * If autopolling is on:
- * In case of missing interrupt when data is available switch to polling.
- */
-void ena_com_set_admin_auto_polling_mode(struct ena_com_dev *ena_dev,
- bool polling);
-
/* ena_com_admin_q_comp_intr_handler - admin queue interrupt handler
* @ena_dev: ENA communication layer struct
*
@@ -552,18 +529,6 @@ void ena_com_wait_for_abort_completion(struct ena_com_dev *ena_dev);
*/
int ena_com_validate_version(struct ena_com_dev *ena_dev);
-/* ena_com_get_link_params - Retrieve physical link parameters.
- * @ena_dev: ENA communication layer struct
- * @resp: Link parameters
- *
- * Retrieve the physical link parameters,
- * like speed, auto-negotiation and full duplex support.
- *
- * @return - 0 on Success, negative value otherwise.
- */
-int ena_com_get_link_params(struct ena_com_dev *ena_dev,
- struct ena_admin_get_feat_resp *resp);
-
/* ena_com_get_dma_width - Retrieve physical dma address width the device
* supports.
* @ena_dev: ENA communication layer struct
@@ -619,15 +584,6 @@ int ena_com_get_eni_stats(struct ena_com_dev *ena_dev,
*/
int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu);
-/* ena_com_get_offload_settings - Retrieve the device offloads capabilities
- * @ena_dev: ENA communication layer struct
- * @offload: offload return value
- *
- * @return: 0 on Success and negative value otherwise.
- */
-int ena_com_get_offload_settings(struct ena_com_dev *ena_dev,
- struct ena_admin_feature_offload_desc *offload);
-
/* ena_com_rss_init - Init RSS
* @ena_dev: ENA communication layer struct
* @log_size: indirection log size
@@ -647,14 +603,6 @@ int ena_com_rss_init(struct ena_com_dev *ena_dev, u16 log_size);
*/
void ena_com_rss_destroy(struct ena_com_dev *ena_dev);
-/* ena_com_get_current_hash_function - Get RSS hash function
- * @ena_dev: ENA communication layer struct
- *
- * Return the current hash function.
- * @return: 0 or one of the ena_admin_hash_functions values.
- */
-int ena_com_get_current_hash_function(struct ena_com_dev *ena_dev);
-
/* ena_com_fill_hash_function - Fill RSS hash function
* @ena_dev: ENA communication layer struct
* @func: The hash function (Toeplitz or crc)
@@ -686,48 +634,6 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
*/
int ena_com_set_hash_function(struct ena_com_dev *ena_dev);
-/* ena_com_get_hash_function - Retrieve the hash function from the device.
- * @ena_dev: ENA communication layer struct
- * @func: hash function
- *
- * Retrieve the hash function from the device.
- *
- * @note: If the caller called ena_com_fill_hash_function but didn't flush
- * it to the device, the new configuration will be lost.
- *
- * @return: 0 on Success and negative value otherwise.
- */
-int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
- enum ena_admin_hash_functions *func);
-
-/* ena_com_get_hash_key - Retrieve the hash key
- * @ena_dev: ENA communication layer struct
- * @key: hash key
- *
- * Retrieve the hash key.
- *
- * @note: If the caller called ena_com_fill_hash_key but didn't flush
- * it to the device, the new configuration will be lost.
- *
- * @return: 0 on Success and negative value otherwise.
- */
-int ena_com_get_hash_key(struct ena_com_dev *ena_dev, u8 *key);
-/* ena_com_fill_hash_ctrl - Fill RSS hash control
- * @ena_dev: ENA communication layer struct.
- * @proto: The protocol to configure.
- * @hash_fields: bit mask of ena_admin_flow_hash_fields
- *
- * Fill the ena_dev resources with the desired hash control (the ethernet
- * fields that take part of the hash) for a specific protocol.
- * To flush the hash control to the device, the caller should call
- * ena_com_set_hash_ctrl.
- *
- * @return: 0 on Success and negative value otherwise.
- */
-int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev,
- enum ena_admin_flow_hash_proto proto,
- u16 hash_fields);
-
/* ena_com_set_hash_ctrl - Flush the hash control resources to the device.
* @ena_dev: ENA communication layer struct
*
@@ -884,56 +790,6 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue,
struct ena_admin_acq_entry *cmd_comp,
size_t cmd_comp_size);
-/* ena_com_init_interrupt_moderation - Init interrupt moderation
- * @ena_dev: ENA communication layer struct
- *
- * @return - 0 on success, negative value on failure.
- */
-int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev);
-
-/* ena_com_interrupt_moderation_supported - Return if interrupt moderation
- * capability is supported by the device.
- *
- * @return - supported or not.
- */
-bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev);
-
-/* ena_com_update_nonadaptive_moderation_interval_tx - Update the
- * non-adaptive interval in Tx direction.
- * @ena_dev: ENA communication layer struct
- * @tx_coalesce_usecs: Interval in usec.
- *
- * @return - 0 on success, negative value on failure.
- */
-int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev,
- u32 tx_coalesce_usecs);
-
-/* ena_com_update_nonadaptive_moderation_interval_rx - Update the
- * non-adaptive interval in Rx direction.
- * @ena_dev: ENA communication layer struct
- * @rx_coalesce_usecs: Interval in usec.
- *
- * @return - 0 on success, negative value on failure.
- */
-int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev,
- u32 rx_coalesce_usecs);
-
-/* ena_com_get_nonadaptive_moderation_interval_tx - Retrieve the
- * non-adaptive interval in Tx direction.
- * @ena_dev: ENA communication layer struct
- *
- * @return - interval in usec
- */
-unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev);
-
-/* ena_com_get_nonadaptive_moderation_interval_rx - Retrieve the
- * non-adaptive interval in Rx direction.
- * @ena_dev: ENA communication layer struct
- *
- * @return - interval in usec
- */
-unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev);
-
/* ena_com_config_dev_mode - Configure the placement policy of the device.
* @ena_dev: ENA communication layer struct
* @llq_features: LLQ feature descriptor, retrieve via
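
On the ena moderation helpers dropped above: the arithmetic is simply a
unit conversion. A user interval in microseconds becomes device ticks via
interval = usecs / resolution, and when the device advertises a new
resolution the stored tick counts are rescaled by prev_resolution /
new_resolution so the effective microsecond value is preserved. A worked
sketch (plain C, not the ena_com API):

/* Sketch of the removed interval/resolution arithmetic. */
#include <stdint.h>
#include <stdio.h>

static uint32_t usecs_to_ticks(uint32_t usecs, uint32_t res)
{
	return res ? usecs / res : 0;	/* guard the divide */
}

/* Keep the same microsecond value across a resolution change. */
static uint32_t rescale(uint32_t ticks, uint32_t prev_res, uint32_t new_res)
{
	return ticks * prev_res / new_res;
}

int main(void)
{
	uint32_t rx = usecs_to_ticks(64, 4);	/* 64 us at 4 us/tick = 16 */

	rx = rescale(rx, 4, 2);			/* 2 us/tick -> 32 ticks */
	printf("rx interval: %u ticks (still 64 us)\n", rx);
	return 0;
}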
diff --git a/drivers/net/ena/base/ena_eth_com.c b/drivers/net/ena/base/ena_eth_com.c
index a35d92fbd3..05ab030d07 100644
--- a/drivers/net/ena/base/ena_eth_com.c
+++ b/drivers/net/ena/base/ena_eth_com.c
@@ -613,14 +613,3 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
return ena_com_sq_update_tail(io_sq);
}
-
-bool ena_com_cq_empty(struct ena_com_io_cq *io_cq)
-{
- struct ena_eth_io_rx_cdesc_base *cdesc;
-
- cdesc = ena_com_get_next_rx_cdesc(io_cq);
- if (cdesc)
- return false;
- else
- return true;
-}
diff --git a/drivers/net/ena/base/ena_eth_com.h b/drivers/net/ena/base/ena_eth_com.h
index 7dda16cd9f..3799f08bf4 100644
--- a/drivers/net/ena/base/ena_eth_com.h
+++ b/drivers/net/ena/base/ena_eth_com.h
@@ -64,8 +64,6 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
struct ena_com_buf *ena_buf,
u16 req_id);
-bool ena_com_cq_empty(struct ena_com_io_cq *io_cq);
-
static inline void ena_com_unmask_intr(struct ena_com_io_cq *io_cq,
struct ena_eth_io_intr_reg *intr_reg)
{
diff --git a/drivers/net/fm10k/base/fm10k_api.c b/drivers/net/fm10k/base/fm10k_api.c
index dfb50a10d1..631babcdd6 100644
--- a/drivers/net/fm10k/base/fm10k_api.c
+++ b/drivers/net/fm10k/base/fm10k_api.c
@@ -140,34 +140,6 @@ s32 fm10k_start_hw(struct fm10k_hw *hw)
FM10K_NOT_IMPLEMENTED);
}
-/**
- * fm10k_get_bus_info - Set PCI bus info
- * @hw: pointer to hardware structure
- *
- * Sets the PCI bus info (speed, width, type) within the fm10k_hw structure
- **/
-s32 fm10k_get_bus_info(struct fm10k_hw *hw)
-{
- return fm10k_call_func(hw, hw->mac.ops.get_bus_info, (hw),
- FM10K_NOT_IMPLEMENTED);
-}
-
-#ifndef NO_IS_SLOT_APPROPRIATE_CHECK
-/**
- * fm10k_is_slot_appropriate - Indicate appropriate slot for this SKU
- * @hw: pointer to hardware structure
- *
- * Looks at the PCIe bus info to confirm whether or not this slot can support
- * the necessary bandwidth for this device.
- **/
-bool fm10k_is_slot_appropriate(struct fm10k_hw *hw)
-{
- if (hw->mac.ops.is_slot_appropriate)
- return hw->mac.ops.is_slot_appropriate(hw);
- return true;
-}
-
-#endif
/**
* fm10k_update_vlan - Clear VLAN ID to VLAN filter table
* @hw: pointer to hardware structure
@@ -233,36 +205,6 @@ void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats)
}
}
-/**
- * fm10k_configure_dglort_map - Configures GLORT entry and queues
- * @hw: pointer to hardware structure
- * @dglort: pointer to dglort configuration structure
- *
- * Reads the configuration structure contained in dglort_cfg and uses
- * that information to then populate a DGLORTMAP/DEC entry and the queues
- * to which it has been assigned.
- **/
-s32 fm10k_configure_dglort_map(struct fm10k_hw *hw,
- struct fm10k_dglort_cfg *dglort)
-{
- return fm10k_call_func(hw, hw->mac.ops.configure_dglort_map,
- (hw, dglort), FM10K_NOT_IMPLEMENTED);
-}
-
-/**
- * fm10k_set_dma_mask - Configures PhyAddrSpace to limit DMA to system
- * @hw: pointer to hardware structure
- * @dma_mask: 64 bit DMA mask required for platform
- *
- * This function configures the endpoint to limit the access to memory
- * beyond what is physically in the system.
- **/
-void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask)
-{
- if (hw->mac.ops.set_dma_mask)
- hw->mac.ops.set_dma_mask(hw, dma_mask);
-}
-
/**
* fm10k_get_fault - Record a fault in one of the interface units
* @hw: pointer to hardware structure
@@ -298,49 +240,3 @@ s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport,
(hw, lport, mac, vid, add, flags),
FM10K_NOT_IMPLEMENTED);
}
-
-/**
- * fm10k_update_mc_addr - Update device multicast address
- * @hw: pointer to the HW structure
- * @lport: logical port ID to update - unused
- * @mac: MAC address to add/remove from table
- * @vid: VLAN ID to add/remove from table
- * @add: Indicates if this is an add or remove operation
- *
- * This function is used to add or remove multicast MAC addresses
- **/
-s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport,
- const u8 *mac, u16 vid, bool add)
-{
- return fm10k_call_func(hw, hw->mac.ops.update_mc_addr,
- (hw, lport, mac, vid, add),
- FM10K_NOT_IMPLEMENTED);
-}
-
-/**
- * fm10k_adjust_systime - Adjust systime frequency
- * @hw: pointer to hardware structure
- * @ppb: adjustment rate in parts per billion
- *
- * This function is meant to update the frequency of the clock represented
- * by the SYSTIME register.
- **/
-s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb)
-{
- return fm10k_call_func(hw, hw->mac.ops.adjust_systime,
- (hw, ppb), FM10K_NOT_IMPLEMENTED);
-}
-
-/**
- * fm10k_notify_offset - Notify switch of change in PTP offset
- * @hw: pointer to hardware structure
- * @offset: 64bit unsigned offset from hardware SYSTIME value
- *
- * This function is meant to notify switch of change in the PTP offset for
- * the hardware SYSTIME registers.
- **/
-s32 fm10k_notify_offset(struct fm10k_hw *hw, u64 offset)
-{
- return fm10k_call_func(hw, hw->mac.ops.notify_offset,
- (hw, offset), FM10K_NOT_IMPLEMENTED);
-}
diff --git a/drivers/net/fm10k/base/fm10k_api.h b/drivers/net/fm10k/base/fm10k_api.h
index d9593bba00..4ffe41cd08 100644
--- a/drivers/net/fm10k/base/fm10k_api.h
+++ b/drivers/net/fm10k/base/fm10k_api.h
@@ -14,22 +14,11 @@ s32 fm10k_init_hw(struct fm10k_hw *hw);
s32 fm10k_stop_hw(struct fm10k_hw *hw);
s32 fm10k_start_hw(struct fm10k_hw *hw);
s32 fm10k_init_shared_code(struct fm10k_hw *hw);
-s32 fm10k_get_bus_info(struct fm10k_hw *hw);
-#ifndef NO_IS_SLOT_APPROPRIATE_CHECK
-bool fm10k_is_slot_appropriate(struct fm10k_hw *hw);
-#endif
s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set);
s32 fm10k_read_mac_addr(struct fm10k_hw *hw);
void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats);
void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats);
-s32 fm10k_configure_dglort_map(struct fm10k_hw *hw,
- struct fm10k_dglort_cfg *dglort);
-void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask);
s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault);
s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport,
const u8 *mac, u16 vid, bool add, u8 flags);
-s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport,
- const u8 *mac, u16 vid, bool add);
-s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb);
-s32 fm10k_notify_offset(struct fm10k_hw *hw, u64 offset);
#endif /* _FM10K_API_H_ */
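
The fm10k wrappers deleted here all reduce to one pattern: a trampoline
through the per-MAC ops table that returns FM10K_NOT_IMPLEMENTED (or
silently no-ops) when the backend never registered the operation. A
stripped-down sketch of that dispatch, with hypothetical types in place of
the fm10k structures:

/* Sketch of the ops-table trampoline behind the removed wrappers;
 * the struct layout and the sentinel value are assumptions. */
#define NOT_IMPLEMENTED (-1)

struct hw;

struct mac_ops {
	int (*get_bus_info)(struct hw *hw);
};

struct hw {
	struct mac_ops ops;
};

static int call_get_bus_info(struct hw *hw)
{
	/* Dispatch if the MAC registered the op, else report the
	 * sentinel so callers can treat it as "unsupported". */
	if (hw->ops.get_bus_info)
		return hw->ops.get_bus_info(hw);
	return NOT_IMPLEMENTED;
}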
diff --git a/drivers/net/fm10k/base/fm10k_tlv.c b/drivers/net/fm10k/base/fm10k_tlv.c
index adffc1bcef..72b0ffd4cb 100644
--- a/drivers/net/fm10k/base/fm10k_tlv.c
+++ b/drivers/net/fm10k/base/fm10k_tlv.c
@@ -24,59 +24,6 @@ s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id)
return FM10K_SUCCESS;
}
-/**
- * fm10k_tlv_attr_put_null_string - Place null terminated string on message
- * @msg: Pointer to message block
- * @attr_id: Attribute ID
- * @string: Pointer to string to be stored in attribute
- *
- * This function will reorder a string to be CPU endian and store it in
- * the attribute buffer. It will return success if provided with valid
- * pointers.
- **/
-static s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id,
- const unsigned char *string)
-{
- u32 attr_data = 0, len = 0;
- u32 *attr;
-
- DEBUGFUNC("fm10k_tlv_attr_put_null_string");
-
- /* verify pointers are not NULL */
- if (!string || !msg)
- return FM10K_ERR_PARAM;
-
- attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
-
- /* copy string into local variable and then write to msg */
- do {
- /* write data to message */
- if (len && !(len % 4)) {
- attr[len / 4] = attr_data;
- attr_data = 0;
- }
-
- /* record character to offset location */
- attr_data |= (u32)(*string) << (8 * (len % 4));
- len++;
-
- /* test for NULL and then increment */
- } while (*(string++));
-
- /* write last piece of data to message */
- attr[(len + 3) / 4] = attr_data;
-
- /* record attribute header, update message length */
- len <<= FM10K_TLV_LEN_SHIFT;
- attr[0] = len | attr_id;
-
- /* add header length to length */
- len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
- *msg += FM10K_TLV_LEN_ALIGN(len);
-
- return FM10K_SUCCESS;
-}
-
/**
* fm10k_tlv_attr_get_null_string - Get null terminated string from attribute
* @attr: Pointer to attribute
@@ -346,68 +293,6 @@ s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len)
return FM10K_SUCCESS;
}
-/**
- * fm10k_tlv_attr_nest_start - Start a set of nested attributes
- * @msg: Pointer to message block
- * @attr_id: Attribute ID
- *
- * This function will mark off a new nested region for encapsulating
- * a given set of attributes. The idea is if you wish to place a secondary
- * structure within the message this mechanism allows for that. The
- * function will return NULL on failure, and a pointer to the start
- * of the nested attributes on success.
- **/
-static u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id)
-{
- u32 *attr;
-
- DEBUGFUNC("fm10k_tlv_attr_nest_start");
-
- /* verify pointer is not NULL */
- if (!msg)
- return NULL;
-
- attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
-
- attr[0] = attr_id;
-
- /* return pointer to nest header */
- return attr;
-}
-
-/**
- * fm10k_tlv_attr_nest_stop - Stop a set of nested attributes
- * @msg: Pointer to message block
- *
- * This function closes off an existing set of nested attributes. The
- * message pointer should be pointing to the parent of the nest. So in
- * the case of a nest within the nest this would be the outer nest pointer.
- * This function will return success provided all pointers are valid.
- **/
-static s32 fm10k_tlv_attr_nest_stop(u32 *msg)
-{
- u32 *attr;
- u32 len;
-
- DEBUGFUNC("fm10k_tlv_attr_nest_stop");
-
- /* verify pointer is not NULL */
- if (!msg)
- return FM10K_ERR_PARAM;
-
- /* locate the nested header and retrieve its length */
- attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
- len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT;
-
- /* only include nest if data was added to it */
- if (len) {
- len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
- *msg += len;
- }
-
- return FM10K_SUCCESS;
-}
-
/**
* fm10k_tlv_attr_validate - Validate attribute metadata
* @attr: Pointer to attribute
@@ -661,74 +546,6 @@ const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = {
FM10K_TLV_ATTR_LAST
};
-/**
- * fm10k_tlv_msg_test_generate_data - Stuff message with data
- * @msg: Pointer to message
- * @attr_flags: List of flags indicating what attributes to add
- *
- * This function is meant to load a message buffer with attribute data
- **/
-STATIC void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags)
-{
- DEBUGFUNC("fm10k_tlv_msg_test_generate_data");
-
- if (attr_flags & BIT(FM10K_TEST_MSG_STRING))
- fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING,
- test_str);
- if (attr_flags & BIT(FM10K_TEST_MSG_MAC_ADDR))
- fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR,
- test_mac, test_vlan);
- if (attr_flags & BIT(FM10K_TEST_MSG_U8))
- fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8);
- if (attr_flags & BIT(FM10K_TEST_MSG_U16))
- fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16);
- if (attr_flags & BIT(FM10K_TEST_MSG_U32))
- fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32);
- if (attr_flags & BIT(FM10K_TEST_MSG_U64))
- fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64);
- if (attr_flags & BIT(FM10K_TEST_MSG_S8))
- fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8);
- if (attr_flags & BIT(FM10K_TEST_MSG_S16))
- fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16);
- if (attr_flags & BIT(FM10K_TEST_MSG_S32))
- fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32);
- if (attr_flags & BIT(FM10K_TEST_MSG_S64))
- fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64);
- if (attr_flags & BIT(FM10K_TEST_MSG_LE_STRUCT))
- fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT,
- test_le, 8);
-}
-
-/**
- * fm10k_tlv_msg_test_create - Create a test message testing all attributes
- * @msg: Pointer to message
- * @attr_flags: List of flags indicating what attributes to add
- *
- * This function is meant to load a message buffer with all attribute types
- * including a nested attribute.
- **/
-void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags)
-{
- u32 *nest = NULL;
-
- DEBUGFUNC("fm10k_tlv_msg_test_create");
-
- fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST);
-
- fm10k_tlv_msg_test_generate_data(msg, attr_flags);
-
- /* check for nested attributes */
- attr_flags >>= FM10K_TEST_MSG_NESTED;
-
- if (attr_flags) {
- nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED);
-
- fm10k_tlv_msg_test_generate_data(nest, attr_flags);
-
- fm10k_tlv_attr_nest_stop(msg);
- }
-}
-
/**
* fm10k_tlv_msg_test - Validate all results on test message receive
* @hw: Pointer to hardware structure
diff --git a/drivers/net/fm10k/base/fm10k_tlv.h b/drivers/net/fm10k/base/fm10k_tlv.h
index af2e4c76a3..1665709d3d 100644
--- a/drivers/net/fm10k/base/fm10k_tlv.h
+++ b/drivers/net/fm10k/base/fm10k_tlv.h
@@ -155,7 +155,6 @@ enum fm10k_tlv_test_attr_id {
};
extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[];
-void fm10k_tlv_msg_test_create(u32 *, u32);
s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
#define FM10K_TLV_MSG_TEST_HANDLER(func) \
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index e20bb9ac35..b93000a2aa 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -1115,32 +1115,6 @@ enum i40e_status_code i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr)
return status;
}
-/**
- * i40e_get_port_mac_addr - get Port MAC address
- * @hw: pointer to the HW structure
- * @mac_addr: pointer to Port MAC address
- *
- * Reads the adapter's Port MAC address
- **/
-enum i40e_status_code i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr)
-{
- struct i40e_aqc_mac_address_read_data addrs;
- enum i40e_status_code status;
- u16 flags = 0;
-
- status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL);
- if (status)
- return status;
-
- if (flags & I40E_AQC_PORT_ADDR_VALID)
- i40e_memcpy(mac_addr, &addrs.port_mac, sizeof(addrs.port_mac),
- I40E_NONDMA_TO_NONDMA);
- else
- status = I40E_ERR_INVALID_MAC_ADDR;
-
- return status;
-}
-
/**
* i40e_pre_tx_queue_cfg - pre tx queue configure
* @hw: pointer to the HW structure
@@ -1173,92 +1147,6 @@ void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable)
wr32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block), reg_val);
}
-/**
- * i40e_get_san_mac_addr - get SAN MAC address
- * @hw: pointer to the HW structure
- * @mac_addr: pointer to SAN MAC address
- *
- * Reads the adapter's SAN MAC address from NVM
- **/
-enum i40e_status_code i40e_get_san_mac_addr(struct i40e_hw *hw,
- u8 *mac_addr)
-{
- struct i40e_aqc_mac_address_read_data addrs;
- enum i40e_status_code status;
- u16 flags = 0;
-
- status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL);
- if (status)
- return status;
-
- if (flags & I40E_AQC_SAN_ADDR_VALID)
- i40e_memcpy(mac_addr, &addrs.pf_san_mac, sizeof(addrs.pf_san_mac),
- I40E_NONDMA_TO_NONDMA);
- else
- status = I40E_ERR_INVALID_MAC_ADDR;
-
- return status;
-}
-
-/**
- * i40e_read_pba_string - Reads part number string from EEPROM
- * @hw: pointer to hardware structure
- * @pba_num: stores the part number string from the EEPROM
- * @pba_num_size: part number string buffer length
- *
- * Reads the part number string from the EEPROM.
- **/
-enum i40e_status_code i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
- u32 pba_num_size)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- u16 pba_word = 0;
- u16 pba_size = 0;
- u16 pba_ptr = 0;
- u16 i = 0;
-
- status = i40e_read_nvm_word(hw, I40E_SR_PBA_FLAGS, &pba_word);
- if ((status != I40E_SUCCESS) || (pba_word != 0xFAFA)) {
- DEBUGOUT("Failed to read PBA flags or flag is invalid.\n");
- return status;
- }
-
- status = i40e_read_nvm_word(hw, I40E_SR_PBA_BLOCK_PTR, &pba_ptr);
- if (status != I40E_SUCCESS) {
- DEBUGOUT("Failed to read PBA Block pointer.\n");
- return status;
- }
-
- status = i40e_read_nvm_word(hw, pba_ptr, &pba_size);
- if (status != I40E_SUCCESS) {
- DEBUGOUT("Failed to read PBA Block size.\n");
- return status;
- }
-
- /* Subtract one to get PBA word count (PBA Size word is included in
- * total size)
- */
- pba_size--;
- if (pba_num_size < (((u32)pba_size * 2) + 1)) {
- DEBUGOUT("Buffer to small for PBA data.\n");
- return I40E_ERR_PARAM;
- }
-
- for (i = 0; i < pba_size; i++) {
- status = i40e_read_nvm_word(hw, (pba_ptr + 1) + i, &pba_word);
- if (status != I40E_SUCCESS) {
- DEBUGOUT1("Failed to read PBA Block word %d.\n", i);
- return status;
- }
-
- pba_num[(i * 2)] = (pba_word >> 8) & 0xFF;
- pba_num[(i * 2) + 1] = pba_word & 0xFF;
- }
- pba_num[(pba_size * 2)] = '\0';
-
- return status;
-}
-
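(The removed PBA reader above unpacks each 16-bit NVM word into two ASCII
characters, high byte first, and NUL-terminates the result. A standalone
sketch of that byte order, plain C with no driver headers needed:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
                uint16_t words[] = { 0x4642, 0x3132 };  /* encodes "FB12" */
                char pba[5];
                unsigned int i;

                for (i = 0; i < 2; i++) {
                        pba[i * 2] = (words[i] >> 8) & 0xFF;
                        pba[i * 2 + 1] = words[i] & 0xFF;
                }
                pba[4] = '\0';
                printf("%s\n", pba);    /* prints FB12 */
                return 0;
        }
)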
/**
* i40e_get_media_type - Gets media type
* @hw: pointer to the hardware structure
@@ -1970,36 +1858,6 @@ enum i40e_status_code i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_aq_set_link_restart_an
- * @hw: pointer to the hw struct
- * @enable_link: if true: enable link, if false: disable link
- * @cmd_details: pointer to command details structure or NULL
- *
- * Sets up the link and restarts the Auto-Negotiation over the link.
- **/
-enum i40e_status_code i40e_aq_set_link_restart_an(struct i40e_hw *hw,
- bool enable_link, struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_set_link_restart_an *cmd =
- (struct i40e_aqc_set_link_restart_an *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_link_restart_an);
-
- cmd->command = I40E_AQ_PHY_RESTART_AN;
- if (enable_link)
- cmd->command |= I40E_AQ_PHY_LINK_ENABLE;
- else
- cmd->command &= ~I40E_AQ_PHY_LINK_ENABLE;
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
/**
* i40e_aq_get_link_info
* @hw: pointer to the hw struct
@@ -2127,98 +1985,6 @@ enum i40e_status_code i40e_aq_set_phy_int_mask(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_aq_get_local_advt_reg
- * @hw: pointer to the hw struct
- * @advt_reg: local AN advertisement register value
- * @cmd_details: pointer to command details structure or NULL
- *
- * Get the Local AN advertisement register value.
- **/
-enum i40e_status_code i40e_aq_get_local_advt_reg(struct i40e_hw *hw,
- u64 *advt_reg,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_an_advt_reg *resp =
- (struct i40e_aqc_an_advt_reg *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_get_local_advt_reg);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (status != I40E_SUCCESS)
- goto aq_get_local_advt_reg_exit;
-
- *advt_reg = (u64)(LE16_TO_CPU(resp->local_an_reg1)) << 32;
- *advt_reg |= LE32_TO_CPU(resp->local_an_reg0);
-
-aq_get_local_advt_reg_exit:
- return status;
-}
-
-/**
- * i40e_aq_set_local_advt_reg
- * @hw: pointer to the hw struct
- * @advt_reg: local AN advertisement register value
- * @cmd_details: pointer to command details structure or NULL
- *
- * Get the Local AN advertisement register value.
- **/
-enum i40e_status_code i40e_aq_set_local_advt_reg(struct i40e_hw *hw,
- u64 advt_reg,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_an_advt_reg *cmd =
- (struct i40e_aqc_an_advt_reg *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_get_local_advt_reg);
-
- cmd->local_an_reg0 = CPU_TO_LE32(I40E_LO_DWORD(advt_reg));
- cmd->local_an_reg1 = CPU_TO_LE16(I40E_HI_DWORD(advt_reg));
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_get_partner_advt
- * @hw: pointer to the hw struct
- * @advt_reg: AN partner advertisement register value
- * @cmd_details: pointer to command details structure or NULL
- *
- * Get the link partner AN advertisement register value.
- **/
-enum i40e_status_code i40e_aq_get_partner_advt(struct i40e_hw *hw,
- u64 *advt_reg,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_an_advt_reg *resp =
- (struct i40e_aqc_an_advt_reg *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_get_partner_advt);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (status != I40E_SUCCESS)
- goto aq_get_partner_advt_exit;
-
- *advt_reg = (u64)(LE16_TO_CPU(resp->local_an_reg1)) << 32;
- *advt_reg |= LE32_TO_CPU(resp->local_an_reg0);
-
-aq_get_partner_advt_exit:
- return status;
-}
-
/**
* i40e_aq_set_lb_modes
* @hw: pointer to the hw struct
@@ -2246,32 +2012,6 @@ enum i40e_status_code i40e_aq_set_lb_modes(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_aq_set_phy_debug
- * @hw: pointer to the hw struct
- * @cmd_flags: debug command flags
- * @cmd_details: pointer to command details structure or NULL
- *
- * Reset the external PHY.
- **/
-enum i40e_status_code i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_set_phy_debug *cmd =
- (struct i40e_aqc_set_phy_debug *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_phy_debug);
-
- cmd->command_flags = cmd_flags;
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
/**
* i40e_hw_ver_ge
* @hw: pointer to the hw struct
@@ -2333,62 +2073,6 @@ enum i40e_status_code i40e_aq_add_vsi(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_aq_set_default_vsi
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_set_default_vsi(struct i40e_hw *hw,
- u16 seid,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
- (struct i40e_aqc_set_vsi_promiscuous_modes *)
- &desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_vsi_promiscuous_modes);
-
- cmd->promiscuous_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT);
- cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT);
- cmd->seid = CPU_TO_LE16(seid);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_clear_default_vsi
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_clear_default_vsi(struct i40e_hw *hw,
- u16 seid,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
- (struct i40e_aqc_set_vsi_promiscuous_modes *)
- &desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_vsi_promiscuous_modes);
-
- cmd->promiscuous_flags = CPU_TO_LE16(0);
- cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT);
- cmd->seid = CPU_TO_LE16(seid);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
/**
* i40e_aq_set_vsi_unicast_promiscuous
* @hw: pointer to the hw struct
@@ -2463,36 +2147,34 @@ enum i40e_status_code i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
}
/**
-* i40e_aq_set_vsi_full_promiscuous
-* @hw: pointer to the hw struct
-* @seid: VSI number
-* @set: set promiscuous enable/disable
-* @cmd_details: pointer to command details structure or NULL
-**/
-enum i40e_status_code i40e_aq_set_vsi_full_promiscuous(struct i40e_hw *hw,
- u16 seid, bool set,
+ * i40e_aq_set_vsi_broadcast
+ * @hw: pointer to the hw struct
+ * @seid: vsi number
+ * @set_filter: true to set filter, false to clear filter
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set or clear the broadcast promiscuous flag (filter) for a given VSI.
+ **/
+enum i40e_status_code i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
+ u16 seid, bool set_filter,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
(struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
enum i40e_status_code status;
- u16 flags = 0;
i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_vsi_promiscuous_modes);
-
- if (set)
- flags = I40E_AQC_SET_VSI_PROMISC_UNICAST |
- I40E_AQC_SET_VSI_PROMISC_MULTICAST |
- I40E_AQC_SET_VSI_PROMISC_BROADCAST;
-
- cmd->promiscuous_flags = CPU_TO_LE16(flags);
+ i40e_aqc_opc_set_vsi_promiscuous_modes);
- cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_UNICAST |
- I40E_AQC_SET_VSI_PROMISC_MULTICAST |
- I40E_AQC_SET_VSI_PROMISC_BROADCAST);
+ if (set_filter)
+ cmd->promiscuous_flags
+ |= CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
+ else
+ cmd->promiscuous_flags
+ &= CPU_TO_LE16(~I40E_AQC_SET_VSI_PROMISC_BROADCAST);
+ cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
cmd->seid = CPU_TO_LE16(seid);
status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
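(The relocated i40e_aq_set_vsi_broadcast() above takes a bool set_filter to
turn the broadcast filter on or off. A hedged caller sketch — the example_*
wrapper and the SEID value are illustrative, and the i40e base headers plus
an initialized struct i40e_hw are assumed:

        static enum i40e_status_code
        example_enable_broadcast(struct i40e_hw *hw)
        {
                u16 vsi_seid = 0x18a;   /* hypothetical VSI SEID */

                /* set_filter = true turns broadcast acceptance on */
                return i40e_aq_set_vsi_broadcast(hw, vsi_seid, true, NULL);
        }
)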
@@ -2500,15 +2182,14 @@ enum i40e_status_code i40e_aq_set_vsi_full_promiscuous(struct i40e_hw *hw,
}
/**
- * i40e_aq_set_vsi_mc_promisc_on_vlan
+ * i40e_aq_set_vsi_vlan_promisc - control the VLAN promiscuous setting
* @hw: pointer to the hw struct
* @seid: vsi number
* @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN
- * @vid: The VLAN tag filter - capture any multicast packet with this VLAN tag
* @cmd_details: pointer to command details structure or NULL
**/
-enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable, u16 vid,
+enum i40e_status_code i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
+ u16 seid, bool enable,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
@@ -2519,14 +2200,12 @@ enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_vsi_promiscuous_modes);
-
if (enable)
- flags |= I40E_AQC_SET_VSI_PROMISC_MULTICAST;
+ flags |= I40E_AQC_SET_VSI_PROMISC_VLAN;
cmd->promiscuous_flags = CPU_TO_LE16(flags);
- cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_MULTICAST);
+ cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_VLAN);
cmd->seid = CPU_TO_LE16(seid);
- cmd->vlan_tag = CPU_TO_LE16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
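(The promiscuous-mode setters in this block share one AdminQ pattern:
valid_flags selects which bits firmware should act on, promiscuous_flags
carries the values. A hedged sketch enabling VLAN promiscuous mode through
the renamed i40e_aq_set_vsi_vlan_promisc(); the wrapper is illustrative and
the i40e base headers are assumed:

        static enum i40e_status_code
        example_enable_vlan_promisc(struct i40e_hw *hw, u16 vsi_seid)
        {
                /* sets I40E_AQC_SET_VSI_PROMISC_VLAN in both flag words */
                return i40e_aq_set_vsi_vlan_promisc(hw, vsi_seid, true, NULL);
        }
)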
@@ -2534,166 +2213,26 @@ enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
}
/**
- * i40e_aq_set_vsi_uc_promisc_on_vlan
+ * i40e_get_vsi_params - get VSI configuration info
* @hw: pointer to the hw struct
- * @seid: vsi number
- * @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN
- * @vid: The VLAN tag filter - capture any unicast packet with this VLAN tag
+ * @vsi_ctx: pointer to a vsi context struct
* @cmd_details: pointer to command details structure or NULL
**/
-enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable, u16 vid,
+enum i40e_status_code i40e_aq_get_vsi_params(struct i40e_hw *hw,
+ struct i40e_vsi_context *vsi_ctx,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
- struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
- (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
+ struct i40e_aqc_add_get_update_vsi *cmd =
+ (struct i40e_aqc_add_get_update_vsi *)&desc.params.raw;
+ struct i40e_aqc_add_get_update_vsi_completion *resp =
+ (struct i40e_aqc_add_get_update_vsi_completion *)
+ &desc.params.raw;
enum i40e_status_code status;
- u16 flags = 0;
+ UNREFERENCED_1PARAMETER(cmd_details);
i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_vsi_promiscuous_modes);
-
- if (enable) {
- flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
- if (i40e_hw_ver_ge(hw, 1, 5))
- flags |= I40E_AQC_SET_VSI_PROMISC_RX_ONLY;
- }
-
- cmd->promiscuous_flags = CPU_TO_LE16(flags);
- cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_UNICAST);
- if (i40e_hw_ver_ge(hw, 1, 5))
- cmd->valid_flags |=
- CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_RX_ONLY);
- cmd->seid = CPU_TO_LE16(seid);
- cmd->vlan_tag = CPU_TO_LE16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_set_vsi_bc_promisc_on_vlan
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @enable: set broadcast promiscuous enable/disable for a given VLAN
- * @vid: The VLAN tag filter - capture any broadcast packet with this VLAN tag
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable, u16 vid,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
- (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- enum i40e_status_code status;
- u16 flags = 0;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_vsi_promiscuous_modes);
-
- if (enable)
- flags |= I40E_AQC_SET_VSI_PROMISC_BROADCAST;
-
- cmd->promiscuous_flags = CPU_TO_LE16(flags);
- cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
- cmd->seid = CPU_TO_LE16(seid);
- cmd->vlan_tag = CPU_TO_LE16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_set_vsi_broadcast
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @set_filter: true to set filter, false to clear filter
- * @cmd_details: pointer to command details structure or NULL
- *
- * Set or clear the broadcast promiscuous flag (filter) for a given VSI.
- **/
-enum i40e_status_code i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
- u16 seid, bool set_filter,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
- (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_vsi_promiscuous_modes);
-
- if (set_filter)
- cmd->promiscuous_flags
- |= CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
- else
- cmd->promiscuous_flags
- &= CPU_TO_LE16(~I40E_AQC_SET_VSI_PROMISC_BROADCAST);
-
- cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST);
- cmd->seid = CPU_TO_LE16(seid);
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_set_vsi_vlan_promisc - control the VLAN promiscuous setting
- * @hw: pointer to the hw struct
- * @seid: vsi number
- * @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
- u16 seid, bool enable,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
- (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
- enum i40e_status_code status;
- u16 flags = 0;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_vsi_promiscuous_modes);
- if (enable)
- flags |= I40E_AQC_SET_VSI_PROMISC_VLAN;
-
- cmd->promiscuous_flags = CPU_TO_LE16(flags);
- cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_VLAN);
- cmd->seid = CPU_TO_LE16(seid);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_get_vsi_params - get VSI configuration info
- * @hw: pointer to the hw struct
- * @vsi_ctx: pointer to a vsi context struct
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_get_vsi_params(struct i40e_hw *hw,
- struct i40e_vsi_context *vsi_ctx,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_add_get_update_vsi *cmd =
- (struct i40e_aqc_add_get_update_vsi *)&desc.params.raw;
- struct i40e_aqc_add_get_update_vsi_completion *resp =
- (struct i40e_aqc_add_get_update_vsi_completion *)
- &desc.params.raw;
- enum i40e_status_code status;
-
- UNREFERENCED_1PARAMETER(cmd_details);
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_get_vsi_parameters);
+ i40e_aqc_opc_get_vsi_parameters);
cmd->uplink_seid = CPU_TO_LE16(vsi_ctx->seid);
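(The relocated i40e_aq_get_vsi_params() only needs the SEID filled in before
the call; firmware returns the rest of the context. A hedged sketch assuming
the i40e base headers — reading vsi_number afterwards is an assumption about
struct i40e_vsi_context, and the wrapper is illustrative:

        static enum i40e_status_code
        example_query_vsi(struct i40e_hw *hw, u16 seid)
        {
                struct i40e_vsi_context ctx = { 0 };
                enum i40e_status_code status;

                ctx.seid = seid;
                status = i40e_aq_get_vsi_params(hw, &ctx, NULL);
                if (status == I40E_SUCCESS)
                        DEBUGOUT1("vsi_number %d\n", ctx.vsi_number);
                return status;
        }
)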
@@ -2867,73 +2406,6 @@ enum i40e_status_code i40e_aq_get_firmware_version(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_aq_send_driver_version
- * @hw: pointer to the hw struct
- * @dv: driver's major, minor version
- * @cmd_details: pointer to command details structure or NULL
- *
- * Send the driver version to the firmware
- **/
-enum i40e_status_code i40e_aq_send_driver_version(struct i40e_hw *hw,
- struct i40e_driver_version *dv,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_driver_version *cmd =
- (struct i40e_aqc_driver_version *)&desc.params.raw;
- enum i40e_status_code status;
- u16 len;
-
- if (dv == NULL)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_driver_version);
-
- desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD);
- cmd->driver_major_ver = dv->major_version;
- cmd->driver_minor_ver = dv->minor_version;
- cmd->driver_build_ver = dv->build_version;
- cmd->driver_subbuild_ver = dv->subbuild_version;
-
- len = 0;
- while (len < sizeof(dv->driver_string) &&
- (dv->driver_string[len] < 0x80) &&
- dv->driver_string[len])
- len++;
- status = i40e_asq_send_command(hw, &desc, dv->driver_string,
- len, cmd_details);
-
- return status;
-}
-
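(The removed i40e_aq_send_driver_version() measured the driver string with a
loop that stops at the buffer end, at the first non-ASCII byte (>= 0x80), or
at a NUL. A standalone sketch of that loop, plain C with no driver headers:

        #include <stdio.h>

        static unsigned int ascii_len(const char *s, unsigned int max)
        {
                unsigned int len = 0;

                while (len < max && (unsigned char)s[len] < 0x80 && s[len])
                        len++;
                return len;
        }

        int main(void)
        {
                printf("%u\n", ascii_len("dpdk-20.11", 32));  /* prints 10 */
                return 0;
        }
)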
-/**
- * i40e_get_link_status - get status of the HW network link
- * @hw: pointer to the hw struct
- * @link_up: pointer to bool (true/false = linkup/linkdown)
- *
- * Variable link_up true if link is up, false if link is down.
- * The variable link_up is invalid if returned value of status != I40E_SUCCESS
- *
- * Side effect: LinkStatusEvent reporting becomes enabled
- **/
-enum i40e_status_code i40e_get_link_status(struct i40e_hw *hw, bool *link_up)
-{
- enum i40e_status_code status = I40E_SUCCESS;
-
- if (hw->phy.get_link_info) {
- status = i40e_update_link_info(hw);
-
- if (status != I40E_SUCCESS)
- i40e_debug(hw, I40E_DEBUG_LINK, "get link failed: status %d\n",
- status);
- }
-
- *link_up = hw->phy.link_info.link_info & I40E_AQ_LINK_UP;
-
- return status;
-}
-
/**
* i40e_updatelink_status - update status of the HW network link
* @hw: pointer to the hw struct
@@ -2973,31 +2445,6 @@ enum i40e_status_code i40e_update_link_info(struct i40e_hw *hw)
return status;
}
-
-/**
- * i40e_get_link_speed
- * @hw: pointer to the hw struct
- *
- * Returns the link speed of the adapter.
- **/
-enum i40e_aq_link_speed i40e_get_link_speed(struct i40e_hw *hw)
-{
- enum i40e_aq_link_speed speed = I40E_LINK_SPEED_UNKNOWN;
- enum i40e_status_code status = I40E_SUCCESS;
-
- if (hw->phy.get_link_info) {
- status = i40e_aq_get_link_info(hw, true, NULL, NULL);
-
- if (status != I40E_SUCCESS)
- goto i40e_link_speed_exit;
- }
-
- speed = hw->phy.link_info.link_speed;
-
-i40e_link_speed_exit:
- return speed;
-}
-
/**
* i40e_aq_add_veb - Insert a VEB between the VSI and the MAC
* @hw: pointer to the hw struct
@@ -3204,134 +2651,6 @@ enum i40e_status_code i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid,
return status;
}
-/**
- * i40e_mirrorrule_op - Internal helper function to add/delete mirror rule
- * @hw: pointer to the hw struct
- * @opcode: AQ opcode for add or delete mirror rule
- * @sw_seid: Switch SEID (to which rule refers)
- * @rule_type: Rule Type (ingress/egress/VLAN)
- * @id: Destination VSI SEID or Rule ID
- * @count: length of the list
- * @mr_list: list of mirrored VSI SEIDs or VLAN IDs
- * @cmd_details: pointer to command details structure or NULL
- * @rule_id: Rule ID returned from FW
- * @rules_used: Number of rules used in internal switch
- * @rules_free: Number of rules free in internal switch
- *
- * Add/Delete a mirror rule to a specific switch. Mirror rules are supported for
- * VEBs/VEPA elements only
- **/
-static enum i40e_status_code i40e_mirrorrule_op(struct i40e_hw *hw,
- u16 opcode, u16 sw_seid, u16 rule_type, u16 id,
- u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rule_id, u16 *rules_used, u16 *rules_free)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_add_delete_mirror_rule *cmd =
- (struct i40e_aqc_add_delete_mirror_rule *)&desc.params.raw;
- struct i40e_aqc_add_delete_mirror_rule_completion *resp =
- (struct i40e_aqc_add_delete_mirror_rule_completion *)&desc.params.raw;
- enum i40e_status_code status;
- u16 buf_size;
-
- buf_size = count * sizeof(*mr_list);
-
- /* prep the rest of the request */
- i40e_fill_default_direct_cmd_desc(&desc, opcode);
- cmd->seid = CPU_TO_LE16(sw_seid);
- cmd->rule_type = CPU_TO_LE16(rule_type &
- I40E_AQC_MIRROR_RULE_TYPE_MASK);
- cmd->num_entries = CPU_TO_LE16(count);
- /* Dest VSI for add, rule_id for delete */
- cmd->destination = CPU_TO_LE16(id);
- if (mr_list) {
- desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF |
- I40E_AQ_FLAG_RD));
- if (buf_size > I40E_AQ_LARGE_BUF)
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
- }
-
- status = i40e_asq_send_command(hw, &desc, mr_list, buf_size,
- cmd_details);
- if (status == I40E_SUCCESS ||
- hw->aq.asq_last_status == I40E_AQ_RC_ENOSPC) {
- if (rule_id)
- *rule_id = LE16_TO_CPU(resp->rule_id);
- if (rules_used)
- *rules_used = LE16_TO_CPU(resp->mirror_rules_used);
- if (rules_free)
- *rules_free = LE16_TO_CPU(resp->mirror_rules_free);
- }
- return status;
-}
-
-/**
- * i40e_aq_add_mirrorrule - add a mirror rule
- * @hw: pointer to the hw struct
- * @sw_seid: Switch SEID (to which rule refers)
- * @rule_type: Rule Type (ingress/egress/VLAN)
- * @dest_vsi: SEID of VSI to which packets will be mirrored
- * @count: length of the list
- * @mr_list: list of mirrored VSI SEIDs or VLAN IDs
- * @cmd_details: pointer to command details structure or NULL
- * @rule_id: Rule ID returned from FW
- * @rules_used: Number of rules used in internal switch
- * @rules_free: Number of rules free in internal switch
- *
- * Add mirror rule. Mirror rules are supported for VEBs or VEPA elements only
- **/
-enum i40e_status_code i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
- u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rule_id, u16 *rules_used, u16 *rules_free)
-{
- if (!(rule_type == I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS ||
- rule_type == I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS)) {
- if (count == 0 || !mr_list)
- return I40E_ERR_PARAM;
- }
-
- return i40e_mirrorrule_op(hw, i40e_aqc_opc_add_mirror_rule, sw_seid,
- rule_type, dest_vsi, count, mr_list,
- cmd_details, rule_id, rules_used, rules_free);
-}
-
-/**
- * i40e_aq_delete_mirrorrule - delete a mirror rule
- * @hw: pointer to the hw struct
- * @sw_seid: Switch SEID (to which rule refers)
- * @rule_type: Rule Type (ingress/egress/VLAN)
- * @count: length of the list
- * @rule_id: Rule ID that is returned in the receive desc as part of
- * add_mirrorrule.
- * @mr_list: list of mirrored VLAN IDs to be removed
- * @cmd_details: pointer to command details structure or NULL
- * @rules_used: Number of rules used in internal switch
- * @rules_free: Number of rules free in internal switch
- *
- * Delete a mirror rule. Mirror rules are supported for VEBs/VEPA elements only
- **/
-enum i40e_status_code i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
- u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rules_used, u16 *rules_free)
-{
- /* Rule ID has to be valid except rule_type: INGRESS VLAN mirroring */
- if (rule_type == I40E_AQC_MIRROR_RULE_TYPE_VLAN) {
- /* count and mr_list shall be valid for rule_type INGRESS VLAN
- * mirroring. For other rule_type, count and rule_type should
- * not matter.
- */
- if (count == 0 || !mr_list)
- return I40E_ERR_PARAM;
- }
-
- return i40e_mirrorrule_op(hw, i40e_aqc_opc_delete_mirror_rule, sw_seid,
- rule_type, rule_id, count, mr_list,
- cmd_details, NULL, rules_used, rules_free);
-}
-
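(Both removed mirror-rule wrappers funnel into i40e_mirrorrule_op(), which
sizes the indirect buffer as count * sizeof(__le16) and sets the large-buffer
flag past I40E_AQ_LARGE_BUF. A hedged sketch of the add path for VLAN
mirroring — the wrapper, SEIDs, and VLAN IDs are illustrative, and the i40e
base headers are assumed:

        static enum i40e_status_code
        example_add_vlan_mirror(struct i40e_hw *hw)
        {
                __le16 vlans[2] = { CPU_TO_LE16(100), CPU_TO_LE16(200) };
                u16 rule_id, rules_used, rules_free;

                return i40e_aq_add_mirrorrule(hw, 0x288 /* switch SEID */,
                                              I40E_AQC_MIRROR_RULE_TYPE_VLAN,
                                              0x18a /* dest VSI SEID */,
                                              2, vlans, NULL,
                                              &rule_id, &rules_used,
                                              &rules_free);
        }
)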
/**
* i40e_aq_add_vlan - Add VLAN ids to the HW filtering
* @hw: pointer to the hw struct
@@ -3638,196 +2957,41 @@ enum i40e_status_code i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer,
}
/**
- * i40e_aq_read_nvm_config - read an nvm config block
+ * i40e_aq_erase_nvm
* @hw: pointer to the hw struct
- * @cmd_flags: NVM access admin command bits
- * @field_id: field or feature id
- * @data: buffer for result
- * @buf_size: buffer size
- * @element_count: pointer to count of elements read by FW
+ * @module_pointer: module pointer location in words from the NVM beginning
+ * @offset: offset in the module (expressed in 4 KB from module's beginning)
+ * @length: length of the section to be erased (expressed in 4 KB)
+ * @last_command: tells if this is the last command in a series
* @cmd_details: pointer to command details structure or NULL
+ *
+ * Erase the NVM sector using the admin queue commands
**/
-enum i40e_status_code i40e_aq_read_nvm_config(struct i40e_hw *hw,
- u8 cmd_flags, u32 field_id, void *data,
- u16 buf_size, u16 *element_count,
+enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
+ u32 offset, u16 length, bool last_command,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
- struct i40e_aqc_nvm_config_read *cmd =
- (struct i40e_aqc_nvm_config_read *)&desc.params.raw;
+ struct i40e_aqc_nvm_update *cmd =
+ (struct i40e_aqc_nvm_update *)&desc.params.raw;
enum i40e_status_code status;
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_config_read);
- desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF));
- if (buf_size > I40E_AQ_LARGE_BUF)
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
+ DEBUGFUNC("i40e_aq_erase_nvm");
- cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
- cmd->element_id = CPU_TO_LE16((u16)(0xffff & field_id));
- if (cmd_flags & I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK)
- cmd->element_id_msw = CPU_TO_LE16((u16)(field_id >> 16));
- else
- cmd->element_id_msw = 0;
+ /* In offset the highest byte must be zeroed. */
+ if (offset & 0xFF000000) {
+ status = I40E_ERR_PARAM;
+ goto i40e_aq_erase_nvm_exit;
+ }
- status = i40e_asq_send_command(hw, &desc, data, buf_size, cmd_details);
+ i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_erase);
- if (!status && element_count)
- *element_count = LE16_TO_CPU(cmd->element_count);
-
- return status;
-}
-
-/**
- * i40e_aq_write_nvm_config - write an nvm config block
- * @hw: pointer to the hw struct
- * @cmd_flags: NVM access admin command bits
- * @data: buffer for result
- * @buf_size: buffer size
- * @element_count: count of elements to be written
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_write_nvm_config(struct i40e_hw *hw,
- u8 cmd_flags, void *data, u16 buf_size,
- u16 element_count,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_nvm_config_write *cmd =
- (struct i40e_aqc_nvm_config_write *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_config_write);
- desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
- if (buf_size > I40E_AQ_LARGE_BUF)
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
-
- cmd->element_count = CPU_TO_LE16(element_count);
- cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
- status = i40e_asq_send_command(hw, &desc, data, buf_size, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_nvm_update_in_process
- * @hw: pointer to the hw struct
- * @update_flow_state: True indicates that update flow starts, false that ends
- * @cmd_details: pointer to command details structure or NULL
- *
- * Indicate NVM update in process.
- **/
-enum i40e_status_code
-i40e_aq_nvm_update_in_process(struct i40e_hw *hw,
- bool update_flow_state,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_nvm_update_in_process *cmd =
- (struct i40e_aqc_nvm_update_in_process *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_nvm_update_in_process);
-
- cmd->command = I40E_AQ_UPDATE_FLOW_END;
-
- if (update_flow_state)
- cmd->command |= I40E_AQ_UPDATE_FLOW_START;
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_min_rollback_rev_update - triggers an ow after update
- * @hw: pointer to the hw struct
- * @mode: opt-in mode, 1b for single module update, 0b for bulk update
- * @module: module to be updated. Ignored if mode is 0b
- * @min_rrev: value of the new minimal version. Ignored if mode is 0b
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code
-i40e_aq_min_rollback_rev_update(struct i40e_hw *hw, u8 mode, u8 module,
- u32 min_rrev,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_rollback_revision_update *cmd =
- (struct i40e_aqc_rollback_revision_update *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_rollback_revision_update);
- cmd->optin_mode = mode;
- cmd->module_selected = module;
- cmd->min_rrev = min_rrev;
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_oem_post_update - triggers an OEM specific flow after update
- * @hw: pointer to the hw struct
- * @buff: buffer for result
- * @buff_size: buffer size
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_oem_post_update(struct i40e_hw *hw,
- void *buff, u16 buff_size,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- enum i40e_status_code status;
-
- UNREFERENCED_2PARAMETER(buff, buff_size);
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_oem_post_update);
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
- if (status && LE16_TO_CPU(desc.retval) == I40E_AQ_RC_ESRCH)
- status = I40E_ERR_NOT_IMPLEMENTED;
-
- return status;
-}
-
-/**
- * i40e_aq_erase_nvm
- * @hw: pointer to the hw struct
- * @module_pointer: module pointer location in words from the NVM beginning
- * @offset: offset in the module (expressed in 4 KB from module's beginning)
- * @length: length of the section to be erased (expressed in 4 KB)
- * @last_command: tells if this is the last command in a series
- * @cmd_details: pointer to command details structure or NULL
- *
- * Erase the NVM sector using the admin queue commands
- **/
-enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
- u32 offset, u16 length, bool last_command,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_nvm_update *cmd =
- (struct i40e_aqc_nvm_update *)&desc.params.raw;
- enum i40e_status_code status;
-
- DEBUGFUNC("i40e_aq_erase_nvm");
-
- /* In offset the highest byte must be zeroed. */
- if (offset & 0xFF000000) {
- status = I40E_ERR_PARAM;
- goto i40e_aq_erase_nvm_exit;
- }
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_erase);
-
- /* If this is the last command in a series, set the proper flag. */
- if (last_command)
- cmd->command_flags |= I40E_AQ_NVM_LAST_CMD;
- cmd->module_pointer = module_pointer;
- cmd->offset = CPU_TO_LE32(offset);
- cmd->length = CPU_TO_LE16(length);
+ /* If this is the last command in a series, set the proper flag. */
+ if (last_command)
+ cmd->command_flags |= I40E_AQ_NVM_LAST_CMD;
+ cmd->module_pointer = module_pointer;
+ cmd->offset = CPU_TO_LE32(offset);
+ cmd->length = CPU_TO_LE16(length);
status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
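(i40e_aq_erase_nvm() above works in 4 KB sectors and rejects any offset with
a non-zero top byte. A hedged wrapper sketch assuming the i40e base headers;
the wrapper is illustrative and module pointer 0 is an assumption:

        static enum i40e_status_code
        example_erase_one_sector(struct i40e_hw *hw, u32 sector)
        {
                if (sector & 0xFF000000)
                        return I40E_ERR_PARAM;  /* mirrors the check above */

                /* length 1 => one 4 KB sector, last_command = true */
                return i40e_aq_erase_nvm(hw, 0, sector, 1, true, NULL);
        }
)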
@@ -4302,43 +3466,6 @@ enum i40e_status_code i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer,
return status;
}
-/**
- * i40e_aq_rearrange_nvm
- * @hw: pointer to the hw struct
- * @rearrange_nvm: defines direction of rearrangement
- * @cmd_details: pointer to command details structure or NULL
- *
- * Rearrange NVM structure, available only for transition FW
- **/
-enum i40e_status_code i40e_aq_rearrange_nvm(struct i40e_hw *hw,
- u8 rearrange_nvm,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aqc_nvm_update *cmd;
- enum i40e_status_code status;
- struct i40e_aq_desc desc;
-
- DEBUGFUNC("i40e_aq_rearrange_nvm");
-
- cmd = (struct i40e_aqc_nvm_update *)&desc.params.raw;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_update);
-
- rearrange_nvm &= (I40E_AQ_NVM_REARRANGE_TO_FLAT |
- I40E_AQ_NVM_REARRANGE_TO_STRUCT);
-
- if (!rearrange_nvm) {
- status = I40E_ERR_PARAM;
- goto i40e_aq_rearrange_nvm_exit;
- }
-
- cmd->command_flags |= rearrange_nvm;
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-i40e_aq_rearrange_nvm_exit:
- return status;
-}
-
/**
* i40e_aq_get_lldp_mib
* @hw: pointer to the hw struct
@@ -4459,44 +3586,6 @@ enum i40e_status_code i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_aq_restore_lldp
- * @hw: pointer to the hw struct
- * @setting: pointer to factory setting variable or NULL
- * @restore: True if factory settings should be restored
- * @cmd_details: pointer to command details structure or NULL
- *
- * Restore LLDP Agent factory settings if @restore is set to True. Otherwise
- * it only returns the factory setting in the AQ response.
- **/
-enum i40e_status_code
-i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_lldp_restore *cmd =
- (struct i40e_aqc_lldp_restore *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (!(hw->flags & I40E_HW_FLAG_FW_LLDP_PERSISTENT)) {
- i40e_debug(hw, I40E_DEBUG_ALL,
- "Restore LLDP not supported by current FW version.\n");
- return I40E_ERR_DEVICE_NOT_SUPPORTED;
- }
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_restore);
-
- if (restore)
- cmd->command |= I40E_AQ_LLDP_AGENT_RESTORE;
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (setting)
- *setting = cmd->command & 1;
-
- return status;
-}
-
/**
* i40e_aq_stop_lldp
* @hw: pointer to the hw struct
@@ -4567,37 +3656,6 @@ enum i40e_status_code i40e_aq_start_lldp(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_aq_set_dcb_parameters
- * @hw: pointer to the hw struct
- * @cmd_details: pointer to command details structure or NULL
- * @dcb_enable: True if DCB configuration needs to be applied
- *
- **/
-enum i40e_status_code
-i40e_aq_set_dcb_parameters(struct i40e_hw *hw, bool dcb_enable,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_set_dcb_parameters *cmd =
- (struct i40e_aqc_set_dcb_parameters *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (!(hw->flags & I40E_HW_FLAG_FW_LLDP_STOPPABLE))
- return I40E_ERR_DEVICE_NOT_SUPPORTED;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_dcb_parameters);
-
- if (dcb_enable) {
- cmd->valid_flags = I40E_DCB_VALID;
- cmd->command = I40E_AQ_DCB_SET_AGENT;
- }
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
/**
* i40e_aq_get_cee_dcb_config
* @hw: pointer to the hw struct
@@ -4626,36 +3684,6 @@ enum i40e_status_code i40e_aq_get_cee_dcb_config(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_aq_start_stop_dcbx - Start/Stop DCBx service in FW
- * @hw: pointer to the hw struct
- * @start_agent: True if DCBx Agent needs to be Started
- * False if DCBx Agent needs to be Stopped
- * @cmd_details: pointer to command details structure or NULL
- *
- * Start/Stop the embedded dcbx Agent
- **/
-enum i40e_status_code i40e_aq_start_stop_dcbx(struct i40e_hw *hw,
- bool start_agent,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_lldp_stop_start_specific_agent *cmd =
- (struct i40e_aqc_lldp_stop_start_specific_agent *)
- &desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_lldp_stop_start_spec_agent);
-
- if (start_agent)
- cmd->command = I40E_AQC_START_SPECIFIC_AGENT_MASK;
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
/**
* i40e_aq_add_udp_tunnel
* @hw: pointer to the hw struct
@@ -4716,45 +3744,6 @@ enum i40e_status_code i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
return status;
}
-/**
- * i40e_aq_get_switch_resource_alloc (0x0204)
- * @hw: pointer to the hw struct
- * @num_entries: pointer to u8 to store the number of resource entries returned
- * @buf: pointer to a user supplied buffer. This buffer must be large enough
- * to store the resource information for all resource types. Each
- * resource type is a i40e_aqc_switch_resource_alloc_data structure.
- * @count: size, in bytes, of the buffer provided
- * @cmd_details: pointer to command details structure or NULL
- *
- * Query the resources allocated to a function.
- **/
-enum i40e_status_code i40e_aq_get_switch_resource_alloc(struct i40e_hw *hw,
- u8 *num_entries,
- struct i40e_aqc_switch_resource_alloc_element_resp *buf,
- u16 count,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_get_switch_resource_alloc *cmd_resp =
- (struct i40e_aqc_get_switch_resource_alloc *)&desc.params.raw;
- enum i40e_status_code status;
- u16 length = count * sizeof(*buf);
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_get_switch_resource_alloc);
-
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
- if (length > I40E_AQ_LARGE_BUF)
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
-
- status = i40e_asq_send_command(hw, &desc, buf, length, cmd_details);
-
- if (!status && num_entries)
- *num_entries = cmd_resp->num_entries;
-
- return status;
-}
-
/**
* i40e_aq_delete_element - Delete switch element
* @hw: pointer to the hw struct
@@ -4784,178 +3773,45 @@ enum i40e_status_code i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
}
/**
- * i40e_aq_add_pvirt - Instantiate a Port Virtualizer on a port
- * @hw: pointer to the hw struct
- * @flags: component flags
- * @mac_seid: uplink seid (MAC SEID)
- * @vsi_seid: connected vsi seid
- * @ret_seid: seid of create pv component
- *
- * This instantiates an i40e port virtualizer with specified flags.
- * Depending on specified flags the port virtualizer can act as a
- * 802.1Qbr port virtualizer or a 802.1Qbg S-component.
- */
-enum i40e_status_code i40e_aq_add_pvirt(struct i40e_hw *hw, u16 flags,
- u16 mac_seid, u16 vsi_seid,
- u16 *ret_seid)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_add_update_pv *cmd =
- (struct i40e_aqc_add_update_pv *)&desc.params.raw;
- struct i40e_aqc_add_update_pv_completion *resp =
- (struct i40e_aqc_add_update_pv_completion *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (vsi_seid == 0)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_pv);
- cmd->command_flags = CPU_TO_LE16(flags);
- cmd->uplink_seid = CPU_TO_LE16(mac_seid);
- cmd->connected_seid = CPU_TO_LE16(vsi_seid);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
- if (!status && ret_seid)
- *ret_seid = LE16_TO_CPU(resp->pv_seid);
-
- return status;
-}
-
-/**
- * i40e_aq_add_tag - Add an S/E-tag
+ * i40e_aq_add_mcast_etag - Add a multicast E-tag
* @hw: pointer to the hw struct
- * @direct_to_queue: should s-tag direct flow to a specific queue
- * @vsi_seid: VSI SEID to use this tag
- * @tag: value of the tag
- * @queue_num: queue number, only valid if direct_to_queue is true
- * @tags_used: return value, number of tags in use by this PF
- * @tags_free: return value, number of unallocated tags
+ * @pv_seid: Port Virtualizer of this SEID to associate E-tag with
+ * @etag: value of E-tag to add
+ * @num_tags_in_buf: number of unicast E-tags in indirect buffer
+ * @buf: address of indirect buffer
+ * @tags_used: return value, number of E-tags in use by this port
+ * @tags_free: return value, number of unallocated M-tags
* @cmd_details: pointer to command details structure or NULL
*
- * This associates an S- or E-tag to a VSI in the switch complex. It returns
+ * This associates a multicast E-tag to a port virtualizer. It will return
* the number of tags allocated by the PF, and the number of unallocated
* tags available.
+ *
+ * The indirect buffer pointed to by buf is a list of 2-byte E-tags,
+ * num_tags_in_buf long.
**/
-enum i40e_status_code i40e_aq_add_tag(struct i40e_hw *hw, bool direct_to_queue,
- u16 vsi_seid, u16 tag, u16 queue_num,
+enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pv_seid,
+ u16 etag, u8 num_tags_in_buf, void *buf,
u16 *tags_used, u16 *tags_free,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
- struct i40e_aqc_add_tag *cmd =
- (struct i40e_aqc_add_tag *)&desc.params.raw;
- struct i40e_aqc_add_remove_tag_completion *resp =
- (struct i40e_aqc_add_remove_tag_completion *)&desc.params.raw;
+ struct i40e_aqc_add_remove_mcast_etag *cmd =
+ (struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw;
+ struct i40e_aqc_add_remove_mcast_etag_completion *resp =
+ (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw;
enum i40e_status_code status;
+ u16 length = sizeof(u16) * num_tags_in_buf;
- if (vsi_seid == 0)
+ if ((pv_seid == 0) || (buf == NULL) || (num_tags_in_buf == 0))
return I40E_ERR_PARAM;
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_tag);
+ i40e_fill_default_direct_cmd_desc(&desc,
+ i40e_aqc_opc_add_multicast_etag);
- cmd->seid = CPU_TO_LE16(vsi_seid);
- cmd->tag = CPU_TO_LE16(tag);
- if (direct_to_queue) {
- cmd->flags = CPU_TO_LE16(I40E_AQC_ADD_TAG_FLAG_TO_QUEUE);
- cmd->queue_number = CPU_TO_LE16(queue_num);
- }
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (!status) {
- if (tags_used != NULL)
- *tags_used = LE16_TO_CPU(resp->tags_used);
- if (tags_free != NULL)
- *tags_free = LE16_TO_CPU(resp->tags_free);
- }
-
- return status;
-}
-
-/**
- * i40e_aq_remove_tag - Remove an S- or E-tag
- * @hw: pointer to the hw struct
- * @vsi_seid: VSI SEID this tag is associated with
- * @tag: value of the S-tag to delete
- * @tags_used: return value, number of tags in use by this PF
- * @tags_free: return value, number of unallocated tags
- * @cmd_details: pointer to command details structure or NULL
- *
- * This deletes an S- or E-tag from a VSI in the switch complex. It returns
- * the number of tags allocated by the PF, and the number of unallocated
- * tags available.
- **/
-enum i40e_status_code i40e_aq_remove_tag(struct i40e_hw *hw, u16 vsi_seid,
- u16 tag, u16 *tags_used, u16 *tags_free,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_remove_tag *cmd =
- (struct i40e_aqc_remove_tag *)&desc.params.raw;
- struct i40e_aqc_add_remove_tag_completion *resp =
- (struct i40e_aqc_add_remove_tag_completion *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (vsi_seid == 0)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_remove_tag);
-
- cmd->seid = CPU_TO_LE16(vsi_seid);
- cmd->tag = CPU_TO_LE16(tag);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (!status) {
- if (tags_used != NULL)
- *tags_used = LE16_TO_CPU(resp->tags_used);
- if (tags_free != NULL)
- *tags_free = LE16_TO_CPU(resp->tags_free);
- }
-
- return status;
-}
-
-/**
- * i40e_aq_add_mcast_etag - Add a multicast E-tag
- * @hw: pointer to the hw struct
- * @pv_seid: Port Virtualizer of this SEID to associate E-tag with
- * @etag: value of E-tag to add
- * @num_tags_in_buf: number of unicast E-tags in indirect buffer
- * @buf: address of indirect buffer
- * @tags_used: return value, number of E-tags in use by this port
- * @tags_free: return value, number of unallocated M-tags
- * @cmd_details: pointer to command details structure or NULL
- *
- * This associates a multicast E-tag to a port virtualizer. It will return
- * the number of tags allocated by the PF, and the number of unallocated
- * tags available.
- *
- * The indirect buffer pointed to by buf is a list of 2-byte E-tags,
- * num_tags_in_buf long.
- **/
-enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pv_seid,
- u16 etag, u8 num_tags_in_buf, void *buf,
- u16 *tags_used, u16 *tags_free,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_add_remove_mcast_etag *cmd =
- (struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw;
- struct i40e_aqc_add_remove_mcast_etag_completion *resp =
- (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw;
- enum i40e_status_code status;
- u16 length = sizeof(u16) * num_tags_in_buf;
-
- if ((pv_seid == 0) || (buf == NULL) || (num_tags_in_buf == 0))
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_add_multicast_etag);
-
- cmd->pv_seid = CPU_TO_LE16(pv_seid);
- cmd->etag = CPU_TO_LE16(etag);
- cmd->num_unicast_etags = num_tags_in_buf;
+ cmd->pv_seid = CPU_TO_LE16(pv_seid);
+ cmd->etag = CPU_TO_LE16(etag);
+ cmd->num_unicast_etags = num_tags_in_buf;
desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
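(Per the comment kept above, buf for i40e_aq_add_mcast_etag() is an indirect
buffer of num_tags_in_buf 16-bit unicast E-tags. A hedged caller sketch — the
wrapper, PV SEID, and tag values are illustrative, and the i40e base headers
are assumed:

        static enum i40e_status_code
        example_add_mcast_etag(struct i40e_hw *hw)
        {
                __le16 unicast_etags[2] = { CPU_TO_LE16(0x10),
                                            CPU_TO_LE16(0x11) };
                u16 used, unallocated;

                return i40e_aq_add_mcast_etag(hw, 0x200 /* PV SEID */,
                                              0x3000 /* multicast E-tag */,
                                              2, unicast_etags,
                                              &used, &unallocated, NULL);
        }
)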
@@ -4971,239 +3827,6 @@ enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pv_seid,
return status;
}
-/**
- * i40e_aq_remove_mcast_etag - Remove a multicast E-tag
- * @hw: pointer to the hw struct
- * @pv_seid: Port Virtualizer SEID this M-tag is associated with
- * @etag: value of the E-tag to remove
- * @tags_used: return value, number of tags in use by this port
- * @tags_free: return value, number of unallocated tags
- * @cmd_details: pointer to command details structure or NULL
- *
- * This deletes an E-tag from the port virtualizer. It will return
- * the number of tags allocated by the port, and the number of unallocated
- * tags available.
- **/
-enum i40e_status_code i40e_aq_remove_mcast_etag(struct i40e_hw *hw, u16 pv_seid,
- u16 etag, u16 *tags_used, u16 *tags_free,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_add_remove_mcast_etag *cmd =
- (struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw;
- struct i40e_aqc_add_remove_mcast_etag_completion *resp =
- (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw;
- enum i40e_status_code status;
-
-
- if (pv_seid == 0)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_remove_multicast_etag);
-
- cmd->pv_seid = CPU_TO_LE16(pv_seid);
- cmd->etag = CPU_TO_LE16(etag);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (!status) {
- if (tags_used != NULL)
- *tags_used = LE16_TO_CPU(resp->mcast_etags_used);
- if (tags_free != NULL)
- *tags_free = LE16_TO_CPU(resp->mcast_etags_free);
- }
-
- return status;
-}
-
-/**
- * i40e_aq_update_tag - Update an S/E-tag
- * @hw: pointer to the hw struct
- * @vsi_seid: VSI SEID using this S-tag
- * @old_tag: old tag value
- * @new_tag: new tag value
- * @tags_used: return value, number of tags in use by this PF
- * @tags_free: return value, number of unallocated tags
- * @cmd_details: pointer to command details structure or NULL
- *
- * This updates the value of the tag currently attached to this VSI
- * in the switch complex. It will return the number of tags allocated
- * by the PF, and the number of unallocated tags available.
- **/
-enum i40e_status_code i40e_aq_update_tag(struct i40e_hw *hw, u16 vsi_seid,
- u16 old_tag, u16 new_tag, u16 *tags_used,
- u16 *tags_free,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_update_tag *cmd =
- (struct i40e_aqc_update_tag *)&desc.params.raw;
- struct i40e_aqc_update_tag_completion *resp =
- (struct i40e_aqc_update_tag_completion *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (vsi_seid == 0)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_update_tag);
-
- cmd->seid = CPU_TO_LE16(vsi_seid);
- cmd->old_tag = CPU_TO_LE16(old_tag);
- cmd->new_tag = CPU_TO_LE16(new_tag);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (!status) {
- if (tags_used != NULL)
- *tags_used = LE16_TO_CPU(resp->tags_used);
- if (tags_free != NULL)
- *tags_free = LE16_TO_CPU(resp->tags_free);
- }
-
- return status;
-}
-
-/**
- * i40e_aq_dcb_ignore_pfc - Ignore PFC for given TCs
- * @hw: pointer to the hw struct
- * @tcmap: TC map for request/release any ignore PFC condition
- * @request: request or release ignore PFC condition
- * @tcmap_ret: return TCs for which PFC is currently ignored
- * @cmd_details: pointer to command details structure or NULL
- *
- * This sends out request/release to ignore PFC condition for a TC.
- * It will return the TCs for which PFC is currently ignored.
- **/
-enum i40e_status_code i40e_aq_dcb_ignore_pfc(struct i40e_hw *hw, u8 tcmap,
- bool request, u8 *tcmap_ret,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_pfc_ignore *cmd_resp =
- (struct i40e_aqc_pfc_ignore *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_ignore_pfc);
-
- if (request)
- cmd_resp->command_flags = I40E_AQC_PFC_IGNORE_SET;
-
- cmd_resp->tc_bitmap = tcmap;
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (!status) {
- if (tcmap_ret != NULL)
- *tcmap_ret = cmd_resp->tc_bitmap;
- }
-
- return status;
-}
-
-/**
- * i40e_aq_dcb_updated - DCB Updated Command
- * @hw: pointer to the hw struct
- * @cmd_details: pointer to command details structure or NULL
- *
- * When LLDP is handled in PF this command is used by the PF
- * to notify EMP that a DCB setting is modified.
- * When LLDP is handled in EMP this command is used by the PF
- * to notify EMP whenever one of the following parameters get
- * modified:
- * - PFCLinkDelayAllowance in PRTDCB_GENC.PFCLDA
- * - PCIRTT in PRTDCB_GENC.PCIRTT
- * - Maximum Frame Size for non-FCoE TCs set by PRTDCB_TDPUC.MAX_TXFRAME.
- * EMP will return when the shared RPB settings have been
- * recomputed and modified. The retval field in the descriptor
- * will be set to 0 when RPB is modified.
- **/
-enum i40e_status_code i40e_aq_dcb_updated(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_updated);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_add_statistics - Add a statistics block to a VLAN in a switch.
- * @hw: pointer to the hw struct
- * @seid: defines the SEID of the switch for which the stats are requested
- * @vlan_id: the VLAN ID for which the statistics are requested
- * @stat_index: index of the statistics counters block assigned to this VLAN
- * @cmd_details: pointer to command details structure or NULL
- *
- * XL710 supports 128 smonVlanStats counters. This command is used to
- * allocate a set of smonVlanStats counters to a specific VLAN in a specific
- * switch.
- **/
-enum i40e_status_code i40e_aq_add_statistics(struct i40e_hw *hw, u16 seid,
- u16 vlan_id, u16 *stat_index,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_add_remove_statistics *cmd_resp =
- (struct i40e_aqc_add_remove_statistics *)&desc.params.raw;
- enum i40e_status_code status;
-
- if ((seid == 0) || (stat_index == NULL))
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_statistics);
-
- cmd_resp->seid = CPU_TO_LE16(seid);
- cmd_resp->vlan = CPU_TO_LE16(vlan_id);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (!status && stat_index)
- *stat_index = LE16_TO_CPU(cmd_resp->stat_index);
-
- return status;
-}
-
-/**
- * i40e_aq_remove_statistics - Remove a statistics block to a VLAN in a switch.
- * @hw: pointer to the hw struct
- * @seid: defines the SEID of the switch for which the stats are requested
- * @vlan_id: the VLAN ID for which the statistics are requested
- * @stat_index: index of the statistics counters block assigned to this VLAN
- * @cmd_details: pointer to command details structure or NULL
- *
- * XL710 supports 128 smonVlanStats counters. This command is used to
- * deallocate a set of smonVlanStats counters to a specific VLAN in a specific
- * switch.
- **/
-enum i40e_status_code i40e_aq_remove_statistics(struct i40e_hw *hw, u16 seid,
- u16 vlan_id, u16 stat_index,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_add_remove_statistics *cmd =
- (struct i40e_aqc_add_remove_statistics *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (seid == 0)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_remove_statistics);
-
- cmd->seid = CPU_TO_LE16(seid);
- cmd->vlan = CPU_TO_LE16(vlan_id);
- cmd->stat_index = CPU_TO_LE16(stat_index);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
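(The removed smonVlanStats helpers allocate a counter block for a VLAN on a
switch element and later release it by index. A hedged pairing sketch — the
wrapper, SEID, and VLAN ID are illustrative, and the i40e base headers are
assumed:

        static void example_vlan_stats(struct i40e_hw *hw, u16 switch_seid)
        {
                u16 stat_idx;

                if (i40e_aq_add_statistics(hw, switch_seid, 100 /* VLAN */,
                                           &stat_idx, NULL))
                        return;

                /* ... read the counters at stat_idx ... */

                i40e_aq_remove_statistics(hw, switch_seid, 100, stat_idx,
                                          NULL);
        }
)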
/**
* i40e_aq_set_port_parameters - set physical port parameters.
* @hw: pointer to the hw struct
@@ -5332,35 +3955,6 @@ enum i40e_status_code i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_aq_config_switch_comp_bw_limit - Configure Switching component BW Limit
- * @hw: pointer to the hw struct
- * @seid: switching component seid
- * @credit: BW limit credits (0 = disabled)
- * @max_bw: Max BW limit credits
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
- u16 seid, u16 credit, u8 max_bw,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_configure_switching_comp_bw_limit *cmd =
- (struct i40e_aqc_configure_switching_comp_bw_limit *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_configure_switching_comp_bw_limit);
-
- cmd->seid = CPU_TO_LE16(seid);
- cmd->credit = CPU_TO_LE16(credit);
- cmd->max_bw = max_bw;
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
/**
* i40e_aq_config_vsi_ets_sla_bw_limit - Config VSI BW Limit per TC
* @hw: pointer to the hw struct
@@ -5430,23 +4024,6 @@ enum i40e_status_code i40e_aq_config_switch_comp_bw_config(struct i40e_hw *hw,
cmd_details);
}
-/**
- * i40e_aq_config_switch_comp_ets_bw_limit - Config Switch comp BW Limit per TC
- * @hw: pointer to the hw struct
- * @seid: seid of the switching component
- * @bw_data: Buffer holding enabled TCs, per TC BW limit/credits
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_config_switch_comp_ets_bw_limit(
- struct i40e_hw *hw, u16 seid,
- struct i40e_aqc_configure_switching_comp_ets_bw_limit_data *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
-{
- return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
- i40e_aqc_opc_configure_switching_comp_ets_bw_limit,
- cmd_details);
-}
-
/**
* i40e_aq_query_vsi_bw_config - Query VSI BW configuration
* @hw: pointer to the hw struct
@@ -5499,27 +4076,10 @@ enum i40e_status_code i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
}
/**
- * i40e_aq_query_port_ets_config - Query Physical Port ETS configuration
+ * i40e_aq_query_switch_comp_bw_config - Query Switch comp BW configuration
* @hw: pointer to the hw struct
- * @seid: seid of the VSI or switching component connected to Physical Port
- * @bw_data: Buffer to hold current ETS configuration for the Physical Port
- * @cmd_details: pointer to command details structure or NULL
- **/
-enum i40e_status_code i40e_aq_query_port_ets_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_port_ets_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
-{
- return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data),
- i40e_aqc_opc_query_port_ets_config,
- cmd_details);
-}
-
-/**
- * i40e_aq_query_switch_comp_bw_config - Query Switch comp BW configuration
- * @hw: pointer to the hw struct
- * @seid: seid of the switching component
- * @bw_data: Buffer to hold switching component's BW configuration
+ * @seid: seid of the switching component
+ * @bw_data: Buffer to hold switching component's BW configuration
* @cmd_details: pointer to command details structure or NULL
**/
enum i40e_status_code i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
@@ -5758,28 +4318,6 @@ enum i40e_status_code i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
return status;
}
-/**
- * i40e_add_filter_to_drop_tx_flow_control_frames- filter to drop flow control
- * @hw: pointer to the hw struct
- * @seid: VSI seid to add ethertype filter from
- **/
-void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw,
- u16 seid)
-{
-#define I40E_FLOW_CONTROL_ETHTYPE 0x8808
- u16 flag = I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC |
- I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP |
- I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX;
- u16 ethtype = I40E_FLOW_CONTROL_ETHTYPE;
- enum i40e_status_code status;
-
- status = i40e_aq_add_rem_control_packet_filter(hw, NULL, ethtype, flag,
- seid, 0, true, NULL,
- NULL);
- if (status)
- DEBUGOUT("Ethtype Filter Add failed: Error pruning Tx flow control frames\n");
-}
-
/**
* i40e_fix_up_geneve_vni - adjust Geneve VNI for HW issue
* @filters: list of cloud filters
@@ -5900,649 +4438,195 @@ i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
}
}
- status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
-
- return status;
-}
-
-/**
- * i40e_aq_rem_cloud_filters
- * @hw: pointer to the hardware structure
- * @seid: VSI seid to remove cloud filters from
- * @filters: Buffer which contains the filters to be removed
- * @filter_count: number of filters contained in the buffer
- *
- * Remove the cloud filters for a given VSI. The contents of the
- * i40e_aqc_cloud_filters_element_data are filled in by the caller
- * of the function.
- *
- **/
-enum i40e_status_code
-i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
- struct i40e_aqc_cloud_filters_element_data *filters,
- u8 filter_count)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_add_remove_cloud_filters *cmd =
- (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
- enum i40e_status_code status;
- u16 buff_len;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_remove_cloud_filters);
-
- buff_len = filter_count * sizeof(*filters);
- desc.datalen = CPU_TO_LE16(buff_len);
- desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
- cmd->num_filters = filter_count;
- cmd->seid = CPU_TO_LE16(seid);
-
- i40e_fix_up_geneve_vni(filters, filter_count);
-
- status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
-
- return status;
-}
-
-/**
- * i40e_aq_rem_cloud_filters_bb
- * @hw: pointer to the hardware structure
- * @seid: VSI seid to remove cloud filters from
- * @filters: Buffer which contains the filters in big buffer to be removed
- * @filter_count: number of filters contained in the buffer
- *
- * Remove the big buffer cloud filters for a given VSI. The contents of the
- * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
- * function.
- *
- **/
-enum i40e_status_code
-i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
- struct i40e_aqc_cloud_filters_element_bb *filters,
- u8 filter_count)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_add_remove_cloud_filters *cmd =
- (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
- enum i40e_status_code status;
- u16 buff_len;
- int i;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_remove_cloud_filters);
-
- buff_len = filter_count * sizeof(*filters);
- desc.datalen = CPU_TO_LE16(buff_len);
- desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
- cmd->num_filters = filter_count;
- cmd->seid = CPU_TO_LE16(seid);
- cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
-
- for (i = 0; i < filter_count; i++) {
- u16 tnl_type;
- u32 ti;
-
- tnl_type = (LE16_TO_CPU(filters[i].element.flags) &
- I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
- I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
-
- /* Due to hardware eccentricities, the VNI for Geneve is shifted
- * one more byte further than normally used for Tenant ID in
- * other tunnel types.
- */
- if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
- ti = LE32_TO_CPU(filters[i].element.tenant_id);
- filters[i].element.tenant_id = CPU_TO_LE32(ti << 8);
- }
- }
-
- status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
-
- return status;
-}
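For reference, the Geneve quirk handled in the loop above (and in i40e_fix_up_geneve_vni) reduces to shifting the 24-bit VNI one byte up inside the tenant-ID dword; the driver does it on the host-order value between the LE32_TO_CPU and CPU_TO_LE32 conversions. A minimal standalone sketch, not driver code:

#include <assert.h>
#include <stdint.h>

/* Geneve stores its 24-bit VNI one byte higher in the Tenant ID field
 * than VXLAN/NVGRE do, so the host-order value is shifted up by 8. */
static uint32_t geneve_shift_vni(uint32_t tenant_id)
{
	return tenant_id << 8;
}

int main(void)
{
	assert(geneve_shift_vni(0x00123456) == 0x12345600);
	return 0;
}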
-
-/**
- * i40e_aq_replace_cloud_filters - Replace cloud filter command
- * @hw: pointer to the hw struct
- * @filters: pointer to the i40e_aqc_replace_cloud_filter_cmd struct
- * @cmd_buf: pointer to the i40e_aqc_replace_cloud_filter_cmd_buf struct
- *
- **/
-enum
-i40e_status_code i40e_aq_replace_cloud_filters(struct i40e_hw *hw,
- struct i40e_aqc_replace_cloud_filters_cmd *filters,
- struct i40e_aqc_replace_cloud_filters_cmd_buf *cmd_buf)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_replace_cloud_filters_cmd *cmd =
- (struct i40e_aqc_replace_cloud_filters_cmd *)&desc.params.raw;
- enum i40e_status_code status = I40E_SUCCESS;
- int i = 0;
-
- /* X722 doesn't support this command */
- if (hw->mac.type == I40E_MAC_X722)
- return I40E_ERR_DEVICE_NOT_SUPPORTED;
-
- /* need FW version greater than 6.00 */
- if (hw->aq.fw_maj_ver < 6)
- return I40E_NOT_SUPPORTED;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_replace_cloud_filters);
-
- desc.datalen = CPU_TO_LE16(32);
- desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
- cmd->old_filter_type = filters->old_filter_type;
- cmd->new_filter_type = filters->new_filter_type;
- cmd->valid_flags = filters->valid_flags;
- cmd->tr_bit = filters->tr_bit;
- cmd->tr_bit2 = filters->tr_bit2;
-
- status = i40e_asq_send_command(hw, &desc, cmd_buf,
- sizeof(struct i40e_aqc_replace_cloud_filters_cmd_buf), NULL);
-
- /* for get cloud filters command */
- for (i = 0; i < 32; i += 4) {
- cmd_buf->filters[i / 4].filter_type = cmd_buf->data[i];
- cmd_buf->filters[i / 4].input[0] = cmd_buf->data[i + 1];
- cmd_buf->filters[i / 4].input[1] = cmd_buf->data[i + 2];
- cmd_buf->filters[i / 4].input[2] = cmd_buf->data[i + 3];
- }
-
- return status;
-}
-
-
-/**
- * i40e_aq_alternate_write
- * @hw: pointer to the hardware structure
- * @reg_addr0: address of first dword to be read
- * @reg_val0: value to be written under 'reg_addr0'
- * @reg_addr1: address of second dword to be read
- * @reg_val1: value to be written under 'reg_addr1'
- *
- * Write one or two dwords to alternate structure. Fields are indicated
- * by 'reg_addr0' and 'reg_addr1' register numbers.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_write(struct i40e_hw *hw,
- u32 reg_addr0, u32 reg_val0,
- u32 reg_addr1, u32 reg_val1)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_alternate_write *cmd_resp =
- (struct i40e_aqc_alternate_write *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_write);
- cmd_resp->address0 = CPU_TO_LE32(reg_addr0);
- cmd_resp->address1 = CPU_TO_LE32(reg_addr1);
- cmd_resp->data0 = CPU_TO_LE32(reg_val0);
- cmd_resp->data1 = CPU_TO_LE32(reg_val1);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
-
- return status;
-}
-
-/**
- * i40e_aq_alternate_write_indirect
- * @hw: pointer to the hardware structure
- * @addr: address of a first register to be modified
- * @dw_count: number of alternate structure fields to write
- * @buffer: pointer to the command buffer
- *
- * Write 'dw_count' dwords from 'buffer' to alternate structure
- * starting at 'addr'.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_write_indirect(struct i40e_hw *hw,
- u32 addr, u32 dw_count, void *buffer)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_alternate_ind_write *cmd_resp =
- (struct i40e_aqc_alternate_ind_write *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (buffer == NULL)
- return I40E_ERR_PARAM;
-
- /* Indirect command */
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_alternate_write_indirect);
-
- desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_RD);
- desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF);
- if (dw_count > (I40E_AQ_LARGE_BUF/4))
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
-
- cmd_resp->address = CPU_TO_LE32(addr);
- cmd_resp->length = CPU_TO_LE32(dw_count);
-
- status = i40e_asq_send_command(hw, &desc, buffer,
- I40E_LO_DWORD(4*dw_count), NULL);
-
- return status;
-}
-
-/**
- * i40e_aq_alternate_read
- * @hw: pointer to the hardware structure
- * @reg_addr0: address of first dword to be read
- * @reg_val0: pointer for data read from 'reg_addr0'
- * @reg_addr1: address of second dword to be read
- * @reg_val1: pointer for data read from 'reg_addr1'
- *
- * Read one or two dwords from alternate structure. Fields are indicated
- * by 'reg_addr0' and 'reg_addr1' register numbers. If 'reg_val1' pointer
- * is not passed then only register at 'reg_addr0' is read.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw,
- u32 reg_addr0, u32 *reg_val0,
- u32 reg_addr1, u32 *reg_val1)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_alternate_write *cmd_resp =
- (struct i40e_aqc_alternate_write *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (reg_val0 == NULL)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_read);
- cmd_resp->address0 = CPU_TO_LE32(reg_addr0);
- cmd_resp->address1 = CPU_TO_LE32(reg_addr1);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
-
- if (status == I40E_SUCCESS) {
- *reg_val0 = LE32_TO_CPU(cmd_resp->data0);
-
- if (reg_val1 != NULL)
- *reg_val1 = LE32_TO_CPU(cmd_resp->data1);
- }
-
- return status;
-}
-
-/**
- * i40e_aq_alternate_read_indirect
- * @hw: pointer to the hardware structure
- * @addr: address of the alternate structure field
- * @dw_count: number of alternate structure fields to read
- * @buffer: pointer to the command buffer
- *
- * Read 'dw_count' dwords from alternate structure starting at 'addr' and
- * place them in 'buffer'. The buffer should be allocated by caller.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_read_indirect(struct i40e_hw *hw,
- u32 addr, u32 dw_count, void *buffer)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_alternate_ind_write *cmd_resp =
- (struct i40e_aqc_alternate_ind_write *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (buffer == NULL)
- return I40E_ERR_PARAM;
-
- /* Indirect command */
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_alternate_read_indirect);
-
- desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_RD);
- desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF);
- if (dw_count > (I40E_AQ_LARGE_BUF/4))
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
-
- cmd_resp->address = CPU_TO_LE32(addr);
- cmd_resp->length = CPU_TO_LE32(dw_count);
-
- status = i40e_asq_send_command(hw, &desc, buffer,
- I40E_LO_DWORD(4*dw_count), NULL);
-
- return status;
-}
-
-/**
- * i40e_aq_alternate_clear
- * @hw: pointer to the HW structure.
- *
- * Clear the alternate structures of the port from which the function
- * is called.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_clear(struct i40e_hw *hw)
-{
- struct i40e_aq_desc desc;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_alternate_clear_port);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
-
- return status;
-}
-
-/**
- * i40e_aq_alternate_write_done
- * @hw: pointer to the HW structure.
- * @bios_mode: indicates whether the command is executed by UEFI or legacy BIOS
- * @reset_needed: indicates the SW should trigger GLOBAL reset
- *
- * Indicates to the FW that alternate structures have been changed.
- *
- **/
-enum i40e_status_code i40e_aq_alternate_write_done(struct i40e_hw *hw,
- u8 bios_mode, bool *reset_needed)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_alternate_write_done *cmd =
- (struct i40e_aqc_alternate_write_done *)&desc.params.raw;
- enum i40e_status_code status;
-
- if (reset_needed == NULL)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_alternate_write_done);
-
- cmd->cmd_flags = CPU_TO_LE16(bios_mode);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
- if (!status && reset_needed)
- *reset_needed = ((LE16_TO_CPU(cmd->cmd_flags) &
- I40E_AQ_ALTERNATE_RESET_NEEDED) != 0);
-
- return status;
-}
-
-/**
- * i40e_aq_set_oem_mode
- * @hw: pointer to the HW structure.
- * @oem_mode: the OEM mode to be used
- *
- * Sets the device to a specific operating mode. Currently the only supported
- * mode is no_clp, which causes FW to refrain from using Alternate RAM.
- *
- **/
-enum i40e_status_code i40e_aq_set_oem_mode(struct i40e_hw *hw,
- u8 oem_mode)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_alternate_write_done *cmd =
- (struct i40e_aqc_alternate_write_done *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_alternate_set_mode);
-
- cmd->cmd_flags = CPU_TO_LE16(oem_mode);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
+ status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
return status;
}
/**
- * i40e_aq_resume_port_tx
+ * i40e_aq_rem_cloud_filters
* @hw: pointer to the hardware structure
- * @cmd_details: pointer to command details structure or NULL
+ * @seid: VSI seid to remove cloud filters from
+ * @filters: Buffer which contains the filters to be removed
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Remove the cloud filters for a given VSI. The contents of the
+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
+ * of the function.
*
- * Resume port's Tx traffic
**/
-enum i40e_status_code i40e_aq_resume_port_tx(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details)
+enum i40e_status_code
+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
+ struct i40e_aqc_cloud_filters_element_data *filters,
+ u8 filter_count)
{
struct i40e_aq_desc desc;
+ struct i40e_aqc_add_remove_cloud_filters *cmd =
+ (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
enum i40e_status_code status;
+ u16 buff_len;
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_resume_port_tx);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+ i40e_fill_default_direct_cmd_desc(&desc,
+ i40e_aqc_opc_remove_cloud_filters);
- return status;
-}
+ buff_len = filter_count * sizeof(*filters);
+ desc.datalen = CPU_TO_LE16(buff_len);
+ desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+ cmd->num_filters = filter_count;
+ cmd->seid = CPU_TO_LE16(seid);
-/**
- * i40e_set_pci_config_data - store PCI bus info
- * @hw: pointer to hardware structure
- * @link_status: the link status word from PCI config space
- *
- * Stores the PCI bus info (speed, width, type) within the i40e_hw structure
- **/
-void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status)
-{
- hw->bus.type = i40e_bus_type_pci_express;
+ i40e_fix_up_geneve_vni(filters, filter_count);
- switch (link_status & I40E_PCI_LINK_WIDTH) {
- case I40E_PCI_LINK_WIDTH_1:
- hw->bus.width = i40e_bus_width_pcie_x1;
- break;
- case I40E_PCI_LINK_WIDTH_2:
- hw->bus.width = i40e_bus_width_pcie_x2;
- break;
- case I40E_PCI_LINK_WIDTH_4:
- hw->bus.width = i40e_bus_width_pcie_x4;
- break;
- case I40E_PCI_LINK_WIDTH_8:
- hw->bus.width = i40e_bus_width_pcie_x8;
- break;
- default:
- hw->bus.width = i40e_bus_width_unknown;
- break;
- }
+ status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
- switch (link_status & I40E_PCI_LINK_SPEED) {
- case I40E_PCI_LINK_SPEED_2500:
- hw->bus.speed = i40e_bus_speed_2500;
- break;
- case I40E_PCI_LINK_SPEED_5000:
- hw->bus.speed = i40e_bus_speed_5000;
- break;
- case I40E_PCI_LINK_SPEED_8000:
- hw->bus.speed = i40e_bus_speed_8000;
- break;
- default:
- hw->bus.speed = i40e_bus_speed_unknown;
- break;
- }
+ return status;
}
/**
- * i40e_aq_debug_dump
+ * i40e_aq_rem_cloud_filters_bb
* @hw: pointer to the hardware structure
- * @cluster_id: specific cluster to dump
- * @table_id: table id within cluster
- * @start_index: index of line in the block to read
- * @buff_size: dump buffer size
- * @buff: dump buffer
- * @ret_buff_size: actual buffer size returned
- * @ret_next_table: next block to read
- * @ret_next_index: next index to read
- * @cmd_details: pointer to command details structure or NULL
+ * @seid: VSI seid to remove cloud filters from
+ * @filters: Buffer which contains the filters in big buffer to be removed
+ * @filter_count: number of filters contained in the buffer
*
- * Dump internal FW/HW data for debug purposes.
+ * Remove the big buffer cloud filters for a given VSI. The contents of the
+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
+ * function.
*
**/
-enum i40e_status_code i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id,
- u8 table_id, u32 start_index, u16 buff_size,
- void *buff, u16 *ret_buff_size,
- u8 *ret_next_table, u32 *ret_next_index,
- struct i40e_asq_cmd_details *cmd_details)
+enum i40e_status_code
+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+ struct i40e_aqc_cloud_filters_element_bb *filters,
+ u8 filter_count)
{
struct i40e_aq_desc desc;
- struct i40e_aqc_debug_dump_internals *cmd =
- (struct i40e_aqc_debug_dump_internals *)&desc.params.raw;
- struct i40e_aqc_debug_dump_internals *resp =
- (struct i40e_aqc_debug_dump_internals *)&desc.params.raw;
+ struct i40e_aqc_add_remove_cloud_filters *cmd =
+ (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
enum i40e_status_code status;
-
- if (buff_size == 0 || !buff)
- return I40E_ERR_PARAM;
+ u16 buff_len;
+ int i;
i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_debug_dump_internals);
- /* Indirect Command */
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
- if (buff_size > I40E_AQ_LARGE_BUF)
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
+ i40e_aqc_opc_remove_cloud_filters);
+
+ buff_len = filter_count * sizeof(*filters);
+ desc.datalen = CPU_TO_LE16(buff_len);
+ desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+ cmd->num_filters = filter_count;
+ cmd->seid = CPU_TO_LE16(seid);
+ cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
- cmd->cluster_id = cluster_id;
- cmd->table_id = table_id;
- cmd->idx = CPU_TO_LE32(start_index);
+ for (i = 0; i < filter_count; i++) {
+ u16 tnl_type;
+ u32 ti;
- desc.datalen = CPU_TO_LE16(buff_size);
+ tnl_type = (LE16_TO_CPU(filters[i].element.flags) &
+ I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
+ I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
- status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
- if (!status) {
- if (ret_buff_size != NULL)
- *ret_buff_size = LE16_TO_CPU(desc.datalen);
- if (ret_next_table != NULL)
- *ret_next_table = resp->table_id;
- if (ret_next_index != NULL)
- *ret_next_index = LE32_TO_CPU(resp->idx);
+ /* Due to hardware eccentricities, the VNI for Geneve is shifted
+ * one more byte further than normally used for Tenant ID in
+ * other tunnel types.
+ */
+ if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
+ ti = LE32_TO_CPU(filters[i].element.tenant_id);
+ filters[i].element.tenant_id = CPU_TO_LE32(ti << 8);
+ }
}
+ status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
return status;
}
-
/**
- * i40e_enable_eee
- * @hw: pointer to the hardware structure
- * @enable: state of Energy Efficient Ethernet mode to be set
+ * i40e_aq_replace_cloud_filters - Replace cloud filter command
+ * @hw: pointer to the hw struct
+ * @filters: pointer to the i40e_aqc_replace_cloud_filter_cmd struct
+ * @cmd_buf: pointer to the i40e_aqc_replace_cloud_filter_cmd_buf struct
*
- * Enables or disables Energy Efficient Ethernet (EEE) mode
- * according to the @enable parameter.
**/
-enum i40e_status_code i40e_enable_eee(struct i40e_hw *hw, bool enable)
+enum
+i40e_status_code i40e_aq_replace_cloud_filters(struct i40e_hw *hw,
+ struct i40e_aqc_replace_cloud_filters_cmd *filters,
+ struct i40e_aqc_replace_cloud_filters_cmd_buf *cmd_buf)
{
- struct i40e_aq_get_phy_abilities_resp abilities;
- struct i40e_aq_set_phy_config config;
- enum i40e_status_code status;
- __le16 eee_capability;
+ struct i40e_aq_desc desc;
+ struct i40e_aqc_replace_cloud_filters_cmd *cmd =
+ (struct i40e_aqc_replace_cloud_filters_cmd *)&desc.params.raw;
+ enum i40e_status_code status = I40E_SUCCESS;
+ int i = 0;
- /* Get initial PHY capabilities */
- status = i40e_aq_get_phy_capabilities(hw, false, true, &abilities,
- NULL);
- if (status)
- goto err;
+ /* X722 doesn't support this command */
+ if (hw->mac.type == I40E_MAC_X722)
+ return I40E_ERR_DEVICE_NOT_SUPPORTED;
- /* Check whether NIC configuration is compatible with Energy Efficient
- * Ethernet (EEE) mode.
- */
- if (abilities.eee_capability == 0) {
- status = I40E_ERR_CONFIG;
- goto err;
- }
+ /* need FW version greater than 6.00 */
+ if (hw->aq.fw_maj_ver < 6)
+ return I40E_NOT_SUPPORTED;
- /* Cache initial EEE capability */
- eee_capability = abilities.eee_capability;
+ i40e_fill_default_direct_cmd_desc(&desc,
+ i40e_aqc_opc_replace_cloud_filters);
- /* Get current configuration */
- status = i40e_aq_get_phy_capabilities(hw, false, false, &abilities,
- NULL);
- if (status)
- goto err;
+ desc.datalen = CPU_TO_LE16(32);
+ desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+ cmd->old_filter_type = filters->old_filter_type;
+ cmd->new_filter_type = filters->new_filter_type;
+ cmd->valid_flags = filters->valid_flags;
+ cmd->tr_bit = filters->tr_bit;
+ cmd->tr_bit2 = filters->tr_bit2;
- /* Cache current configuration */
- config.phy_type = abilities.phy_type;
- config.phy_type_ext = abilities.phy_type_ext;
- config.link_speed = abilities.link_speed;
- config.abilities = abilities.abilities |
- I40E_AQ_PHY_ENABLE_ATOMIC_LINK;
- config.eeer = abilities.eeer_val;
- config.low_power_ctrl = abilities.d3_lpan;
- config.fec_config = abilities.fec_cfg_curr_mod_ext_info &
- I40E_AQ_PHY_FEC_CONFIG_MASK;
-
- /* Set desired EEE state */
- if (enable) {
- config.eee_capability = eee_capability;
- config.eeer |= I40E_PRTPM_EEER_TX_LPI_EN_MASK;
- } else {
- config.eee_capability = 0;
- config.eeer &= ~I40E_PRTPM_EEER_TX_LPI_EN_MASK;
+ status = i40e_asq_send_command(hw, &desc, cmd_buf,
+ sizeof(struct i40e_aqc_replace_cloud_filters_cmd_buf), NULL);
+
+ /* for get cloud filters command */
+ for (i = 0; i < 32; i += 4) {
+ cmd_buf->filters[i / 4].filter_type = cmd_buf->data[i];
+ cmd_buf->filters[i / 4].input[0] = cmd_buf->data[i + 1];
+ cmd_buf->filters[i / 4].input[1] = cmd_buf->data[i + 2];
+ cmd_buf->filters[i / 4].input[2] = cmd_buf->data[i + 3];
}
- /* Save modified config */
- status = i40e_aq_set_phy_config(hw, &config, NULL);
-err:
return status;
}
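The response post-processing loop above de-flattens a 32-byte array into eight 4-byte records: byte 4k is a filter type, bytes 4k+1..4k+3 are its inputs. A standalone sketch with hypothetical struct names (the real layout is defined in the adminq command header):

#include <stdint.h>

/* Hypothetical mirror of the response record layout. */
struct flt { uint8_t filter_type; uint8_t input[3]; };

static void unpack_filters(const uint8_t data[32], struct flt out[8])
{
	for (int i = 0; i < 32; i += 4) {
		out[i / 4].filter_type = data[i];
		out[i / 4].input[0] = data[i + 1];
		out[i / 4].input[1] = data[i + 2];
		out[i / 4].input[2] = data[i + 3];
	}
}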
/**
- * i40e_read_bw_from_alt_ram
+ * i40e_aq_alternate_read
* @hw: pointer to the hardware structure
- * @max_bw: pointer for max_bw read
- * @min_bw: pointer for min_bw read
- * @min_valid: pointer for bool that is true if min_bw is a valid value
- * @max_valid: pointer for bool that is true if max_bw is a valid value
+ * @reg_addr0: address of first dword to be read
+ * @reg_val0: pointer for data read from 'reg_addr0'
+ * @reg_addr1: address of second dword to be read
+ * @reg_val1: pointer for data read from 'reg_addr1'
*
- * Read bw from the alternate ram for the given pf
- **/
-enum i40e_status_code i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
- u32 *max_bw, u32 *min_bw,
- bool *min_valid, bool *max_valid)
-{
- enum i40e_status_code status;
- u32 max_bw_addr, min_bw_addr;
-
- /* Calculate the address of the min/max bw registers */
- max_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET +
- I40E_ALT_STRUCT_MAX_BW_OFFSET +
- (I40E_ALT_STRUCT_DWORDS_PER_PF * hw->pf_id);
- min_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET +
- I40E_ALT_STRUCT_MIN_BW_OFFSET +
- (I40E_ALT_STRUCT_DWORDS_PER_PF * hw->pf_id);
-
- /* Read the bandwidths from alt ram */
- status = i40e_aq_alternate_read(hw, max_bw_addr, max_bw,
- min_bw_addr, min_bw);
-
- if (*min_bw & I40E_ALT_BW_VALID_MASK)
- *min_valid = true;
- else
- *min_valid = false;
-
- if (*max_bw & I40E_ALT_BW_VALID_MASK)
- *max_valid = true;
- else
- *max_valid = false;
-
- return status;
-}
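The address arithmetic above indexes a per-PF record in alternate RAM. A standalone sketch with placeholder constants (the real I40E_ALT_STRUCT_* values live in the driver headers):

#include <stdint.h>

/* Placeholder values, shown for the arithmetic only. */
#define ALT_FIRST_PF_OFFSET	0x1000
#define ALT_DWORDS_PER_PF	2
#define ALT_MAX_BW_OFFSET	0
#define ALT_MIN_BW_OFFSET	1
#define ALT_BW_VALID_MASK	0x80000000u

/* Each PF owns ALT_DWORDS_PER_PF consecutive dwords; 'field' selects
 * the dword within that record. With these values, PF 3 has its max
 * bw at 0x1006 and min bw at 0x1007; a value is usable only when its
 * ALT_BW_VALID_MASK bit is set. */
static uint32_t alt_bw_addr(uint32_t field, uint32_t pf_id)
{
	return ALT_FIRST_PF_OFFSET + field + ALT_DWORDS_PER_PF * pf_id;
}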
-
-/**
- * i40e_aq_configure_partition_bw
- * @hw: pointer to the hardware structure
- * @bw_data: Buffer holding valid pfs and bw limits
- * @cmd_details: pointer to command details
+ * Read one or two dwords from alternate structure. Fields are indicated
+ * by 'reg_addr0' and 'reg_addr1' register numbers. If 'reg_val1' pointer
+ * is not passed then only register at 'reg_addr0' is read.
*
- * Configure partitions guaranteed/max bw
**/
-enum i40e_status_code i40e_aq_configure_partition_bw(struct i40e_hw *hw,
- struct i40e_aqc_configure_partition_bw_data *bw_data,
- struct i40e_asq_cmd_details *cmd_details)
+enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw,
+ u32 reg_addr0, u32 *reg_val0,
+ u32 reg_addr1, u32 *reg_val1)
{
- enum i40e_status_code status;
struct i40e_aq_desc desc;
- u16 bwd_size = sizeof(*bw_data);
+ struct i40e_aqc_alternate_write *cmd_resp =
+ (struct i40e_aqc_alternate_write *)&desc.params.raw;
+ enum i40e_status_code status;
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_configure_partition_bw);
+ if (reg_val0 == NULL)
+ return I40E_ERR_PARAM;
- /* Indirect command */
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD);
+ i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_read);
+ cmd_resp->address0 = CPU_TO_LE32(reg_addr0);
+ cmd_resp->address1 = CPU_TO_LE32(reg_addr1);
+
+ status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
- desc.datalen = CPU_TO_LE16(bwd_size);
+ if (status == I40E_SUCCESS) {
+ *reg_val0 = LE32_TO_CPU(cmd_resp->data0);
- status = i40e_asq_send_command(hw, &desc, bw_data, bwd_size, cmd_details);
+ if (reg_val1 != NULL)
+ *reg_val1 = LE32_TO_CPU(cmd_resp->data1);
+ }
return status;
}
@@ -6758,93 +4842,18 @@ enum i40e_status_code i40e_write_phy_register_clause45(struct i40e_hw *hw,
(I40E_GLGEN_MSCA_MDIINPROGEN_MASK);
status = I40E_ERR_TIMEOUT;
retry = 1000;
- wr32(hw, I40E_GLGEN_MSCA(port_num), command);
- do {
- command = rd32(hw, I40E_GLGEN_MSCA(port_num));
- if (!(command & I40E_GLGEN_MSCA_MDICMD_MASK)) {
- status = I40E_SUCCESS;
- break;
- }
- i40e_usec_delay(10);
- retry--;
- } while (retry);
-
-phy_write_end:
- return status;
-}
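The completion wait above is the usual poll-with-bounded-retries idiom (1000 attempts, 10 us apart). A generic standalone sketch, with function pointers standing in for rd32 and i40e_usec_delay:

#include <stdbool.h>
#include <stdint.h>

/* Poll until 'mask' clears in the register or the retry budget runs
 * out; mirrors the MDIO command-completion loop. */
static bool wait_bit_clear(uint32_t (*read_reg)(void),
			   void (*delay_us)(unsigned int),
			   uint32_t mask, int retries)
{
	while (retries-- > 0) {
		if (!(read_reg() & mask))
			return true;	/* command completed */
		delay_us(10);
	}
	return false;			/* timed out */
}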
-
-/**
- * i40e_write_phy_register
- * @hw: pointer to the HW structure
- * @page: registers page number
- * @reg: register address in the page
- * @phy_addr: PHY address on MDIO interface
- * @value: PHY register value
- *
- * Writes value to specified PHY register
- **/
-enum i40e_status_code i40e_write_phy_register(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 value)
-{
- enum i40e_status_code status;
-
- switch (hw->device_id) {
- case I40E_DEV_ID_1G_BASE_T_X722:
- status = i40e_write_phy_register_clause22(hw,
- reg, phy_addr, value);
- break;
- case I40E_DEV_ID_10G_BASE_T:
- case I40E_DEV_ID_10G_BASE_T4:
- case I40E_DEV_ID_10G_BASE_T_BC:
- case I40E_DEV_ID_5G_BASE_T_BC:
- case I40E_DEV_ID_10G_BASE_T_X722:
- case I40E_DEV_ID_25G_B:
- case I40E_DEV_ID_25G_SFP28:
- status = i40e_write_phy_register_clause45(hw,
- page, reg, phy_addr, value);
- break;
- default:
- status = I40E_ERR_UNKNOWN_PHY;
- break;
- }
-
- return status;
-}
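The device-ID switch above routes the 1G X722 part to MDIO Clause 22 and the 10G/25G families to Clause 45. A condensed sketch of that dispatch, with placeholder IDs (the real I40E_DEV_ID_* values are in i40e_devids.h):

#include <stdbool.h>

enum dev_id { DEV_1G_BASE_T_X722, DEV_10G_BASE_T, DEV_25G_SFP28 };

static bool uses_clause22(enum dev_id id)
{
	/* only the 1G X722 PHY is addressed via MDIO Clause 22; the
	 * 10G and 25G families use Clause 45 register pages */
	return id == DEV_1G_BASE_T_X722;
}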
-
-/**
- * i40e_read_phy_register
- * @hw: pointer to the HW structure
- * @page: registers page number
- * @reg: register address in the page
- * @phy_addr: PHY address on MDIO interface
- * @value: PHY register value
- *
- * Reads specified PHY register value
- **/
-enum i40e_status_code i40e_read_phy_register(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 *value)
-{
- enum i40e_status_code status;
-
- switch (hw->device_id) {
- case I40E_DEV_ID_1G_BASE_T_X722:
- status = i40e_read_phy_register_clause22(hw, reg, phy_addr,
- value);
- break;
- case I40E_DEV_ID_10G_BASE_T:
- case I40E_DEV_ID_10G_BASE_T4:
- case I40E_DEV_ID_5G_BASE_T_BC:
- case I40E_DEV_ID_10G_BASE_T_X722:
- case I40E_DEV_ID_25G_B:
- case I40E_DEV_ID_25G_SFP28:
- status = i40e_read_phy_register_clause45(hw, page, reg,
- phy_addr, value);
- break;
- default:
- status = I40E_ERR_UNKNOWN_PHY;
- break;
- }
+ wr32(hw, I40E_GLGEN_MSCA(port_num), command);
+ do {
+ command = rd32(hw, I40E_GLGEN_MSCA(port_num));
+ if (!(command & I40E_GLGEN_MSCA_MDICMD_MASK)) {
+ status = I40E_SUCCESS;
+ break;
+ }
+ i40e_usec_delay(10);
+ retry--;
+ } while (retry);
+phy_write_end:
return status;
}
@@ -6863,80 +4872,6 @@ u8 i40e_get_phy_address(struct i40e_hw *hw, u8 dev_num)
return (u8)(reg_val >> ((dev_num + 1) * 5)) & 0x1f;
}
-/**
- * i40e_blink_phy_link_led
- * @hw: pointer to the HW structure
- * @time: how long the LED blinks, in seconds
- * @interval: gap between LED on and off in msecs
- *
- * Blinks PHY link LED
- **/
-enum i40e_status_code i40e_blink_phy_link_led(struct i40e_hw *hw,
- u32 time, u32 interval)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- u32 i;
- u16 led_ctl = 0;
- u16 gpio_led_port;
- u16 led_reg;
- u16 led_addr = I40E_PHY_LED_PROV_REG_1;
- u8 phy_addr = 0;
- u8 port_num;
-
- i = rd32(hw, I40E_PFGEN_PORTNUM);
- port_num = (u8)(i & I40E_PFGEN_PORTNUM_PORT_NUM_MASK);
- phy_addr = i40e_get_phy_address(hw, port_num);
-
- for (gpio_led_port = 0; gpio_led_port < 3; gpio_led_port++,
- led_addr++) {
- status = i40e_read_phy_register_clause45(hw,
- I40E_PHY_COM_REG_PAGE,
- led_addr, phy_addr,
- &led_reg);
- if (status)
- goto phy_blinking_end;
- led_ctl = led_reg;
- if (led_reg & I40E_PHY_LED_LINK_MODE_MASK) {
- led_reg = 0;
- status = i40e_write_phy_register_clause45(hw,
- I40E_PHY_COM_REG_PAGE,
- led_addr, phy_addr,
- led_reg);
- if (status)
- goto phy_blinking_end;
- break;
- }
- }
-
- if (time > 0 && interval > 0) {
- for (i = 0; i < time * 1000; i += interval) {
- status = i40e_read_phy_register_clause45(hw,
- I40E_PHY_COM_REG_PAGE,
- led_addr, phy_addr, &led_reg);
- if (status)
- goto restore_config;
- if (led_reg & I40E_PHY_LED_MANUAL_ON)
- led_reg = 0;
- else
- led_reg = I40E_PHY_LED_MANUAL_ON;
- status = i40e_write_phy_register_clause45(hw,
- I40E_PHY_COM_REG_PAGE,
- led_addr, phy_addr, led_reg);
- if (status)
- goto restore_config;
- i40e_msec_delay(interval);
- }
- }
-
-restore_config:
- status = i40e_write_phy_register_clause45(hw,
- I40E_PHY_COM_REG_PAGE,
- led_addr, phy_addr, led_ctl);
-
-phy_blinking_end:
- return status;
-}
-
/**
* i40e_led_get_reg - read LED register
* @hw: pointer to the HW structure
@@ -6995,153 +4930,7 @@ enum i40e_status_code i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr,
return status;
}
-/**
- * i40e_led_get_phy - return current on/off mode
- * @hw: pointer to the hw struct
- * @led_addr: address of led register to use
- * @val: original value of register to use
- *
- **/
-enum i40e_status_code i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr,
- u16 *val)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- u16 gpio_led_port;
- u32 reg_val_aq;
- u16 temp_addr;
- u8 phy_addr = 0;
- u16 reg_val;
-
- if (hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE) {
- status = i40e_aq_get_phy_register(hw,
- I40E_AQ_PHY_REG_ACCESS_EXTERNAL,
- I40E_PHY_COM_REG_PAGE, true,
- I40E_PHY_LED_PROV_REG_1,
- &reg_val_aq, NULL);
- if (status == I40E_SUCCESS)
- *val = (u16)reg_val_aq;
- return status;
- }
- temp_addr = I40E_PHY_LED_PROV_REG_1;
- phy_addr = i40e_get_phy_address(hw, hw->port);
- for (gpio_led_port = 0; gpio_led_port < 3; gpio_led_port++,
- temp_addr++) {
- status = i40e_read_phy_register_clause45(hw,
- I40E_PHY_COM_REG_PAGE,
- temp_addr, phy_addr,
- &reg_val);
- if (status)
- return status;
- *val = reg_val;
- if (reg_val & I40E_PHY_LED_LINK_MODE_MASK) {
- *led_addr = temp_addr;
- break;
- }
- }
- return status;
-}
-
-/**
- * i40e_led_set_phy
- * @hw: pointer to the HW structure
- * @on: true or false
- * @led_addr: address of led register to use
- * @mode: original val plus bit for set or ignore
- *
- * Set led's on or off when controlled by the PHY
- *
- **/
-enum i40e_status_code i40e_led_set_phy(struct i40e_hw *hw, bool on,
- u16 led_addr, u32 mode)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- u32 led_ctl = 0;
- u32 led_reg = 0;
-
- status = i40e_led_get_reg(hw, led_addr, &led_reg);
- if (status)
- return status;
- led_ctl = led_reg;
- if (led_reg & I40E_PHY_LED_LINK_MODE_MASK) {
- led_reg = 0;
- status = i40e_led_set_reg(hw, led_addr, led_reg);
- if (status)
- return status;
- }
- status = i40e_led_get_reg(hw, led_addr, &led_reg);
- if (status)
- goto restore_config;
- if (on)
- led_reg = I40E_PHY_LED_MANUAL_ON;
- else
- led_reg = 0;
- status = i40e_led_set_reg(hw, led_addr, led_reg);
- if (status)
- goto restore_config;
- if (mode & I40E_PHY_LED_MODE_ORIG) {
- led_ctl = (mode & I40E_PHY_LED_MODE_MASK);
- status = i40e_led_set_reg(hw, led_addr, led_ctl);
- }
- return status;
-
-restore_config:
- status = i40e_led_set_reg(hw, led_addr, led_ctl);
- return status;
-}
#endif /* PF_DRIVER */
-/**
- * i40e_get_phy_lpi_status - read LPI status from PHY or MAC register
- * @hw: pointer to the hw struct
- * @stat: pointer to structure with status of rx and tx lpi
- *
- * Read LPI state directly from external PHY register or from MAC
- * register, depending on device ID and current link speed.
- */
-enum i40e_status_code i40e_get_phy_lpi_status(struct i40e_hw *hw,
- struct i40e_hw_port_stats *stat)
-{
- enum i40e_status_code ret = I40E_SUCCESS;
- bool eee_mrvl_phy;
- bool eee_bcm_phy;
- u32 val;
-
- stat->rx_lpi_status = 0;
- stat->tx_lpi_status = 0;
-
- eee_bcm_phy =
- (hw->device_id == I40E_DEV_ID_10G_BASE_T_BC ||
- hw->device_id == I40E_DEV_ID_5G_BASE_T_BC) &&
- (hw->phy.link_info.link_speed == I40E_LINK_SPEED_2_5GB ||
- hw->phy.link_info.link_speed == I40E_LINK_SPEED_5GB);
- eee_mrvl_phy =
- hw->device_id == I40E_DEV_ID_1G_BASE_T_X722;
-
- if (eee_bcm_phy || eee_mrvl_phy) {
- /* read Clause 45 PCS Status 1 register */
- ret = i40e_aq_get_phy_register(hw,
- I40E_AQ_PHY_REG_ACCESS_EXTERNAL,
- I40E_BCM_PHY_PCS_STATUS1_PAGE,
- true,
- I40E_BCM_PHY_PCS_STATUS1_REG,
- &val, NULL);
-
- if (ret != I40E_SUCCESS)
- return ret;
-
- stat->rx_lpi_status = !!(val & I40E_BCM_PHY_PCS_STATUS1_RX_LPI);
- stat->tx_lpi_status = !!(val & I40E_BCM_PHY_PCS_STATUS1_TX_LPI);
-
- return ret;
- }
-
- val = rd32(hw, I40E_PRTPM_EEE_STAT);
- stat->rx_lpi_status = (val & I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_MASK) >>
- I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_SHIFT;
- stat->tx_lpi_status = (val & I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_MASK) >>
- I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_SHIFT;
-
- return ret;
-}
/**
* i40e_get_lpi_counters - read LPI counters from EEE statistics
@@ -7185,108 +4974,6 @@ enum i40e_status_code i40e_get_lpi_counters(struct i40e_hw *hw,
return I40E_SUCCESS;
}
-/**
- * i40e_get_lpi_duration - read LPI time duration from EEE statistics
- * @hw: pointer to the hw struct
- * @stat: pointer to structure with status of rx and tx lpi
- * @tx_duration: pointer to memory for TX LPI time duration
- * @rx_duration: pointer to memory for RX LPI time duration
- *
- * Read Low Power Idle (LPI) mode time duration from Energy Efficient
- * Ethernet (EEE) statistics.
- */
-enum i40e_status_code i40e_get_lpi_duration(struct i40e_hw *hw,
- struct i40e_hw_port_stats *stat,
- u64 *tx_duration, u64 *rx_duration)
-{
- u32 tx_time_dur, rx_time_dur;
- enum i40e_status_code retval;
- u32 cmd_status;
-
- if (hw->device_id != I40E_DEV_ID_10G_BASE_T_BC &&
- hw->device_id != I40E_DEV_ID_5G_BASE_T_BC)
- return I40E_ERR_NOT_IMPLEMENTED;
-
- retval = i40e_aq_run_phy_activity
- (hw, I40E_AQ_RUN_PHY_ACT_ID_USR_DFND,
- I40E_AQ_RUN_PHY_ACT_DNL_OPCODE_GET_EEE_DUR,
- &cmd_status, &tx_time_dur, &rx_time_dur, NULL);
-
- if (retval)
- return retval;
- if ((cmd_status & I40E_AQ_RUN_PHY_ACT_CMD_STAT_MASK) !=
- I40E_AQ_RUN_PHY_ACT_CMD_STAT_SUCC)
- return I40E_ERR_ADMIN_QUEUE_ERROR;
-
- if (hw->phy.link_info.link_speed == I40E_LINK_SPEED_1GB &&
- !tx_time_dur && !rx_time_dur &&
- stat->tx_lpi_status && stat->rx_lpi_status) {
- retval = i40e_aq_run_phy_activity
- (hw, I40E_AQ_RUN_PHY_ACT_ID_USR_DFND,
- I40E_AQ_RUN_PHY_ACT_DNL_OPCODE_GET_EEE_STAT_DUR,
- &cmd_status,
- &tx_time_dur, &rx_time_dur, NULL);
-
- if (retval)
- return retval;
- if ((cmd_status & I40E_AQ_RUN_PHY_ACT_CMD_STAT_MASK) !=
- I40E_AQ_RUN_PHY_ACT_CMD_STAT_SUCC)
- return I40E_ERR_ADMIN_QUEUE_ERROR;
- tx_time_dur = 0;
- rx_time_dur = 0;
- }
-
- *tx_duration = tx_time_dur;
- *rx_duration = rx_time_dur;
-
- return retval;
-}
-
-/**
- * i40e_lpi_stat_update - update LPI counters with values relative to offset
- * @hw: pointer to the hw struct
- * @offset_loaded: flag indicating need of writing current value to offset
- * @tx_offset: pointer to offset of TX LPI counter
- * @tx_stat: pointer to value of TX LPI counter
- * @rx_offset: pointer to offset of RX LPI counter
- * @rx_stat: pointer to value of RX LPI counter
- *
- * Update Low Power Idle (LPI) mode counters, taking the passed offsets
- * into account.
- **/
-enum i40e_status_code i40e_lpi_stat_update(struct i40e_hw *hw,
- bool offset_loaded, u64 *tx_offset,
- u64 *tx_stat, u64 *rx_offset,
- u64 *rx_stat)
-{
- enum i40e_status_code retval;
- u32 tx_counter, rx_counter;
- bool is_clear;
-
- retval = i40e_get_lpi_counters(hw, &tx_counter, &rx_counter, &is_clear);
- if (retval)
- goto err;
-
- if (is_clear) {
- *tx_stat += tx_counter;
- *rx_stat += rx_counter;
- } else {
- if (!offset_loaded) {
- *tx_offset = tx_counter;
- *rx_offset = rx_counter;
- }
-
- *tx_stat = (tx_counter >= *tx_offset) ?
- (u32)(tx_counter - *tx_offset) :
- (u32)((tx_counter + BIT_ULL(32)) - *tx_offset);
- *rx_stat = (rx_counter >= *rx_offset) ?
- (u32)(rx_counter - *rx_offset) :
- (u32)((rx_counter + BIT_ULL(32)) - *rx_offset);
- }
-err:
- return retval;
-}
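The offset handling above guards against 32-bit counter rollover: when the raw counter has wrapped past zero since the offset was sampled, a full 2^32 is added back before subtracting. A standalone sketch of the same arithmetic, not driver code:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* The hardware counter is 32 bits and wraps; 'offset' is the value
 * sampled when the statistics were last reset. */
static uint64_t rel_counter(uint32_t counter, uint32_t offset)
{
	if (counter >= offset)
		return (uint64_t)(counter - offset);
	/* counter wrapped past zero since the offset was taken */
	return ((uint64_t)counter + (1ULL << 32)) - offset;
}

int main(void)
{
	/* wrapped case: offset 0xFFFFFFF0, counter 0x10 -> 0x20 ticks */
	printf("%" PRIu64 "\n", rel_counter(0x10, 0xFFFFFFF0));
	return 0;
}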
-
/**
* i40e_aq_rx_ctl_read_register - use FW to read from an Rx control register
* @hw: pointer to the hw struct
@@ -7674,195 +5361,6 @@ enum i40e_status_code i40e_vf_reset(struct i40e_hw *hw)
}
#endif /* VF_DRIVER */
-/**
- * i40e_aq_set_arp_proxy_config
- * @hw: pointer to the HW structure
- * @proxy_config: pointer to proxy config command table struct
- * @cmd_details: pointer to command details
- *
- * Set ARP offload parameters from pre-populated
- * i40e_aqc_arp_proxy_data struct
- **/
-enum i40e_status_code i40e_aq_set_arp_proxy_config(struct i40e_hw *hw,
- struct i40e_aqc_arp_proxy_data *proxy_config,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- enum i40e_status_code status;
-
- if (!proxy_config)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_proxy_config);
-
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD);
- desc.params.external.addr_high =
- CPU_TO_LE32(I40E_HI_DWORD((u64)proxy_config));
- desc.params.external.addr_low =
- CPU_TO_LE32(I40E_LO_DWORD((u64)proxy_config));
- desc.datalen = CPU_TO_LE16(sizeof(struct i40e_aqc_arp_proxy_data));
-
- status = i40e_asq_send_command(hw, &desc, proxy_config,
- sizeof(struct i40e_aqc_arp_proxy_data),
- cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_set_ns_proxy_table_entry
- * @hw: pointer to the HW structure
- * @ns_proxy_table_entry: pointer to NS table entry command struct
- * @cmd_details: pointer to command details
- *
- * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
- * from pre-populated i40e_aqc_ns_proxy_data struct
- **/
-enum i40e_status_code i40e_aq_set_ns_proxy_table_entry(struct i40e_hw *hw,
- struct i40e_aqc_ns_proxy_data *ns_proxy_table_entry,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- enum i40e_status_code status;
-
- if (!ns_proxy_table_entry)
- return I40E_ERR_PARAM;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_ns_proxy_table_entry);
-
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD);
- desc.params.external.addr_high =
- CPU_TO_LE32(I40E_HI_DWORD((u64)ns_proxy_table_entry));
- desc.params.external.addr_low =
- CPU_TO_LE32(I40E_LO_DWORD((u64)ns_proxy_table_entry));
- desc.datalen = CPU_TO_LE16(sizeof(struct i40e_aqc_ns_proxy_data));
-
- status = i40e_asq_send_command(hw, &desc, ns_proxy_table_entry,
- sizeof(struct i40e_aqc_ns_proxy_data),
- cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_set_clear_wol_filter
- * @hw: pointer to the hw struct
- * @filter_index: index of filter to modify (0-7)
- * @filter: buffer containing filter to be set
- * @set_filter: true to set filter, false to clear filter
- * @no_wol_tco: if true, pass through packets cannot cause wake-up
- * if false, pass through packets may cause wake-up
- * @filter_valid: true if filter action is valid
- * @no_wol_tco_valid: true if no WoL in TCO traffic action valid
- * @cmd_details: pointer to command details structure or NULL
- *
- * Set or clear WoL filter for port attached to the PF
- **/
-enum i40e_status_code i40e_aq_set_clear_wol_filter(struct i40e_hw *hw,
- u8 filter_index,
- struct i40e_aqc_set_wol_filter_data *filter,
- bool set_filter, bool no_wol_tco,
- bool filter_valid, bool no_wol_tco_valid,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_set_wol_filter *cmd =
- (struct i40e_aqc_set_wol_filter *)&desc.params.raw;
- enum i40e_status_code status;
- u16 cmd_flags = 0;
- u16 valid_flags = 0;
- u16 buff_len = 0;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_wol_filter);
-
- if (filter_index >= I40E_AQC_MAX_NUM_WOL_FILTERS)
- return I40E_ERR_PARAM;
- cmd->filter_index = CPU_TO_LE16(filter_index);
-
- if (set_filter) {
- if (!filter)
- return I40E_ERR_PARAM;
-
- cmd_flags |= I40E_AQC_SET_WOL_FILTER;
- cmd_flags |= I40E_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
- }
-
- if (no_wol_tco)
- cmd_flags |= I40E_AQC_SET_WOL_FILTER_NO_TCO_WOL;
- cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
-
- if (filter_valid)
- valid_flags |= I40E_AQC_SET_WOL_FILTER_ACTION_VALID;
- if (no_wol_tco_valid)
- valid_flags |= I40E_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
- cmd->valid_flags = CPU_TO_LE16(valid_flags);
-
- buff_len = sizeof(*filter);
- desc.datalen = CPU_TO_LE16(buff_len);
-
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF);
- desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD);
-
- cmd->address_high = CPU_TO_LE32(I40E_HI_DWORD((u64)filter));
- cmd->address_low = CPU_TO_LE32(I40E_LO_DWORD((u64)filter));
-
- status = i40e_asq_send_command(hw, &desc, filter,
- buff_len, cmd_details);
-
- return status;
-}
-
-/**
- * i40e_aq_get_wake_event_reason
- * @hw: pointer to the hw struct
- * @wake_reason: return value, index of matching filter
- * @cmd_details: pointer to command details structure or NULL
- *
- * Get information for the reason of a Wake Up event
- **/
-enum i40e_status_code i40e_aq_get_wake_event_reason(struct i40e_hw *hw,
- u16 *wake_reason,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aqc_get_wake_reason_completion *resp =
- (struct i40e_aqc_get_wake_reason_completion *)&desc.params.raw;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_wake_reason);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- if (status == I40E_SUCCESS)
- *wake_reason = LE16_TO_CPU(resp->wake_reason);
-
- return status;
-}
-
-/**
-* i40e_aq_clear_all_wol_filters
-* @hw: pointer to the hw struct
-* @cmd_details: pointer to command details structure or NULL
-*
-* Clear all Wake on LAN (WoL) filters for the port
-**/
-enum i40e_status_code i40e_aq_clear_all_wol_filters(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- enum i40e_status_code status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_clear_all_wol_filters);
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
/**
* i40e_aq_write_ddp - Write dynamic device personalization (ddp)
* @hw: pointer to the hw struct
@@ -8243,42 +5741,3 @@ i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
}
return status;
}
-
-/**
- * i40e_add_pinfo_to_list
- * @hw: pointer to the hardware structure
- * @profile: pointer to the profile segment of the package
- * @profile_info_sec: buffer for information section
- * @track_id: package tracking id
- *
- * Register a profile to the list of loaded profiles.
- */
-enum i40e_status_code
-i40e_add_pinfo_to_list(struct i40e_hw *hw,
- struct i40e_profile_segment *profile,
- u8 *profile_info_sec, u32 track_id)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- struct i40e_profile_section_header *sec = NULL;
- struct i40e_profile_info *pinfo;
- u32 offset = 0, info = 0;
-
- sec = (struct i40e_profile_section_header *)profile_info_sec;
- sec->tbl_size = 1;
- sec->data_end = sizeof(struct i40e_profile_section_header) +
- sizeof(struct i40e_profile_info);
- sec->section.type = SECTION_TYPE_INFO;
- sec->section.offset = sizeof(struct i40e_profile_section_header);
- sec->section.size = sizeof(struct i40e_profile_info);
- pinfo = (struct i40e_profile_info *)(profile_info_sec +
- sec->section.offset);
- pinfo->track_id = track_id;
- pinfo->version = profile->version;
- pinfo->op = I40E_DDP_ADD_TRACKID;
- i40e_memcpy(pinfo->name, profile->name, I40E_DDP_NAME_SIZE,
- I40E_NONDMA_TO_NONDMA);
-
- status = i40e_aq_write_ddp(hw, (void *)sec, sec->data_end,
- track_id, &offset, &info, NULL);
- return status;
-}
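The section building above writes a header followed immediately by its payload into one flat buffer, with the header recording the payload's offset and total length. A standalone sketch of that layout pattern, with simplified stand-in structs (the real ones also carry a table size and DDP-specific fields):

#include <stdint.h>
#include <string.h>

struct sec_hdr { uint32_t type, offset, size, data_end; };
struct sec_info { uint32_t track_id; char name[32]; };

static void build_info_section(uint8_t *buf, uint32_t track_id,
			       const char *name)
{
	struct sec_hdr *hdr = (struct sec_hdr *)buf;
	struct sec_info *info;

	hdr->type = 1;			/* SECTION_TYPE_INFO stand-in */
	hdr->offset = sizeof(*hdr);	/* payload follows the header */
	hdr->size = sizeof(*info);
	hdr->data_end = sizeof(*hdr) + sizeof(*info);

	info = (struct sec_info *)(buf + hdr->offset);
	info->track_id = track_id;
	strncpy(info->name, name, sizeof(info->name) - 1);
	info->name[sizeof(info->name) - 1] = '\0';
}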
diff --git a/drivers/net/i40e/base/i40e_dcb.c b/drivers/net/i40e/base/i40e_dcb.c
index 388af3d64d..ceb2f37927 100644
--- a/drivers/net/i40e/base/i40e_dcb.c
+++ b/drivers/net/i40e/base/i40e_dcb.c
@@ -932,49 +932,6 @@ enum i40e_status_code i40e_init_dcb(struct i40e_hw *hw, bool enable_mib_change)
return ret;
}
-/**
- * i40e_get_fw_lldp_status
- * @hw: pointer to the hw struct
- * @lldp_status: pointer to the status enum
- *
- * Get status of FW Link Layer Discovery Protocol (LLDP) Agent.
- * Status of agent is reported via @lldp_status parameter.
- **/
-enum i40e_status_code
-i40e_get_fw_lldp_status(struct i40e_hw *hw,
- enum i40e_get_fw_lldp_status_resp *lldp_status)
-{
- enum i40e_status_code ret;
- struct i40e_virt_mem mem;
- u8 *lldpmib;
-
- if (!lldp_status)
- return I40E_ERR_PARAM;
-
- /* Allocate buffer for the LLDPDU */
- ret = i40e_allocate_virt_mem(hw, &mem, I40E_LLDPDU_SIZE);
- if (ret)
- return ret;
-
- lldpmib = (u8 *)mem.va;
- ret = i40e_aq_get_lldp_mib(hw, 0, 0, (void *)lldpmib,
- I40E_LLDPDU_SIZE, NULL, NULL, NULL);
-
- if (ret == I40E_SUCCESS) {
- *lldp_status = I40E_GET_FW_LLDP_STATUS_ENABLED;
- } else if (hw->aq.asq_last_status == I40E_AQ_RC_ENOENT) {
- /* MIB is not available yet but the agent is running */
- *lldp_status = I40E_GET_FW_LLDP_STATUS_ENABLED;
- ret = I40E_SUCCESS;
- } else if (hw->aq.asq_last_status == I40E_AQ_RC_EPERM) {
- *lldp_status = I40E_GET_FW_LLDP_STATUS_DISABLED;
- ret = I40E_SUCCESS;
- }
-
- i40e_free_virt_mem(hw, &mem);
- return ret;
-}
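Note how the AQ result is folded into a status above: a successful MIB read or ENOENT both mean the agent is running (ENOENT only means no MIB is available yet), while EPERM means the agent is stopped. A tiny sketch of that mapping, with booleans standing in for the I40E_AQ_RC_* codes:

#include <stdbool.h>

enum lldp_status { LLDP_ENABLED, LLDP_DISABLED, LLDP_UNKNOWN };

static enum lldp_status classify(bool mib_read, bool no_mib_yet,
				 bool not_permitted)
{
	if (mib_read || no_mib_yet)
		return LLDP_ENABLED;	/* agent running (MIB may lag) */
	if (not_permitted)
		return LLDP_DISABLED;	/* FW agent stopped */
	return LLDP_UNKNOWN;		/* genuine error path */
}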
-
/**
* i40e_add_ieee_ets_tlv - Prepare ETS TLV in IEEE format
* @tlv: Fill the ETS config data in IEEE format
diff --git a/drivers/net/i40e/base/i40e_dcb.h b/drivers/net/i40e/base/i40e_dcb.h
index 0409fd3e1a..01c1d8af11 100644
--- a/drivers/net/i40e/base/i40e_dcb.h
+++ b/drivers/net/i40e/base/i40e_dcb.h
@@ -199,9 +199,6 @@ enum i40e_status_code i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type,
enum i40e_status_code i40e_get_dcb_config(struct i40e_hw *hw);
enum i40e_status_code i40e_init_dcb(struct i40e_hw *hw,
bool enable_mib_change);
-enum i40e_status_code
-i40e_get_fw_lldp_status(struct i40e_hw *hw,
- enum i40e_get_fw_lldp_status_resp *lldp_status);
enum i40e_status_code i40e_set_dcb_config(struct i40e_hw *hw);
enum i40e_status_code i40e_dcb_config_to_lldp(u8 *lldpmib, u16 *miblen,
struct i40e_dcbx_config *dcbcfg);
diff --git a/drivers/net/i40e/base/i40e_diag.c b/drivers/net/i40e/base/i40e_diag.c
deleted file mode 100644
index b3c4cfd3aa..0000000000
--- a/drivers/net/i40e/base/i40e_diag.c
+++ /dev/null
@@ -1,146 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2020 Intel Corporation
- */
-
-#include "i40e_diag.h"
-#include "i40e_prototype.h"
-
-/**
- * i40e_diag_set_loopback
- * @hw: pointer to the hw struct
- * @mode: loopback mode
- *
- * Set chosen loopback mode
- **/
-enum i40e_status_code i40e_diag_set_loopback(struct i40e_hw *hw,
- enum i40e_lb_mode mode)
-{
- enum i40e_status_code ret_code = I40E_SUCCESS;
-
- if (i40e_aq_set_lb_modes(hw, mode, NULL))
- ret_code = I40E_ERR_DIAG_TEST_FAILED;
-
- return ret_code;
-}
-
-/**
- * i40e_diag_reg_pattern_test
- * @hw: pointer to the hw struct
- * @reg: reg to be tested
- * @mask: bits to be touched
- **/
-static enum i40e_status_code i40e_diag_reg_pattern_test(struct i40e_hw *hw,
- u32 reg, u32 mask)
-{
- const u32 patterns[] = {0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF};
- u32 pat, val, orig_val;
- int i;
-
- orig_val = rd32(hw, reg);
- for (i = 0; i < ARRAY_SIZE(patterns); i++) {
- pat = patterns[i];
- wr32(hw, reg, (pat & mask));
- val = rd32(hw, reg);
- if ((val & mask) != (pat & mask)) {
- return I40E_ERR_DIAG_TEST_FAILED;
- }
- }
-
- wr32(hw, reg, orig_val);
- val = rd32(hw, reg);
- if (val != orig_val) {
- return I40E_ERR_DIAG_TEST_FAILED;
- }
-
- return I40E_SUCCESS;
-}
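The pattern test above is the classic write/read-back check: each pattern is written under the writable-bits mask, read back and compared under the same mask, and the original value is restored at the end. A standalone sketch against a fake register, not driver code:

#include <stdbool.h>
#include <stdint.h>

static uint32_t fake_reg;	/* stands in for a device register */
static uint32_t rd(void) { return fake_reg; }
static void wr(uint32_t v) { fake_reg = v; }

static bool reg_pattern_test(uint32_t mask)
{
	const uint32_t patterns[] = {
		0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF };
	uint32_t orig = rd();
	bool ok = true;

	for (unsigned int i = 0; i < 4; i++) {
		wr(patterns[i] & mask);
		if ((rd() & mask) != (patterns[i] & mask))
			ok = false;	/* a writable bit did not stick */
	}
	wr(orig);			/* always restore the register */
	return ok && rd() == orig;
}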
-
-static struct i40e_diag_reg_test_info i40e_reg_list[] = {
- /* offset mask elements stride */
- {I40E_QTX_CTL(0), 0x0000FFBF, 1, I40E_QTX_CTL(1) - I40E_QTX_CTL(0)},
- {I40E_PFINT_ITR0(0), 0x00000FFF, 3, I40E_PFINT_ITR0(1) - I40E_PFINT_ITR0(0)},
- {I40E_PFINT_ITRN(0, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(0, 1) - I40E_PFINT_ITRN(0, 0)},
- {I40E_PFINT_ITRN(1, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(1, 1) - I40E_PFINT_ITRN(1, 0)},
- {I40E_PFINT_ITRN(2, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(2, 1) - I40E_PFINT_ITRN(2, 0)},
- {I40E_PFINT_STAT_CTL0, 0x0000000C, 1, 0},
- {I40E_PFINT_LNKLST0, 0x00001FFF, 1, 0},
- {I40E_PFINT_LNKLSTN(0), 0x000007FF, 1, I40E_PFINT_LNKLSTN(1) - I40E_PFINT_LNKLSTN(0)},
- {I40E_QINT_TQCTL(0), 0x000000FF, 1, I40E_QINT_TQCTL(1) - I40E_QINT_TQCTL(0)},
- {I40E_QINT_RQCTL(0), 0x000000FF, 1, I40E_QINT_RQCTL(1) - I40E_QINT_RQCTL(0)},
- {I40E_PFINT_ICR0_ENA, 0xF7F20000, 1, 0},
- { 0 }
-};
-
-/**
- * i40e_diag_reg_test
- * @hw: pointer to the hw struct
- *
- * Perform registers diagnostic test
- **/
-enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw)
-{
- enum i40e_status_code ret_code = I40E_SUCCESS;
- u32 reg, mask;
- u32 i, j;
-
- for (i = 0; i40e_reg_list[i].offset != 0 &&
- ret_code == I40E_SUCCESS; i++) {
-
- /* set actual reg range for dynamically allocated resources */
- if (i40e_reg_list[i].offset == I40E_QTX_CTL(0) &&
- hw->func_caps.num_tx_qp != 0)
- i40e_reg_list[i].elements = hw->func_caps.num_tx_qp;
- if ((i40e_reg_list[i].offset == I40E_PFINT_ITRN(0, 0) ||
- i40e_reg_list[i].offset == I40E_PFINT_ITRN(1, 0) ||
- i40e_reg_list[i].offset == I40E_PFINT_ITRN(2, 0) ||
- i40e_reg_list[i].offset == I40E_QINT_TQCTL(0) ||
- i40e_reg_list[i].offset == I40E_QINT_RQCTL(0)) &&
- hw->func_caps.num_msix_vectors != 0)
- i40e_reg_list[i].elements =
- hw->func_caps.num_msix_vectors - 1;
-
- /* test register access */
- mask = i40e_reg_list[i].mask;
- for (j = 0; j < i40e_reg_list[i].elements &&
- ret_code == I40E_SUCCESS; j++) {
- reg = i40e_reg_list[i].offset
- + (j * i40e_reg_list[i].stride);
- ret_code = i40e_diag_reg_pattern_test(hw, reg, mask);
- }
- }
-
- return ret_code;
-}
-
-/**
- * i40e_diag_eeprom_test
- * @hw: pointer to the hw struct
- *
- * Perform EEPROM diagnostic test
- **/
-enum i40e_status_code i40e_diag_eeprom_test(struct i40e_hw *hw)
-{
- enum i40e_status_code ret_code;
- u16 reg_val;
-
- /* read NVM control word and if NVM valid, validate EEPROM checksum */
- ret_code = i40e_read_nvm_word(hw, I40E_SR_NVM_CONTROL_WORD, &reg_val);
- if ((ret_code == I40E_SUCCESS) &&
- ((reg_val & I40E_SR_CONTROL_WORD_1_MASK) ==
- BIT(I40E_SR_CONTROL_WORD_1_SHIFT)))
- return i40e_validate_nvm_checksum(hw, NULL);
- else
- return I40E_ERR_DIAG_TEST_FAILED;
-}
-
-/**
- * i40e_diag_fw_alive_test
- * @hw: pointer to the hw struct
- *
- * Perform FW alive diagnostic test
- **/
-enum i40e_status_code i40e_diag_fw_alive_test(struct i40e_hw *hw)
-{
- UNREFERENCED_1PARAMETER(hw);
- return I40E_SUCCESS;
-}
diff --git a/drivers/net/i40e/base/i40e_diag.h b/drivers/net/i40e/base/i40e_diag.h
deleted file mode 100644
index cb59285d9c..0000000000
--- a/drivers/net/i40e/base/i40e_diag.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2020 Intel Corporation
- */
-
-#ifndef _I40E_DIAG_H_
-#define _I40E_DIAG_H_
-
-#include "i40e_type.h"
-
-enum i40e_lb_mode {
- I40E_LB_MODE_NONE = 0x0,
- I40E_LB_MODE_PHY_LOCAL = I40E_AQ_LB_PHY_LOCAL,
- I40E_LB_MODE_PHY_REMOTE = I40E_AQ_LB_PHY_REMOTE,
- I40E_LB_MODE_MAC_LOCAL = I40E_AQ_LB_MAC_LOCAL,
-};
-
-struct i40e_diag_reg_test_info {
- u32 offset; /* the base register */
- u32 mask; /* bits that can be tested */
- u32 elements; /* number of elements if array */
- u32 stride; /* bytes between each element */
-};
-
-enum i40e_status_code i40e_diag_set_loopback(struct i40e_hw *hw,
- enum i40e_lb_mode mode);
-enum i40e_status_code i40e_diag_fw_alive_test(struct i40e_hw *hw);
-enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw);
-enum i40e_status_code i40e_diag_eeprom_test(struct i40e_hw *hw);
-
-#endif /* _I40E_DIAG_H_ */
diff --git a/drivers/net/i40e/base/i40e_lan_hmc.c b/drivers/net/i40e/base/i40e_lan_hmc.c
index d3969396f0..5242ba8deb 100644
--- a/drivers/net/i40e/base/i40e_lan_hmc.c
+++ b/drivers/net/i40e/base/i40e_lan_hmc.c
@@ -914,228 +914,6 @@ static void i40e_write_qword(u8 *hmc_bits,
i40e_memcpy(dest, &dest_qword, sizeof(dest_qword), I40E_NONDMA_TO_DMA);
}
-/**
- * i40e_read_byte - read HMC context byte into struct
- * @hmc_bits: pointer to the HMC memory
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static void i40e_read_byte(u8 *hmc_bits,
- struct i40e_context_ele *ce_info,
- u8 *dest)
-{
- u8 dest_byte, mask;
- u8 *src, *target;
- u16 shift_width;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
- mask = (u8)(BIT(ce_info->width) - 1);
-
- /* shift to correct alignment */
- mask <<= shift_width;
-
- /* get the current bits from the src bit string */
- src = hmc_bits + (ce_info->lsb / 8);
-
- i40e_memcpy(&dest_byte, src, sizeof(dest_byte), I40E_DMA_TO_NONDMA);
-
- dest_byte &= ~(mask);
-
- dest_byte >>= shift_width;
-
- /* get the address from the struct field */
- target = dest + ce_info->offset;
-
- /* put it back in the struct */
- i40e_memcpy(target, &dest_byte, sizeof(dest_byte), I40E_NONDMA_TO_DMA);
-}
-
-/**
- * i40e_read_word - read HMC context word into struct
- * @hmc_bits: pointer to the HMC memory
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static void i40e_read_word(u8 *hmc_bits,
- struct i40e_context_ele *ce_info,
- u8 *dest)
-{
- u16 dest_word, mask;
- u8 *src, *target;
- u16 shift_width;
- __le16 src_word;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
- mask = BIT(ce_info->width) - 1;
-
- /* shift to correct alignment */
- mask <<= shift_width;
-
- /* get the current bits from the src bit string */
- src = hmc_bits + (ce_info->lsb / 8);
-
- i40e_memcpy(&src_word, src, sizeof(src_word), I40E_DMA_TO_NONDMA);
-
- /* the data in the memory is stored as little endian so mask it
- * correctly
- */
- src_word &= ~(CPU_TO_LE16(mask));
-
- /* get the data back into host order before shifting */
- dest_word = LE16_TO_CPU(src_word);
-
- dest_word >>= shift_width;
-
- /* get the address from the struct field */
- target = dest + ce_info->offset;
-
- /* put it back in the struct */
- i40e_memcpy(target, &dest_word, sizeof(dest_word), I40E_NONDMA_TO_DMA);
-}
-
-/**
- * i40e_read_dword - read HMC context dword into struct
- * @hmc_bits: pointer to the HMC memory
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static void i40e_read_dword(u8 *hmc_bits,
- struct i40e_context_ele *ce_info,
- u8 *dest)
-{
- u32 dest_dword, mask;
- u8 *src, *target;
- u16 shift_width;
- __le32 src_dword;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
-
- /* if the field width is exactly 32 on an x86 machine, then the shift
- * operation will not work because the SHL instructions count is masked
- * to 5 bits so the shift will do nothing
- */
- if (ce_info->width < 32)
- mask = BIT(ce_info->width) - 1;
- else
- mask = ~(u32)0;
-
- /* shift to correct alignment */
- mask <<= shift_width;
-
- /* get the current bits from the src bit string */
- src = hmc_bits + (ce_info->lsb / 8);
-
- i40e_memcpy(&src_dword, src, sizeof(src_dword), I40E_DMA_TO_NONDMA);
-
- /* the data in the memory is stored as little endian so mask it
- * correctly
- */
- src_dword &= ~(CPU_TO_LE32(mask));
-
- /* get the data back into host order before shifting */
- dest_dword = LE32_TO_CPU(src_dword);
-
- dest_dword >>= shift_width;
-
- /* get the address from the struct field */
- target = dest + ce_info->offset;
-
- /* put it back in the struct */
- i40e_memcpy(target, &dest_dword, sizeof(dest_dword),
- I40E_NONDMA_TO_DMA);
-}
-
-/**
- * i40e_read_qword - read HMC context qword into struct
- * @hmc_bits: pointer to the HMC memory
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static void i40e_read_qword(u8 *hmc_bits,
- struct i40e_context_ele *ce_info,
- u8 *dest)
-{
- u64 dest_qword, mask;
- u8 *src, *target;
- u16 shift_width;
- __le64 src_qword;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
-
- /* if the field width is exactly 64 on an x86 machine, then the shift
- * operation will not work because the SHL instructions count is masked
- * to 6 bits so the shift will do nothing
- */
- if (ce_info->width < 64)
- mask = BIT_ULL(ce_info->width) - 1;
- else
- mask = ~(u64)0;
-
- /* shift to correct alignment */
- mask <<= shift_width;
-
- /* get the current bits from the src bit string */
- src = hmc_bits + (ce_info->lsb / 8);
-
- i40e_memcpy(&src_qword, src, sizeof(src_qword), I40E_DMA_TO_NONDMA);
-
- /* the data in the memory is stored as little endian so mask it
- * correctly
- */
- src_qword &= ~(CPU_TO_LE64(mask));
-
- /* get the data back into host order before shifting */
- dest_qword = LE64_TO_CPU(src_qword);
-
- dest_qword >>= shift_width;
-
- /* get the address from the struct field */
- target = dest + ce_info->offset;
-
- /* put it back in the struct */
- i40e_memcpy(target, &dest_qword, sizeof(dest_qword),
- I40E_NONDMA_TO_DMA);
-}
-
-/**
- * i40e_get_hmc_context - extract HMC context bits
- * @context_bytes: pointer to the context bit array
- * @ce_info: a description of the struct to be filled
- * @dest: the struct to be filled
- **/
-static enum i40e_status_code i40e_get_hmc_context(u8 *context_bytes,
- struct i40e_context_ele *ce_info,
- u8 *dest)
-{
- int f;
-
- for (f = 0; ce_info[f].width != 0; f++) {
- switch (ce_info[f].size_of) {
- case 1:
- i40e_read_byte(context_bytes, &ce_info[f], dest);
- break;
- case 2:
- i40e_read_word(context_bytes, &ce_info[f], dest);
- break;
- case 4:
- i40e_read_dword(context_bytes, &ce_info[f], dest);
- break;
- case 8:
- i40e_read_qword(context_bytes, &ce_info[f], dest);
- break;
- default:
- /* nothing to do, just keep going */
- break;
- }
- }
-
- return I40E_SUCCESS;
-}
-
/**
* i40e_clear_hmc_context - zero out the HMC context bits
* @hw: the hardware struct
@@ -1261,27 +1039,6 @@ enum i40e_status_code i40e_hmc_get_object_va(struct i40e_hw *hw,
return ret_code;
}
-/**
- * i40e_get_lan_tx_queue_context - return the HMC context for the queue
- * @hw: the hardware struct
- * @queue: the queue we care about
- * @s: the struct to be filled
- **/
-enum i40e_status_code i40e_get_lan_tx_queue_context(struct i40e_hw *hw,
- u16 queue,
- struct i40e_hmc_obj_txq *s)
-{
- enum i40e_status_code err;
- u8 *context_bytes;
-
- err = i40e_hmc_get_object_va(hw, &context_bytes, I40E_HMC_LAN_TX, queue);
- if (err < 0)
- return err;
-
- return i40e_get_hmc_context(context_bytes,
- i40e_hmc_txq_ce_info, (u8 *)s);
-}
-
/**
* i40e_clear_lan_tx_queue_context - clear the HMC context for the queue
* @hw: the hardware struct
@@ -1321,27 +1078,6 @@ enum i40e_status_code i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
i40e_hmc_txq_ce_info, (u8 *)s);
}
-/**
- * i40e_get_lan_rx_queue_context - return the HMC context for the queue
- * @hw: the hardware struct
- * @queue: the queue we care about
- * @s: the struct to be filled
- **/
-enum i40e_status_code i40e_get_lan_rx_queue_context(struct i40e_hw *hw,
- u16 queue,
- struct i40e_hmc_obj_rxq *s)
-{
- enum i40e_status_code err;
- u8 *context_bytes;
-
- err = i40e_hmc_get_object_va(hw, &context_bytes, I40E_HMC_LAN_RX, queue);
- if (err < 0)
- return err;
-
- return i40e_get_hmc_context(context_bytes,
- i40e_hmc_rxq_ce_info, (u8 *)s);
-}
-
/**
* i40e_clear_lan_rx_queue_context - clear the HMC context for the queue
* @hw: the hardware struct
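
For readers following the removal: the four i40e_read_*() helpers deleted
above shared one little-endian bit-field extraction recipe. Locate the byte
holding the field's least significant bit, mask the field at its shifted
position, convert to host order, then shift down to bit 0 (the retained
i40e_write_*() helpers implement the inverse). A minimal standalone sketch of
that recipe, not the driver code itself; it assumes a little-endian host, and
like the removed dword helper it reads only four bytes, so a shifted
full-width field would need the qword variant:

#include <stdint.h>
#include <string.h>

/* Extract a field of up to 32 bits that starts at absolute bit 'lsb'
 * of a little-endian HMC context image.
 */
static uint32_t hmc_extract_field(const uint8_t *ctx, uint16_t lsb,
				  uint16_t width)
{
	uint16_t shift = lsb % 8;	/* bit offset inside the byte */
	uint32_t mask = width < 32 ?
			(UINT32_C(1) << width) - 1 : UINT32_MAX;
	uint32_t raw;

	/* copy from the byte holding the field's least significant bit */
	memcpy(&raw, ctx + lsb / 8, sizeof(raw));

	return (raw >> shift) & mask;	/* align to bit 0 and trim */
}
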
diff --git a/drivers/net/i40e/base/i40e_lan_hmc.h b/drivers/net/i40e/base/i40e_lan_hmc.h
index aa5dceb792..1d2707e5ad 100644
--- a/drivers/net/i40e/base/i40e_lan_hmc.h
+++ b/drivers/net/i40e/base/i40e_lan_hmc.h
@@ -147,17 +147,11 @@ enum i40e_status_code i40e_shutdown_lan_hmc(struct i40e_hw *hw);
u64 i40e_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
u32 fcoe_cntx_num, u32 fcoe_filt_num);
-enum i40e_status_code i40e_get_lan_tx_queue_context(struct i40e_hw *hw,
- u16 queue,
- struct i40e_hmc_obj_txq *s);
enum i40e_status_code i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
u16 queue);
enum i40e_status_code i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
u16 queue,
struct i40e_hmc_obj_txq *s);
-enum i40e_status_code i40e_get_lan_rx_queue_context(struct i40e_hw *hw,
- u16 queue,
- struct i40e_hmc_obj_rxq *s);
enum i40e_status_code i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
u16 queue);
enum i40e_status_code i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
diff --git a/drivers/net/i40e/base/i40e_nvm.c b/drivers/net/i40e/base/i40e_nvm.c
index 561ed21136..f1d1ff3685 100644
--- a/drivers/net/i40e/base/i40e_nvm.c
+++ b/drivers/net/i40e/base/i40e_nvm.c
@@ -599,61 +599,6 @@ enum i40e_status_code i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer,
return ret_code;
}
-/**
- * __i40e_write_nvm_word - Writes Shadow RAM word
- * @hw: pointer to the HW structure
- * @offset: offset of the Shadow RAM word to write
- * @data: word to write to the Shadow RAM
- *
- * Writes a 16 bit word to the SR using the i40e_write_nvm_aq() method.
- * NVM ownership have to be acquired and released (on ARQ completion event
- * reception) by caller. To commit SR to NVM update checksum function
- * should be called.
- **/
-enum i40e_status_code __i40e_write_nvm_word(struct i40e_hw *hw, u32 offset,
- void *data)
-{
- DEBUGFUNC("i40e_write_nvm_word");
-
- *((__le16 *)data) = CPU_TO_LE16(*((u16 *)data));
-
- /* Value 0x00 below means that we treat SR as a flat mem */
- return i40e_write_nvm_aq(hw, 0x00, offset, 1, data, false);
-}
-
-/**
- * __i40e_write_nvm_buffer - Writes Shadow RAM buffer
- * @hw: pointer to the HW structure
- * @module_pointer: module pointer location in words from the NVM beginning
- * @offset: offset of the Shadow RAM buffer to write
- * @words: number of words to write
- * @data: words to write to the Shadow RAM
- *
- * Writes a 16 bit words buffer to the Shadow RAM using the admin command.
- * NVM ownership must be acquired before calling this function and released
- * on ARQ completion event reception by caller. To commit SR to NVM update
- * checksum function should be called.
- **/
-enum i40e_status_code __i40e_write_nvm_buffer(struct i40e_hw *hw,
- u8 module_pointer, u32 offset,
- u16 words, void *data)
-{
- __le16 *le_word_ptr = (__le16 *)data;
- u16 *word_ptr = (u16 *)data;
- u32 i = 0;
-
- DEBUGFUNC("i40e_write_nvm_buffer");
-
- for (i = 0; i < words; i++)
- le_word_ptr[i] = CPU_TO_LE16(word_ptr[i]);
-
- /* Here we will only write one buffer as the size of the modules
- * mirrored in the Shadow RAM is always less than 4K.
- */
- return i40e_write_nvm_aq(hw, module_pointer, offset, words,
- data, false);
-}
-
/**
* i40e_calc_nvm_checksum - Calculates and returns the checksum
* @hw: pointer to hardware structure
@@ -807,521 +752,6 @@ enum i40e_status_code i40e_validate_nvm_checksum(struct i40e_hw *hw,
return ret_code;
}
-STATIC enum i40e_status_code i40e_nvmupd_state_init(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_state_reading(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_state_writing(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-STATIC enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_nvm_erase(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_nvm_write(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_nvm_read(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_exec_aq(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_get_aq_result(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-STATIC enum i40e_status_code i40e_nvmupd_get_aq_event(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno);
-STATIC INLINE u8 i40e_nvmupd_get_module(u32 val)
-{
- return (u8)(val & I40E_NVM_MOD_PNT_MASK);
-}
-STATIC INLINE u8 i40e_nvmupd_get_transaction(u32 val)
-{
- return (u8)((val & I40E_NVM_TRANS_MASK) >> I40E_NVM_TRANS_SHIFT);
-}
-
-STATIC INLINE u8 i40e_nvmupd_get_preservation_flags(u32 val)
-{
- return (u8)((val & I40E_NVM_PRESERVATION_FLAGS_MASK) >>
- I40E_NVM_PRESERVATION_FLAGS_SHIFT);
-}
-
-STATIC const char *i40e_nvm_update_state_str[] = {
- "I40E_NVMUPD_INVALID",
- "I40E_NVMUPD_READ_CON",
- "I40E_NVMUPD_READ_SNT",
- "I40E_NVMUPD_READ_LCB",
- "I40E_NVMUPD_READ_SA",
- "I40E_NVMUPD_WRITE_ERA",
- "I40E_NVMUPD_WRITE_CON",
- "I40E_NVMUPD_WRITE_SNT",
- "I40E_NVMUPD_WRITE_LCB",
- "I40E_NVMUPD_WRITE_SA",
- "I40E_NVMUPD_CSUM_CON",
- "I40E_NVMUPD_CSUM_SA",
- "I40E_NVMUPD_CSUM_LCB",
- "I40E_NVMUPD_STATUS",
- "I40E_NVMUPD_EXEC_AQ",
- "I40E_NVMUPD_GET_AQ_RESULT",
- "I40E_NVMUPD_GET_AQ_EVENT",
- "I40E_NVMUPD_GET_FEATURES",
-};
-
-/**
- * i40e_nvmupd_command - Process an NVM update command
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * Dispatches command depending on what update state is current
- **/
-enum i40e_status_code i40e_nvmupd_command(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
-{
- enum i40e_status_code status;
- enum i40e_nvmupd_cmd upd_cmd;
-
- DEBUGFUNC("i40e_nvmupd_command");
-
- /* assume success */
- *perrno = 0;
-
- /* early check for status command and debug msgs */
- upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
-
- i40e_debug(hw, I40E_DEBUG_NVM, "%s state %d nvm_release_on_hold %d opc 0x%04x cmd 0x%08x config 0x%08x offset 0x%08x data_size 0x%08x\n",
- i40e_nvm_update_state_str[upd_cmd],
- hw->nvmupd_state,
- hw->nvm_release_on_done, hw->nvm_wait_opcode,
- cmd->command, cmd->config, cmd->offset, cmd->data_size);
-
- if (upd_cmd == I40E_NVMUPD_INVALID) {
- *perrno = -EFAULT;
- i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_validate_command returns %d errno %d\n",
- upd_cmd, *perrno);
- }
-
- /* a status request returns immediately rather than
- * going into the state machine
- */
- if (upd_cmd == I40E_NVMUPD_STATUS) {
- if (!cmd->data_size) {
- *perrno = -EFAULT;
- return I40E_ERR_BUF_TOO_SHORT;
- }
-
- bytes[0] = hw->nvmupd_state;
-
- if (cmd->data_size >= 4) {
- bytes[1] = 0;
- *((u16 *)&bytes[2]) = hw->nvm_wait_opcode;
- }
-
- /* Clear error status on read */
- if (hw->nvmupd_state == I40E_NVMUPD_STATE_ERROR)
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
-
- return I40E_SUCCESS;
- }
-
- /*
- * A supported features request returns immediately
- * rather than going into state machine
- */
- if (upd_cmd == I40E_NVMUPD_FEATURES) {
- if (cmd->data_size < hw->nvmupd_features.size) {
- *perrno = -EFAULT;
- return I40E_ERR_BUF_TOO_SHORT;
- }
-
- /*
- * If buffer is bigger than i40e_nvmupd_features structure,
- * make sure the trailing bytes are set to 0x0.
- */
- if (cmd->data_size > hw->nvmupd_features.size)
- i40e_memset(bytes + hw->nvmupd_features.size, 0x0,
- cmd->data_size - hw->nvmupd_features.size,
- I40E_NONDMA_MEM);
-
- i40e_memcpy(bytes, &hw->nvmupd_features,
- hw->nvmupd_features.size, I40E_NONDMA_MEM);
-
- return I40E_SUCCESS;
- }
-
- /* Clear status even it is not read and log */
- if (hw->nvmupd_state == I40E_NVMUPD_STATE_ERROR) {
- i40e_debug(hw, I40E_DEBUG_NVM,
- "Clearing I40E_NVMUPD_STATE_ERROR state without reading\n");
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
- }
-
- /* Acquire lock to prevent race condition where adminq_task
- * can execute after i40e_nvmupd_nvm_read/write but before state
- * variables (nvm_wait_opcode, nvm_release_on_done) are updated.
- *
- * During NVMUpdate, it is observed that lock could be held for
- * ~5ms for most commands. However lock is held for ~60ms for
- * NVMUPD_CSUM_LCB command.
- */
- i40e_acquire_spinlock(&hw->aq.arq_spinlock);
- switch (hw->nvmupd_state) {
- case I40E_NVMUPD_STATE_INIT:
- status = i40e_nvmupd_state_init(hw, cmd, bytes, perrno);
- break;
-
- case I40E_NVMUPD_STATE_READING:
- status = i40e_nvmupd_state_reading(hw, cmd, bytes, perrno);
- break;
-
- case I40E_NVMUPD_STATE_WRITING:
- status = i40e_nvmupd_state_writing(hw, cmd, bytes, perrno);
- break;
-
- case I40E_NVMUPD_STATE_INIT_WAIT:
- case I40E_NVMUPD_STATE_WRITE_WAIT:
- /* if we need to stop waiting for an event, clear
- * the wait info and return before doing anything else
- */
- if (cmd->offset == 0xffff) {
- i40e_nvmupd_clear_wait_state(hw);
- status = I40E_SUCCESS;
- break;
- }
-
- status = I40E_ERR_NOT_READY;
- *perrno = -EBUSY;
- break;
-
- default:
- /* invalid state, should never happen */
- i40e_debug(hw, I40E_DEBUG_NVM,
- "NVMUPD: no such state %d\n", hw->nvmupd_state);
- status = I40E_NOT_SUPPORTED;
- *perrno = -ESRCH;
- break;
- }
-
- i40e_release_spinlock(&hw->aq.arq_spinlock);
- return status;
-}
-
-/**
- * i40e_nvmupd_state_init - Handle NVM update state Init
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * Process legitimate commands of the Init state and conditionally set next
- * state. Reject all other commands.
- **/
-STATIC enum i40e_status_code i40e_nvmupd_state_init(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- enum i40e_nvmupd_cmd upd_cmd;
-
- DEBUGFUNC("i40e_nvmupd_state_init");
-
- upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
-
- switch (upd_cmd) {
- case I40E_NVMUPD_READ_SA:
- status = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
- if (status) {
- *perrno = i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status);
- } else {
- status = i40e_nvmupd_nvm_read(hw, cmd, bytes, perrno);
- i40e_release_nvm(hw);
- }
- break;
-
- case I40E_NVMUPD_READ_SNT:
- status = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
- if (status) {
- *perrno = i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status);
- } else {
- status = i40e_nvmupd_nvm_read(hw, cmd, bytes, perrno);
- if (status)
- i40e_release_nvm(hw);
- else
- hw->nvmupd_state = I40E_NVMUPD_STATE_READING;
- }
- break;
-
- case I40E_NVMUPD_WRITE_ERA:
- status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
- if (status) {
- *perrno = i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status);
- } else {
- status = i40e_nvmupd_nvm_erase(hw, cmd, perrno);
- if (status) {
- i40e_release_nvm(hw);
- } else {
- hw->nvm_release_on_done = true;
- hw->nvm_wait_opcode = i40e_aqc_opc_nvm_erase;
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
- }
- }
- break;
-
- case I40E_NVMUPD_WRITE_SA:
- status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
- if (status) {
- *perrno = i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status);
- } else {
- status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
- if (status) {
- i40e_release_nvm(hw);
- } else {
- hw->nvm_release_on_done = true;
- hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
- }
- }
- break;
-
- case I40E_NVMUPD_WRITE_SNT:
- status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
- if (status) {
- *perrno = i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status);
- } else {
- status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
- if (status) {
- i40e_release_nvm(hw);
- } else {
- hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
- hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
- }
- }
- break;
-
- case I40E_NVMUPD_CSUM_SA:
- status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
- if (status) {
- *perrno = i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status);
- } else {
- status = i40e_update_nvm_checksum(hw);
- if (status) {
- *perrno = hw->aq.asq_last_status ?
- i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status) :
- -EIO;
- i40e_release_nvm(hw);
- } else {
- hw->nvm_release_on_done = true;
- hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
- }
- }
- break;
-
- case I40E_NVMUPD_EXEC_AQ:
- status = i40e_nvmupd_exec_aq(hw, cmd, bytes, perrno);
- break;
-
- case I40E_NVMUPD_GET_AQ_RESULT:
- status = i40e_nvmupd_get_aq_result(hw, cmd, bytes, perrno);
- break;
-
- case I40E_NVMUPD_GET_AQ_EVENT:
- status = i40e_nvmupd_get_aq_event(hw, cmd, bytes, perrno);
- break;
-
- default:
- i40e_debug(hw, I40E_DEBUG_NVM,
- "NVMUPD: bad cmd %s in init state\n",
- i40e_nvm_update_state_str[upd_cmd]);
- status = I40E_ERR_NVM;
- *perrno = -ESRCH;
- break;
- }
- return status;
-}
-
-/**
- * i40e_nvmupd_state_reading - Handle NVM update state Reading
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * NVM ownership is already held. Process legitimate commands and set any
- * change in state; reject all other commands.
- **/
-STATIC enum i40e_status_code i40e_nvmupd_state_reading(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- enum i40e_nvmupd_cmd upd_cmd;
-
- DEBUGFUNC("i40e_nvmupd_state_reading");
-
- upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
-
- switch (upd_cmd) {
- case I40E_NVMUPD_READ_SA:
- case I40E_NVMUPD_READ_CON:
- status = i40e_nvmupd_nvm_read(hw, cmd, bytes, perrno);
- break;
-
- case I40E_NVMUPD_READ_LCB:
- status = i40e_nvmupd_nvm_read(hw, cmd, bytes, perrno);
- i40e_release_nvm(hw);
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
- break;
-
- default:
- i40e_debug(hw, I40E_DEBUG_NVM,
- "NVMUPD: bad cmd %s in reading state.\n",
- i40e_nvm_update_state_str[upd_cmd]);
- status = I40E_NOT_SUPPORTED;
- *perrno = -ESRCH;
- break;
- }
- return status;
-}
-
-/**
- * i40e_nvmupd_state_writing - Handle NVM update state Writing
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * NVM ownership is already held. Process legitimate commands and set any
- * change in state; reject all other commands
- **/
-STATIC enum i40e_status_code i40e_nvmupd_state_writing(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- enum i40e_nvmupd_cmd upd_cmd;
- bool retry_attempt = false;
-
- DEBUGFUNC("i40e_nvmupd_state_writing");
-
- upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
-
-retry:
- switch (upd_cmd) {
- case I40E_NVMUPD_WRITE_CON:
- status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
- if (!status) {
- hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
- hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
- }
- break;
-
- case I40E_NVMUPD_WRITE_LCB:
- status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
- if (status) {
- *perrno = hw->aq.asq_last_status ?
- i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status) :
- -EIO;
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
- } else {
- hw->nvm_release_on_done = true;
- hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
- }
- break;
-
- case I40E_NVMUPD_CSUM_CON:
- /* Assumes the caller has acquired the nvm */
- status = i40e_update_nvm_checksum(hw);
- if (status) {
- *perrno = hw->aq.asq_last_status ?
- i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status) :
- -EIO;
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
- } else {
- hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
- hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
- }
- break;
-
- case I40E_NVMUPD_CSUM_LCB:
- /* Assumes the caller has acquired the nvm */
- status = i40e_update_nvm_checksum(hw);
- if (status) {
- *perrno = hw->aq.asq_last_status ?
- i40e_aq_rc_to_posix(status,
- hw->aq.asq_last_status) :
- -EIO;
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
- } else {
- hw->nvm_release_on_done = true;
- hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
- }
- break;
-
- default:
- i40e_debug(hw, I40E_DEBUG_NVM,
- "NVMUPD: bad cmd %s in writing state.\n",
- i40e_nvm_update_state_str[upd_cmd]);
- status = I40E_NOT_SUPPORTED;
- *perrno = -ESRCH;
- break;
- }
-
- /* In some circumstances, a multi-write transaction takes longer
- * than the default 3 minute timeout on the write semaphore. If
- * the write failed with an EBUSY status, this is likely the problem,
- * so here we try to reacquire the semaphore then retry the write.
- * We only do one retry, then give up.
- */
- if (status && (hw->aq.asq_last_status == I40E_AQ_RC_EBUSY) &&
- !retry_attempt) {
- enum i40e_status_code old_status = status;
- u32 old_asq_status = hw->aq.asq_last_status;
- u32 gtime;
-
- gtime = rd32(hw, I40E_GLVFGEN_TIMER);
- if (gtime >= hw->nvm.hw_semaphore_timeout) {
- i40e_debug(hw, I40E_DEBUG_ALL,
- "NVMUPD: write semaphore expired (%d >= %" PRIu64 "), retrying\n",
- gtime, hw->nvm.hw_semaphore_timeout);
- i40e_release_nvm(hw);
- status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
- if (status) {
- i40e_debug(hw, I40E_DEBUG_ALL,
- "NVMUPD: write semaphore reacquire failed aq_err = %d\n",
- hw->aq.asq_last_status);
- status = old_status;
- hw->aq.asq_last_status = old_asq_status;
- } else {
- retry_attempt = true;
- goto retry;
- }
- }
- }
-
- return status;
-}
-
/**
* i40e_nvmupd_clear_wait_state - clear wait state on hw
* @hw: pointer to the hardware structure
@@ -1374,421 +804,3 @@ void i40e_nvmupd_check_wait_event(struct i40e_hw *hw, u16 opcode,
i40e_nvmupd_clear_wait_state(hw);
}
}
-
-/**
- * i40e_nvmupd_validate_command - Validate given command
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @perrno: pointer to return error code
- *
- * Return one of the valid command types or I40E_NVMUPD_INVALID
- **/
-STATIC enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- int *perrno)
-{
- enum i40e_nvmupd_cmd upd_cmd;
- u8 module, transaction;
-
- DEBUGFUNC("i40e_nvmupd_validate_command\n");
-
- /* anything that doesn't match a recognized case is an error */
- upd_cmd = I40E_NVMUPD_INVALID;
-
- transaction = i40e_nvmupd_get_transaction(cmd->config);
- module = i40e_nvmupd_get_module(cmd->config);
-
- /* limits on data size */
- if ((cmd->data_size < 1) ||
- (cmd->data_size > I40E_NVMUPD_MAX_DATA)) {
- i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_validate_command data_size %d\n",
- cmd->data_size);
- *perrno = -EFAULT;
- return I40E_NVMUPD_INVALID;
- }
-
- switch (cmd->command) {
- case I40E_NVM_READ:
- switch (transaction) {
- case I40E_NVM_CON:
- upd_cmd = I40E_NVMUPD_READ_CON;
- break;
- case I40E_NVM_SNT:
- upd_cmd = I40E_NVMUPD_READ_SNT;
- break;
- case I40E_NVM_LCB:
- upd_cmd = I40E_NVMUPD_READ_LCB;
- break;
- case I40E_NVM_SA:
- upd_cmd = I40E_NVMUPD_READ_SA;
- break;
- case I40E_NVM_EXEC:
- switch (module) {
- case I40E_NVM_EXEC_GET_AQ_RESULT:
- upd_cmd = I40E_NVMUPD_GET_AQ_RESULT;
- break;
- case I40E_NVM_EXEC_FEATURES:
- upd_cmd = I40E_NVMUPD_FEATURES;
- break;
- case I40E_NVM_EXEC_STATUS:
- upd_cmd = I40E_NVMUPD_STATUS;
- break;
- default:
- *perrno = -EFAULT;
- return I40E_NVMUPD_INVALID;
- }
- break;
- case I40E_NVM_AQE:
- upd_cmd = I40E_NVMUPD_GET_AQ_EVENT;
- break;
- }
- break;
-
- case I40E_NVM_WRITE:
- switch (transaction) {
- case I40E_NVM_CON:
- upd_cmd = I40E_NVMUPD_WRITE_CON;
- break;
- case I40E_NVM_SNT:
- upd_cmd = I40E_NVMUPD_WRITE_SNT;
- break;
- case I40E_NVM_LCB:
- upd_cmd = I40E_NVMUPD_WRITE_LCB;
- break;
- case I40E_NVM_SA:
- upd_cmd = I40E_NVMUPD_WRITE_SA;
- break;
- case I40E_NVM_ERA:
- upd_cmd = I40E_NVMUPD_WRITE_ERA;
- break;
- case I40E_NVM_CSUM:
- upd_cmd = I40E_NVMUPD_CSUM_CON;
- break;
- case (I40E_NVM_CSUM|I40E_NVM_SA):
- upd_cmd = I40E_NVMUPD_CSUM_SA;
- break;
- case (I40E_NVM_CSUM|I40E_NVM_LCB):
- upd_cmd = I40E_NVMUPD_CSUM_LCB;
- break;
- case I40E_NVM_EXEC:
- if (module == 0)
- upd_cmd = I40E_NVMUPD_EXEC_AQ;
- break;
- }
- break;
- }
-
- return upd_cmd;
-}
-
-/**
- * i40e_nvmupd_exec_aq - Run an AQ command
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * cmd structure contains identifiers and data buffer
- **/
-STATIC enum i40e_status_code i40e_nvmupd_exec_aq(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
-{
- struct i40e_asq_cmd_details cmd_details;
- enum i40e_status_code status;
- struct i40e_aq_desc *aq_desc;
- u32 buff_size = 0;
- u8 *buff = NULL;
- u32 aq_desc_len;
- u32 aq_data_len;
-
- i40e_debug(hw, I40E_DEBUG_NVM, "NVMUPD: %s\n", __func__);
- if (cmd->offset == 0xffff)
- return I40E_SUCCESS;
-
- memset(&cmd_details, 0, sizeof(cmd_details));
- cmd_details.wb_desc = &hw->nvm_wb_desc;
-
- aq_desc_len = sizeof(struct i40e_aq_desc);
- memset(&hw->nvm_wb_desc, 0, aq_desc_len);
-
- /* get the aq descriptor */
- if (cmd->data_size < aq_desc_len) {
- i40e_debug(hw, I40E_DEBUG_NVM,
- "NVMUPD: not enough aq desc bytes for exec, size %d < %d\n",
- cmd->data_size, aq_desc_len);
- *perrno = -EINVAL;
- return I40E_ERR_PARAM;
- }
- aq_desc = (struct i40e_aq_desc *)bytes;
-
- /* if data buffer needed, make sure it's ready */
- aq_data_len = cmd->data_size - aq_desc_len;
- buff_size = max(aq_data_len, (u32)LE16_TO_CPU(aq_desc->datalen));
- if (buff_size) {
- if (!hw->nvm_buff.va) {
- status = i40e_allocate_virt_mem(hw, &hw->nvm_buff,
- hw->aq.asq_buf_size);
- if (status)
- i40e_debug(hw, I40E_DEBUG_NVM,
- "NVMUPD: i40e_allocate_virt_mem for exec buff failed, %d\n",
- status);
- }
-
- if (hw->nvm_buff.va) {
- buff = hw->nvm_buff.va;
- i40e_memcpy(buff, &bytes[aq_desc_len], aq_data_len,
- I40E_NONDMA_TO_NONDMA);
- }
- }
-
- if (cmd->offset)
- memset(&hw->nvm_aq_event_desc, 0, aq_desc_len);
-
- /* and away we go! */
- status = i40e_asq_send_command(hw, aq_desc, buff,
- buff_size, &cmd_details);
- if (status) {
- i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_exec_aq err %s aq_err %s\n",
- i40e_stat_str(hw, status),
- i40e_aq_str(hw, hw->aq.asq_last_status));
- *perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
- return status;
- }
-
- /* should we wait for a followup event? */
- if (cmd->offset) {
- hw->nvm_wait_opcode = cmd->offset;
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
- }
-
- return status;
-}
-
-/**
- * i40e_nvmupd_get_aq_result - Get the results from the previous exec_aq
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * cmd structure contains identifiers and data buffer
- **/
-STATIC enum i40e_status_code i40e_nvmupd_get_aq_result(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
-{
- u32 aq_total_len;
- u32 aq_desc_len;
- int remainder;
- u8 *buff;
-
- i40e_debug(hw, I40E_DEBUG_NVM, "NVMUPD: %s\n", __func__);
-
- aq_desc_len = sizeof(struct i40e_aq_desc);
- aq_total_len = aq_desc_len + LE16_TO_CPU(hw->nvm_wb_desc.datalen);
-
- /* check offset range */
- if (cmd->offset > aq_total_len) {
- i40e_debug(hw, I40E_DEBUG_NVM, "%s: offset too big %d > %d\n",
- __func__, cmd->offset, aq_total_len);
- *perrno = -EINVAL;
- return I40E_ERR_PARAM;
- }
-
- /* check copylength range */
- if (cmd->data_size > (aq_total_len - cmd->offset)) {
- int new_len = aq_total_len - cmd->offset;
-
- i40e_debug(hw, I40E_DEBUG_NVM, "%s: copy length %d too big, trimming to %d\n",
- __func__, cmd->data_size, new_len);
- cmd->data_size = new_len;
- }
-
- remainder = cmd->data_size;
- if (cmd->offset < aq_desc_len) {
- u32 len = aq_desc_len - cmd->offset;
-
- len = min(len, cmd->data_size);
- i40e_debug(hw, I40E_DEBUG_NVM, "%s: aq_desc bytes %d to %d\n",
- __func__, cmd->offset, cmd->offset + len);
-
- buff = ((u8 *)&hw->nvm_wb_desc) + cmd->offset;
- i40e_memcpy(bytes, buff, len, I40E_NONDMA_TO_NONDMA);
-
- bytes += len;
- remainder -= len;
- buff = hw->nvm_buff.va;
- } else {
- buff = (u8 *)hw->nvm_buff.va + (cmd->offset - aq_desc_len);
- }
-
- if (remainder > 0) {
- int start_byte = buff - (u8 *)hw->nvm_buff.va;
-
- i40e_debug(hw, I40E_DEBUG_NVM, "%s: databuf bytes %d to %d\n",
- __func__, start_byte, start_byte + remainder);
- i40e_memcpy(bytes, buff, remainder, I40E_NONDMA_TO_NONDMA);
- }
-
- return I40E_SUCCESS;
-}
-
-/**
- * i40e_nvmupd_get_aq_event - Get the Admin Queue event from previous exec_aq
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * cmd structure contains identifiers and data buffer
- **/
-STATIC enum i40e_status_code i40e_nvmupd_get_aq_event(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
-{
- u32 aq_total_len;
- u32 aq_desc_len;
-
- i40e_debug(hw, I40E_DEBUG_NVM, "NVMUPD: %s\n", __func__);
-
- aq_desc_len = sizeof(struct i40e_aq_desc);
- aq_total_len = aq_desc_len + LE16_TO_CPU(hw->nvm_aq_event_desc.datalen);
-
- /* check copylength range */
- if (cmd->data_size > aq_total_len) {
- i40e_debug(hw, I40E_DEBUG_NVM,
- "%s: copy length %d too big, trimming to %d\n",
- __func__, cmd->data_size, aq_total_len);
- cmd->data_size = aq_total_len;
- }
-
- i40e_memcpy(bytes, &hw->nvm_aq_event_desc, cmd->data_size,
- I40E_NONDMA_TO_NONDMA);
-
- return I40E_SUCCESS;
-}
-
-/**
- * i40e_nvmupd_nvm_read - Read NVM
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * cmd structure contains identifiers and data buffer
- **/
-STATIC enum i40e_status_code i40e_nvmupd_nvm_read(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
-{
- struct i40e_asq_cmd_details cmd_details;
- enum i40e_status_code status;
- u8 module, transaction;
- bool last;
-
- transaction = i40e_nvmupd_get_transaction(cmd->config);
- module = i40e_nvmupd_get_module(cmd->config);
- last = (transaction == I40E_NVM_LCB) || (transaction == I40E_NVM_SA);
-
- memset(&cmd_details, 0, sizeof(cmd_details));
- cmd_details.wb_desc = &hw->nvm_wb_desc;
-
- status = i40e_aq_read_nvm(hw, module, cmd->offset, (u16)cmd->data_size,
- bytes, last, &cmd_details);
- if (status) {
- i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_nvm_read mod 0x%x off 0x%x len 0x%x\n",
- module, cmd->offset, cmd->data_size);
- i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_nvm_read status %d aq %d\n",
- status, hw->aq.asq_last_status);
- *perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
- }
-
- return status;
-}
-
-/**
- * i40e_nvmupd_nvm_erase - Erase an NVM module
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @perrno: pointer to return error code
- *
- * module, offset, data_size and data are in cmd structure
- **/
-STATIC enum i40e_status_code i40e_nvmupd_nvm_erase(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- int *perrno)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- struct i40e_asq_cmd_details cmd_details;
- u8 module, transaction;
- bool last;
-
- transaction = i40e_nvmupd_get_transaction(cmd->config);
- module = i40e_nvmupd_get_module(cmd->config);
- last = (transaction & I40E_NVM_LCB);
-
- memset(&cmd_details, 0, sizeof(cmd_details));
- cmd_details.wb_desc = &hw->nvm_wb_desc;
-
- status = i40e_aq_erase_nvm(hw, module, cmd->offset, (u16)cmd->data_size,
- last, &cmd_details);
- if (status) {
- i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_nvm_erase mod 0x%x off 0x%x len 0x%x\n",
- module, cmd->offset, cmd->data_size);
- i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_nvm_erase status %d aq %d\n",
- status, hw->aq.asq_last_status);
- *perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
- }
-
- return status;
-}
-
-/**
- * i40e_nvmupd_nvm_write - Write NVM
- * @hw: pointer to hardware structure
- * @cmd: pointer to nvm update command buffer
- * @bytes: pointer to the data buffer
- * @perrno: pointer to return error code
- *
- * module, offset, data_size and data are in cmd structure
- **/
-STATIC enum i40e_status_code i40e_nvmupd_nvm_write(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *perrno)
-{
- enum i40e_status_code status = I40E_SUCCESS;
- struct i40e_asq_cmd_details cmd_details;
- u8 module, transaction;
- u8 preservation_flags;
- bool last;
-
- transaction = i40e_nvmupd_get_transaction(cmd->config);
- module = i40e_nvmupd_get_module(cmd->config);
- last = (transaction & I40E_NVM_LCB);
- preservation_flags = i40e_nvmupd_get_preservation_flags(cmd->config);
-
- memset(&cmd_details, 0, sizeof(cmd_details));
- cmd_details.wb_desc = &hw->nvm_wb_desc;
-
- status = i40e_aq_update_nvm(hw, module, cmd->offset,
- (u16)cmd->data_size, bytes, last,
- preservation_flags, &cmd_details);
- if (status) {
- i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_nvm_write mod 0x%x off 0x%x len 0x%x\n",
- module, cmd->offset, cmd->data_size);
- i40e_debug(hw, I40E_DEBUG_NVM,
- "i40e_nvmupd_nvm_write status %d aq %d\n",
- status, hw->aq.asq_last_status);
- *perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
- }
-
- return status;
-}
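
Taken together, the hunks above retire the whole NVM-update state machine:
i40e_nvmupd_validate_command() mapped the config word to an upd_cmd, and the
state handlers moved hw->nvmupd_state between init, reading, writing and wait
as SNT/CON/LCB transactions started, continued and closed a batch. A toy model
of just the read-side transitions, simplified from the removed handlers (the
one-shot READ_SA path, the NVM ownership calls and the wait states are
omitted):

#include <errno.h>

enum upd_state { UPD_INIT, UPD_READING };
enum upd_cmd { CMD_READ_SNT, CMD_READ_CON, CMD_READ_LCB };

/* SNT (start of transaction) enters READING, CON (continue) stays
 * there, LCB (last command in batch) drops back to INIT; anything
 * else is rejected, as the removed handlers did with -ESRCH.
 */
static int upd_read_step(enum upd_state *st, enum upd_cmd cmd)
{
	switch (*st) {
	case UPD_INIT:
		if (cmd == CMD_READ_SNT) {
			*st = UPD_READING;	/* ownership held until LCB */
			return 0;
		}
		return -ESRCH;
	case UPD_READING:
		if (cmd == CMD_READ_CON)
			return 0;		/* keep reading */
		if (cmd == CMD_READ_LCB) {
			*st = UPD_INIT;		/* release ownership */
			return 0;
		}
		return -ESRCH;
	}
	return -ESRCH;
}
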
diff --git a/drivers/net/i40e/base/i40e_prototype.h b/drivers/net/i40e/base/i40e_prototype.h
index 124222e476..73ec0e340a 100644
--- a/drivers/net/i40e/base/i40e_prototype.h
+++ b/drivers/net/i40e/base/i40e_prototype.h
@@ -67,27 +67,12 @@ const char *i40e_stat_str(struct i40e_hw *hw, enum i40e_status_code stat_err);
u32 i40e_led_get(struct i40e_hw *hw);
void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink);
-enum i40e_status_code i40e_led_set_phy(struct i40e_hw *hw, bool on,
- u16 led_addr, u32 mode);
-enum i40e_status_code i40e_led_get_phy(struct i40e_hw *hw, u16 *led_addr,
- u16 *val);
-enum i40e_status_code i40e_blink_phy_link_led(struct i40e_hw *hw,
- u32 time, u32 interval);
enum i40e_status_code i40e_led_get_reg(struct i40e_hw *hw, u16 led_addr,
u32 *reg_val);
enum i40e_status_code i40e_led_set_reg(struct i40e_hw *hw, u16 led_addr,
u32 reg_val);
-enum i40e_status_code i40e_get_phy_lpi_status(struct i40e_hw *hw,
- struct i40e_hw_port_stats *stats);
enum i40e_status_code i40e_get_lpi_counters(struct i40e_hw *hw, u32 *tx_counter,
u32 *rx_counter, bool *is_clear);
-enum i40e_status_code i40e_lpi_stat_update(struct i40e_hw *hw,
- bool offset_loaded, u64 *tx_offset,
- u64 *tx_stat, u64 *rx_offset,
- u64 *rx_stat);
-enum i40e_status_code i40e_get_lpi_duration(struct i40e_hw *hw,
- struct i40e_hw_port_stats *stat,
- u64 *tx_duration, u64 *rx_duration);
/* admin send queue commands */
enum i40e_status_code i40e_aq_get_firmware_version(struct i40e_hw *hw,
@@ -101,12 +86,6 @@ enum i40e_status_code i40e_aq_debug_write_register(struct i40e_hw *hw,
enum i40e_status_code i40e_aq_debug_read_register(struct i40e_hw *hw,
u32 reg_addr, u64 *reg_val,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_clear_default_vsi(struct i40e_hw *hw, u16 vsi_id,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_get_phy_capabilities(struct i40e_hw *hw,
bool qualified_modules, bool report_init,
struct i40e_aq_get_phy_abilities_resp *abilities,
@@ -122,27 +101,13 @@ enum i40e_status_code i40e_aq_set_mac_config(struct i40e_hw *hw,
u16 max_frame_size, bool crc_en, u16 pacing,
bool auto_drop_blocking_packets,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_local_advt_reg(struct i40e_hw *hw,
- u64 *advt_reg,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_partner_advt(struct i40e_hw *hw,
- u64 *advt_reg,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_set_lb_modes(struct i40e_hw *hw, u16 lb_modes,
struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_link_restart_an(struct i40e_hw *hw,
- bool enable_link, struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_get_link_info(struct i40e_hw *hw,
bool enable_lse, struct i40e_link_status *link,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_local_advt_reg(struct i40e_hw *hw,
- u64 advt_reg,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_send_driver_version(struct i40e_hw *hw,
- struct i40e_driver_version *dv,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_add_vsi(struct i40e_hw *hw,
struct i40e_vsi_context *vsi_ctx,
struct i40e_asq_cmd_details *cmd_details);
@@ -154,18 +119,6 @@ enum i40e_status_code i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
bool rx_only_promisc);
enum i40e_status_code i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_full_promiscuous(struct i40e_hw *hw,
- u16 seid, bool set,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable, u16 vid,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable, u16 vid,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_vsi_bc_promisc_on_vlan(struct i40e_hw *hw,
- u16 seid, bool enable, u16 vid,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
u16 seid, bool enable,
struct i40e_asq_cmd_details *cmd_details);
@@ -191,15 +144,6 @@ enum i40e_status_code i40e_aq_add_macvlan(struct i40e_hw *hw, u16 vsi_id,
enum i40e_status_code i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 vsi_id,
struct i40e_aqc_remove_macvlan_element_data *mv_list,
u16 count, struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
- u16 rule_type, u16 dest_vsi, u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rule_id, u16 *rules_used, u16 *rules_free);
-enum i40e_status_code i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
- u16 rule_type, u16 rule_id, u16 count, __le16 *mr_list,
- struct i40e_asq_cmd_details *cmd_details,
- u16 *rules_used, u16 *rules_free);
-
enum i40e_status_code i40e_aq_add_vlan(struct i40e_hw *hw, u16 vsi_id,
struct i40e_aqc_add_remove_vlan_element_data *v_list,
u8 count, struct i40e_asq_cmd_details *cmd_details);
@@ -232,21 +176,6 @@ enum i40e_status_code i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer,
enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer,
u32 offset, u16 length, bool last_command,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_read_nvm_config(struct i40e_hw *hw,
- u8 cmd_flags, u32 field_id, void *data,
- u16 buf_size, u16 *element_count,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_write_nvm_config(struct i40e_hw *hw,
- u8 cmd_flags, void *data, u16 buf_size,
- u16 element_count,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
-i40e_aq_min_rollback_rev_update(struct i40e_hw *hw, u8 mode, u8 module,
- u32 min_rrev,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_oem_post_update(struct i40e_hw *hw,
- void *buff, u16 buff_size,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_discover_capabilities(struct i40e_hw *hw,
void *buff, u16 buff_size, u16 *data_size,
enum i40e_admin_queue_opc list_type_opc,
@@ -255,13 +184,6 @@ enum i40e_status_code i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer,
u32 offset, u16 length, void *data,
bool last_command, u8 preservation_flags,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_rearrange_nvm(struct i40e_hw *hw,
- u8 rearrange_nvm,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
-i40e_aq_nvm_update_in_process(struct i40e_hw *hw,
- bool update_flow_state,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type,
u8 mib_type, void *buff, u16 buff_size,
u16 *local_len, u16 *remote_len,
@@ -272,63 +194,25 @@ enum i40e_status_code i40e_aq_set_lldp_mib(struct i40e_hw *hw,
enum i40e_status_code i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
bool enable_update,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code
-i40e_aq_restore_lldp(struct i40e_hw *hw, u8 *setting, bool restore,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent,
bool persist,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_dcb_parameters(struct i40e_hw *hw,
- bool dcb_enable,
- struct i40e_asq_cmd_details
- *cmd_details);
enum i40e_status_code i40e_aq_start_lldp(struct i40e_hw *hw,
bool persist,
struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_get_cee_dcb_config(struct i40e_hw *hw,
void *buff, u16 buff_size,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_start_stop_dcbx(struct i40e_hw *hw,
- bool start_agent,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_add_udp_tunnel(struct i40e_hw *hw,
u16 udp_port, u8 protocol_index,
u8 *filter_index,
struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_switch_resource_alloc(struct i40e_hw *hw,
- u8 *num_entries,
- struct i40e_aqc_switch_resource_alloc_element_resp *buf,
- u16 count,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_pvirt(struct i40e_hw *hw, u16 flags,
- u16 mac_seid, u16 vsi_seid,
- u16 *ret_seid);
-enum i40e_status_code i40e_aq_add_tag(struct i40e_hw *hw, bool direct_to_queue,
- u16 vsi_seid, u16 tag, u16 queue_num,
- u16 *tags_used, u16 *tags_free,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_remove_tag(struct i40e_hw *hw, u16 vsi_seid,
- u16 tag, u16 *tags_used, u16 *tags_free,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pe_seid,
u16 etag, u8 num_tags_in_buf, void *buf,
u16 *tags_used, u16 *tags_free,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_remove_mcast_etag(struct i40e_hw *hw, u16 pe_seid,
- u16 etag, u16 *tags_used, u16 *tags_free,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_update_tag(struct i40e_hw *hw, u16 vsi_seid,
- u16 old_tag, u16 new_tag, u16 *tags_used,
- u16 *tags_free,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_statistics(struct i40e_hw *hw, u16 seid,
- u16 vlan_id, u16 *stat_index,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_remove_statistics(struct i40e_hw *hw, u16 seid,
- u16 vlan_id, u16 stat_index,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_set_port_parameters(struct i40e_hw *hw,
u16 bad_frame_vsi, bool save_bad_pac,
bool pad_short_pac, bool double_vlan,
@@ -341,22 +225,10 @@ enum i40e_status_code i40e_aq_mac_address_write(struct i40e_hw *hw,
enum i40e_status_code i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
u16 seid, u16 credit, u8 max_credit,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_dcb_ignore_pfc(struct i40e_hw *hw,
- u8 tcmap, bool request, u8 *tcmap_ret,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_config_switch_comp_ets_bw_limit(
- struct i40e_hw *hw, u16 seid,
- struct i40e_aqc_configure_switching_comp_ets_bw_limit_data *bw_data,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_config_vsi_ets_sla_bw_limit(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_configure_vsi_ets_sla_bw_data *bw_data,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_dcb_updated(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
- u16 seid, u16 credit, u8 max_bw,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_configure_vsi_tc_bw_data *bw_data,
struct i40e_asq_cmd_details *cmd_details);
@@ -381,16 +253,10 @@ enum i40e_status_code i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_query_port_ets_config(struct i40e_hw *hw,
- u16 seid,
- struct i40e_aqc_query_port_ets_config_resp *bw_data,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_resume_port_tx(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code
i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_cloud_filters_element_bb *filters,
@@ -415,38 +281,15 @@ enum i40e_status_code i40e_aq_replace_cloud_filters(struct i40e_hw *hw,
enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw,
u32 reg_addr0, u32 *reg_val0,
u32 reg_addr1, u32 *reg_val1);
-enum i40e_status_code i40e_aq_alternate_read_indirect(struct i40e_hw *hw,
- u32 addr, u32 dw_count, void *buffer);
-enum i40e_status_code i40e_aq_alternate_write(struct i40e_hw *hw,
- u32 reg_addr0, u32 reg_val0,
- u32 reg_addr1, u32 reg_val1);
-enum i40e_status_code i40e_aq_alternate_write_indirect(struct i40e_hw *hw,
- u32 addr, u32 dw_count, void *buffer);
-enum i40e_status_code i40e_aq_alternate_clear(struct i40e_hw *hw);
-enum i40e_status_code i40e_aq_alternate_write_done(struct i40e_hw *hw,
- u8 bios_mode, bool *reset_needed);
-enum i40e_status_code i40e_aq_set_oem_mode(struct i40e_hw *hw,
- u8 oem_mode);
/* i40e_common */
enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw);
enum i40e_status_code i40e_pf_reset(struct i40e_hw *hw);
void i40e_clear_hw(struct i40e_hw *hw);
void i40e_clear_pxe_mode(struct i40e_hw *hw);
-enum i40e_status_code i40e_get_link_status(struct i40e_hw *hw, bool *link_up);
enum i40e_status_code i40e_update_link_info(struct i40e_hw *hw);
enum i40e_status_code i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-enum i40e_status_code i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
- u32 *max_bw, u32 *min_bw, bool *min_valid, bool *max_valid);
-enum i40e_status_code i40e_aq_configure_partition_bw(struct i40e_hw *hw,
- struct i40e_aqc_configure_partition_bw_data *bw_data,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-enum i40e_status_code i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
- u32 pba_num_size);
void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable);
-enum i40e_status_code i40e_get_san_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-enum i40e_aq_link_speed i40e_get_link_speed(struct i40e_hw *hw);
/* prototype for functions used for NVM access */
enum i40e_status_code i40e_init_nvm(struct i40e_hw *hw);
enum i40e_status_code i40e_acquire_nvm(struct i40e_hw *hw,
@@ -466,24 +309,14 @@ enum i40e_status_code __i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
u16 *data);
enum i40e_status_code __i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
u16 *words, u16 *data);
-enum i40e_status_code __i40e_write_nvm_word(struct i40e_hw *hw, u32 offset,
- void *data);
-enum i40e_status_code __i40e_write_nvm_buffer(struct i40e_hw *hw, u8 module,
- u32 offset, u16 words, void *data);
enum i40e_status_code i40e_calc_nvm_checksum(struct i40e_hw *hw, u16 *checksum);
enum i40e_status_code i40e_update_nvm_checksum(struct i40e_hw *hw);
enum i40e_status_code i40e_validate_nvm_checksum(struct i40e_hw *hw,
u16 *checksum);
-enum i40e_status_code i40e_nvmupd_command(struct i40e_hw *hw,
- struct i40e_nvm_access *cmd,
- u8 *bytes, int *);
void i40e_nvmupd_check_wait_event(struct i40e_hw *hw, u16 opcode,
struct i40e_aq_desc *desc);
void i40e_nvmupd_clear_wait_state(struct i40e_hw *hw);
-void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status);
#endif /* PF_DRIVER */
-enum i40e_status_code i40e_enable_eee(struct i40e_hw *hw, bool enable);
-
enum i40e_status_code i40e_set_mac_type(struct i40e_hw *hw);
extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[];
@@ -551,13 +384,6 @@ enum i40e_status_code i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
u16 vsi_seid, u16 queue, bool is_add,
struct i40e_control_filter_stats *stats,
struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_debug_dump(struct i40e_hw *hw, u8 cluster_id,
- u8 table_id, u32 start_index, u16 buff_size,
- void *buff, u16 *ret_buff_size,
- u8 *ret_next_table, u32 *ret_next_index,
- struct i40e_asq_cmd_details *cmd_details);
-void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw,
- u16 vsi_seid);
enum i40e_status_code i40e_aq_rx_ctl_read_register(struct i40e_hw *hw,
u32 reg_addr, u32 *reg_val,
struct i40e_asq_cmd_details *cmd_details);
@@ -589,24 +415,6 @@ enum i40e_status_code
i40e_aq_run_phy_activity(struct i40e_hw *hw, u16 activity_id, u32 opcode,
u32 *cmd_status, u32 *data0, u32 *data1,
struct i40e_asq_cmd_details *cmd_details);
-
-enum i40e_status_code i40e_aq_set_arp_proxy_config(struct i40e_hw *hw,
- struct i40e_aqc_arp_proxy_data *proxy_config,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_ns_proxy_table_entry(struct i40e_hw *hw,
- struct i40e_aqc_ns_proxy_data *ns_proxy_table_entry,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_clear_wol_filter(struct i40e_hw *hw,
- u8 filter_index,
- struct i40e_aqc_set_wol_filter_data *filter,
- bool set_filter, bool no_wol_tco,
- bool filter_valid, bool no_wol_tco_valid,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_wake_event_reason(struct i40e_hw *hw,
- u16 *wake_reason,
- struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_clear_all_wol_filters(struct i40e_hw *hw,
- struct i40e_asq_cmd_details *cmd_details);
enum i40e_status_code i40e_read_phy_register_clause22(struct i40e_hw *hw,
u16 reg, u8 phy_addr, u16 *value);
enum i40e_status_code i40e_write_phy_register_clause22(struct i40e_hw *hw,
@@ -615,13 +423,7 @@ enum i40e_status_code i40e_read_phy_register_clause45(struct i40e_hw *hw,
u8 page, u16 reg, u8 phy_addr, u16 *value);
enum i40e_status_code i40e_write_phy_register_clause45(struct i40e_hw *hw,
u8 page, u16 reg, u8 phy_addr, u16 value);
-enum i40e_status_code i40e_read_phy_register(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 *value);
-enum i40e_status_code i40e_write_phy_register(struct i40e_hw *hw,
- u8 page, u16 reg, u8 phy_addr, u16 value);
u8 i40e_get_phy_address(struct i40e_hw *hw, u8 dev_num);
-enum i40e_status_code i40e_blink_phy_link_led(struct i40e_hw *hw,
- u32 time, u32 interval);
enum i40e_status_code i40e_aq_write_ddp(struct i40e_hw *hw, void *buff,
u16 buff_size, u32 track_id,
u32 *error_offset, u32 *error_info,
@@ -643,8 +445,4 @@ i40e_write_profile(struct i40e_hw *hw, struct i40e_profile_segment *i40e_seg,
enum i40e_status_code
i40e_rollback_profile(struct i40e_hw *hw, struct i40e_profile_segment *i40e_seg,
u32 track_id);
-enum i40e_status_code
-i40e_add_pinfo_to_list(struct i40e_hw *hw,
- struct i40e_profile_segment *profile,
- u8 *profile_info_sec, u32 track_id);
#endif /* _I40E_PROTOTYPE_H_ */
diff --git a/drivers/net/i40e/base/meson.build b/drivers/net/i40e/base/meson.build
index 8bc6a0fa0b..1a07449fa5 100644
--- a/drivers/net/i40e/base/meson.build
+++ b/drivers/net/i40e/base/meson.build
@@ -5,7 +5,6 @@ sources = [
'i40e_adminq.c',
'i40e_common.c',
'i40e_dcb.c',
- 'i40e_diag.c',
'i40e_hmc.c',
'i40e_lan_hmc.c',
'i40e_nvm.c'
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 6d5912d8c1..db3dbbda48 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -293,8 +293,6 @@ int iavf_switch_queue(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on);
int iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on);
-int iavf_enable_queues(struct iavf_adapter *adapter);
-int iavf_enable_queues_lv(struct iavf_adapter *adapter);
int iavf_disable_queues(struct iavf_adapter *adapter);
int iavf_disable_queues_lv(struct iavf_adapter *adapter);
int iavf_configure_rss_lut(struct iavf_adapter *adapter);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 33d03af653..badcd312cc 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -521,34 +521,6 @@ iavf_get_supported_rxdid(struct iavf_adapter *adapter)
return 0;
}
-int
-iavf_enable_queues(struct iavf_adapter *adapter)
-{
- struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_queue_select queue_select;
- struct iavf_cmd_info args;
- int err;
-
- memset(&queue_select, 0, sizeof(queue_select));
- queue_select.vsi_id = vf->vsi_res->vsi_id;
-
- queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
- queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
-
- args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
- args.in_args = (u8 *)&queue_select;
- args.in_args_size = sizeof(queue_select);
- args.out_buffer = vf->aq_resp;
- args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd(adapter, &args);
- if (err) {
- PMD_DRV_LOG(ERR,
- "Failed to execute command of OP_ENABLE_QUEUES");
- return err;
- }
- return 0;
-}
-
int
iavf_disable_queues(struct iavf_adapter *adapter)
{
@@ -608,50 +580,6 @@ iavf_switch_queue(struct iavf_adapter *adapter, uint16_t qid,
return err;
}
-int
-iavf_enable_queues_lv(struct iavf_adapter *adapter)
-{
- struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
- struct iavf_cmd_info args;
- int err, len;
-
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
- queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
- queue_select->vport_id = vf->vsi_res->vsi_id;
-
- queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].type = VIRTCHNL_QUEUE_TYPE_TX;
- queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].start_queue_id = 0;
- queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].num_queues =
- adapter->eth_dev->data->nb_tx_queues;
-
- queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].type = VIRTCHNL_QUEUE_TYPE_RX;
- queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].start_queue_id = 0;
- queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].num_queues =
- adapter->eth_dev->data->nb_rx_queues;
-
- args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
- args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
- args.out_buffer = vf->aq_resp;
- args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd(adapter, &args);
- if (err) {
- PMD_DRV_LOG(ERR,
- "Failed to execute command of OP_ENABLE_QUEUES_V2");
- return err;
- }
- return 0;
-}
-
int
iavf_disable_queues_lv(struct iavf_adapter *adapter)
{
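/* Aside, not part of the patch: the removed iavf_enable_queues() above
 * selected queues with the "BIT(n) - 1" idiom, a contiguous mask covering
 * queues 0..n-1 (the _lv variant expresses the same range as virtchnl
 * chunks instead). A minimal, self-contained sketch of the idiom follows;
 * BIT() and the function name are local stand-ins for the driver macros.
 */
#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1UL << (n))

/* Mask selecting queues 0..nb_queues-1, as written into the
 * virtchnl_queue_select rx_queues/tx_queues fields. Assumes
 * nb_queues < 32 to match the 32-bit virtchnl fields. */
static uint32_t queue_mask(unsigned int nb_queues)
{
	return (uint32_t)(BIT(nb_queues) - 1);
}

int main(void)
{
	printf("4 queues -> 0x%x\n", queue_mask(4)); /* 0xf */
	printf("8 queues -> 0x%x\n", queue_mask(8)); /* 0xff */
	return 0;
}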
diff --git a/drivers/net/ice/base/ice_acl.c b/drivers/net/ice/base/ice_acl.c
index 763cd2af9e..0f73f4a0e7 100644
--- a/drivers/net/ice/base/ice_acl.c
+++ b/drivers/net/ice/base/ice_acl.c
@@ -115,79 +115,6 @@ ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
entry_idx, buf, cd);
}
-/**
- * ice_aq_query_acl_entry - query ACL entry
- * @hw: pointer to the HW struct
- * @tcam_idx: Updated TCAM block index
- * @entry_idx: updated entry index
- * @buf: address of indirect data buffer
- * @cd: pointer to command details structure or NULL
- *
- * Query ACL entry (direct 0x0C24)
- *
- * NOTE: The caller of this API must parse 'buf' appropriately, since it
- * contains the response (key and key invert)
- */
-enum ice_status
-ice_aq_query_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
- struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd)
-{
- return ice_aq_acl_entry(hw, ice_aqc_opc_query_acl_entry, tcam_idx,
- entry_idx, buf, cd);
-}
-
-/* Helper function to alloc/dealloc ACL action pair */
-static enum ice_status
-ice_aq_actpair_a_d(struct ice_hw *hw, u16 opcode, u16 alloc_id,
- struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd)
-{
- struct ice_aqc_acl_tbl_actpair *cmd;
- struct ice_aq_desc desc;
-
- ice_fill_dflt_direct_cmd_desc(&desc, opcode);
- cmd = &desc.params.tbl_actpair;
- cmd->alloc_id = CPU_TO_LE16(alloc_id);
-
- return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
-}
-
-/**
- * ice_aq_alloc_actpair - allocate actionpair for specified ACL table
- * @hw: pointer to the HW struct
- * @alloc_id: allocation ID of the table being associated with the actionpair
- * @buf: address of indirect data buffer
- * @cd: pointer to command details structure or NULL
- *
- * Allocate ACL actionpair (direct 0x0C12)
- *
- * This command doesn't need and doesn't have its own command buffer,
- * but the response format is as specified in 'struct ice_aqc_acl_generic'
- */
-enum ice_status
-ice_aq_alloc_actpair(struct ice_hw *hw, u16 alloc_id,
- struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd)
-{
- return ice_aq_actpair_a_d(hw, ice_aqc_opc_alloc_acl_actpair, alloc_id,
- buf, cd);
-}
-
-/**
- * ice_aq_dealloc_actpair - dealloc actionpair for specified ACL table
- * @hw: pointer to the HW struct
- * @alloc_id: allocation ID of the table being associated with the actionpair
- * @buf: address of indirect data buffer
- * @cd: pointer to command details structure or NULL
- *
- * Deallocate ACL actionpair (direct 0x0C13)
- */
-enum ice_status
-ice_aq_dealloc_actpair(struct ice_hw *hw, u16 alloc_id,
- struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd)
-{
- return ice_aq_actpair_a_d(hw, ice_aqc_opc_dealloc_acl_actpair, alloc_id,
- buf, cd);
-}
-
/* Helper function to program/query ACL action pair */
static enum ice_status
ice_aq_actpair_p_q(struct ice_hw *hw, u16 opcode, u8 act_mem_idx,
@@ -227,41 +154,6 @@ ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
act_mem_idx, act_entry_idx, buf, cd);
}
-/**
- * ice_aq_query_actpair - query ACL actionpair
- * @hw: pointer to the HW struct
- * @act_mem_idx: action memory index to program/update/query
- * @act_entry_idx: the entry index in action memory to be programmed/updated
- * @buf: address of indirect data buffer
- * @cd: pointer to command details structure or NULL
- *
- * Query ACL actionpair (indirect 0x0C25)
- */
-enum ice_status
-ice_aq_query_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
- struct ice_aqc_actpair *buf, struct ice_sq_cd *cd)
-{
- return ice_aq_actpair_p_q(hw, ice_aqc_opc_query_acl_actpair,
- act_mem_idx, act_entry_idx, buf, cd);
-}
-
-/**
- * ice_aq_dealloc_acl_res - deallocate ACL resources
- * @hw: pointer to the HW struct
- * @cd: pointer to command details structure or NULL
- *
- * De-allocate ACL resources (direct 0x0C1A). Used by SW to release all the
- * resources allocated for it using a single command
- */
-enum ice_status ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd)
-{
- struct ice_aq_desc desc;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dealloc_acl_res);
-
- return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
/**
* ice_acl_prof_aq_send - sending ACL profile AQ commands
* @hw: pointer to the HW struct
diff --git a/drivers/net/ice/base/ice_acl.h b/drivers/net/ice/base/ice_acl.h
index 21aa5088f7..ef5a8245a3 100644
--- a/drivers/net/ice/base/ice_acl.h
+++ b/drivers/net/ice/base/ice_acl.h
@@ -142,22 +142,9 @@ enum ice_status
ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd);
enum ice_status
-ice_aq_query_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
- struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_alloc_actpair(struct ice_hw *hw, u16 alloc_id,
- struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_dealloc_actpair(struct ice_hw *hw, u16 alloc_id,
- struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd);
-enum ice_status
ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
struct ice_aqc_actpair *buf, struct ice_sq_cd *cd);
enum ice_status
-ice_aq_query_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
- struct ice_aqc_actpair *buf, struct ice_sq_cd *cd);
-enum ice_status ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd);
-enum ice_status
ice_prgm_acl_prof_xtrct(struct ice_hw *hw, u8 prof_id,
struct ice_aqc_acl_prof_generic_frmt *buf,
struct ice_sq_cd *cd);
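/* Aside, not part of the patch: the ACL wrappers deleted in ice_acl.c all
 * share one admin-queue shape: fill a default direct descriptor with an
 * opcode, set the command fields, send. A hedged sketch with mock types;
 * the real ice_aq_desc, ice_fill_dflt_direct_cmd_desc() and
 * ice_aq_send_cmd() are only imitated here.
 */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct mock_aq_desc {
	uint16_t opcode;
	uint16_t alloc_id; /* stands in for desc.params.tbl_actpair alloc_id */
};

static void mock_fill_dflt_direct_cmd_desc(struct mock_aq_desc *desc,
					   uint16_t opcode)
{
	memset(desc, 0, sizeof(*desc));
	desc->opcode = opcode;
}

static int mock_aq_send_cmd(const struct mock_aq_desc *desc)
{
	printf("AQ send: opcode 0x%04x, alloc_id %u\n",
	       desc->opcode, desc->alloc_id);
	return 0;
}

/* Shape of the removed ice_aq_actpair_a_d(): direct command, one field. */
static int actpair_cmd(uint16_t opcode, uint16_t alloc_id)
{
	struct mock_aq_desc desc;

	mock_fill_dflt_direct_cmd_desc(&desc, opcode);
	desc.alloc_id = alloc_id; /* CPU_TO_LE16() in the real driver */
	return mock_aq_send_cmd(&desc);
}

int main(void)
{
	return actpair_cmd(0x0C12 /* alloc ACL actionpair */, 7);
}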
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 304e55e210..b6d80fd383 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -844,36 +844,6 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
return status;
}
-/**
- * ice_deinit_hw - unroll initialization operations done by ice_init_hw
- * @hw: pointer to the hardware structure
- *
- * This should be called only during nominal operation, not as a result of
- * ice_init_hw() failing since ice_init_hw() will take care of unrolling
- * applicable initializations if it fails for any reason.
- */
-void ice_deinit_hw(struct ice_hw *hw)
-{
- ice_free_fd_res_cntr(hw, hw->fd_ctr_base);
- ice_cleanup_fltr_mgmt_struct(hw);
-
- ice_sched_cleanup_all(hw);
- ice_sched_clear_agg(hw);
- ice_free_seg(hw);
- ice_free_hw_tbls(hw);
- ice_destroy_lock(&hw->tnl_lock);
-
- if (hw->port_info) {
- ice_free(hw, hw->port_info);
- hw->port_info = NULL;
- }
-
- ice_destroy_all_ctrlq(hw);
-
- /* Clear VSI contexts if not already cleared */
- ice_clear_all_vsi_ctx(hw);
-}
-
/**
* ice_check_reset - Check to see if a global reset is complete
* @hw: pointer to the hardware structure
@@ -1157,38 +1127,6 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = {
{ 0 }
};
-/**
- * ice_copy_tx_cmpltnq_ctx_to_hw
- * @hw: pointer to the hardware structure
- * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
- * @tx_cmpltnq_index: the index of the completion queue
- *
- * Copies Tx completion queue context from dense structure to HW register space
- */
-static enum ice_status
-ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
- u32 tx_cmpltnq_index)
-{
- u8 i;
-
- if (!ice_tx_cmpltnq_ctx)
- return ICE_ERR_BAD_PTR;
-
- if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
- return ICE_ERR_PARAM;
-
- /* Copy each dword separately to HW */
- for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
- wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
- *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
-
- ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
- *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
- }
-
- return ICE_SUCCESS;
-}
-
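/* Aside, not part of the patch: the removed ice_copy_tx_cmpltnq_ctx_to_hw()
 * streamed a dense context buffer into hardware one 32-bit register at a
 * time. Sketch below; the register array and dword count stand in for
 * wr32()/GLTCLAN_CQ_CNTX() and ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS.
 */
#include <stdint.h>
#include <string.h>

#define CTX_SIZE_DWORDS 8 /* assumed value, for the sketch only */

static uint32_t mock_regs[CTX_SIZE_DWORDS]; /* stands in for HW registers */

static void copy_ctx_to_hw(const uint8_t *ctx_buf)
{
	uint32_t dword;
	unsigned int i;

	for (i = 0; i < CTX_SIZE_DWORDS; i++) {
		/* memcpy sidesteps the unaligned cast the original used */
		memcpy(&dword, ctx_buf + i * sizeof(uint32_t), sizeof(dword));
		mock_regs[i] = dword; /* wr32(hw, GLTCLAN_CQ_CNTX(i, q), ...) */
	}
}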
/* LAN Tx Completion Queue Context */
static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
/* Field Width LSB */
@@ -1205,80 +1143,6 @@ static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
{ 0 }
};
-/**
- * ice_write_tx_cmpltnq_ctx
- * @hw: pointer to the hardware structure
- * @tx_cmpltnq_ctx: pointer to the completion queue context
- * @tx_cmpltnq_index: the index of the completion queue
- *
- * Converts completion queue context from sparse to dense structure and then
- * writes it to HW register space
- */
-enum ice_status
-ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
- struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
- u32 tx_cmpltnq_index)
-{
- u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
-
- ice_set_ctx(hw, (u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
- return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
-}
-
-/**
- * ice_clear_tx_cmpltnq_ctx
- * @hw: pointer to the hardware structure
- * @tx_cmpltnq_index: the index of the completion queue to clear
- *
- * Clears Tx completion queue context in HW register space
- */
-enum ice_status
-ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
-{
- u8 i;
-
- if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
- return ICE_ERR_PARAM;
-
- /* Clear each dword register separately */
- for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
- wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
-
- return ICE_SUCCESS;
-}
-
-/**
- * ice_copy_tx_drbell_q_ctx_to_hw
- * @hw: pointer to the hardware structure
- * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
- * @tx_drbell_q_index: the index of the doorbell queue
- *
- * Copies doorbell queue context from dense structure to HW register space
- */
-static enum ice_status
-ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
- u32 tx_drbell_q_index)
-{
- u8 i;
-
- if (!ice_tx_drbell_q_ctx)
- return ICE_ERR_BAD_PTR;
-
- if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
- return ICE_ERR_PARAM;
-
- /* Copy each dword separately to HW */
- for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
- wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
- *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
-
- ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
- *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
- }
-
- return ICE_SUCCESS;
-}
-
/* LAN Tx Doorbell Queue Context info */
static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
/* Field Width LSB */
@@ -1296,49 +1160,6 @@ static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
{ 0 }
};
-/**
- * ice_write_tx_drbell_q_ctx
- * @hw: pointer to the hardware structure
- * @tx_drbell_q_ctx: pointer to the doorbell queue context
- * @tx_drbell_q_index: the index of the doorbell queue
- *
- * Converts doorbell queue context from sparse to dense structure and then
- * writes it to HW register space
- */
-enum ice_status
-ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
- struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
- u32 tx_drbell_q_index)
-{
- u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
-
- ice_set_ctx(hw, (u8 *)tx_drbell_q_ctx, ctx_buf,
- ice_tx_drbell_q_ctx_info);
- return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
-}
-
-/**
- * ice_clear_tx_drbell_q_ctx
- * @hw: pointer to the hardware structure
- * @tx_drbell_q_index: the index of the doorbell queue to clear
- *
- * Clears doorbell queue context in HW register space
- */
-enum ice_status
-ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
-{
- u8 i;
-
- if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
- return ICE_ERR_PARAM;
-
- /* Clear each dword register separately */
- for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
- wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
-
- return ICE_SUCCESS;
-}
-
/* FW Admin Queue command wrappers */
/**
@@ -2238,69 +2059,6 @@ ice_discover_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_caps)
return status;
}
-/**
- * ice_set_safe_mode_caps - Override dev/func capabilities when in safe mode
- * @hw: pointer to the hardware structure
- */
-void ice_set_safe_mode_caps(struct ice_hw *hw)
-{
- struct ice_hw_func_caps *func_caps = &hw->func_caps;
- struct ice_hw_dev_caps *dev_caps = &hw->dev_caps;
- struct ice_hw_common_caps cached_caps;
- u32 num_funcs;
-
- /* cache some func_caps values that should be restored after memset */
- cached_caps = func_caps->common_cap;
-
- /* unset func capabilities */
- memset(func_caps, 0, sizeof(*func_caps));
-
-#define ICE_RESTORE_FUNC_CAP(name) \
- func_caps->common_cap.name = cached_caps.name
-
- /* restore cached values */
- ICE_RESTORE_FUNC_CAP(valid_functions);
- ICE_RESTORE_FUNC_CAP(txq_first_id);
- ICE_RESTORE_FUNC_CAP(rxq_first_id);
- ICE_RESTORE_FUNC_CAP(msix_vector_first_id);
- ICE_RESTORE_FUNC_CAP(max_mtu);
- ICE_RESTORE_FUNC_CAP(nvm_unified_update);
-
- /* one Tx and one Rx queue in safe mode */
- func_caps->common_cap.num_rxq = 1;
- func_caps->common_cap.num_txq = 1;
-
- /* two MSIX vectors, one for traffic and one for misc causes */
- func_caps->common_cap.num_msix_vectors = 2;
- func_caps->guar_num_vsi = 1;
-
- /* cache some dev_caps values that should be restored after memset */
- cached_caps = dev_caps->common_cap;
- num_funcs = dev_caps->num_funcs;
-
- /* unset dev capabilities */
- memset(dev_caps, 0, sizeof(*dev_caps));
-
-#define ICE_RESTORE_DEV_CAP(name) \
- dev_caps->common_cap.name = cached_caps.name
-
- /* restore cached values */
- ICE_RESTORE_DEV_CAP(valid_functions);
- ICE_RESTORE_DEV_CAP(txq_first_id);
- ICE_RESTORE_DEV_CAP(rxq_first_id);
- ICE_RESTORE_DEV_CAP(msix_vector_first_id);
- ICE_RESTORE_DEV_CAP(max_mtu);
- ICE_RESTORE_DEV_CAP(nvm_unified_update);
- dev_caps->num_funcs = num_funcs;
-
- /* one Tx and one Rx queue per function in safe mode */
- dev_caps->common_cap.num_rxq = num_funcs;
- dev_caps->common_cap.num_txq = num_funcs;
-
- /* two MSIX vectors per function */
- dev_caps->common_cap.num_msix_vectors = 2 * num_funcs;
-}
-
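/* Aside, not part of the patch: the removed ice_set_safe_mode_caps() used a
 * cache/memset/restore pattern - copy the fields safe mode keeps, wipe the
 * whole capability struct, then put the cached values back through a macro.
 * Sketch with a cut-down struct; the field names are illustrative.
 */
#include <string.h>

struct mock_caps {
	unsigned int valid_functions;
	unsigned int max_mtu;
	unsigned int num_rxq;
	unsigned int num_txq;
};

static void set_safe_mode(struct mock_caps *caps)
{
	struct mock_caps cached = *caps; /* cache before wiping */

	memset(caps, 0, sizeof(*caps)); /* unset all capabilities */

#define RESTORE_CAP(name) (caps->name = cached.name)
	RESTORE_CAP(valid_functions); /* restore only what safe mode keeps */
	RESTORE_CAP(max_mtu);
#undef RESTORE_CAP

	caps->num_rxq = 1; /* one Rx and one Tx queue in safe mode */
	caps->num_txq = 1;
}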
/**
* ice_get_caps - get info about the HW
* @hw: pointer to the hardware structure
@@ -2370,182 +2128,6 @@ void ice_clear_pxe_mode(struct ice_hw *hw)
ice_aq_clear_pxe_mode(hw);
}
-/**
- * ice_get_link_speed_based_on_phy_type - returns link speed
- * @phy_type_low: lower part of phy_type
- * @phy_type_high: higher part of phy_type
- *
- * This helper function will convert an entry in PHY type structure
- * [phy_type_low, phy_type_high] to its corresponding link speed.
- * Note: In the structure of [phy_type_low, phy_type_high], there should
- * be one bit set, as this function will convert one PHY type to its
- * speed.
- * If no bit gets set, ICE_LINK_SPEED_UNKNOWN will be returned.
- * If more than one bit gets set, ICE_LINK_SPEED_UNKNOWN will be returned.
- */
-static u16
-ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high)
-{
- u16 speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
- u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
-
- switch (phy_type_low) {
- case ICE_PHY_TYPE_LOW_100BASE_TX:
- case ICE_PHY_TYPE_LOW_100M_SGMII:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
- break;
- case ICE_PHY_TYPE_LOW_1000BASE_T:
- case ICE_PHY_TYPE_LOW_1000BASE_SX:
- case ICE_PHY_TYPE_LOW_1000BASE_LX:
- case ICE_PHY_TYPE_LOW_1000BASE_KX:
- case ICE_PHY_TYPE_LOW_1G_SGMII:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
- break;
- case ICE_PHY_TYPE_LOW_2500BASE_T:
- case ICE_PHY_TYPE_LOW_2500BASE_X:
- case ICE_PHY_TYPE_LOW_2500BASE_KX:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
- break;
- case ICE_PHY_TYPE_LOW_5GBASE_T:
- case ICE_PHY_TYPE_LOW_5GBASE_KR:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
- break;
- case ICE_PHY_TYPE_LOW_10GBASE_T:
- case ICE_PHY_TYPE_LOW_10G_SFI_DA:
- case ICE_PHY_TYPE_LOW_10GBASE_SR:
- case ICE_PHY_TYPE_LOW_10GBASE_LR:
- case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
- case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
- case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
- break;
- case ICE_PHY_TYPE_LOW_25GBASE_T:
- case ICE_PHY_TYPE_LOW_25GBASE_CR:
- case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
- case ICE_PHY_TYPE_LOW_25GBASE_CR1:
- case ICE_PHY_TYPE_LOW_25GBASE_SR:
- case ICE_PHY_TYPE_LOW_25GBASE_LR:
- case ICE_PHY_TYPE_LOW_25GBASE_KR:
- case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
- case ICE_PHY_TYPE_LOW_25GBASE_KR1:
- case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
- case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
- break;
- case ICE_PHY_TYPE_LOW_40GBASE_CR4:
- case ICE_PHY_TYPE_LOW_40GBASE_SR4:
- case ICE_PHY_TYPE_LOW_40GBASE_LR4:
- case ICE_PHY_TYPE_LOW_40GBASE_KR4:
- case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
- case ICE_PHY_TYPE_LOW_40G_XLAUI:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
- break;
- case ICE_PHY_TYPE_LOW_50GBASE_CR2:
- case ICE_PHY_TYPE_LOW_50GBASE_SR2:
- case ICE_PHY_TYPE_LOW_50GBASE_LR2:
- case ICE_PHY_TYPE_LOW_50GBASE_KR2:
- case ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC:
- case ICE_PHY_TYPE_LOW_50G_LAUI2:
- case ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC:
- case ICE_PHY_TYPE_LOW_50G_AUI2:
- case ICE_PHY_TYPE_LOW_50GBASE_CP:
- case ICE_PHY_TYPE_LOW_50GBASE_SR:
- case ICE_PHY_TYPE_LOW_50GBASE_FR:
- case ICE_PHY_TYPE_LOW_50GBASE_LR:
- case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
- case ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC:
- case ICE_PHY_TYPE_LOW_50G_AUI1:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_50GB;
- break;
- case ICE_PHY_TYPE_LOW_100GBASE_CR4:
- case ICE_PHY_TYPE_LOW_100GBASE_SR4:
- case ICE_PHY_TYPE_LOW_100GBASE_LR4:
- case ICE_PHY_TYPE_LOW_100GBASE_KR4:
- case ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC:
- case ICE_PHY_TYPE_LOW_100G_CAUI4:
- case ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC:
- case ICE_PHY_TYPE_LOW_100G_AUI4:
- case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
- case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
- case ICE_PHY_TYPE_LOW_100GBASE_CP2:
- case ICE_PHY_TYPE_LOW_100GBASE_SR2:
- case ICE_PHY_TYPE_LOW_100GBASE_DR:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_100GB;
- break;
- default:
- speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
- break;
- }
-
- switch (phy_type_high) {
- case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
- case ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC:
- case ICE_PHY_TYPE_HIGH_100G_CAUI2:
- case ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC:
- case ICE_PHY_TYPE_HIGH_100G_AUI2:
- speed_phy_type_high = ICE_AQ_LINK_SPEED_100GB;
- break;
- default:
- speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
- break;
- }
-
- if (speed_phy_type_low == ICE_AQ_LINK_SPEED_UNKNOWN &&
- speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
- return ICE_AQ_LINK_SPEED_UNKNOWN;
- else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
- speed_phy_type_high != ICE_AQ_LINK_SPEED_UNKNOWN)
- return ICE_AQ_LINK_SPEED_UNKNOWN;
- else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
- speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
- return speed_phy_type_low;
- else
- return speed_phy_type_high;
-}
-
-/**
- * ice_update_phy_type
- * @phy_type_low: pointer to the lower part of phy_type
- * @phy_type_high: pointer to the higher part of phy_type
- * @link_speeds_bitmap: targeted link speeds bitmap
- *
- * Note: For the link_speeds_bitmap structure, you can check it at
- * [ice_aqc_get_link_status->link_speed]. The caller can pass in a
- * link_speeds_bitmap that includes multiple speeds.
- *
- * Each entry in this [phy_type_low, phy_type_high] structure will
- * represent a certain link speed. This helper function will turn on bits
- * in [phy_type_low, phy_type_high] structure based on the value of
- * link_speeds_bitmap input parameter.
- */
-void
-ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
- u16 link_speeds_bitmap)
-{
- u64 pt_high;
- u64 pt_low;
- int index;
- u16 speed;
-
- /* We first check with low part of phy_type */
- for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
- pt_low = BIT_ULL(index);
- speed = ice_get_link_speed_based_on_phy_type(pt_low, 0);
-
- if (link_speeds_bitmap & speed)
- *phy_type_low |= BIT_ULL(index);
- }
-
- /* We then check with high part of phy_type */
- for (index = 0; index <= ICE_PHY_TYPE_HIGH_MAX_INDEX; index++) {
- pt_high = BIT_ULL(index);
- speed = ice_get_link_speed_based_on_phy_type(0, pt_high);
-
- if (link_speeds_bitmap & speed)
- *phy_type_high |= BIT_ULL(index);
- }
-}
-
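/* Aside, not part of the patch: the removed ice_update_phy_type() walked
 * every PHY type bit, converted that single bit to a link speed, and OR'd
 * the bit in when the requested speed bitmap covered it. Sketch with a tiny
 * stand-in table; the speed values are illustrative, not the AQ encodings.
 */
#include <stdint.h>

#define BIT_ULL(n) (1ULL << (n))
#define MOCK_PHY_TYPE_MAX_INDEX 2

static uint16_t speed_of_phy_bit(unsigned int index)
{
	/* one speed per PHY type bit, as in the removed helper */
	static const uint16_t speed[] = { 0x1 /* 100M */, 0x2 /* 1G */,
					  0x8 /* 10G */ };
	return speed[index];
}

static uint64_t build_phy_type(uint16_t link_speeds_bitmap)
{
	uint64_t phy_type = 0;
	unsigned int index;

	for (index = 0; index <= MOCK_PHY_TYPE_MAX_INDEX; index++)
		if (link_speeds_bitmap & speed_of_phy_bit(index))
			phy_type |= BIT_ULL(index);

	return phy_type;
}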
/**
* ice_aq_set_phy_cfg
* @hw: pointer to the HW struct
@@ -2642,787 +2224,279 @@ enum ice_status ice_update_link_info(struct ice_port_info *pi)
}
/**
- * ice_cache_phy_user_req
+ * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
* @pi: port information structure
- * @cache_data: PHY logging data
- * @cache_mode: PHY logging mode
+ * @caps: PHY ability structure to copy data from
+ * @cfg: PHY configuration structure to copy data to
*
- * Log the user request on (FC, FEC, SPEED) for later use.
+ * Helper function to copy AQC PHY get ability data to PHY set configuration
+ * data structure
*/
-static void
-ice_cache_phy_user_req(struct ice_port_info *pi,
- struct ice_phy_cache_mode_data cache_data,
- enum ice_phy_cache_mode cache_mode)
+void
+ice_copy_phy_caps_to_cfg(struct ice_port_info *pi,
+ struct ice_aqc_get_phy_caps_data *caps,
+ struct ice_aqc_set_phy_cfg_data *cfg)
{
- if (!pi)
+ if (!pi || !caps || !cfg)
return;
- switch (cache_mode) {
- case ICE_FC_MODE:
- pi->phy.curr_user_fc_req = cache_data.data.curr_user_fc_req;
- break;
- case ICE_SPEED_MODE:
- pi->phy.curr_user_speed_req =
- cache_data.data.curr_user_speed_req;
- break;
- case ICE_FEC_MODE:
- pi->phy.curr_user_fec_req = cache_data.data.curr_user_fec_req;
- break;
- default:
- break;
- }
-}
-
-/**
- * ice_caps_to_fc_mode
- * @caps: PHY capabilities
- *
- * Convert PHY FC capabilities to ice FC mode
- */
-enum ice_fc_mode ice_caps_to_fc_mode(u8 caps)
-{
- if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE &&
- caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
- return ICE_FC_FULL;
+ ice_memset(cfg, 0, sizeof(*cfg), ICE_NONDMA_MEM);
+ cfg->phy_type_low = caps->phy_type_low;
+ cfg->phy_type_high = caps->phy_type_high;
+ cfg->caps = caps->caps;
+ cfg->low_power_ctrl_an = caps->low_power_ctrl_an;
+ cfg->eee_cap = caps->eee_cap;
+ cfg->eeer_value = caps->eeer_value;
+ cfg->link_fec_opt = caps->link_fec_options;
+ cfg->module_compliance_enforcement =
+ caps->module_compliance_enforcement;
- if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE)
- return ICE_FC_TX_PAUSE;
+ if (ice_fw_supports_link_override(pi->hw)) {
+ struct ice_link_default_override_tlv tlv;
- if (caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
- return ICE_FC_RX_PAUSE;
+ if (ice_get_link_default_override(&tlv, pi))
+ return;
- return ICE_FC_NONE;
+ if (tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE)
+ cfg->module_compliance_enforcement |=
+ ICE_LINK_OVERRIDE_STRICT_MODE;
+ }
}
/**
- * ice_caps_to_fec_mode
- * @caps: PHY capabilities
- * @fec_options: Link FEC options
+ * ice_aq_set_event_mask
+ * @hw: pointer to the HW struct
+ * @port_num: port number of the physical function
+ * @mask: event mask to be set
+ * @cd: pointer to command details structure or NULL
*
- * Convert PHY FEC capabilities to ice FEC mode
+ * Set event mask (0x0613)
*/
-enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options)
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+ struct ice_sq_cd *cd)
{
- if (caps & ICE_AQC_PHY_EN_AUTO_FEC)
- return ICE_FEC_AUTO;
+ struct ice_aqc_set_event_mask *cmd;
+ struct ice_aq_desc desc;
+
+ cmd = &desc.params.set_event_mask;
- if (fec_options & (ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
- ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
- ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN |
- ICE_AQC_PHY_FEC_25G_KR_REQ))
- return ICE_FEC_BASER;
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
- if (fec_options & (ICE_AQC_PHY_FEC_25G_RS_528_REQ |
- ICE_AQC_PHY_FEC_25G_RS_544_REQ |
- ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN))
- return ICE_FEC_RS;
+ cmd->lport_num = port_num;
- return ICE_FEC_NONE;
+ cmd->event_mask = CPU_TO_LE16(mask);
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
}
/**
- * ice_cfg_phy_fc - Configure PHY FC data based on FC mode
- * @pi: port information structure
- * @cfg: PHY configuration data to set FC mode
- * @req_mode: FC mode to configure
+ * __ice_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @params: RSS LUT parameters
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
*/
static enum ice_status
-ice_cfg_phy_fc(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
- enum ice_fc_mode req_mode)
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *params, bool set)
{
- struct ice_phy_cache_mode_data cache_data;
- u8 pause_mask = 0x0;
+ u16 flags = 0, vsi_id, lut_type, lut_size, glob_lut_idx, vsi_handle;
+ struct ice_aqc_get_set_rss_lut *cmd_resp;
+ struct ice_aq_desc desc;
+ enum ice_status status;
+ u8 *lut;
- if (!pi || !cfg)
- return ICE_ERR_BAD_PTR;
+ if (!params)
+ return ICE_ERR_PARAM;
- switch (req_mode) {
- case ICE_FC_AUTO:
- {
- struct ice_aqc_get_phy_caps_data *pcaps;
- enum ice_status status;
+ vsi_handle = params->vsi_handle;
+ lut = params->lut;
- pcaps = (struct ice_aqc_get_phy_caps_data *)
- ice_malloc(pi->hw, sizeof(*pcaps));
- if (!pcaps)
- return ICE_ERR_NO_MEMORY;
+ if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+ return ICE_ERR_PARAM;
- /* Query the value of FC that both the NIC and attached media
- * can do.
- */
- status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
- pcaps, NULL);
- if (status) {
- ice_free(pi->hw, pcaps);
- return status;
- }
+ lut_size = params->lut_size;
+ lut_type = params->lut_type;
+ glob_lut_idx = params->global_lut_id;
+ vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
- pause_mask |= pcaps->caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE;
- pause_mask |= pcaps->caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+ cmd_resp = &desc.params.get_set_rss_lut;
- ice_free(pi->hw, pcaps);
- break;
- }
- case ICE_FC_FULL:
- pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
- pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
- break;
- case ICE_FC_RX_PAUSE:
- pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
- break;
- case ICE_FC_TX_PAUSE:
- pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
- break;
- default:
- break;
+ if (set) {
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
+ desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+ } else {
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
}
- /* clear the old pause settings */
- cfg->caps &= ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
- ICE_AQC_PHY_EN_RX_LINK_PAUSE);
-
- /* set the new capabilities */
- cfg->caps |= pause_mask;
-
- /* Cache user FC request */
- cache_data.data.curr_user_fc_req = req_mode;
- ice_cache_phy_user_req(pi, cache_data, ICE_FC_MODE);
+ cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+ ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
+ ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
+ ICE_AQC_GSET_RSS_LUT_VSI_VALID);
- return ICE_SUCCESS;
-}
-
-/**
- * ice_set_fc
- * @pi: port information structure
- * @aq_failures: pointer to status code, specific to ice_set_fc routine
- * @ena_auto_link_update: enable automatic link update
- *
- * Set the requested flow control mode.
- */
-enum ice_status
-ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
-{
- struct ice_aqc_set_phy_cfg_data cfg = { 0 };
- struct ice_aqc_get_phy_caps_data *pcaps;
- enum ice_status status;
- struct ice_hw *hw;
-
- if (!pi || !aq_failures)
- return ICE_ERR_BAD_PTR;
-
- *aq_failures = 0;
- hw = pi->hw;
-
- pcaps = (struct ice_aqc_get_phy_caps_data *)
- ice_malloc(hw, sizeof(*pcaps));
- if (!pcaps)
- return ICE_ERR_NO_MEMORY;
-
- /* Get the current PHY config */
- status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
- NULL);
- if (status) {
- *aq_failures = ICE_SET_FC_AQ_FAIL_GET;
- goto out;
+ switch (lut_type) {
+ case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
+ case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
+ case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
+ flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+ ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
+ break;
+ default:
+ status = ICE_ERR_PARAM;
+ goto ice_aq_get_set_rss_lut_exit;
}
- ice_copy_phy_caps_to_cfg(pi, pcaps, &cfg);
-
- /* Configure the set PHY data */
- status = ice_cfg_phy_fc(pi, &cfg, pi->fc.req_mode);
- if (status) {
- if (status != ICE_ERR_BAD_PTR)
- *aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+ if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
+ flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
+ ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
- goto out;
+ if (!set)
+ goto ice_aq_get_set_rss_lut_send;
+ } else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+ if (!set)
+ goto ice_aq_get_set_rss_lut_send;
+ } else {
+ goto ice_aq_get_set_rss_lut_send;
}
- /* If the capabilities have changed, then set the new config */
- if (cfg.caps != pcaps->caps) {
- int retry_count, retry_max = 10;
-
- /* Auto restart link so settings take effect */
- if (ena_auto_link_update)
- cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
-
- status = ice_aq_set_phy_cfg(hw, pi, &cfg, NULL);
- if (status) {
- *aq_failures = ICE_SET_FC_AQ_FAIL_SET;
- goto out;
- }
-
- /* Update the link info
- * It sometimes takes a really long time for link to
- * come back from the atomic reset. Thus, we wait a
- * little bit.
- */
- for (retry_count = 0; retry_count < retry_max; retry_count++) {
- status = ice_update_link_info(pi);
-
- if (status == ICE_SUCCESS)
- break;
-
- ice_msec_delay(100, true);
+ /* LUT size is only valid for Global and PF table types */
+ switch (lut_size) {
+ case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
+ flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+ break;
+ case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+ flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+ break;
+ case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
+ if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+ flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+ break;
}
-
- if (status)
- *aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
+ /* fall-through */
+ default:
+ status = ICE_ERR_PARAM;
+ goto ice_aq_get_set_rss_lut_exit;
}
-out:
- ice_free(hw, pcaps);
+ice_aq_get_set_rss_lut_send:
+ cmd_resp->flags = CPU_TO_LE16(flags);
+ status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
+
+ice_aq_get_set_rss_lut_exit:
return status;
}
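/* Aside, not part of the patch: the flags word built above uses the
 * driver's _S (shift) / _M (mask) convention for packing bit-fields.
 * Minimal sketch with invented field widths; the real macros live in
 * the adminq command header.
 */
#include <stdint.h>

#define LUT_TYPE_S 0
#define LUT_TYPE_M (0x3 << LUT_TYPE_S)
#define LUT_SIZE_S 2
#define LUT_SIZE_M (0x3 << LUT_SIZE_S)

static uint16_t pack_rss_lut_flags(uint16_t lut_type, uint16_t lut_size_flag)
{
	uint16_t flags = 0;

	flags |= (lut_type << LUT_TYPE_S) & LUT_TYPE_M; /* shift then mask */
	flags |= (lut_size_flag << LUT_SIZE_S) & LUT_SIZE_M;
	return flags;
}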
/**
- * ice_phy_caps_equals_cfg
- * @phy_caps: PHY capabilities
- * @phy_cfg: PHY configuration
+ * ice_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @get_params: RSS LUT parameters used to specify which RSS LUT to get
*
- * Helper function to determine whether PHY capabilities match the PHY
- * configuration
+ * get the RSS lookup table, PF or VSI type
*/
-bool
-ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *phy_caps,
- struct ice_aqc_set_phy_cfg_data *phy_cfg)
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params)
{
- u8 caps_mask, cfg_mask;
-
- if (!phy_caps || !phy_cfg)
- return false;
-
- /* These bits are not common between capabilities and configuration.
- * Do not use them to determine equality.
- */
- caps_mask = ICE_AQC_PHY_CAPS_MASK & ~(ICE_AQC_PHY_AN_MODE |
- ICE_AQC_PHY_EN_MOD_QUAL);
- cfg_mask = ICE_AQ_PHY_ENA_VALID_MASK & ~ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
-
- if (phy_caps->phy_type_low != phy_cfg->phy_type_low ||
- phy_caps->phy_type_high != phy_cfg->phy_type_high ||
- ((phy_caps->caps & caps_mask) != (phy_cfg->caps & cfg_mask)) ||
- phy_caps->low_power_ctrl_an != phy_cfg->low_power_ctrl_an ||
- phy_caps->eee_cap != phy_cfg->eee_cap ||
- phy_caps->eeer_value != phy_cfg->eeer_value ||
- phy_caps->link_fec_options != phy_cfg->link_fec_opt)
- return false;
-
- return true;
+ return __ice_aq_get_set_rss_lut(hw, get_params, false);
}
/**
- * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
- * @pi: port information structure
- * @caps: PHY ability structure to copy data from
- * @cfg: PHY configuration structure to copy data to
+ * ice_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @set_params: RSS LUT parameters used to specify how to set the RSS LUT
*
- * Helper function to copy AQC PHY get ability data to PHY set configuration
- * data structure
- */
-void
-ice_copy_phy_caps_to_cfg(struct ice_port_info *pi,
- struct ice_aqc_get_phy_caps_data *caps,
- struct ice_aqc_set_phy_cfg_data *cfg)
-{
- if (!pi || !caps || !cfg)
- return;
-
- ice_memset(cfg, 0, sizeof(*cfg), ICE_NONDMA_MEM);
- cfg->phy_type_low = caps->phy_type_low;
- cfg->phy_type_high = caps->phy_type_high;
- cfg->caps = caps->caps;
- cfg->low_power_ctrl_an = caps->low_power_ctrl_an;
- cfg->eee_cap = caps->eee_cap;
- cfg->eeer_value = caps->eeer_value;
- cfg->link_fec_opt = caps->link_fec_options;
- cfg->module_compliance_enforcement =
- caps->module_compliance_enforcement;
-
- if (ice_fw_supports_link_override(pi->hw)) {
- struct ice_link_default_override_tlv tlv;
-
- if (ice_get_link_default_override(&tlv, pi))
- return;
-
- if (tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE)
- cfg->module_compliance_enforcement |=
- ICE_LINK_OVERRIDE_STRICT_MODE;
- }
-}
-
-/**
- * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode
- * @pi: port information structure
- * @cfg: PHY configuration data to set FEC mode
- * @fec: FEC mode to configure
+ * set the RSS lookup table, PF or VSI type
*/
enum ice_status
-ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
- enum ice_fec_mode fec)
+ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_params)
{
- struct ice_aqc_get_phy_caps_data *pcaps;
- enum ice_status status = ICE_SUCCESS;
- struct ice_hw *hw;
-
- if (!pi || !cfg)
- return ICE_ERR_BAD_PTR;
-
- hw = pi->hw;
-
- pcaps = (struct ice_aqc_get_phy_caps_data *)
- ice_malloc(hw, sizeof(*pcaps));
- if (!pcaps)
- return ICE_ERR_NO_MEMORY;
-
- status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP, pcaps,
- NULL);
- if (status)
- goto out;
-
- cfg->caps |= (pcaps->caps & ICE_AQC_PHY_EN_AUTO_FEC);
- cfg->link_fec_opt = pcaps->link_fec_options;
-
- switch (fec) {
- case ICE_FEC_BASER:
- /* Clear RS bits, and AND BASE-R ability
- * bits and OR request bits.
- */
- cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
- ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
- cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
- ICE_AQC_PHY_FEC_25G_KR_REQ;
- break;
- case ICE_FEC_RS:
- /* Clear BASE-R bits, and AND RS ability
- * bits and OR request bits.
- */
- cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
- cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
- ICE_AQC_PHY_FEC_25G_RS_544_REQ;
- break;
- case ICE_FEC_NONE:
- /* Clear all FEC option bits. */
- cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
- break;
- case ICE_FEC_AUTO:
- /* AND auto FEC bit, and all caps bits. */
- cfg->caps &= ICE_AQC_PHY_CAPS_MASK;
- cfg->link_fec_opt |= pcaps->link_fec_options;
- break;
- default:
- status = ICE_ERR_PARAM;
- break;
- }
-
- if (fec == ICE_FEC_AUTO && ice_fw_supports_link_override(pi->hw)) {
- struct ice_link_default_override_tlv tlv;
-
- if (ice_get_link_default_override(&tlv, pi))
- goto out;
-
- if (!(tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE) &&
- (tlv.options & ICE_LINK_OVERRIDE_EN))
- cfg->link_fec_opt = tlv.fec_options;
- }
-
-out:
- ice_free(hw, pcaps);
-
- return status;
+ return __ice_aq_get_set_rss_lut(hw, set_params, true);
}
/**
- * ice_get_link_status - get status of the HW network link
- * @pi: port information structure
- * @link_up: pointer to bool (true/false = linkup/linkdown)
+ * __ice_aq_get_set_rss_key
+ * @hw: pointer to the HW struct
+ * @vsi_id: VSI FW index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
*
- * Variable link_up is true if link is up, false if link is down.
- * The variable link_up is invalid if the status is non-zero. As a
- * result of this call, link status reporting becomes enabled.
+ * get (0x0B04) or set (0x0B02) the RSS key per VSI
*/
-enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+static enum
+ice_status __ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
+ struct ice_aqc_get_set_rss_keys *key,
+ bool set)
{
- struct ice_phy_info *phy_info;
- enum ice_status status = ICE_SUCCESS;
-
- if (!pi || !link_up)
- return ICE_ERR_PARAM;
-
- phy_info = &pi->phy;
+ struct ice_aqc_get_set_rss_key *cmd_resp;
+ u16 key_size = sizeof(*key);
+ struct ice_aq_desc desc;
- if (phy_info->get_link_info) {
- status = ice_update_link_info(pi);
+ cmd_resp = &desc.params.get_set_rss_key;
- if (status)
- ice_debug(pi->hw, ICE_DBG_LINK, "get link status error, status = %d\n",
- status);
+ if (set) {
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
+ desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+ } else {
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
}
- *link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
+ cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+ ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
+ ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
+ ICE_AQC_GSET_RSS_KEY_VSI_VALID);
- return status;
+ return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
}
/**
- * ice_aq_set_link_restart_an
- * @pi: pointer to the port information structure
- * @ena_link: if true: enable link, if false: disable link
- * @cd: pointer to command details structure or NULL
+ * ice_aq_get_rss_key
+ * @hw: pointer to the HW struct
+ * @vsi_handle: software VSI handle
+ * @key: pointer to key info struct
*
- * Sets up the link and restarts the Auto-Negotiation over the link.
+ * get the RSS key per VSI
*/
enum ice_status
-ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
- struct ice_sq_cd *cd)
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+ struct ice_aqc_get_set_rss_keys *key)
{
- struct ice_aqc_restart_an *cmd;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.restart_an;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
-
- cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
- cmd->lport_num = pi->lport;
- if (ena_link)
- cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
- else
- cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
+ if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
+ return ICE_ERR_PARAM;
- return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
+ return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+ key, false);
}
/**
- * ice_aq_set_event_mask
+ * ice_aq_set_rss_key
* @hw: pointer to the HW struct
- * @port_num: port number of the physical function
- * @mask: event mask to be set
- * @cd: pointer to command details structure or NULL
+ * @vsi_handle: software VSI handle
+ * @keys: pointer to key info struct
*
- * Set event mask (0x0613)
+ * set the RSS key per VSI
*/
enum ice_status
-ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
- struct ice_sq_cd *cd)
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+ struct ice_aqc_get_set_rss_keys *keys)
{
- struct ice_aqc_set_event_mask *cmd;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.set_event_mask;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
-
- cmd->lport_num = port_num;
+ if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
+ return ICE_ERR_PARAM;
- cmd->event_mask = CPU_TO_LE16(mask);
- return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+ return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+ keys, true);
}
/**
- * ice_aq_set_mac_loopback
- * @hw: pointer to the HW struct
- * @ena_lpbk: Enable or Disable loopback
- * @cd: pointer to command details structure or NULL
- *
- * Enable/disable loopback on a given port
- */
-enum ice_status
-ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
-{
- struct ice_aqc_set_mac_lb *cmd;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.set_mac_lb;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
- if (ena_lpbk)
- cmd->lb_mode = ICE_AQ_MAC_LB_EN;
-
- return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
-/**
- * ice_aq_set_port_id_led
- * @pi: pointer to the port information
- * @is_orig_mode: is this LED set to original mode (by the net-list)
- * @cd: pointer to command details structure or NULL
- *
- * Set LED value for the given port (0x06e9)
- */
-enum ice_status
-ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
- struct ice_sq_cd *cd)
-{
- struct ice_aqc_set_port_id_led *cmd;
- struct ice_hw *hw = pi->hw;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.set_port_id_led;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
-
- if (is_orig_mode)
- cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
- else
- cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
-
- return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
-/**
- * ice_aq_sff_eeprom
- * @hw: pointer to the HW struct
- * @lport: bits [7:0] = logical port, bit [8] = logical port valid
- * @bus_addr: I2C bus address of the eeprom (typically 0xA0, 0=topo default)
- * @mem_addr: I2C offset. lower 8 bits for address, 8 upper bits zero padding.
- * @page: QSFP page
- * @set_page: set or ignore the page
- * @data: pointer to data buffer to be read/written to the I2C device.
- * @length: 1-16 for read, 1 for write.
- * @write: 0 for read, 1 for write.
- * @cd: pointer to command details structure or NULL
- *
- * Read/Write SFF EEPROM (0x06EE)
- */
-enum ice_status
-ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
- u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length,
- bool write, struct ice_sq_cd *cd)
-{
- struct ice_aqc_sff_eeprom *cmd;
- struct ice_aq_desc desc;
- enum ice_status status;
-
- if (!data || (mem_addr & 0xff00))
- return ICE_ERR_PARAM;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_sff_eeprom);
- cmd = &desc.params.read_write_sff_param;
- desc.flags = CPU_TO_LE16(ICE_AQ_FLAG_RD);
- cmd->lport_num = (u8)(lport & 0xff);
- cmd->lport_num_valid = (u8)((lport >> 8) & 0x01);
- cmd->i2c_bus_addr = CPU_TO_LE16(((bus_addr >> 1) &
- ICE_AQC_SFF_I2CBUS_7BIT_M) |
- ((set_page <<
- ICE_AQC_SFF_SET_EEPROM_PAGE_S) &
- ICE_AQC_SFF_SET_EEPROM_PAGE_M));
- cmd->i2c_mem_addr = CPU_TO_LE16(mem_addr & 0xff);
- cmd->eeprom_page = CPU_TO_LE16((u16)page << ICE_AQC_SFF_EEPROM_PAGE_S);
- if (write)
- cmd->i2c_bus_addr |= CPU_TO_LE16(ICE_AQC_SFF_IS_WRITE);
-
- status = ice_aq_send_cmd(hw, &desc, data, length, cd);
- return status;
-}
-
-/**
- * __ice_aq_get_set_rss_lut
- * @hw: pointer to the hardware structure
- * @params: RSS LUT parameters
- * @set: set true to set the table, false to get the table
- *
- * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
- */
-static enum ice_status
-__ice_aq_get_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *params, bool set)
-{
- u16 flags = 0, vsi_id, lut_type, lut_size, glob_lut_idx, vsi_handle;
- struct ice_aqc_get_set_rss_lut *cmd_resp;
- struct ice_aq_desc desc;
- enum ice_status status;
- u8 *lut;
-
- if (!params)
- return ICE_ERR_PARAM;
-
- vsi_handle = params->vsi_handle;
- lut = params->lut;
-
- if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
- return ICE_ERR_PARAM;
-
- lut_size = params->lut_size;
- lut_type = params->lut_type;
- glob_lut_idx = params->global_lut_id;
- vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
-
- cmd_resp = &desc.params.get_set_rss_lut;
-
- if (set) {
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
- desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
- } else {
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
- }
-
- cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
- ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
- ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
- ICE_AQC_GSET_RSS_LUT_VSI_VALID);
-
- switch (lut_type) {
- case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
- case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
- case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
- flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
- ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
- break;
- default:
- status = ICE_ERR_PARAM;
- goto ice_aq_get_set_rss_lut_exit;
- }
-
- if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
- flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
- ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
-
- if (!set)
- goto ice_aq_get_set_rss_lut_send;
- } else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
- if (!set)
- goto ice_aq_get_set_rss_lut_send;
- } else {
- goto ice_aq_get_set_rss_lut_send;
- }
-
- /* LUT size is only valid for Global and PF table types */
- switch (lut_size) {
- case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
- flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
- break;
- case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
- flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
- break;
- case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
- if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
- flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
- break;
- }
- /* fall-through */
- default:
- status = ICE_ERR_PARAM;
- goto ice_aq_get_set_rss_lut_exit;
- }
-
-ice_aq_get_set_rss_lut_send:
- cmd_resp->flags = CPU_TO_LE16(flags);
- status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
-
-ice_aq_get_set_rss_lut_exit:
- return status;
-}
-
-/**
- * ice_aq_get_rss_lut
- * @hw: pointer to the hardware structure
- * @get_params: RSS LUT parameters used to specify which RSS LUT to get
- *
- * get the RSS lookup table, PF or VSI type
- */
-enum ice_status
-ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params)
-{
- return __ice_aq_get_set_rss_lut(hw, get_params, false);
-}
-
-/**
- * ice_aq_set_rss_lut
- * @hw: pointer to the hardware structure
- * @set_params: RSS LUT parameters used to specify how to set the RSS LUT
- *
- * set the RSS lookup table, PF or VSI type
- */
-enum ice_status
-ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_params)
-{
- return __ice_aq_get_set_rss_lut(hw, set_params, true);
-}
-
-/**
- * __ice_aq_get_set_rss_key
- * @hw: pointer to the HW struct
- * @vsi_id: VSI FW index
- * @key: pointer to key info struct
- * @set: set true to set the key, false to get the key
- *
- * get (0x0B04) or set (0x0B02) the RSS key per VSI
- */
-static enum
-ice_status __ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
- struct ice_aqc_get_set_rss_keys *key,
- bool set)
-{
- struct ice_aqc_get_set_rss_key *cmd_resp;
- u16 key_size = sizeof(*key);
- struct ice_aq_desc desc;
-
- cmd_resp = &desc.params.get_set_rss_key;
-
- if (set) {
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
- desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
- } else {
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
- }
-
- cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
- ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
- ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
- ICE_AQC_GSET_RSS_KEY_VSI_VALID);
-
- return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
-}
-
-/**
- * ice_aq_get_rss_key
- * @hw: pointer to the HW struct
- * @vsi_handle: software VSI handle
- * @key: pointer to key info struct
- *
- * get the RSS key per VSI
- */
-enum ice_status
-ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
- struct ice_aqc_get_set_rss_keys *key)
-{
- if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
- return ICE_ERR_PARAM;
-
- return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
- key, false);
-}
-
-/**
- * ice_aq_set_rss_key
- * @hw: pointer to the HW struct
- * @vsi_handle: software VSI handle
- * @keys: pointer to key info struct
- *
- * set the RSS key per VSI
- */
-enum ice_status
-ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
- struct ice_aqc_get_set_rss_keys *keys)
-{
- if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
- return ICE_ERR_PARAM;
-
- return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
- keys, true);
-}
-
-/**
- * ice_aq_add_lan_txq
- * @hw: pointer to the hardware structure
- * @num_qgrps: Number of added queue groups
- * @qg_list: list of queue groups to be added
- * @buf_size: size of buffer for indirect command
+ * ice_aq_add_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: Number of added queue groups
+ * @qg_list: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
* @cd: pointer to command details structure or NULL
*
* Add Tx LAN queue (0x0C30)
@@ -3567,400 +2641,107 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
return status;
}
-/**
- * ice_aq_move_recfg_lan_txq
- * @hw: pointer to the hardware structure
- * @num_qs: number of queues to move/reconfigure
- * @is_move: true if this operation involves node movement
- * @is_tc_change: true if this operation involves a TC change
- * @subseq_call: true if this operation is a subsequent call
- * @flush_pipe: on timeout, true to flush pipe, false to return EAGAIN
- * @timeout: timeout in units of 100 usec (valid values 0-50)
- * @blocked_cgds: out param, bitmap of CGDs that timed out if returning EAGAIN
- * @buf: struct containing src/dest TEID and per-queue info
- * @buf_size: size of buffer for indirect command
- * @txqs_moved: out param, number of queues successfully moved
- * @cd: pointer to command details structure or NULL
- *
- * Move / Reconfigure Tx LAN queues (0x0C32)
- */
-enum ice_status
-ice_aq_move_recfg_lan_txq(struct ice_hw *hw, u8 num_qs, bool is_move,
- bool is_tc_change, bool subseq_call, bool flush_pipe,
- u8 timeout, u32 *blocked_cgds,
- struct ice_aqc_move_txqs_data *buf, u16 buf_size,
- u8 *txqs_moved, struct ice_sq_cd *cd)
-{
- struct ice_aqc_move_txqs *cmd;
- struct ice_aq_desc desc;
- enum ice_status status;
-
- cmd = &desc.params.move_txqs;
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_move_recfg_txqs);
-
-#define ICE_LAN_TXQ_MOVE_TIMEOUT_MAX 50
- if (timeout > ICE_LAN_TXQ_MOVE_TIMEOUT_MAX)
- return ICE_ERR_PARAM;
-
- if (is_tc_change && !flush_pipe && !blocked_cgds)
- return ICE_ERR_PARAM;
-
- if (!is_move && !is_tc_change)
- return ICE_ERR_PARAM;
-
- desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
-
- if (is_move)
- cmd->cmd_type |= ICE_AQC_Q_CMD_TYPE_MOVE;
-
- if (is_tc_change)
- cmd->cmd_type |= ICE_AQC_Q_CMD_TYPE_TC_CHANGE;
-
- if (subseq_call)
- cmd->cmd_type |= ICE_AQC_Q_CMD_SUBSEQ_CALL;
-
- if (flush_pipe)
- cmd->cmd_type |= ICE_AQC_Q_CMD_FLUSH_PIPE;
-
- cmd->num_qs = num_qs;
- cmd->timeout = ((timeout << ICE_AQC_Q_CMD_TIMEOUT_S) &
- ICE_AQC_Q_CMD_TIMEOUT_M);
-
- status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
-
- if (!status && txqs_moved)
- *txqs_moved = cmd->num_qs;
-
- if (hw->adminq.sq_last_status == ICE_AQ_RC_EAGAIN &&
- is_tc_change && !flush_pipe)
- *blocked_cgds = LE32_TO_CPU(cmd->blocked_cgds);
-
- return status;
-}
-
/* End of FW Admin Queue command wrappers */
/**
- * ice_write_byte - write a byte to a packed context structure
- * @src_ctx: the context structure to read from
- * @dest_ctx: the context to be written to
- * @ce_info: a description of the struct to be filled
- */
-static void
-ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
-{
- u8 src_byte, dest_byte, mask;
- u8 *from, *dest;
- u16 shift_width;
-
- /* copy from the next struct field */
- from = src_ctx + ce_info->offset;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
- mask = (u8)(BIT(ce_info->width) - 1);
-
- src_byte = *from;
- src_byte &= mask;
-
- /* shift to correct alignment */
- mask <<= shift_width;
- src_byte <<= shift_width;
-
- /* get the current bits from the target bit string */
- dest = dest_ctx + (ce_info->lsb / 8);
-
- ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
-
- dest_byte &= ~mask; /* get the bits not changing */
- dest_byte |= src_byte; /* add in the new bits */
-
- /* put it all back */
- ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
-}
-
-/**
- * ice_write_word - write a word to a packed context structure
- * @src_ctx: the context structure to read from
- * @dest_ctx: the context to be written to
- * @ce_info: a description of the struct to be filled
- */
-static void
-ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
-{
- u16 src_word, mask;
- __le16 dest_word;
- u8 *from, *dest;
- u16 shift_width;
-
- /* copy from the next struct field */
- from = src_ctx + ce_info->offset;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
- mask = BIT(ce_info->width) - 1;
-
- /* don't swizzle the bits until after the mask because the mask bits
- * will be in a different bit position on big endian machines
- */
- src_word = *(u16 *)from;
- src_word &= mask;
-
- /* shift to correct alignment */
- mask <<= shift_width;
- src_word <<= shift_width;
-
- /* get the current bits from the target bit string */
- dest = dest_ctx + (ce_info->lsb / 8);
-
- ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
-
- dest_word &= ~(CPU_TO_LE16(mask)); /* get the bits not changing */
- dest_word |= CPU_TO_LE16(src_word); /* add in the new bits */
-
- /* put it all back */
- ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
-}
-
-/**
- * ice_write_dword - write a dword to a packed context structure
- * @src_ctx: the context structure to read from
- * @dest_ctx: the context to be written to
- * @ce_info: a description of the struct to be filled
- */
-static void
-ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
-{
- u32 src_dword, mask;
- __le32 dest_dword;
- u8 *from, *dest;
- u16 shift_width;
-
- /* copy from the next struct field */
- from = src_ctx + ce_info->offset;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
-
- /* if the field width is exactly 32 on an x86 machine, then the shift
- * operation will not work because the SHL instructions count is masked
- * to 5 bits so the shift will do nothing
- */
- if (ce_info->width < 32)
- mask = BIT(ce_info->width) - 1;
- else
- mask = (u32)~0;
-
- /* don't swizzle the bits until after the mask because the mask bits
- * will be in a different bit position on big endian machines
- */
- src_dword = *(u32 *)from;
- src_dword &= mask;
-
- /* shift to correct alignment */
- mask <<= shift_width;
- src_dword <<= shift_width;
-
- /* get the current bits from the target bit string */
- dest = dest_ctx + (ce_info->lsb / 8);
-
- ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
-
- dest_dword &= ~(CPU_TO_LE32(mask)); /* get the bits not changing */
- dest_dword |= CPU_TO_LE32(src_dword); /* add in the new bits */
-
- /* put it all back */
- ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
-}
-
-/**
- * ice_write_qword - write a qword to a packed context structure
- * @src_ctx: the context structure to read from
- * @dest_ctx: the context to be written to
- * @ce_info: a description of the struct to be filled
- */
-static void
-ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
-{
- u64 src_qword, mask;
- __le64 dest_qword;
- u8 *from, *dest;
- u16 shift_width;
-
- /* copy from the next struct field */
- from = src_ctx + ce_info->offset;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
-
- /* if the field width is exactly 64 on an x86 machine, then the shift
- * operation will not work because the SHL instructions count is masked
- * to 6 bits so the shift will do nothing
- */
- if (ce_info->width < 64)
- mask = BIT_ULL(ce_info->width) - 1;
- else
- mask = (u64)~0;
-
- /* don't swizzle the bits until after the mask because the mask bits
- * will be in a different bit position on big endian machines
- */
- src_qword = *(u64 *)from;
- src_qword &= mask;
-
- /* shift to correct alignment */
- mask <<= shift_width;
- src_qword <<= shift_width;
-
- /* get the current bits from the target bit string */
- dest = dest_ctx + (ce_info->lsb / 8);
-
- ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
-
- dest_qword &= ~(CPU_TO_LE64(mask)); /* get the bits not changing */
- dest_qword |= CPU_TO_LE64(src_qword); /* add in the new bits */
-
- /* put it all back */
- ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
-}
-
-/**
- * ice_set_ctx - set context bits in packed structure
- * @hw: pointer to the hardware structure
- * @src_ctx: pointer to a generic non-packed context structure
- * @dest_ctx: pointer to memory for the packed structure
- * @ce_info: a description of the structure to be transformed
- */
-enum ice_status
-ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
- const struct ice_ctx_ele *ce_info)
-{
- int f;
-
- for (f = 0; ce_info[f].width; f++) {
- /* We have to deal with each element of the FW response
- * using the correct size so that we are correct regardless
- * of the endianness of the machine.
- */
- if (ce_info[f].width > (ce_info[f].size_of * BITS_PER_BYTE)) {
- ice_debug(hw, ICE_DBG_QCTX, "Field %d width of %d bits larger than size of %d byte(s) ... skipping write\n",
- f, ce_info[f].width, ce_info[f].size_of);
- continue;
- }
- switch (ce_info[f].size_of) {
- case sizeof(u8):
- ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
- break;
- case sizeof(u16):
- ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
- break;
- case sizeof(u32):
- ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
- break;
- case sizeof(u64):
- ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
- break;
- default:
- return ICE_ERR_INVAL_SIZE;
- }
- }
-
- return ICE_SUCCESS;
-}
-
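ice_set_ctx() walks a zero-terminated descriptor table and packs each field of the unpacked struct into the HW context buffer via the size-matched helper. A hedged sketch of how such a table is wired up — the context struct, field values, and buffer size are hypothetical; only the ice_ctx_ele member names (offset, size_of, width, lsb) are taken from the code:

struct my_ctx {                 /* hypothetical unpacked context */
	u64 base;               /* 57-bit base address           */
	u16 qlen;               /* 13-bit ring length            */
};

static const struct ice_ctx_ele my_ctx_info[] = {
	{ .offset = offsetof(struct my_ctx, base),
	  .size_of = sizeof(u64), .width = 57, .lsb = 0 },
	{ .offset = offsetof(struct my_ctx, qlen),
	  .size_of = sizeof(u16), .width = 13, .lsb = 89 },
	{ .width = 0 }          /* width == 0 terminates the loop */
};

static enum ice_status write_my_ctx(struct ice_hw *hw, u8 *packed)
{
	struct my_ctx ctx = { .base = 0x1234, .qlen = 512 };

	/* a field wider than its size_of yields ICE_ERR_INVAL_SIZE */
	return ice_set_ctx(hw, (u8 *)&ctx, packed, my_ctx_info);
}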
-/**
- * ice_read_byte - read context byte into struct
+ * ice_write_byte - write a byte to a packed context structure
* @src_ctx: the context structure to read from
* @dest_ctx: the context to be written to
* @ce_info: a description of the struct to be filled
*/
static void
-ice_read_byte(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
{
- u8 dest_byte, mask;
- u8 *src, *target;
+ u8 src_byte, dest_byte, mask;
+ u8 *from, *dest;
u16 shift_width;
+ /* copy from the next struct field */
+ from = src_ctx + ce_info->offset;
+
/* prepare the bits and mask */
shift_width = ce_info->lsb % 8;
mask = (u8)(BIT(ce_info->width) - 1);
+ src_byte = *from;
+ src_byte &= mask;
+
/* shift to correct alignment */
mask <<= shift_width;
+ src_byte <<= shift_width;
- /* get the current bits from the src bit string */
- src = src_ctx + (ce_info->lsb / 8);
-
- ice_memcpy(&dest_byte, src, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
-
- dest_byte &= ~(mask);
+ /* get the current bits from the target bit string */
+ dest = dest_ctx + (ce_info->lsb / 8);
- dest_byte >>= shift_width;
+ ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
- /* get the address from the struct field */
- target = dest_ctx + ce_info->offset;
+ dest_byte &= ~mask; /* get the bits not changing */
+ dest_byte |= src_byte; /* add in the new bits */
- /* put it back in the struct */
- ice_memcpy(target, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
+ /* put it all back */
+ ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
}
/**
- * ice_read_word - read context word into struct
+ * ice_write_word - write a word to a packed context structure
* @src_ctx: the context structure to read from
* @dest_ctx: the context to be written to
* @ce_info: a description of the struct to be filled
*/
static void
-ice_read_word(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
{
- u16 dest_word, mask;
- u8 *src, *target;
- __le16 src_word;
+ u16 src_word, mask;
+ __le16 dest_word;
+ u8 *from, *dest;
u16 shift_width;
+ /* copy from the next struct field */
+ from = src_ctx + ce_info->offset;
+
/* prepare the bits and mask */
shift_width = ce_info->lsb % 8;
mask = BIT(ce_info->width) - 1;
+ /* don't swizzle the bits until after the mask because the mask bits
+ * will be in a different bit position on big endian machines
+ */
+ src_word = *(u16 *)from;
+ src_word &= mask;
+
/* shift to correct alignment */
mask <<= shift_width;
+ src_word <<= shift_width;
- /* get the current bits from the src bit string */
- src = src_ctx + (ce_info->lsb / 8);
-
- ice_memcpy(&src_word, src, sizeof(src_word), ICE_DMA_TO_NONDMA);
-
- /* the data in the memory is stored as little endian so mask it
- * correctly
- */
- src_word &= ~(CPU_TO_LE16(mask));
-
- /* get the data back into host order before shifting */
- dest_word = LE16_TO_CPU(src_word);
+ /* get the current bits from the target bit string */
+ dest = dest_ctx + (ce_info->lsb / 8);
- dest_word >>= shift_width;
+ ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
- /* get the address from the struct field */
- target = dest_ctx + ce_info->offset;
+ dest_word &= ~(CPU_TO_LE16(mask)); /* get the bits not changing */
+ dest_word |= CPU_TO_LE16(src_word); /* add in the new bits */
- /* put it back in the struct */
- ice_memcpy(target, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
+ /* put it all back */
+ ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
}
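The ordering constraint called out above ("don't swizzle the bits until after the mask") is the subtle part of these helpers: the mask and shift are computed in host byte order, and only the final merge converts both operands to the buffer's little-endian layout. A standalone sketch of the same word-sized merge, assuming width + shift <= 16, with htole16() standing in for the driver's CPU_TO_LE16():

#include <endian.h>
#include <stdint.h>

static uint16_t pack_word_le(uint16_t dest_le, uint16_t src,
			     unsigned int width, unsigned int shift)
{
	uint16_t mask = (uint16_t)((1U << width) - 1);

	src  = (uint16_t)((src & mask) << shift);  /* host-order math */
	mask = (uint16_t)(mask << shift);

	dest_le &= (uint16_t)~htole16(mask);  /* keep the untouched bits */
	dest_le |= htole16(src);              /* merge in the new field  */
	return dest_le;
}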
/**
- * ice_read_dword - read context dword into struct
+ * ice_write_dword - write a dword to a packed context structure
* @src_ctx: the context structure to read from
* @dest_ctx: the context to be written to
* @ce_info: a description of the struct to be filled
*/
static void
-ice_read_dword(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
{
- u32 dest_dword, mask;
- __le32 src_dword;
- u8 *src, *target;
+ u32 src_dword, mask;
+ __le32 dest_dword;
+ u8 *from, *dest;
u16 shift_width;
+ /* copy from the next struct field */
+ from = src_ctx + ce_info->offset;
+
/* prepare the bits and mask */
shift_width = ce_info->lsb % 8;
@@ -3973,45 +2754,45 @@ ice_read_dword(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
else
mask = (u32)~0;
+ /* don't swizzle the bits until after the mask because the mask bits
+ * will be in a different bit position on big endian machines
+ */
+ src_dword = *(u32 *)from;
+ src_dword &= mask;
+
/* shift to correct alignment */
mask <<= shift_width;
+ src_dword <<= shift_width;
- /* get the current bits from the src bit string */
- src = src_ctx + (ce_info->lsb / 8);
-
- ice_memcpy(&src_dword, src, sizeof(src_dword), ICE_DMA_TO_NONDMA);
-
- /* the data in the memory is stored as little endian so mask it
- * correctly
- */
- src_dword &= ~(CPU_TO_LE32(mask));
-
- /* get the data back into host order before shifting */
- dest_dword = LE32_TO_CPU(src_dword);
+ /* get the current bits from the target bit string */
+ dest = dest_ctx + (ce_info->lsb / 8);
- dest_dword >>= shift_width;
+ ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
- /* get the address from the struct field */
- target = dest_ctx + ce_info->offset;
+ dest_dword &= ~(CPU_TO_LE32(mask)); /* get the bits not changing */
+ dest_dword |= CPU_TO_LE32(src_dword); /* add in the new bits */
- /* put it back in the struct */
- ice_memcpy(target, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
+ /* put it all back */
+ ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
}
/**
- * ice_read_qword - read context qword into struct
+ * ice_write_qword - write a qword to a packed context structure
* @src_ctx: the context structure to read from
* @dest_ctx: the context to be written to
* @ce_info: a description of the struct to be filled
*/
static void
-ice_read_qword(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
{
- u64 dest_qword, mask;
- __le64 src_qword;
- u8 *src, *target;
+ u64 src_qword, mask;
+ __le64 dest_qword;
+ u8 *from, *dest;
u16 shift_width;
+ /* copy from the next struct field */
+ from = src_ctx + ce_info->offset;
+
/* prepare the bits and mask */
shift_width = ce_info->lsb % 8;
@@ -4024,59 +2805,66 @@ ice_read_qword(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
else
mask = (u64)~0;
+ /* don't swizzle the bits until after the mask because the mask bits
+ * will be in a different bit position on big endian machines
+ */
+ src_qword = *(u64 *)from;
+ src_qword &= mask;
+
/* shift to correct alignment */
mask <<= shift_width;
+ src_qword <<= shift_width;
- /* get the current bits from the src bit string */
- src = src_ctx + (ce_info->lsb / 8);
-
- ice_memcpy(&src_qword, src, sizeof(src_qword), ICE_DMA_TO_NONDMA);
-
- /* the data in the memory is stored as little endian so mask it
- * correctly
- */
- src_qword &= ~(CPU_TO_LE64(mask));
-
- /* get the data back into host order before shifting */
- dest_qword = LE64_TO_CPU(src_qword);
+ /* get the current bits from the target bit string */
+ dest = dest_ctx + (ce_info->lsb / 8);
- dest_qword >>= shift_width;
+ ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
- /* get the address from the struct field */
- target = dest_ctx + ce_info->offset;
+ dest_qword &= ~(CPU_TO_LE64(mask)); /* get the bits not changing */
+ dest_qword |= CPU_TO_LE64(src_qword); /* add in the new bits */
- /* put it back in the struct */
- ice_memcpy(target, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
+ /* put it all back */
+ ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
}
/**
- * ice_get_ctx - extract context bits from a packed structure
- * @src_ctx: pointer to a generic packed context structure
- * @dest_ctx: pointer to a generic non-packed context structure
- * @ce_info: a description of the structure to be read from
+ * ice_set_ctx - set context bits in packed structure
+ * @hw: pointer to the hardware structure
+ * @src_ctx: pointer to a generic non-packed context structure
+ * @dest_ctx: pointer to memory for the packed structure
+ * @ce_info: a description of the structure to be transformed
*/
enum ice_status
-ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
+ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
+ const struct ice_ctx_ele *ce_info)
{
int f;
for (f = 0; ce_info[f].width; f++) {
+ /* We have to deal with each element of the FW response
+ * using the correct size so that we are correct regardless
+ * of the endianness of the machine.
+ */
+ if (ce_info[f].width > (ce_info[f].size_of * BITS_PER_BYTE)) {
+ ice_debug(hw, ICE_DBG_QCTX, "Field %d width of %d bits larger than size of %d byte(s) ... skipping write\n",
+ f, ce_info[f].width, ce_info[f].size_of);
+ continue;
+ }
switch (ce_info[f].size_of) {
- case 1:
- ice_read_byte(src_ctx, dest_ctx, &ce_info[f]);
+ case sizeof(u8):
+ ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
break;
- case 2:
- ice_read_word(src_ctx, dest_ctx, &ce_info[f]);
+ case sizeof(u16):
+ ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
break;
- case 4:
- ice_read_dword(src_ctx, dest_ctx, &ce_info[f]);
+ case sizeof(u32):
+ ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
break;
- case 8:
- ice_read_qword(src_ctx, dest_ctx, &ce_info[f]);
+ case sizeof(u64):
+ ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
break;
default:
- /* nothing to do, just keep going */
- break;
+ return ICE_ERR_INVAL_SIZE;
}
}
@@ -4350,224 +3138,6 @@ ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap,
ICE_SCHED_NODE_OWNER_LAN);
}
-/**
- * ice_is_main_vsi - checks whether the VSI is main VSI
- * @hw: pointer to the HW struct
- * @vsi_handle: VSI handle
- *
- * Checks whether the VSI is the main VSI (the first PF VSI created on
- * given PF).
- */
-static bool ice_is_main_vsi(struct ice_hw *hw, u16 vsi_handle)
-{
- return vsi_handle == ICE_MAIN_VSI_HANDLE && hw->vsi_ctx[vsi_handle];
-}
-
-/**
- * ice_replay_pre_init - replay pre initialization
- * @hw: pointer to the HW struct
- * @sw: pointer to switch info struct for which function initializes filters
- *
- * Initializes required config data for VSI, FD, ACL, and RSS before replay.
- */
-static enum ice_status
-ice_replay_pre_init(struct ice_hw *hw, struct ice_switch_info *sw)
-{
- enum ice_status status;
- u8 i;
-
- /* Delete old entries from replay filter list head if there is any */
- ice_rm_sw_replay_rule_info(hw, sw);
- /* In start of replay, move entries into replay_rules list, it
- * will allow adding rules entries back to filt_rules list,
- * which is operational list.
- */
- for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
- LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
- &sw->recp_list[i].filt_replay_rules);
- ice_sched_replay_agg_vsi_preinit(hw);
-
- status = ice_sched_replay_root_node_bw(hw->port_info);
- if (status)
- return status;
-
- return ice_sched_replay_tc_node_bw(hw->port_info);
-}
-
-/**
- * ice_replay_vsi - replay VSI configuration
- * @hw: pointer to the HW struct
- * @vsi_handle: driver VSI handle
- *
- * Restore all VSI configuration after reset. It is required to call this
- * function with main VSI first.
- */
-enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
-{
- struct ice_switch_info *sw = hw->switch_info;
- struct ice_port_info *pi = hw->port_info;
- enum ice_status status;
-
- if (!ice_is_vsi_valid(hw, vsi_handle))
- return ICE_ERR_PARAM;
-
- /* Replay pre-initialization if there is any */
- if (ice_is_main_vsi(hw, vsi_handle)) {
- status = ice_replay_pre_init(hw, sw);
- if (status)
- return status;
- }
- /* Replay per VSI all RSS configurations */
- status = ice_replay_rss_cfg(hw, vsi_handle);
- if (status)
- return status;
- /* Replay per VSI all filters */
- status = ice_replay_vsi_all_fltr(hw, pi, vsi_handle);
- if (!status)
- status = ice_replay_vsi_agg(hw, vsi_handle);
- return status;
-}
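These replay helpers assumed a strict call order: ice_replay_vsi() on the main VSI first (which runs ice_replay_pre_init()), then every other VSI, then ice_replay_post() for cleanup. A hedged sketch of that sequence as a caller would have written it before this removal — vsi_handles/num_vsis are hypothetical bookkeeping, not driver state:

static enum ice_status
replay_all_vsis(struct ice_hw *hw, const u16 *vsi_handles, u16 num_vsis)
{
	enum ice_status status;
	u16 i;

	/* the main VSI must be replayed first */
	status = ice_replay_vsi(hw, ICE_MAIN_VSI_HANDLE);
	if (status)
		return status;

	for (i = 0; i < num_vsis; i++) {
		if (vsi_handles[i] == ICE_MAIN_VSI_HANDLE)
			continue;
		status = ice_replay_vsi(hw, vsi_handles[i]);
		if (status)
			return status;
	}

	ice_replay_post(hw); /* drop replay rule lists, replay aggregators */
	return ICE_SUCCESS;
}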
-
-/**
- * ice_replay_post - post replay configuration cleanup
- * @hw: pointer to the HW struct
- *
- * Post replay cleanup.
- */
-void ice_replay_post(struct ice_hw *hw)
-{
- /* Delete old entries from replay filter list head */
- ice_rm_all_sw_replay_rule_info(hw);
- ice_sched_replay_agg(hw);
-}
-
-/**
- * ice_stat_update40 - read 40 bit stat from the chip and update stat values
- * @hw: ptr to the hardware info
- * @reg: offset of 64 bit HW register to read from
- * @prev_stat_loaded: bool to specify if previous stats are loaded
- * @prev_stat: ptr to previous loaded stat value
- * @cur_stat: ptr to current stat value
- */
-void
-ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
- u64 *prev_stat, u64 *cur_stat)
-{
- u64 new_data = rd64(hw, reg) & (BIT_ULL(40) - 1);
-
- /* device stats are not reset at PFR, they likely will not be zeroed
- * when the driver starts. Thus, save the value from the first read
- * without adding to the statistic value so that we report stats which
- * count up from zero.
- */
- if (!prev_stat_loaded) {
- *prev_stat = new_data;
- return;
- }
-
- /* Calculate the difference between the new and old values, and then
- * add it to the software stat value.
- */
- if (new_data >= *prev_stat)
- *cur_stat += new_data - *prev_stat;
- else
- /* to manage the potential roll-over */
- *cur_stat += (new_data + BIT_ULL(40)) - *prev_stat;
-
- /* Update the previously stored value to prepare for next read */
- *prev_stat = new_data;
-}
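The rollover branch deserves a worked example: the device counters are 40 bits wide, so when the raw value wraps, new_data < *prev_stat and the true delta is (new_data + 2^40) - *prev_stat. A standalone check of both branches, not driver code:

#include <assert.h>
#include <stdint.h>

#define CNT40_WRAP (1ULL << 40)

static uint64_t delta40(uint64_t prev, uint64_t cur)
{
	return (cur >= prev) ? cur - prev : (cur + CNT40_WRAP) - prev;
}

int main(void)
{
	assert(delta40(100, 250) == 150);           /* monotonic case  */
	assert(delta40(CNT40_WRAP - 10, 5) == 15);  /* wrapped: 10 + 5 */
	return 0;
}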
-
-/**
- * ice_stat_update32 - read 32 bit stat from the chip and update stat values
- * @hw: ptr to the hardware info
- * @reg: offset of HW register to read from
- * @prev_stat_loaded: bool to specify if previous stats are loaded
- * @prev_stat: ptr to previous loaded stat value
- * @cur_stat: ptr to current stat value
- */
-void
-ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
- u64 *prev_stat, u64 *cur_stat)
-{
- u32 new_data;
-
- new_data = rd32(hw, reg);
-
- /* device stats are not reset at PFR, they likely will not be zeroed
- * when the driver starts. Thus, save the value from the first read
- * without adding to the statistic value so that we report stats which
- * count up from zero.
- */
- if (!prev_stat_loaded) {
- *prev_stat = new_data;
- return;
- }
-
- /* Calculate the difference between the new and old values, and then
- * add it to the software stat value.
- */
- if (new_data >= *prev_stat)
- *cur_stat += new_data - *prev_stat;
- else
- /* to manage the potential roll-over */
- *cur_stat += (new_data + BIT_ULL(32)) - *prev_stat;
-
- /* Update the previously stored value to prepare for next read */
- *prev_stat = new_data;
-}
-
-/**
- * ice_stat_update_repc - read GLV_REPC stats from chip and update stat values
- * @hw: ptr to the hardware info
- * @vsi_handle: VSI handle
- * @prev_stat_loaded: bool to specify if the previous stat values are loaded
- * @cur_stats: ptr to current stats structure
- *
- * The GLV_REPC statistic register actually tracks two 16bit statistics, and
- * thus cannot be read using the normal ice_stat_update32 function.
- *
- * Read the GLV_REPC register associated with the given VSI, and update the
- * rx_no_desc and rx_error values in the ice_eth_stats structure.
- *
- * Because the statistics in GLV_REPC stick at 0xFFFF, the register must be
- * cleared each time it's read.
- *
- * Note that the GLV_RDPC register also counts the causes that would trigger
- * GLV_REPC. However, it does not give the finer grained detail about why the
- * packets are being dropped. The GLV_REPC values can be used to distinguish
- * whether Rx packets are dropped due to errors or due to no available
- * descriptors.
- */
-void
-ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
- struct ice_eth_stats *cur_stats)
-{
- u16 vsi_num, no_desc, error_cnt;
- u32 repc;
-
- if (!ice_is_vsi_valid(hw, vsi_handle))
- return;
-
- vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
-
- /* If we haven't loaded stats yet, just clear the current value */
- if (!prev_stat_loaded) {
- wr32(hw, GLV_REPC(vsi_num), 0);
- return;
- }
-
- repc = rd32(hw, GLV_REPC(vsi_num));
- no_desc = (repc & GLV_REPC_NO_DESC_CNT_M) >> GLV_REPC_NO_DESC_CNT_S;
- error_cnt = (repc & GLV_REPC_ERROR_CNT_M) >> GLV_REPC_ERROR_CNT_S;
-
- /* Clear the count by writing to the stats register */
- wr32(hw, GLV_REPC(vsi_num), 0);
-
- cur_stats->rx_no_desc += no_desc;
- cur_stats->rx_errors += error_cnt;
-}
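GLV_REPC is the odd register out because one 32-bit read yields two independent 16-bit counters. A standalone sketch of the unpacking — the mask and shift values below are illustrative placements, not the real GLV_REPC_*_M/_S definitions:

#include <stdint.h>

#define NO_DESC_CNT_M 0x0000FFFFu   /* assumed low half  */
#define NO_DESC_CNT_S 0
#define ERROR_CNT_M   0xFFFF0000u   /* assumed high half */
#define ERROR_CNT_S   16

static void
split_repc(uint32_t repc, uint16_t *no_desc, uint16_t *errors)
{
	*no_desc = (uint16_t)((repc & NO_DESC_CNT_M) >> NO_DESC_CNT_S);
	*errors  = (uint16_t)((repc & ERROR_CNT_M) >> ERROR_CNT_S);
}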
-
/**
* ice_sched_query_elem - query element information from HW
* @hw: pointer to the HW struct
@@ -4711,21 +3281,6 @@ ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
return status;
}
-/**
- * ice_is_phy_caps_an_enabled - check if PHY capabilities autoneg is enabled
- * @caps: get PHY capability data
- */
-bool ice_is_phy_caps_an_enabled(struct ice_aqc_get_phy_caps_data *caps)
-{
- if (caps->caps & ICE_AQC_PHY_AN_MODE ||
- caps->low_power_ctrl_an & (ICE_AQC_PHY_AN_EN_CLAUSE28 |
- ICE_AQC_PHY_AN_EN_CLAUSE73 |
- ICE_AQC_PHY_AN_EN_CLAUSE37))
- return true;
-
- return false;
-}
-
/**
* ice_aq_set_lldp_mib - Set the LLDP MIB
* @hw: pointer to the HW struct
@@ -4758,50 +3313,3 @@ ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
}
-
-/**
- * ice_fw_supports_lldp_fltr_ctrl - check NVM version supports lldp_fltr_ctrl
- * @hw: pointer to HW struct
- */
-bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw)
-{
- if (hw->mac_type != ICE_MAC_E810)
- return false;
-
- if (hw->api_maj_ver == ICE_FW_API_LLDP_FLTR_MAJ) {
- if (hw->api_min_ver > ICE_FW_API_LLDP_FLTR_MIN)
- return true;
- if (hw->api_min_ver == ICE_FW_API_LLDP_FLTR_MIN &&
- hw->api_patch >= ICE_FW_API_LLDP_FLTR_PATCH)
- return true;
- } else if (hw->api_maj_ver > ICE_FW_API_LLDP_FLTR_MAJ) {
- return true;
- }
- return false;
-}
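The version check above is a lexicographic >= on the (major, minor, patch) triple, gated on the E810 MAC type. Reduced to a standalone predicate on plain integers, as a sketch only:

#include <stdbool.h>
#include <stdint.h>

static bool
api_ver_ge(uint8_t maj, uint8_t min, uint8_t patch,
	   uint8_t req_maj, uint8_t req_min, uint8_t req_patch)
{
	if (maj != req_maj)
		return maj > req_maj;
	if (min != req_min)
		return min > req_min;
	return patch >= req_patch;
}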
-
-/**
- * ice_lldp_fltr_add_remove - add or remove a LLDP Rx switch filter
- * @hw: pointer to HW struct
- * @vsi_num: absolute HW index for VSI
- * @add: boolean for if adding or removing a filter
- */
-enum ice_status
-ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add)
-{
- struct ice_aqc_lldp_filter_ctrl *cmd;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.lldp_filter_ctrl;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_filter_ctrl);
-
- if (add)
- cmd->cmd_flags = ICE_AQC_LLDP_FILTER_ACTION_ADD;
- else
- cmd->cmd_flags = ICE_AQC_LLDP_FILTER_ACTION_DELETE;
-
- cmd->vsi_num = CPU_TO_LE16(vsi_num);
-
- return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
-}
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 8c16c7a024..1cf03e52e7 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -21,7 +21,6 @@ enum ice_fw_modes {
enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw);
void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw);
enum ice_status ice_init_hw(struct ice_hw *hw);
-void ice_deinit_hw(struct ice_hw *hw);
enum ice_status ice_check_reset(struct ice_hw *hw);
enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
@@ -32,8 +31,6 @@ void ice_destroy_all_ctrlq(struct ice_hw *hw);
enum ice_status
ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
struct ice_rq_event_info *e, u16 *pending);
-enum ice_status
-ice_get_link_status(struct ice_port_info *pi, bool *link_up);
enum ice_status ice_update_link_info(struct ice_port_info *pi);
enum ice_status
ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
@@ -55,8 +52,6 @@ void ice_clear_pxe_mode(struct ice_hw *hw);
enum ice_status ice_get_caps(struct ice_hw *hw);
-void ice_set_safe_mode_caps(struct ice_hw *hw);
-
/* Define a macro that will align a pointer to point to the next memory address
* that falls on the given power of 2 (i.e., 2, 4, 8, 16, 32, 64...). For
* example, given the variable pointer = 0x1006, then after the following call:
@@ -72,18 +67,6 @@ enum ice_status
ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
u32 rxq_index);
enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
-enum ice_status
-ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
-enum ice_status
-ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
- struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
- u32 tx_cmpltnq_index);
-enum ice_status
-ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
-enum ice_status
-ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
- struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
- u32 tx_drbell_q_index);
enum ice_status
ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params);
@@ -99,13 +82,6 @@ enum ice_status
ice_aq_add_lan_txq(struct ice_hw *hw, u8 count,
struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_move_recfg_lan_txq(struct ice_hw *hw, u8 num_qs, bool is_move,
- bool is_tc_change, bool subseq_call, bool flush_pipe,
- u8 timeout, u32 *blocked_cgds,
- struct ice_aqc_move_txqs_data *buf, u16 buf_size,
- u8 *txqs_moved, struct ice_sq_cd *cd);
-
bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
@@ -126,9 +102,6 @@ enum ice_status
ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
struct ice_aqc_get_phy_caps_data *caps,
struct ice_sq_cd *cd);
-void
-ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
- u16 link_speeds_bitmap);
enum ice_status
ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
struct ice_sq_cd *cd);
@@ -141,27 +114,11 @@ bool ice_fw_supports_link_override(struct ice_hw *hw);
enum ice_status
ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
struct ice_port_info *pi);
-bool ice_is_phy_caps_an_enabled(struct ice_aqc_get_phy_caps_data *caps);
-
-enum ice_fc_mode ice_caps_to_fc_mode(u8 caps);
-enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options);
-enum ice_status
-ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
- bool ena_auto_link_update);
-bool
-ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *caps,
- struct ice_aqc_set_phy_cfg_data *cfg);
void
ice_copy_phy_caps_to_cfg(struct ice_port_info *pi,
struct ice_aqc_get_phy_caps_data *caps,
struct ice_aqc_set_phy_cfg_data *cfg);
enum ice_status
-ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
- enum ice_fec_mode fec);
-enum ice_status
-ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
- struct ice_sq_cd *cd);
-enum ice_status
ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd);
enum ice_status
ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
@@ -170,19 +127,6 @@ enum ice_status
ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
struct ice_sq_cd *cd);
enum ice_status
-ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
-
-enum ice_status
-ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
- struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
- u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length,
- bool write, struct ice_sq_cd *cd);
-
-enum ice_status
-ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info);
-enum ice_status
ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
u16 *q_handle, u16 *q_ids, u32 *q_teids,
enum ice_disq_rst_src rst_src, u16 vmvf_num,
@@ -194,19 +138,8 @@ enum ice_status
ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
u8 num_qgrps, struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
struct ice_sq_cd *cd);
-enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
-void ice_replay_post(struct ice_hw *hw);
struct ice_q_ctx *
ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle);
-void
-ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
- u64 *prev_stat, u64 *cur_stat);
-void
-ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
- u64 *prev_stat, u64 *cur_stat);
-void
-ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
- struct ice_eth_stats *cur_stats);
enum ice_fw_modes ice_get_fw_mode(struct ice_hw *hw);
void ice_print_rollback_msg(struct ice_hw *hw);
enum ice_status
@@ -215,7 +148,4 @@ ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
enum ice_status
ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
struct ice_sq_cd *cd);
-bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw);
-enum ice_status
-ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add);
#endif /* _ICE_COMMON_H_ */
diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 351038528b..09b5d89bc0 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -109,32 +109,6 @@ ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
}
-/**
- * ice_aq_start_lldp
- * @hw: pointer to the HW struct
- * @persist: True if Start of LLDP Agent needs to be persistent across reboots
- * @cd: pointer to command details structure or NULL
- *
- * Start the embedded LLDP Agent on all ports. (0x0A06)
- */
-enum ice_status
-ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd)
-{
- struct ice_aqc_lldp_start *cmd;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.lldp_start;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_start);
-
- cmd->command = ICE_AQ_LLDP_AGENT_START;
-
- if (persist)
- cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_ENA;
-
- return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
/**
* ice_get_dcbx_status
* @hw: pointer to the HW struct
@@ -672,49 +646,6 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
return ret;
}
-/**
- * ice_aq_start_stop_dcbx - Start/Stop DCBX service in FW
- * @hw: pointer to the HW struct
- * @start_dcbx_agent: True if DCBX Agent needs to be started
- * False if DCBX Agent needs to be stopped
- * @dcbx_agent_status: FW indicates back the DCBX agent status
- * True if DCBX Agent is active
- * False if DCBX Agent is stopped
- * @cd: pointer to command details structure or NULL
- *
- * Start/Stop the embedded dcbx Agent. In case that this wrapper function
- * returns ICE_SUCCESS, caller will need to check if FW returns back the same
- * value as stated in dcbx_agent_status, and react accordingly. (0x0A09)
- */
-enum ice_status
-ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
- bool *dcbx_agent_status, struct ice_sq_cd *cd)
-{
- struct ice_aqc_lldp_stop_start_specific_agent *cmd;
- enum ice_status status;
- struct ice_aq_desc desc;
- u16 opcode;
-
- cmd = &desc.params.lldp_agent_ctrl;
-
- opcode = ice_aqc_opc_lldp_stop_start_specific_agent;
-
- ice_fill_dflt_direct_cmd_desc(&desc, opcode);
-
- if (start_dcbx_agent)
- cmd->command = ICE_AQC_START_STOP_AGENT_START_DCBX;
-
- status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-
- *dcbx_agent_status = false;
-
- if (status == ICE_SUCCESS &&
- cmd->command == ICE_AQC_START_STOP_AGENT_START_DCBX)
- *dcbx_agent_status = true;
-
- return status;
-}
-
/**
* ice_aq_get_cee_dcb_cfg
* @hw: pointer to the HW struct
@@ -969,34 +900,6 @@ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
return ret;
}
-/**
- * ice_cfg_lldp_mib_change
- * @hw: pointer to the HW struct
- * @ena_mib: enable/disable MIB change event
- *
- * Configure (disable/enable) MIB
- */
-enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib)
-{
- struct ice_qos_cfg *qos_cfg = &hw->port_info->qos_cfg;
- enum ice_status ret;
-
- if (!hw->func_caps.common_cap.dcb)
- return ICE_ERR_NOT_SUPPORTED;
-
- /* Get DCBX status */
- qos_cfg->dcbx_status = ice_get_dcbx_status(hw);
-
- if (qos_cfg->dcbx_status == ICE_DCBX_STATUS_DIS)
- return ICE_ERR_NOT_READY;
-
- ret = ice_aq_cfg_lldp_mib_change(hw, ena_mib, NULL);
- if (!ret)
- qos_cfg->is_sw_lldp = !ena_mib;
-
- return ret;
-}
-
/**
* ice_add_ieee_ets_common_tlv
* @buf: Data buffer to be populated with ice_dcb_ets_cfg data
@@ -1269,45 +1172,6 @@ void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg)
*miblen = offset;
}
-/**
- * ice_set_dcb_cfg - Set the local LLDP MIB to FW
- * @pi: port information structure
- *
- * Set DCB configuration to the Firmware
- */
-enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi)
-{
- u8 mib_type, *lldpmib = NULL;
- struct ice_dcbx_cfg *dcbcfg;
- enum ice_status ret;
- struct ice_hw *hw;
- u16 miblen;
-
- if (!pi)
- return ICE_ERR_PARAM;
-
- hw = pi->hw;
-
- /* update the HW local config */
- dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
- /* Allocate the LLDPDU */
- lldpmib = (u8 *)ice_malloc(hw, ICE_LLDPDU_SIZE);
- if (!lldpmib)
- return ICE_ERR_NO_MEMORY;
-
- mib_type = SET_LOCAL_MIB_TYPE_LOCAL_MIB;
- if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING)
- mib_type |= SET_LOCAL_MIB_TYPE_CEE_NON_WILLING;
-
- ice_dcb_cfg_to_lldp(lldpmib, &miblen, dcbcfg);
- ret = ice_aq_set_lldp_mib(hw, mib_type, (void *)lldpmib, miblen,
- NULL);
-
- ice_free(hw, lldpmib);
-
- return ret;
-}
-
/**
* ice_aq_query_port_ets - query port ETS configuration
* @pi: port information structure
@@ -1400,28 +1264,3 @@ ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
}
return status;
}
-
-/**
- * ice_query_port_ets - query port ETS configuration
- * @pi: port information structure
- * @buf: pointer to buffer
- * @buf_size: buffer size in bytes
- * @cd: pointer to command details structure or NULL
- *
- * query current port ETS configuration and update the
- * SW DB with the TC changes
- */
-enum ice_status
-ice_query_port_ets(struct ice_port_info *pi,
- struct ice_aqc_port_ets_elem *buf, u16 buf_size,
- struct ice_sq_cd *cd)
-{
- enum ice_status status;
-
- ice_acquire_lock(&pi->sched_lock);
- status = ice_aq_query_port_ets(pi, buf, buf_size, cd);
- if (!status)
- status = ice_update_port_tc_tree_cfg(pi, buf);
- ice_release_lock(&pi->sched_lock);
- return status;
-}
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
index 8f0e09d50a..157845d592 100644
--- a/drivers/net/ice/base/ice_dcb.h
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -186,14 +186,9 @@ enum ice_status
ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
struct ice_dcbx_cfg *dcbcfg);
enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi);
-enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi);
enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change);
void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg);
enum ice_status
-ice_query_port_ets(struct ice_port_info *pi,
- struct ice_aqc_port_ets_elem *buf, u16 buf_size,
- struct ice_sq_cd *cmd_details);
-enum ice_status
ice_aq_query_port_ets(struct ice_port_info *pi,
struct ice_aqc_port_ets_elem *buf, u16 buf_size,
struct ice_sq_cd *cd);
@@ -204,12 +199,6 @@ enum ice_status
ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
struct ice_sq_cd *cd);
enum ice_status
-ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
- bool *dcbx_agent_status, struct ice_sq_cd *cd);
-enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib);
-enum ice_status
ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
struct ice_sq_cd *cd);
#endif /* _ICE_DCB_H_ */
diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index aeff7af55d..dfc46ade5d 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -816,20 +816,6 @@ ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr)
cntr_id);
}
-/**
- * ice_free_fd_guar_item - Free flow director guaranteed entries
- * @hw: pointer to the hardware structure
- * @cntr_id: counter index that needs to be freed
- * @num_fltr: number of filters to be freed
- */
-enum ice_status
-ice_free_fd_guar_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr)
-{
- return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_GUARANTEED_ENTRIES,
- ICE_AQC_RES_TYPE_FLAG_DEDICATED, num_fltr,
- cntr_id);
-}
-
/**
* ice_alloc_fd_shrd_item - allocate resource for flow director shared entries
* @hw: pointer to the hardware structure
@@ -844,31 +830,6 @@ ice_alloc_fd_shrd_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr)
cntr_id);
}
-/**
- * ice_free_fd_shrd_item - Free flow director shared entries
- * @hw: pointer to the hardware structure
- * @cntr_id: counter index that needs to be freed
- * @num_fltr: number of filters to be freed
- */
-enum ice_status
-ice_free_fd_shrd_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr)
-{
- return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_SHARED_ENTRIES,
- ICE_AQC_RES_TYPE_FLAG_DEDICATED, num_fltr,
- cntr_id);
-}
-
-/**
- * ice_get_fdir_cnt_all - get the number of Flow Director filters
- * @hw: hardware data structure
- *
- * Returns the number of filters available on device
- */
-int ice_get_fdir_cnt_all(struct ice_hw *hw)
-{
- return hw->func_caps.fd_fltr_guar + hw->func_caps.fd_fltr_best_effort;
-}
-
/**
* ice_pkt_insert_ipv6_addr - insert a be32 IPv6 address into a memory buffer.
* @pkt: packet buffer
@@ -1254,226 +1215,3 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
return ICE_SUCCESS;
}
-
-/**
- * ice_fdir_get_prgm_pkt - generate a training packet
- * @input: flow director filter data structure
- * @pkt: pointer to return filter packet
- * @frag: generate a fragment packet
- */
-enum ice_status
-ice_fdir_get_prgm_pkt(struct ice_fdir_fltr *input, u8 *pkt, bool frag)
-{
- return ice_fdir_get_gen_prgm_pkt(NULL, input, pkt, frag, false);
-}
-
-/**
- * ice_fdir_has_frag - does flow type have 2 ptypes
- * @flow: flow ptype
- *
- * returns true if there is a fragment packet for this ptype
- */
-bool ice_fdir_has_frag(enum ice_fltr_ptype flow)
-{
- if (flow == ICE_FLTR_PTYPE_NONF_IPV4_OTHER)
- return true;
- else
- return false;
-}
-
-/**
- * ice_fdir_find_fltr_by_idx - find filter with idx
- * @hw: pointer to hardware structure
- * @fltr_idx: index to find.
- *
- * Returns pointer to filter if found or null
- */
-struct ice_fdir_fltr *
-ice_fdir_find_fltr_by_idx(struct ice_hw *hw, u32 fltr_idx)
-{
- struct ice_fdir_fltr *rule;
-
- LIST_FOR_EACH_ENTRY(rule, &hw->fdir_list_head, ice_fdir_fltr,
- fltr_node) {
- /* rule ID found in the list */
- if (fltr_idx == rule->fltr_id)
- return rule;
- if (fltr_idx < rule->fltr_id)
- break;
- }
- return NULL;
-}
-
-/**
- * ice_fdir_list_add_fltr - add a new node to the flow director filter list
- * @hw: hardware structure
- * @fltr: filter node to add to structure
- */
-void ice_fdir_list_add_fltr(struct ice_hw *hw, struct ice_fdir_fltr *fltr)
-{
- struct ice_fdir_fltr *rule, *parent = NULL;
-
- LIST_FOR_EACH_ENTRY(rule, &hw->fdir_list_head, ice_fdir_fltr,
- fltr_node) {
- /* rule ID found or pass its spot in the list */
- if (rule->fltr_id >= fltr->fltr_id)
- break;
- parent = rule;
- }
-
- if (parent)
- LIST_ADD_AFTER(&fltr->fltr_node, &parent->fltr_node);
- else
- LIST_ADD(&fltr->fltr_node, &hw->fdir_list_head);
-}
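ice_fdir_list_add_fltr() kept the filter list sorted by fltr_id: it remembers the last node with a smaller ID and links the new node after it, falling back to a head insert. The same shape with plain pointers instead of the driver's LIST_* macros, as a standalone sketch:

#include <stddef.h>
#include <stdint.h>

struct fltr_node {
	uint32_t fltr_id;
	struct fltr_node *next;
};

static void
sorted_insert(struct fltr_node **head, struct fltr_node *fltr)
{
	struct fltr_node *rule, *parent = NULL;

	for (rule = *head; rule; rule = rule->next) {
		if (rule->fltr_id >= fltr->fltr_id)
			break;      /* found or passed its spot */
		parent = rule;
	}

	if (parent) {
		fltr->next = parent->next;
		parent->next = fltr;
	} else {
		fltr->next = *head;
		*head = fltr;
	}
}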
-
-/**
- * ice_fdir_update_cntrs - increment / decrement filter counter
- * @hw: pointer to hardware structure
- * @flow: filter flow type
- * @acl_fltr: true indicates an ACL filter
- * @add: true implies filters added
- */
-void
-ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow,
- bool acl_fltr, bool add)
-{
- int incr;
-
- incr = add ? 1 : -1;
- hw->fdir_active_fltr += incr;
- if (flow == ICE_FLTR_PTYPE_NONF_NONE || flow >= ICE_FLTR_PTYPE_MAX) {
- ice_debug(hw, ICE_DBG_SW, "Unknown filter type %d\n", flow);
- } else {
- if (acl_fltr)
- hw->acl_fltr_cnt[flow] += incr;
- else
- hw->fdir_fltr_cnt[flow] += incr;
- }
-}
-
-/**
- * ice_cmp_ipv6_addr - compare 2 IP v6 addresses
- * @a: IP v6 address
- * @b: IP v6 address
- *
- * Returns 0 on equal, returns non-0 if different
- */
-static int ice_cmp_ipv6_addr(__be32 *a, __be32 *b)
-{
- return memcmp(a, b, 4 * sizeof(__be32));
-}
-
-/**
- * ice_fdir_comp_rules - compare 2 filters
- * @a: a Flow Director filter data structure
- * @b: a Flow Director filter data structure
- * @v6: bool true if v6 filter
- *
- * Returns true if the filters match
- */
-static bool
-ice_fdir_comp_rules(struct ice_fdir_fltr *a, struct ice_fdir_fltr *b, bool v6)
-{
- enum ice_fltr_ptype flow_type = a->flow_type;
-
- /* The calling function already checks that the two filters have the
- * same flow_type.
- */
- if (!v6) {
- if (flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
- flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP ||
- flow_type == ICE_FLTR_PTYPE_NONF_IPV4_SCTP) {
- if (a->ip.v4.dst_ip == b->ip.v4.dst_ip &&
- a->ip.v4.src_ip == b->ip.v4.src_ip &&
- a->ip.v4.dst_port == b->ip.v4.dst_port &&
- a->ip.v4.src_port == b->ip.v4.src_port)
- return true;
- } else if (flow_type == ICE_FLTR_PTYPE_NONF_IPV4_OTHER) {
- if (a->ip.v4.dst_ip == b->ip.v4.dst_ip &&
- a->ip.v4.src_ip == b->ip.v4.src_ip &&
- a->ip.v4.l4_header == b->ip.v4.l4_header &&
- a->ip.v4.proto == b->ip.v4.proto &&
- a->ip.v4.ip_ver == b->ip.v4.ip_ver &&
- a->ip.v4.tos == b->ip.v4.tos)
- return true;
- }
- } else {
- if (flow_type == ICE_FLTR_PTYPE_NONF_IPV6_UDP ||
- flow_type == ICE_FLTR_PTYPE_NONF_IPV6_TCP ||
- flow_type == ICE_FLTR_PTYPE_NONF_IPV6_SCTP) {
- if (a->ip.v6.dst_port == b->ip.v6.dst_port &&
- a->ip.v6.src_port == b->ip.v6.src_port &&
- !ice_cmp_ipv6_addr(a->ip.v6.dst_ip,
- b->ip.v6.dst_ip) &&
- !ice_cmp_ipv6_addr(a->ip.v6.src_ip,
- b->ip.v6.src_ip))
- return true;
- } else if (flow_type == ICE_FLTR_PTYPE_NONF_IPV6_OTHER) {
- if (a->ip.v6.dst_port == b->ip.v6.dst_port &&
- a->ip.v6.src_port == b->ip.v6.src_port)
- return true;
- }
- }
-
- return false;
-}
-
-/**
- * ice_fdir_is_dup_fltr - test if filter is already in list for PF
- * @hw: hardware data structure
- * @input: Flow Director filter data structure
- *
- * Returns true if the filter is found in the list
- */
-bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input)
-{
- struct ice_fdir_fltr *rule;
- bool ret = false;
-
- LIST_FOR_EACH_ENTRY(rule, &hw->fdir_list_head, ice_fdir_fltr,
- fltr_node) {
- enum ice_fltr_ptype flow_type;
-
- if (rule->flow_type != input->flow_type)
- continue;
-
- flow_type = input->flow_type;
- if (flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
- flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP ||
- flow_type == ICE_FLTR_PTYPE_NONF_IPV4_SCTP ||
- flow_type == ICE_FLTR_PTYPE_NONF_IPV4_OTHER)
- ret = ice_fdir_comp_rules(rule, input, false);
- else
- ret = ice_fdir_comp_rules(rule, input, true);
- if (ret) {
- if (rule->fltr_id == input->fltr_id &&
- rule->q_index != input->q_index)
- ret = false;
- else
- break;
- }
- }
-
- return ret;
-}
-
-/**
- * ice_clear_pf_fd_table - admin command to clear FD table for PF
- * @hw: hardware data structure
- *
- * Clears FD table entries for a PF by issuing admin command (direct, 0x0B06)
- */
-enum ice_status ice_clear_pf_fd_table(struct ice_hw *hw)
-{
- struct ice_aqc_clear_fd_table *cmd;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.clear_fd_table;
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_fd_table);
- cmd->clear_type = CL_FD_VM_VF_TYPE_PF_IDX;
- /* vsi_index must be 0 to clear FD table for a PF */
- cmd->vsi_index = CPU_TO_LE16(0);
-
- return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
-}
diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h
index d363de385d..1f0f5bda7d 100644
--- a/drivers/net/ice/base/ice_fdir.h
+++ b/drivers/net/ice/base/ice_fdir.h
@@ -234,27 +234,11 @@ enum ice_status ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id);
enum ice_status
ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr);
enum ice_status
-ice_free_fd_guar_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr);
-enum ice_status
ice_alloc_fd_shrd_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr);
-enum ice_status
-ice_free_fd_shrd_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr);
-enum ice_status ice_clear_pf_fd_table(struct ice_hw *hw);
void
ice_fdir_get_prgm_desc(struct ice_hw *hw, struct ice_fdir_fltr *input,
struct ice_fltr_desc *fdesc, bool add);
enum ice_status
ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
u8 *pkt, bool frag, bool tun);
-enum ice_status
-ice_fdir_get_prgm_pkt(struct ice_fdir_fltr *input, u8 *pkt, bool frag);
-int ice_get_fdir_cnt_all(struct ice_hw *hw);
-bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input);
-bool ice_fdir_has_frag(enum ice_fltr_ptype flow);
-struct ice_fdir_fltr *
-ice_fdir_find_fltr_by_idx(struct ice_hw *hw, u32 fltr_idx);
-void
-ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow,
- bool acl_fltr, bool add);
-void ice_fdir_list_add_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input);
#endif /* _ICE_FDIR_H_ */
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 7594df1696..aec2c63c30 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1950,54 +1950,6 @@ static bool ice_tunnel_port_in_use_hlpr(struct ice_hw *hw, u16 port, u16 *index)
return false;
}
-/**
- * ice_tunnel_port_in_use
- * @hw: pointer to the HW structure
- * @port: port to search for
- * @index: optionally returns index
- *
- * Returns whether a port is already in use as a tunnel, and optionally its
- * index
- */
-bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index)
-{
- bool res;
-
- ice_acquire_lock(&hw->tnl_lock);
- res = ice_tunnel_port_in_use_hlpr(hw, port, index);
- ice_release_lock(&hw->tnl_lock);
-
- return res;
-}
-
-/**
- * ice_tunnel_get_type
- * @hw: pointer to the HW structure
- * @port: port to search for
- * @type: returns tunnel index
- *
- * For a given port number, will return the type of tunnel.
- */
-bool
-ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type)
-{
- bool res = false;
- u16 i;
-
- ice_acquire_lock(&hw->tnl_lock);
-
- for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
- if (hw->tnl.tbl[i].in_use && hw->tnl.tbl[i].port == port) {
- *type = hw->tnl.tbl[i].type;
- res = true;
- break;
- }
-
- ice_release_lock(&hw->tnl_lock);
-
- return res;
-}
-
/**
* ice_find_free_tunnel_entry
* @hw: pointer to the HW structure
@@ -3797,61 +3749,6 @@ static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
}
-/**
- * ice_clear_hw_tbls - clear HW tables and flow profiles
- * @hw: pointer to the hardware structure
- */
-void ice_clear_hw_tbls(struct ice_hw *hw)
-{
- u8 i;
-
- for (i = 0; i < ICE_BLK_COUNT; i++) {
- struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
- struct ice_prof_tcam *prof = &hw->blk[i].prof;
- struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
- struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
- struct ice_es *es = &hw->blk[i].es;
-
- if (hw->blk[i].is_list_init) {
- ice_free_prof_map(hw, i);
- ice_free_flow_profs(hw, i);
- }
-
- ice_free_vsig_tbl(hw, (enum ice_block)i);
-
- ice_memset(xlt1->ptypes, 0, xlt1->count * sizeof(*xlt1->ptypes),
- ICE_NONDMA_MEM);
- ice_memset(xlt1->ptg_tbl, 0,
- ICE_MAX_PTGS * sizeof(*xlt1->ptg_tbl),
- ICE_NONDMA_MEM);
- ice_memset(xlt1->t, 0, xlt1->count * sizeof(*xlt1->t),
- ICE_NONDMA_MEM);
-
- ice_memset(xlt2->vsis, 0, xlt2->count * sizeof(*xlt2->vsis),
- ICE_NONDMA_MEM);
- ice_memset(xlt2->vsig_tbl, 0,
- xlt2->count * sizeof(*xlt2->vsig_tbl),
- ICE_NONDMA_MEM);
- ice_memset(xlt2->t, 0, xlt2->count * sizeof(*xlt2->t),
- ICE_NONDMA_MEM);
-
- ice_memset(prof->t, 0, prof->count * sizeof(*prof->t),
- ICE_NONDMA_MEM);
- ice_memset(prof_redir->t, 0,
- prof_redir->count * sizeof(*prof_redir->t),
- ICE_NONDMA_MEM);
-
- ice_memset(es->t, 0, es->count * sizeof(*es->t) * es->fvw,
- ICE_NONDMA_MEM);
- ice_memset(es->ref_count, 0, es->count * sizeof(*es->ref_count),
- ICE_NONDMA_MEM);
- ice_memset(es->written, 0, es->count * sizeof(*es->written),
- ICE_NONDMA_MEM);
- ice_memset(es->mask_ena, 0, es->count * sizeof(*es->mask_ena),
- ICE_NONDMA_MEM);
- }
-}
-
/**
* ice_init_hw_tbls - init hardware table memory
* @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 214c7a2837..257351adfe 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -44,9 +44,6 @@ ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type,
enum ice_status
ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port);
enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all);
-bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index);
-bool
-ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type);
/* XLT2/VSI group functions */
enum ice_status
@@ -71,7 +68,6 @@ ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
void ice_free_seg(struct ice_hw *hw);
void ice_fill_blk_tbls(struct ice_hw *hw);
-void ice_clear_hw_tbls(struct ice_hw *hw);
void ice_free_hw_tbls(struct ice_hw *hw);
enum ice_status
ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 1b36c2b897..312e9b1ba4 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1576,26 +1576,6 @@ ice_flow_find_prof_conds(struct ice_hw *hw, enum ice_block blk,
return prof;
}
-/**
- * ice_flow_find_prof - Look up a profile matching headers and matched fields
- * @hw: pointer to the HW struct
- * @blk: classification stage
- * @dir: flow direction
- * @segs: array of one or more packet segments that describe the flow
- * @segs_cnt: number of packet segments provided
- */
-u64
-ice_flow_find_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
- struct ice_flow_seg_info *segs, u8 segs_cnt)
-{
- struct ice_flow_prof *p;
-
- p = ice_flow_find_prof_conds(hw, blk, dir, segs, segs_cnt,
- ICE_MAX_VSI, ICE_FLOW_FIND_PROF_CHK_FLDS);
-
- return p ? p->id : ICE_FLOW_PROF_ID_INVAL;
-}
-
/**
* ice_flow_find_prof_id - Look up a profile with given profile ID
* @hw: pointer to the HW struct
@@ -2087,34 +2067,6 @@ ice_flow_acl_set_xtrct_seq(struct ice_hw *hw, struct ice_flow_prof *prof)
return status;
}
-/**
- * ice_flow_assoc_vsig_vsi - associate a VSI with VSIG
- * @hw: pointer to the hardware structure
- * @blk: classification stage
- * @vsi_handle: software VSI handle
- * @vsig: target VSI group
- *
- * Assumption: the caller has already verified that the VSI to
- * be added has the same characteristics as the VSIG and will
- * thereby have access to all resources added to that VSIG.
- */
-enum ice_status
-ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle,
- u16 vsig)
-{
- enum ice_status status;
-
- if (!ice_is_vsi_valid(hw, vsi_handle) || blk >= ICE_BLK_COUNT)
- return ICE_ERR_PARAM;
-
- ice_acquire_lock(&hw->fl_profs_locks[blk]);
- status = ice_add_vsi_flow(hw, blk, ice_get_hw_vsi_num(hw, vsi_handle),
- vsig);
- ice_release_lock(&hw->fl_profs_locks[blk]);
-
- return status;
-}
-
/**
* ice_flow_assoc_prof - associate a VSI with a flow profile
* @hw: pointer to the hardware structure
@@ -2256,44 +2208,6 @@ ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
return status;
}
-/**
- * ice_flow_find_entry - look for a flow entry using its unique ID
- * @hw: pointer to the HW struct
- * @blk: classification stage
- * @entry_id: unique ID to identify this flow entry
- *
- * This function looks for the flow entry with the specified unique ID in all
- * flow profiles of the specified classification stage. If the entry is found,
- * it returns the handle to the flow entry. Otherwise, it returns
- * ICE_FLOW_ENTRY_ID_INVAL.
- */
-u64 ice_flow_find_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_id)
-{
- struct ice_flow_entry *found = NULL;
- struct ice_flow_prof *p;
-
- ice_acquire_lock(&hw->fl_profs_locks[blk]);
-
- LIST_FOR_EACH_ENTRY(p, &hw->fl_profs[blk], ice_flow_prof, l_entry) {
- struct ice_flow_entry *e;
-
- ice_acquire_lock(&p->entries_lock);
- LIST_FOR_EACH_ENTRY(e, &p->entries, ice_flow_entry, l_entry)
- if (e->id == entry_id) {
- found = e;
- break;
- }
- ice_release_lock(&p->entries_lock);
-
- if (found)
- break;
- }
-
- ice_release_lock(&hw->fl_profs_locks[blk]);
-
- return found ? ICE_FLOW_ENTRY_HNDL(found) : ICE_FLOW_ENTRY_HANDLE_INVAL;
-}
-
/**
* ice_flow_acl_check_actions - Checks the ACL rule's actions
* @hw: pointer to the hardware structure
@@ -3162,71 +3076,6 @@ ice_flow_set_fld(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
ice_flow_set_fld_ext(seg, fld, t, val_loc, mask_loc, last_loc);
}
-/**
- * ice_flow_set_fld_prefix - sets locations of prefix field from entry's buf
- * @seg: packet segment the field being set belongs to
- * @fld: field to be set
- * @val_loc: if not ICE_FLOW_FLD_OFF_INVAL, location of the value to match from
- * entry's input buffer
- * @pref_loc: location of prefix value from entry's input buffer
- * @pref_sz: size of the location holding the prefix value
- *
- * This function specifies the locations, in the form of byte offsets from the
- * start of the input buffer for a flow entry, from where the value to match
- * and the IPv4 prefix value can be extracted. These locations are then stored
- * in the flow profile. When adding flow entries to the associated flow profile,
- * these locations can be used to quickly extract the values to create the
- * content of a match entry. This function should only be used for fixed-size
- * data structures.
- */
-void
-ice_flow_set_fld_prefix(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
- u16 val_loc, u16 pref_loc, u8 pref_sz)
-{
- /* For this type of field, the "mask" location is for the prefix value's
- * location and the "last" location is for the size of the location of
- * the prefix value.
- */
- ice_flow_set_fld_ext(seg, fld, ICE_FLOW_FLD_TYPE_PREFIX, val_loc,
- pref_loc, (u16)pref_sz);
-}
-
-/**
- * ice_flow_add_fld_raw - sets locations of a raw field from entry's input buf
- * @seg: packet segment the field being set belongs to
- * @off: offset of the raw field from the beginning of the segment in bytes
- * @len: length of the raw pattern to be matched
- * @val_loc: location of the value to match from entry's input buffer
- * @mask_loc: location of mask value from entry's input buffer
- *
- * This function specifies the offset of the raw field to be matched from the
- * beginning of the specified packet segment, and the locations, in the form of
- * byte offsets from the start of the input buffer for a flow entry, from where
- * the value to match and the mask value to be extracted. These locations are
- * then stored in the flow profile. When adding flow entries to the associated
- * flow profile, these locations can be used to quickly extract the values to
- * create the content of a match entry. This function should only be used for
- * fixed-size data structures.
- */
-void
-ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len,
- u16 val_loc, u16 mask_loc)
-{
- if (seg->raws_cnt < ICE_FLOW_SEG_RAW_FLD_MAX) {
- seg->raws[seg->raws_cnt].off = off;
- seg->raws[seg->raws_cnt].info.type = ICE_FLOW_FLD_TYPE_SIZE;
- seg->raws[seg->raws_cnt].info.src.val = val_loc;
- seg->raws[seg->raws_cnt].info.src.mask = mask_loc;
- /* The "last" field is used to store the length of the field */
- seg->raws[seg->raws_cnt].info.src.last = len;
- }
-
- /* Overflows of "raws" will be handled as an error condition later in
- * the flow when this information is processed.
- */
- seg->raws_cnt++;
-}
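Note the design choice above: the setter is void, so it cannot fail, and raws_cnt is incremented even when the table is full; the overflow only becomes an error when the profile is processed later. A minimal sketch of that deferred-validation idiom, with check_raws() as a hypothetical stand-in for the later processing step:

#define RAW_FLD_MAX 2

struct raw_seg {
	int raws_cnt;
	int raws[RAW_FLD_MAX];
};

static void add_raw(struct raw_seg *s, int off)
{
	if (s->raws_cnt < RAW_FLD_MAX)
		s->raws[s->raws_cnt] = off;
	s->raws_cnt++;              /* keep counting even on overflow */
}

static int check_raws(const struct raw_seg *s)
{
	return s->raws_cnt > RAW_FLD_MAX ? -1 : 0;  /* error surfaces here */
}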
-
#define ICE_FLOW_RSS_SEG_HDR_L2_MASKS \
(ICE_FLOW_SEG_HDR_ETH | ICE_FLOW_SEG_HDR_VLAN)
@@ -3293,31 +3142,6 @@ ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u8 seg_cnt,
return ICE_SUCCESS;
}
-/**
- * ice_rem_vsi_rss_list - remove VSI from RSS list
- * @hw: pointer to the hardware structure
- * @vsi_handle: software VSI handle
- *
- * Remove the VSI from all RSS configurations in the list.
- */
-void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle)
-{
- struct ice_rss_cfg *r, *tmp;
-
- if (LIST_EMPTY(&hw->rss_list_head))
- return;
-
- ice_acquire_lock(&hw->rss_locks);
- LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head,
- ice_rss_cfg, l_entry)
- if (ice_test_and_clear_bit(vsi_handle, r->vsis))
- if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) {
- LIST_DEL(&r->l_entry);
- ice_free(hw, r);
- }
- ice_release_lock(&hw->rss_locks);
-}
-
/**
* ice_rem_vsi_rss_cfg - remove RSS configurations associated with VSI
* @hw: pointer to the hardware structure
@@ -3880,34 +3704,3 @@ enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
return status;
}
-
-/**
- * ice_get_rss_cfg - returns hashed fields for the given header types
- * @hw: pointer to the hardware structure
- * @vsi_handle: software VSI handle
- * @hdrs: protocol header type
- *
- * This function will return the match fields of the first instance of flow
- * profile having the given header types and containing input VSI
- */
-u64 ice_get_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u32 hdrs)
-{
- u64 rss_hash = ICE_HASH_INVALID;
- struct ice_rss_cfg *r;
-
- /* verify if the protocol header is non zero and VSI is valid */
- if (hdrs == ICE_FLOW_SEG_HDR_NONE || !ice_is_vsi_valid(hw, vsi_handle))
- return ICE_HASH_INVALID;
-
- ice_acquire_lock(&hw->rss_locks);
- LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
- ice_rss_cfg, l_entry)
- if (ice_is_bit_set(r->vsis, vsi_handle) &&
- r->hash.addl_hdrs == hdrs) {
- rss_hash = r->hash.hash_flds;
- break;
- }
- ice_release_lock(&hw->rss_locks);
-
- return rss_hash;
-}
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index 2a9ae66454..2675202240 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -504,9 +504,6 @@ struct ice_flow_action {
} data;
};
-u64
-ice_flow_find_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
- struct ice_flow_seg_info *segs, u8 segs_cnt);
enum ice_status
ice_flow_add_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
u64 prof_id, struct ice_flow_seg_info *segs, u8 segs_cnt,
@@ -518,13 +515,9 @@ enum ice_status
ice_flow_assoc_prof(struct ice_hw *hw, enum ice_block blk,
struct ice_flow_prof *prof, u16 vsi_handle);
enum ice_status
-ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle,
- u16 vsig);
-enum ice_status
ice_flow_get_hw_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
u8 *hw_prof);
-u64 ice_flow_find_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_id);
enum ice_status
ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
u64 entry_id, u16 vsi, enum ice_flow_priority prio,
@@ -535,13 +528,6 @@ ice_flow_rem_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_h);
void
ice_flow_set_fld(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
u16 val_loc, u16 mask_loc, u16 last_loc, bool range);
-void
-ice_flow_set_fld_prefix(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
- u16 val_loc, u16 prefix_loc, u8 prefix_sz);
-void
-ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len,
- u16 val_loc, u16 mask_loc);
-void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle);
enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle);
enum ice_status
ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds);
@@ -552,5 +538,4 @@ ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle,
enum ice_status
ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle,
const struct ice_rss_hash_cfg *cfg);
-u64 ice_get_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u32 hdrs);
#endif /* _ICE_FLOW_H_ */
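
With the prefix and raw variants dropped, ice_flow_set_fld() is the remaining
entry point for programming match fields. A minimal sketch of an exact match
(assumes ICE_FLOW_FLD_OFF_INVAL from this header marks the unused "last"
location; the helper name is hypothetical):

/* Program one exact-match field into a segment: range=false selects
 * exact matching, so the "last" location is left invalid.
 */
static void
seg_match_field(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
		u16 val_off, u16 mask_off)
{
	ice_flow_set_fld(seg, fld, val_off, mask_off,
			 ICE_FLOW_FLD_OFF_INVAL, false);
}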
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index 7b76af7b6f..75ff992b9c 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -145,39 +145,6 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
return ICE_SUCCESS;
}
-/**
- * ice_read_sr_buf_aq - Reads Shadow RAM buf via AQ
- * @hw: pointer to the HW structure
- * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
- * @words: (in) number of words to read; (out) number of words actually read
- * @data: words read from the Shadow RAM
- *
- * Reads 16 bit words (data buf) from the Shadow RAM. The caller is expected
- * to hold NVM ownership; see ice_read_sr_buf() for the locked wrapper.
- */
-static enum ice_status
-ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
-{
- u32 bytes = *words * 2, i;
- enum ice_status status;
-
- ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
-
- /* ice_read_flat_nvm takes into account the 4KB AdminQ and Shadow RAM
- * sector restrictions necessary when reading from the NVM.
- */
- status = ice_read_flat_nvm(hw, offset * 2, &bytes, (u8 *)data, true);
-
- /* Report the number of words successfully read */
- *words = bytes / 2;
-
- /* Byte swap the words up to the amount we actually read */
- for (i = 0; i < *words; i++)
- data[i] = LE16_TO_CPU(((_FORCE_ __le16 *)data)[i]);
-
- return status;
-}
-
/**
* ice_acquire_nvm - Generic request for acquiring the NVM ownership
* @hw: pointer to the HW structure
@@ -400,65 +367,6 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
return ICE_ERR_DOES_NOT_EXIST;
}
-/**
- * ice_read_pba_string - Reads part number string from NVM
- * @hw: pointer to hardware structure
- * @pba_num: stores the part number string from the NVM
- * @pba_num_size: part number string buffer length
- *
- * Reads the part number string from the NVM.
- */
-enum ice_status
-ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size)
-{
- u16 pba_tlv, pba_tlv_len;
- enum ice_status status;
- u16 pba_word, pba_size;
- u16 i;
-
- status = ice_get_pfa_module_tlv(hw, &pba_tlv, &pba_tlv_len,
- ICE_SR_PBA_BLOCK_PTR);
- if (status != ICE_SUCCESS) {
- ice_debug(hw, ICE_DBG_INIT, "Failed to read PBA Block TLV.\n");
- return status;
- }
-
- /* pba_size is the next word */
- status = ice_read_sr_word(hw, (pba_tlv + 2), &pba_size);
- if (status != ICE_SUCCESS) {
- ice_debug(hw, ICE_DBG_INIT, "Failed to read PBA Section size.\n");
- return status;
- }
-
- if (pba_tlv_len < pba_size) {
- ice_debug(hw, ICE_DBG_INIT, "Invalid PBA Block TLV size.\n");
- return ICE_ERR_INVAL_SIZE;
- }
-
- /* Subtract one to get PBA word count (PBA Size word is included in
- * total size)
- */
- pba_size--;
- if (pba_num_size < (((u32)pba_size * 2) + 1)) {
- ice_debug(hw, ICE_DBG_INIT, "Buffer too small for PBA data.\n");
- return ICE_ERR_PARAM;
- }
-
- for (i = 0; i < pba_size; i++) {
- status = ice_read_sr_word(hw, (pba_tlv + 2 + 1) + i, &pba_word);
- if (status != ICE_SUCCESS) {
- ice_debug(hw, ICE_DBG_INIT, "Failed to read PBA Block word %d.\n", i);
- return status;
- }
-
- pba_num[(i * 2)] = (pba_word >> 8) & 0xFF;
- pba_num[(i * 2) + 1] = pba_word & 0xFF;
- }
- pba_num[(pba_size * 2)] = '\0';
-
- return status;
-}
-
/**
* ice_get_nvm_srev - Read the security revision from the NVM CSS header
* @hw: pointer to the HW struct
@@ -884,62 +792,6 @@ enum ice_status ice_init_nvm(struct ice_hw *hw)
return ICE_SUCCESS;
}
-/**
- * ice_read_sr_buf - Reads Shadow RAM buf and acquire lock if necessary
- * @hw: pointer to the HW structure
- * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
- * @words: (in) number of words to read; (out) number of words actually read
- * @data: words read from the Shadow RAM
- *
- * Reads 16 bit words (data buf) from the SR using the ice_read_sr_buf_aq
- * method. The buf read is preceded by the NVM ownership take
- * and followed by the release.
- */
-enum ice_status
-ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
-{
- enum ice_status status;
-
- status = ice_acquire_nvm(hw, ICE_RES_READ);
- if (!status) {
- status = ice_read_sr_buf_aq(hw, offset, words, data);
- ice_release_nvm(hw);
- }
-
- return status;
-}
-
-/**
- * ice_nvm_validate_checksum
- * @hw: pointer to the HW struct
- *
- * Verify NVM PFA checksum validity (0x0706)
- */
-enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw)
-{
- struct ice_aqc_nvm_checksum *cmd;
- struct ice_aq_desc desc;
- enum ice_status status;
-
- status = ice_acquire_nvm(hw, ICE_RES_READ);
- if (status)
- return status;
-
- cmd = &desc.params.nvm_checksum;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_checksum);
- cmd->flags = ICE_AQC_NVM_CHECKSUM_VERIFY;
-
- status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
- ice_release_nvm(hw);
-
- if (!status)
- if (LE16_TO_CPU(cmd->checksum) != ICE_AQC_NVM_CHECKSUM_CORRECT)
- status = ICE_ERR_NVM_CHECKSUM;
-
- return status;
-}
-
/**
* ice_nvm_access_get_features - Return the NVM access features structure
* @cmd: NVM access command to process
@@ -1129,55 +981,3 @@ ice_nvm_access_write(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd,
return ICE_SUCCESS;
}
-
-/**
- * ice_handle_nvm_access - Handle an NVM access request
- * @hw: pointer to the HW struct
- * @cmd: NVM access command info
- * @data: pointer to read or return data
- *
- * Process an NVM access request. Read the command structure information and
- * determine if it is valid. If not, report an error indicating the command
- * was invalid.
- *
- * For valid commands, perform the necessary function, copying the data into
- * the provided data buffer.
- */
-enum ice_status
-ice_handle_nvm_access(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd,
- union ice_nvm_access_data *data)
-{
- u32 module, flags, adapter_info;
-
- ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
-
- /* Extended flags are currently reserved and must be zero */
- if ((cmd->config & ICE_NVM_CFG_EXT_FLAGS_M) != 0)
- return ICE_ERR_PARAM;
-
- /* Adapter info must match the HW device ID */
- adapter_info = ice_nvm_access_get_adapter(cmd);
- if (adapter_info != hw->device_id)
- return ICE_ERR_PARAM;
-
- switch (cmd->command) {
- case ICE_NVM_CMD_READ:
- module = ice_nvm_access_get_module(cmd);
- flags = ice_nvm_access_get_flags(cmd);
-
- /* Getting the driver's NVM features structure shares the same
- * command type as reading a register. Read the config field
- * to determine if this is a request to get features.
- */
- if (module == ICE_NVM_GET_FEATURES_MODULE &&
- flags == ICE_NVM_GET_FEATURES_FLAGS &&
- cmd->offset == 0)
- return ice_nvm_access_get_features(cmd, data);
- else
- return ice_nvm_access_read(hw, cmd, data);
- case ICE_NVM_CMD_WRITE:
- return ice_nvm_access_write(hw, cmd, data);
- default:
- return ICE_ERR_PARAM;
- }
-}
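
Both removed Shadow RAM buffer readers reduce to one ice_read_flat_nvm()
call. A minimal sketch combining them (the function name is hypothetical;
the logic mirrors the removed bodies):

static enum ice_status
read_sr_words(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
{
	u32 bytes = *words * 2, i;
	enum ice_status status;

	status = ice_acquire_nvm(hw, ICE_RES_READ);
	if (status)
		return status;

	/* word offsets/lengths become byte based for ice_read_flat_nvm() */
	status = ice_read_flat_nvm(hw, offset * 2, &bytes, (u8 *)data, true);

	/* report how many whole words were actually read */
	*words = bytes / 2;

	/* NVM data is little-endian; convert to host order */
	for (i = 0; i < *words; i++)
		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);

	ice_release_nvm(hw);
	return status;
}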
diff --git a/drivers/net/ice/base/ice_nvm.h b/drivers/net/ice/base/ice_nvm.h
index 8e2eb4df1b..e46562f862 100644
--- a/drivers/net/ice/base/ice_nvm.h
+++ b/drivers/net/ice/base/ice_nvm.h
@@ -82,9 +82,6 @@ enum ice_status
ice_nvm_access_get_features(struct ice_nvm_access_cmd *cmd,
union ice_nvm_access_data *data);
enum ice_status
-ice_handle_nvm_access(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd,
- union ice_nvm_access_data *data);
-enum ice_status
ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access);
void ice_release_nvm(struct ice_hw *hw);
enum ice_status
@@ -97,11 +94,6 @@ ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data,
enum ice_status
ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
u16 module_type);
-enum ice_status
-ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size);
enum ice_status ice_init_nvm(struct ice_hw *hw);
enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
-enum ice_status
-ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
-enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
#endif /* _ICE_NVM_H_ */
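
Without ice_read_pba_string(), a PFA TLV consumer combines the two retained
helpers directly. A minimal sketch that reads the first data word of a module
(layout per the removed PBA reader: type word, length word, then data; the
helper name is hypothetical):

static enum ice_status
read_tlv_word(struct ice_hw *hw, u16 module_type, u16 *word)
{
	u16 tlv, tlv_len;
	enum ice_status status;

	status = ice_get_pfa_module_tlv(hw, &tlv, &tlv_len, module_type);
	if (status)
		return status;

	/* skip the type and length words to reach the data */
	return ice_read_sr_word(hw, tlv + 2, word);
}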
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index ac48bbe279..d7f0866dac 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -644,25 +644,6 @@ ice_aq_add_rl_profile(struct ice_hw *hw, u16 num_profiles,
buf, buf_size, num_profiles_added, cd);
}
-/**
- * ice_aq_query_rl_profile - query rate limiting profile(s)
- * @hw: pointer to the HW struct
- * @num_profiles: the number of profile(s) to query
- * @buf: pointer to buffer
- * @buf_size: buffer size in bytes
- * @cd: pointer to command details structure
- *
- * Query RL profile (0x0411)
- */
-enum ice_status
-ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
- struct ice_aqc_rl_profile_elem *buf, u16 buf_size,
- struct ice_sq_cd *cd)
-{
- return ice_aq_rl_profile(hw, ice_aqc_opc_query_rl_profiles,
- num_profiles, buf, buf_size, NULL, cd);
-}
-
/**
* ice_aq_remove_rl_profile - removes RL profile(s)
* @hw: pointer to the HW struct
@@ -839,32 +820,6 @@ void ice_sched_cleanup_all(struct ice_hw *hw)
hw->max_cgds = 0;
}
-/**
- * ice_aq_cfg_l2_node_cgd - configures L2 node to CGD mapping
- * @hw: pointer to the HW struct
- * @num_l2_nodes: the number of L2 nodes whose CGDs to configure
- * @buf: pointer to buffer
- * @buf_size: buffer size in bytes
- * @cd: pointer to command details structure or NULL
- *
- * Configure L2 Node CGD (0x0414)
- */
-enum ice_status
-ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_l2_nodes,
- struct ice_aqc_cfg_l2_node_cgd_elem *buf,
- u16 buf_size, struct ice_sq_cd *cd)
-{
- struct ice_aqc_cfg_l2_node_cgd *cmd;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.cfg_l2_node_cgd;
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_cfg_l2_node_cgd);
- desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
-
- cmd->num_l2_nodes = CPU_TO_LE16(num_l2_nodes);
- return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
-}
-
/**
* ice_sched_add_elems - add nodes to HW and SW DB
* @pi: port information structure
@@ -1959,137 +1914,6 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
return status;
}
-/**
- * ice_sched_rm_agg_vsi_info - remove aggregator related VSI info entry
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- *
- * This function removes single aggregator VSI info entry from
- * aggregator list.
- */
-static void ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle)
-{
- struct ice_sched_agg_info *agg_info;
- struct ice_sched_agg_info *atmp;
-
- LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &pi->hw->agg_list,
- ice_sched_agg_info,
- list_entry) {
- struct ice_sched_agg_vsi_info *agg_vsi_info;
- struct ice_sched_agg_vsi_info *vtmp;
-
- LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
- &agg_info->agg_vsi_list,
- ice_sched_agg_vsi_info, list_entry)
- if (agg_vsi_info->vsi_handle == vsi_handle) {
- LIST_DEL(&agg_vsi_info->list_entry);
- ice_free(pi->hw, agg_vsi_info);
- return;
- }
- }
-}
-
-/**
- * ice_sched_is_leaf_node_present - check for a leaf node in the sub-tree
- * @node: pointer to the sub-tree node
- *
- * This function checks for a leaf node presence in a given sub-tree node.
- */
-static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node)
-{
- u8 i;
-
- for (i = 0; i < node->num_children; i++)
- if (ice_sched_is_leaf_node_present(node->children[i]))
- return true;
- /* check for a leaf node */
- return (node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF);
-}
-
-/**
- * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @owner: LAN or RDMA
- *
- * This function removes the VSI and its LAN or RDMA children nodes from the
- * scheduler tree.
- */
-static enum ice_status
-ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
-{
- enum ice_status status = ICE_ERR_PARAM;
- struct ice_vsi_ctx *vsi_ctx;
- u8 i;
-
- ice_debug(pi->hw, ICE_DBG_SCHED, "removing VSI %d\n", vsi_handle);
- if (!ice_is_vsi_valid(pi->hw, vsi_handle))
- return status;
- ice_acquire_lock(&pi->sched_lock);
- vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
- if (!vsi_ctx)
- goto exit_sched_rm_vsi_cfg;
-
- ice_for_each_traffic_class(i) {
- struct ice_sched_node *vsi_node, *tc_node;
- u8 j = 0;
-
- tc_node = ice_sched_get_tc_node(pi, i);
- if (!tc_node)
- continue;
-
- vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
- if (!vsi_node)
- continue;
-
- if (ice_sched_is_leaf_node_present(vsi_node)) {
- ice_debug(pi->hw, ICE_DBG_SCHED, "VSI has leaf nodes in TC %d\n", i);
- status = ICE_ERR_IN_USE;
- goto exit_sched_rm_vsi_cfg;
- }
- while (j < vsi_node->num_children) {
- if (vsi_node->children[j]->owner == owner) {
- ice_free_sched_node(pi, vsi_node->children[j]);
-
- /* reset the counter again since the num
- * children will be updated after node removal
- */
- j = 0;
- } else {
- j++;
- }
- }
- /* remove the VSI if it has no children */
- if (!vsi_node->num_children) {
- ice_free_sched_node(pi, vsi_node);
- vsi_ctx->sched.vsi_node[i] = NULL;
-
- /* clean up aggregator related VSI info if any */
- ice_sched_rm_agg_vsi_info(pi, vsi_handle);
- }
- if (owner == ICE_SCHED_NODE_OWNER_LAN)
- vsi_ctx->sched.max_lanq[i] = 0;
- }
- status = ICE_SUCCESS;
-
-exit_sched_rm_vsi_cfg:
- ice_release_lock(&pi->sched_lock);
- return status;
-}
-
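
The child-removal loop above restarts at index 0 after every hit because
ice_free_sched_node() compacts the children array underneath it. The same
idiom in isolation (illustrative only, plain C):

#include <string.h>

/* Remove every entry equal to key from an array that is compacted on
 * each removal, conservatively rescanning from the start after a hit.
 */
static void
remove_matching(unsigned short *arr, unsigned short *cnt, unsigned short key)
{
	unsigned short j = 0;

	while (j < *cnt) {
		if (arr[j] == key) {
			memmove(&arr[j], &arr[j + 1],
				(size_t)(*cnt - j - 1) * sizeof(*arr));
			(*cnt)--;
			j = 0; /* indices shifted; rescan */
		} else {
			j++;
		}
	}
}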
-/**
- * ice_rm_vsi_lan_cfg - remove VSI and its LAN children nodes
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- *
- * This function clears the VSI and its LAN children nodes from scheduler tree
- * for all TCs.
- */
-enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
-{
- return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
-}
-
/**
* ice_sched_is_tree_balanced - Check tree nodes are identical or not
* @hw: pointer to the HW struct
@@ -2114,31 +1938,6 @@ bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node)
return ice_sched_check_node(hw, node);
}
-/**
- * ice_aq_query_node_to_root - retrieve the tree topology for a given node TEID
- * @hw: pointer to the HW struct
- * @node_teid: node TEID
- * @buf: pointer to buffer
- * @buf_size: buffer size in bytes
- * @cd: pointer to command details structure or NULL
- *
- * This function retrieves the tree topology from the firmware for a given
- * node TEID to the root node.
- */
-enum ice_status
-ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
- struct ice_aqc_txsched_elem_data *buf, u16 buf_size,
- struct ice_sq_cd *cd)
-{
- struct ice_aqc_query_node_to_root *cmd;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.query_node_to_root;
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_node_to_root);
- cmd->teid = CPU_TO_LE32(node_teid);
- return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
-}
-
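
The AQ wrappers removed in this file all share one shape: fill the default
direct-command descriptor with the opcode, set any command-specific params,
then send. A minimal sketch of that shape for a read-style command
(hypothetical helper; write-style commands additionally set ICE_AQ_FLAG_RD,
as ice_aq_cfg_l2_node_cgd did):

static enum ice_status
send_simple_aq(struct ice_hw *hw, u16 opcode, void *buf, u16 buf_size,
	       struct ice_sq_cd *cd)
{
	struct ice_aq_desc desc;

	/* descriptor carries the opcode; buf receives the response */
	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
}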
/**
* ice_get_agg_info - get the aggregator ID
* @hw: pointer to the hardware structure
@@ -2526,29 +2325,6 @@ ice_rm_agg_cfg_tc(struct ice_port_info *pi, struct ice_sched_agg_info *agg_info,
return status;
}
-/**
- * ice_save_agg_tc_bitmap - save aggregator TC bitmap
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc_bitmap: 8 bits TC bitmap
- *
- * Save aggregator TC bitmap. This function needs to be called with scheduler
- * lock held.
- */
-static enum ice_status
-ice_save_agg_tc_bitmap(struct ice_port_info *pi, u32 agg_id,
- ice_bitmap_t *tc_bitmap)
-{
- struct ice_sched_agg_info *agg_info;
-
- agg_info = ice_get_agg_info(pi->hw, agg_id);
- if (!agg_info)
- return ICE_ERR_PARAM;
- ice_cp_bitmap(agg_info->replay_tc_bitmap, tc_bitmap,
- ICE_MAX_TRAFFIC_CLASS);
- return ICE_SUCCESS;
-}
-
/**
* ice_sched_add_agg_cfg - create an aggregator node
* @pi: port information structure
@@ -2701,32 +2477,6 @@ ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id,
return status;
}
-/**
- * ice_cfg_agg - config aggregator node
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @agg_type: aggregator type queue, VSI, or aggregator group
- * @tc_bitmap: bits TC bitmap
- *
- * This function configures aggregator node(s).
- */
-enum ice_status
-ice_cfg_agg(struct ice_port_info *pi, u32 agg_id, enum ice_agg_type agg_type,
- u8 tc_bitmap)
-{
- ice_bitmap_t bitmap = tc_bitmap;
- enum ice_status status;
-
- ice_acquire_lock(&pi->sched_lock);
- status = ice_sched_cfg_agg(pi, agg_id, agg_type,
- (ice_bitmap_t *)&bitmap);
- if (!status)
- status = ice_save_agg_tc_bitmap(pi, agg_id,
- (ice_bitmap_t *)&bitmap);
- ice_release_lock(&pi->sched_lock);
- return status;
-}
-
/**
* ice_get_agg_vsi_info - get the aggregator ID
* @agg_info: aggregator info
@@ -2773,35 +2523,6 @@ ice_get_vsi_agg_info(struct ice_hw *hw, u16 vsi_handle)
return NULL;
}
-/**
- * ice_save_agg_vsi_tc_bitmap - save aggregator VSI TC bitmap
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @vsi_handle: software VSI handle
- * @tc_bitmap: TC bitmap of enabled TC(s)
- *
- * Save VSI to aggregator TC bitmap. This function needs to be called with the
- * scheduler lock held.
- */
-static enum ice_status
-ice_save_agg_vsi_tc_bitmap(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
- ice_bitmap_t *tc_bitmap)
-{
- struct ice_sched_agg_vsi_info *agg_vsi_info;
- struct ice_sched_agg_info *agg_info;
-
- agg_info = ice_get_agg_info(pi->hw, agg_id);
- if (!agg_info)
- return ICE_ERR_PARAM;
- /* check if entry already exist */
- agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
- if (!agg_vsi_info)
- return ICE_ERR_PARAM;
- ice_cp_bitmap(agg_vsi_info->replay_tc_bitmap, tc_bitmap,
- ICE_MAX_TRAFFIC_CLASS);
- return ICE_SUCCESS;
-}
-
/**
* ice_sched_assoc_vsi_to_agg - associate/move VSI to new/default aggregator
* @pi: port information structure
@@ -2959,124 +2680,75 @@ ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
}
/**
- * ice_move_vsi_to_agg - moves VSI to new or default aggregator
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @vsi_handle: software VSI handle
- * @tc_bitmap: TC bitmap of enabled TC(s)
- *
- * Move or associate VSI to a new or default aggregator node.
- */
-enum ice_status
-ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
- u8 tc_bitmap)
-{
- ice_bitmap_t bitmap = tc_bitmap;
- enum ice_status status;
-
- ice_acquire_lock(&pi->sched_lock);
- status = ice_sched_assoc_vsi_to_agg(pi, agg_id, vsi_handle,
- (ice_bitmap_t *)&bitmap);
- if (!status)
- status = ice_save_agg_vsi_tc_bitmap(pi, agg_id, vsi_handle,
- (ice_bitmap_t *)&bitmap);
- ice_release_lock(&pi->sched_lock);
- return status;
-}
-
-/**
- * ice_rm_agg_cfg - remove aggregator configuration
- * @pi: port information structure
- * @agg_id: aggregator ID
+ * ice_set_clear_cir_bw - set or clear CIR BW
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
*
- * This function removes the aggregator's reference to the VSI and deletes the
- * aggregator ID info, removing the aggregator configuration completely.
+ * Save or clear CIR bandwidth (BW) in the passed param bw_t_info.
*/
-enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id)
+static void ice_set_clear_cir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
{
- struct ice_sched_agg_info *agg_info;
- enum ice_status status = ICE_SUCCESS;
- u8 tc;
-
- ice_acquire_lock(&pi->sched_lock);
- agg_info = ice_get_agg_info(pi->hw, agg_id);
- if (!agg_info) {
- status = ICE_ERR_DOES_NOT_EXIST;
- goto exit_ice_rm_agg_cfg;
- }
-
- ice_for_each_traffic_class(tc) {
- status = ice_rm_agg_cfg_tc(pi, agg_info, tc, true);
- if (status)
- goto exit_ice_rm_agg_cfg;
- }
-
- if (ice_is_any_bit_set(agg_info->tc_bitmap, ICE_MAX_TRAFFIC_CLASS)) {
- status = ICE_ERR_IN_USE;
- goto exit_ice_rm_agg_cfg;
+ if (bw == ICE_SCHED_DFLT_BW) {
+ ice_clear_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+ bw_t_info->cir_bw.bw = 0;
+ } else {
+ /* Save type of BW information */
+ ice_set_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+ bw_t_info->cir_bw.bw = bw;
}
-
- /* Safe to delete entry now */
- LIST_DEL(&agg_info->list_entry);
- ice_free(pi->hw, agg_info);
-
- /* Remove unused RL profile IDs from HW and SW DB */
- ice_sched_rm_unused_rl_prof(pi->hw);
-
-exit_ice_rm_agg_cfg:
- ice_release_lock(&pi->sched_lock);
- return status;
}
/**
- * ice_set_clear_cir_bw_alloc - set or clear CIR BW alloc information
+ * ice_set_clear_eir_bw - set or clear EIR BW
* @bw_t_info: bandwidth type information structure
- * @bw_alloc: Bandwidth allocation information
+ * @bw: bandwidth in Kbps - Kilo bits per sec
*
- * Save or clear CIR BW alloc information (bw_alloc) in the passed param
- * bw_t_info.
+ * Save or clear EIR bandwidth (BW) in the passed param bw_t_info.
*/
-static void
-ice_set_clear_cir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+static void ice_set_clear_eir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
{
- bw_t_info->cir_bw.bw_alloc = bw_alloc;
- if (bw_t_info->cir_bw.bw_alloc)
- ice_set_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
- else
- ice_clear_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+ if (bw == ICE_SCHED_DFLT_BW) {
+ ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+ bw_t_info->eir_bw.bw = 0;
+ } else {
+ /* save EIR BW information */
+ ice_set_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+ bw_t_info->eir_bw.bw = bw;
+ }
}
/**
- * ice_set_clear_eir_bw_alloc - set or clear EIR BW alloc information
+ * ice_set_clear_shared_bw - set or clear shared BW
* @bw_t_info: bandwidth type information structure
- * @bw_alloc: Bandwidth allocation information
+ * @bw: bandwidth in Kbps - Kilo bits per sec
*
- * Save or clear EIR BW alloc information (bw_alloc) in the passed param
- * bw_t_info.
+ * Save or clear shared bandwidth (BW) in the passed param bw_t_info.
*/
-static void
-ice_set_clear_eir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+static void ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
{
- bw_t_info->eir_bw.bw_alloc = bw_alloc;
- if (bw_t_info->eir_bw.bw_alloc)
- ice_set_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
- else
- ice_clear_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+ if (bw == ICE_SCHED_DFLT_BW) {
+ ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+ bw_t_info->shared_bw = 0;
+ } else {
+ /* save shared BW information */
+ ice_set_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+ bw_t_info->shared_bw = bw;
+ }
}
/**
- * ice_sched_save_vsi_bw_alloc - save VSI node's BW alloc information
+ * ice_sched_save_vsi_bw - save VSI node's BW information
* @pi: port information structure
* @vsi_handle: sw VSI handle
* @tc: traffic class
- * @rl_type: rate limit type min or max
- * @bw_alloc: Bandwidth allocation information
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
*
- * Save BW alloc information of VSI type node for post replay use.
+ * Save BW information of VSI type node for post replay use.
*/
static enum ice_status
-ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- enum ice_rl_type rl_type, u16 bw_alloc)
+ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ enum ice_rl_type rl_type, u32 bw)
{
struct ice_vsi_ctx *vsi_ctx;
@@ -3087,100 +2759,7 @@ ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
return ICE_ERR_PARAM;
switch (rl_type) {
case ICE_MIN_BW:
- ice_set_clear_cir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
- bw_alloc);
- break;
- case ICE_MAX_BW:
- ice_set_clear_eir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
- bw_alloc);
- break;
- default:
- return ICE_ERR_PARAM;
- }
- return ICE_SUCCESS;
-}
-
-/**
- * ice_set_clear_cir_bw - set or clear CIR BW
- * @bw_t_info: bandwidth type information structure
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save or clear CIR bandwidth (BW) in the passed param bw_t_info.
- */
-static void ice_set_clear_cir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
-{
- if (bw == ICE_SCHED_DFLT_BW) {
- ice_clear_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
- bw_t_info->cir_bw.bw = 0;
- } else {
- /* Save type of BW information */
- ice_set_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
- bw_t_info->cir_bw.bw = bw;
- }
-}
-
-/**
- * ice_set_clear_eir_bw - set or clear EIR BW
- * @bw_t_info: bandwidth type information structure
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save or clear EIR bandwidth (BW) in the passed param bw_t_info.
- */
-static void ice_set_clear_eir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
-{
- if (bw == ICE_SCHED_DFLT_BW) {
- ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
- bw_t_info->eir_bw.bw = 0;
- } else {
- /* save EIR BW information */
- ice_set_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
- bw_t_info->eir_bw.bw = bw;
- }
-}
-
-/**
- * ice_set_clear_shared_bw - set or clear shared BW
- * @bw_t_info: bandwidth type information structure
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save or clear shared bandwidth (BW) in the passed param bw_t_info.
- */
-static void ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
-{
- if (bw == ICE_SCHED_DFLT_BW) {
- ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
- bw_t_info->shared_bw = 0;
- } else {
- /* save shared BW information */
- ice_set_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
- bw_t_info->shared_bw = bw;
- }
-}
-
-/**
- * ice_sched_save_vsi_bw - save VSI node's BW information
- * @pi: port information structure
- * @vsi_handle: sw VSI handle
- * @tc: traffic class
- * @rl_type: rate limit type min, max, or shared
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save BW information of VSI type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- enum ice_rl_type rl_type, u32 bw)
-{
- struct ice_vsi_ctx *vsi_ctx;
-
- if (!ice_is_vsi_valid(pi->hw, vsi_handle))
- return ICE_ERR_PARAM;
- vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
- if (!vsi_ctx)
- return ICE_ERR_PARAM;
- switch (rl_type) {
- case ICE_MIN_BW:
- ice_set_clear_cir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+ ice_set_clear_cir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
break;
case ICE_MAX_BW:
ice_set_clear_eir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
@@ -3194,82 +2773,6 @@ ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
return ICE_SUCCESS;
}
-/**
- * ice_set_clear_prio - set or clear priority information
- * @bw_t_info: bandwidth type information structure
- * @prio: priority to save
- *
- * Save or clear priority (prio) in the passed param bw_t_info.
- */
-static void ice_set_clear_prio(struct ice_bw_type_info *bw_t_info, u8 prio)
-{
- bw_t_info->generic = prio;
- if (bw_t_info->generic)
- ice_set_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
- else
- ice_clear_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
-}
-
-/**
- * ice_sched_save_vsi_prio - save VSI node's priority information
- * @pi: port information structure
- * @vsi_handle: Software VSI handle
- * @tc: traffic class
- * @prio: priority to save
- *
- * Save priority information of VSI type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_vsi_prio(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- u8 prio)
-{
- struct ice_vsi_ctx *vsi_ctx;
-
- if (!ice_is_vsi_valid(pi->hw, vsi_handle))
- return ICE_ERR_PARAM;
- vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
- if (!vsi_ctx)
- return ICE_ERR_PARAM;
- if (tc >= ICE_MAX_TRAFFIC_CLASS)
- return ICE_ERR_PARAM;
- ice_set_clear_prio(&vsi_ctx->sched.bw_t_info[tc], prio);
- return ICE_SUCCESS;
-}
-
-/**
- * ice_sched_save_agg_bw_alloc - save aggregator node's BW alloc information
- * @pi: port information structure
- * @agg_id: node aggregator ID
- * @tc: traffic class
- * @rl_type: rate limit type min or max
- * @bw_alloc: bandwidth alloc information
- *
- * Save BW alloc information of AGG type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 tc,
- enum ice_rl_type rl_type, u16 bw_alloc)
-{
- struct ice_sched_agg_info *agg_info;
-
- agg_info = ice_get_agg_info(pi->hw, agg_id);
- if (!agg_info)
- return ICE_ERR_PARAM;
- if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
- return ICE_ERR_PARAM;
- switch (rl_type) {
- case ICE_MIN_BW:
- ice_set_clear_cir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
- break;
- case ICE_MAX_BW:
- ice_set_clear_eir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
- break;
- default:
- return ICE_ERR_PARAM;
- }
- return ICE_SUCCESS;
-}
-
/**
* ice_sched_save_agg_bw - save aggregator node's BW information
* @pi: port information structure
@@ -3284,490 +2787,27 @@ static enum ice_status
ice_sched_save_agg_bw(struct ice_port_info *pi, u32 agg_id, u8 tc,
enum ice_rl_type rl_type, u32 bw)
{
- struct ice_sched_agg_info *agg_info;
-
- agg_info = ice_get_agg_info(pi->hw, agg_id);
- if (!agg_info)
- return ICE_ERR_PARAM;
- if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
- return ICE_ERR_PARAM;
- switch (rl_type) {
- case ICE_MIN_BW:
- ice_set_clear_cir_bw(&agg_info->bw_t_info[tc], bw);
- break;
- case ICE_MAX_BW:
- ice_set_clear_eir_bw(&agg_info->bw_t_info[tc], bw);
- break;
- case ICE_SHARED_BW:
- ice_set_clear_shared_bw(&agg_info->bw_t_info[tc], bw);
- break;
- default:
- return ICE_ERR_PARAM;
- }
- return ICE_SUCCESS;
-}
-
-/**
- * ice_cfg_vsi_bw_lmt_per_tc - configure VSI BW limit per TC
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @tc: traffic class
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function configures BW limit of VSI scheduling node based on TC
- * information.
- */
-enum ice_status
-ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- enum ice_rl_type rl_type, u32 bw)
-{
- enum ice_status status;
-
- status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
- ICE_AGG_TYPE_VSI,
- tc, rl_type, bw);
- if (!status) {
- ice_acquire_lock(&pi->sched_lock);
- status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
- ice_release_lock(&pi->sched_lock);
- }
- return status;
-}
-
-/**
- * ice_cfg_vsi_bw_dflt_lmt_per_tc - configure default VSI BW limit per TC
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @tc: traffic class
- * @rl_type: min or max
- *
- * This function configures default BW limit of VSI scheduling node based on TC
- * information.
- */
-enum ice_status
-ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- enum ice_rl_type rl_type)
-{
- enum ice_status status;
-
- status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
- ICE_AGG_TYPE_VSI,
- tc, rl_type,
- ICE_SCHED_DFLT_BW);
- if (!status) {
- ice_acquire_lock(&pi->sched_lock);
- status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type,
- ICE_SCHED_DFLT_BW);
- ice_release_lock(&pi->sched_lock);
- }
- return status;
-}
-
-/**
- * ice_cfg_agg_bw_lmt_per_tc - configure aggregator BW limit per TC
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc: traffic class
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function applies BW limit to aggregator scheduling node based on TC
- * information.
- */
-enum ice_status
-ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
- enum ice_rl_type rl_type, u32 bw)
-{
- enum ice_status status;
-
- status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
- tc, rl_type, bw);
- if (!status) {
- ice_acquire_lock(&pi->sched_lock);
- status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
- ice_release_lock(&pi->sched_lock);
- }
- return status;
-}
-
-/**
- * ice_cfg_agg_bw_dflt_lmt_per_tc - configure aggregator BW default limit per TC
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc: traffic class
- * @rl_type: min or max
- *
- * This function applies default BW limit to aggregator scheduling node based
- * on TC information.
- */
-enum ice_status
-ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
- enum ice_rl_type rl_type)
-{
- enum ice_status status;
-
- status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
- tc, rl_type,
- ICE_SCHED_DFLT_BW);
- if (!status) {
- ice_acquire_lock(&pi->sched_lock);
- status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type,
- ICE_SCHED_DFLT_BW);
- ice_release_lock(&pi->sched_lock);
- }
- return status;
-}
-
-/**
- * ice_cfg_vsi_bw_shared_lmt - configure VSI BW shared limit
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @min_bw: minimum bandwidth in Kbps
- * @max_bw: maximum bandwidth in Kbps
- * @shared_bw: shared bandwidth in Kbps
- *
- * Configure shared rate limiter(SRL) of all VSI type nodes across all traffic
- * classes for VSI matching handle.
- */
-enum ice_status
-ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw,
- u32 max_bw, u32 shared_bw)
-{
- return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, min_bw, max_bw,
- shared_bw);
-}
-
-/**
- * ice_cfg_vsi_bw_no_shared_lmt - configure VSI BW for no shared limiter
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- *
- * This function removes the shared rate limiter(SRL) of all VSI type nodes
- * across all traffic classes for VSI matching handle.
- */
-enum ice_status
-ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle)
-{
- return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle,
- ICE_SCHED_DFLT_BW,
- ICE_SCHED_DFLT_BW,
- ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_cfg_agg_bw_shared_lmt - configure aggregator BW shared limit
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @min_bw: minimum bandwidth in Kbps
- * @max_bw: maximum bandwidth in Kbps
- * @shared_bw: shared bandwidth in Kbps
- *
- * This function configures the shared rate limiter(SRL) of all aggregator type
- * nodes across all traffic classes for aggregator matching agg_id.
- */
-enum ice_status
-ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw,
- u32 max_bw, u32 shared_bw)
-{
- return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, min_bw, max_bw,
- shared_bw);
-}
-
-/**
- * ice_cfg_agg_bw_no_shared_lmt - configure aggregator BW for no shared limiter
- * @pi: port information structure
- * @agg_id: aggregator ID
- *
- * This function removes the shared rate limiter(SRL) of all aggregator type
- * nodes across all traffic classes for aggregator matching agg_id.
- */
-enum ice_status
-ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id)
-{
- return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, ICE_SCHED_DFLT_BW,
- ICE_SCHED_DFLT_BW,
- ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_cfg_agg_bw_shared_lmt_per_tc - configure aggregator BW shared limit per tc
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc: traffic class
- * @min_bw: minimum bandwidth in Kbps
- * @max_bw: maximum bandwidth in Kbps
- * @shared_bw: shared bandwidth in Kbps
- *
- * This function configures the shared rate limiter(SRL) of all aggregator type
- * nodes across all traffic classes for aggregator matching agg_id.
- */
-enum ice_status
-ice_cfg_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
- u32 min_bw, u32 max_bw, u32 shared_bw)
-{
- return ice_sched_set_agg_bw_shared_lmt_per_tc(pi, agg_id, tc, min_bw,
- max_bw, shared_bw);
-}
-
-/**
- * ice_cfg_agg_bw_no_shared_lmt_per_tc - remove aggregator BW shared limit per tc
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @tc: traffic class
- *
- * This function removes the shared rate limiter(SRL) of all aggregator type
- * nodes for the given traffic class for aggregator matching agg_id.
- */
-enum ice_status
-ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc)
-{
- return ice_sched_set_agg_bw_shared_lmt_per_tc(pi, agg_id, tc,
- ICE_SCHED_DFLT_BW,
- ICE_SCHED_DFLT_BW,
- ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_cfg_vsi_q_priority - configure VSI queue priority of node
- * @pi: port information structure
- * @num_qs: number of VSI queues
- * @q_ids: queue IDs array
- * @q_prio: queue priority array
- *
- * This function configures the queue node priority (Sibling Priority) of the
- * passed in VSI's queue(s) for a given traffic class (TC).
- */
-enum ice_status
-ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
- u8 *q_prio)
-{
- enum ice_status status = ICE_ERR_PARAM;
- u16 i;
-
- ice_acquire_lock(&pi->sched_lock);
-
- for (i = 0; i < num_qs; i++) {
- struct ice_sched_node *node;
-
- node = ice_sched_find_node_by_teid(pi->root, q_ids[i]);
- if (!node || node->info.data.elem_type !=
- ICE_AQC_ELEM_TYPE_LEAF) {
- status = ICE_ERR_PARAM;
- break;
- }
- /* Configure Priority */
- status = ice_sched_cfg_sibl_node_prio(pi, node, q_prio[i]);
- if (status)
- break;
- }
-
- ice_release_lock(&pi->sched_lock);
- return status;
-}
-
-/**
- * ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
- * @pi: port information structure
- * @agg_id: Aggregator ID
- * @num_vsis: number of VSI(s)
- * @vsi_handle_arr: array of software VSI handles
- * @node_prio: pointer to node priority
- * @tc: traffic class
- *
- * This function configures the node priority (Sibling Priority) of the
- * passed in VSI's for a given traffic class (TC) of an Aggregator ID.
- */
-enum ice_status
-ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
- u16 num_vsis, u16 *vsi_handle_arr,
- u8 *node_prio, u8 tc)
-{
- struct ice_sched_agg_vsi_info *agg_vsi_info;
- struct ice_sched_node *tc_node, *agg_node;
- enum ice_status status = ICE_ERR_PARAM;
- struct ice_sched_agg_info *agg_info;
- bool agg_id_present = false;
- struct ice_hw *hw = pi->hw;
- u16 i;
-
- ice_acquire_lock(&pi->sched_lock);
- LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
- list_entry)
- if (agg_info->agg_id == agg_id) {
- agg_id_present = true;
- break;
- }
- if (!agg_id_present)
- goto exit_agg_priority_per_tc;
-
- tc_node = ice_sched_get_tc_node(pi, tc);
- if (!tc_node)
- goto exit_agg_priority_per_tc;
-
- agg_node = ice_sched_get_agg_node(pi, tc_node, agg_id);
- if (!agg_node)
- goto exit_agg_priority_per_tc;
-
- if (num_vsis > hw->max_children[agg_node->tx_sched_layer])
- goto exit_agg_priority_per_tc;
-
- for (i = 0; i < num_vsis; i++) {
- struct ice_sched_node *vsi_node;
- bool vsi_handle_valid = false;
- u16 vsi_handle;
-
- status = ICE_ERR_PARAM;
- vsi_handle = vsi_handle_arr[i];
- if (!ice_is_vsi_valid(hw, vsi_handle))
- goto exit_agg_priority_per_tc;
- /* Verify child nodes before applying settings */
- LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
- ice_sched_agg_vsi_info, list_entry)
- if (agg_vsi_info->vsi_handle == vsi_handle) {
- /* cppcheck-suppress unreadVariable */
- vsi_handle_valid = true;
- break;
- }
-
- if (!vsi_handle_valid)
- goto exit_agg_priority_per_tc;
-
- vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
- if (!vsi_node)
- goto exit_agg_priority_per_tc;
-
- if (ice_sched_find_node_in_subtree(hw, agg_node, vsi_node)) {
- /* Configure Priority */
- status = ice_sched_cfg_sibl_node_prio(pi, vsi_node,
- node_prio[i]);
- if (status)
- break;
- status = ice_sched_save_vsi_prio(pi, vsi_handle, tc,
- node_prio[i]);
- if (status)
- break;
- }
- }
-
-exit_agg_priority_per_tc:
- ice_release_lock(&pi->sched_lock);
- return status;
-}
-
-/**
- * ice_cfg_vsi_bw_alloc - config VSI BW alloc per TC
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- * @ena_tcmap: enabled TC map
- * @rl_type: Rate limit type CIR/EIR
- * @bw_alloc: Array of BW alloc
- *
- * This function configures the BW allocation of the passed in VSI's
- * node(s) for enabled traffic class.
- */
-enum ice_status
-ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
- enum ice_rl_type rl_type, u8 *bw_alloc)
-{
- enum ice_status status = ICE_SUCCESS;
- u8 tc;
-
- if (!ice_is_vsi_valid(pi->hw, vsi_handle))
- return ICE_ERR_PARAM;
-
- ice_acquire_lock(&pi->sched_lock);
-
- /* Return success if no nodes are present across TC */
- ice_for_each_traffic_class(tc) {
- struct ice_sched_node *tc_node, *vsi_node;
-
- if (!ice_is_tc_ena(ena_tcmap, tc))
- continue;
-
- tc_node = ice_sched_get_tc_node(pi, tc);
- if (!tc_node)
- continue;
-
- vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
- if (!vsi_node)
- continue;
-
- status = ice_sched_cfg_node_bw_alloc(pi->hw, vsi_node, rl_type,
- bw_alloc[tc]);
- if (status)
- break;
- status = ice_sched_save_vsi_bw_alloc(pi, vsi_handle, tc,
- rl_type, bw_alloc[tc]);
- if (status)
- break;
- }
-
- ice_release_lock(&pi->sched_lock);
- return status;
-}
-
-/**
- * ice_cfg_agg_bw_alloc - config aggregator BW alloc
- * @pi: port information structure
- * @agg_id: aggregator ID
- * @ena_tcmap: enabled TC map
- * @rl_type: rate limit type CIR/EIR
- * @bw_alloc: array of BW alloc
- *
- * This function configures the BW allocation of passed in aggregator for
- * enabled traffic class(s).
- */
-enum ice_status
-ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
- enum ice_rl_type rl_type, u8 *bw_alloc)
-{
- struct ice_sched_agg_info *agg_info;
- bool agg_id_present = false;
- enum ice_status status = ICE_SUCCESS;
- struct ice_hw *hw = pi->hw;
- u8 tc;
-
- ice_acquire_lock(&pi->sched_lock);
- LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
- list_entry)
- if (agg_info->agg_id == agg_id) {
- agg_id_present = true;
- break;
- }
- if (!agg_id_present) {
- status = ICE_ERR_PARAM;
- goto exit_cfg_agg_bw_alloc;
- }
-
- /* Return success if no nodes are present across TC */
- ice_for_each_traffic_class(tc) {
- struct ice_sched_node *tc_node, *agg_node;
-
- if (!ice_is_tc_ena(ena_tcmap, tc))
- continue;
-
- tc_node = ice_sched_get_tc_node(pi, tc);
- if (!tc_node)
- continue;
-
- agg_node = ice_sched_get_agg_node(pi, tc_node, agg_id);
- if (!agg_node)
- continue;
+ struct ice_sched_agg_info *agg_info;
- status = ice_sched_cfg_node_bw_alloc(hw, agg_node, rl_type,
- bw_alloc[tc]);
- if (status)
- break;
- status = ice_sched_save_agg_bw_alloc(pi, agg_id, tc, rl_type,
- bw_alloc[tc]);
- if (status)
- break;
+ agg_info = ice_get_agg_info(pi->hw, agg_id);
+ if (!agg_info)
+ return ICE_ERR_PARAM;
+ if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+ return ICE_ERR_PARAM;
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw(&agg_info->bw_t_info[tc], bw);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw(&agg_info->bw_t_info[tc], bw);
+ break;
+ case ICE_SHARED_BW:
+ ice_set_clear_shared_bw(&agg_info->bw_t_info[tc], bw);
+ break;
+ default:
+ return ICE_ERR_PARAM;
}
-
-exit_cfg_agg_bw_alloc:
- ice_release_lock(&pi->sched_lock);
- return status;
+ return ICE_SUCCESS;
}
/**
@@ -4328,362 +3368,6 @@ ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer)
return ICE_ERR_CFG;
}
-/**
- * ice_sched_save_q_bw - save queue node's BW information
- * @q_ctx: queue context structure
- * @rl_type: rate limit type min, max, or shared
- * @bw: bandwidth in Kbps - Kilo bits per sec
- *
- * Save BW information of queue type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_q_bw(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, u32 bw)
-{
- switch (rl_type) {
- case ICE_MIN_BW:
- ice_set_clear_cir_bw(&q_ctx->bw_t_info, bw);
- break;
- case ICE_MAX_BW:
- ice_set_clear_eir_bw(&q_ctx->bw_t_info, bw);
- break;
- case ICE_SHARED_BW:
- ice_set_clear_shared_bw(&q_ctx->bw_t_info, bw);
- break;
- default:
- return ICE_ERR_PARAM;
- }
- return ICE_SUCCESS;
-}
-
-/**
- * ice_sched_set_q_bw_lmt - sets queue BW limit
- * @pi: port information structure
- * @vsi_handle: sw VSI handle
- * @tc: traffic class
- * @q_handle: software queue handle
- * @rl_type: min, max, or shared
- * @bw: bandwidth in Kbps
- *
- * This function sets BW limit of queue scheduling node.
- */
-static enum ice_status
-ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- u16 q_handle, enum ice_rl_type rl_type, u32 bw)
-{
- enum ice_status status = ICE_ERR_PARAM;
- struct ice_sched_node *node;
- struct ice_q_ctx *q_ctx;
-
- if (!ice_is_vsi_valid(pi->hw, vsi_handle))
- return ICE_ERR_PARAM;
- ice_acquire_lock(&pi->sched_lock);
- q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
- if (!q_ctx)
- goto exit_q_bw_lmt;
- node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
- if (!node) {
- ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
- goto exit_q_bw_lmt;
- }
-
- /* Return error if it is not a leaf node */
- if (node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF)
- goto exit_q_bw_lmt;
-
- /* SRL bandwidth layer selection */
- if (rl_type == ICE_SHARED_BW) {
- u8 sel_layer; /* selected layer */
-
- sel_layer = ice_sched_get_rl_prof_layer(pi, rl_type,
- node->tx_sched_layer);
- if (sel_layer >= pi->hw->num_tx_sched_layers) {
- status = ICE_ERR_PARAM;
- goto exit_q_bw_lmt;
- }
- status = ice_sched_validate_srl_node(node, sel_layer);
- if (status)
- goto exit_q_bw_lmt;
- }
-
- if (bw == ICE_SCHED_DFLT_BW)
- status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
- else
- status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
-
- if (!status)
- status = ice_sched_save_q_bw(q_ctx, rl_type, bw);
-
-exit_q_bw_lmt:
- ice_release_lock(&pi->sched_lock);
- return status;
-}
-
-/**
- * ice_cfg_q_bw_lmt - configure queue BW limit
- * @pi: port information structure
- * @vsi_handle: sw VSI handle
- * @tc: traffic class
- * @q_handle: software queue handle
- * @rl_type: min, max, or shared
- * @bw: bandwidth in Kbps
- *
- * This function configures BW limit of queue scheduling node.
- */
-enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- u16 q_handle, enum ice_rl_type rl_type, u32 bw)
-{
- return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
- bw);
-}
-
-/**
- * ice_cfg_q_bw_dflt_lmt - configure queue BW default limit
- * @pi: port information structure
- * @vsi_handle: sw VSI handle
- * @tc: traffic class
- * @q_handle: software queue handle
- * @rl_type: min, max, or shared
- *
- * This function configures BW default limit of queue scheduling node.
- */
-enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- u16 q_handle, enum ice_rl_type rl_type)
-{
- return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
- ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_sched_save_tc_node_bw - save TC node BW limit
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function saves the modified values of bandwidth settings for later
- * replay purpose (restore) after reset.
- */
-static enum ice_status
-ice_sched_save_tc_node_bw(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type, u32 bw)
-{
- if (tc >= ICE_MAX_TRAFFIC_CLASS)
- return ICE_ERR_PARAM;
- switch (rl_type) {
- case ICE_MIN_BW:
- ice_set_clear_cir_bw(&pi->tc_node_bw_t_info[tc], bw);
- break;
- case ICE_MAX_BW:
- ice_set_clear_eir_bw(&pi->tc_node_bw_t_info[tc], bw);
- break;
- case ICE_SHARED_BW:
- ice_set_clear_shared_bw(&pi->tc_node_bw_t_info[tc], bw);
- break;
- default:
- return ICE_ERR_PARAM;
- }
- return ICE_SUCCESS;
-}
-
-/**
- * ice_sched_set_tc_node_bw_lmt - sets TC node BW limit
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function configures bandwidth limit of TC node.
- */
-static enum ice_status
-ice_sched_set_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type, u32 bw)
-{
- enum ice_status status = ICE_ERR_PARAM;
- struct ice_sched_node *tc_node;
-
- if (tc >= ICE_MAX_TRAFFIC_CLASS)
- return status;
- ice_acquire_lock(&pi->sched_lock);
- tc_node = ice_sched_get_tc_node(pi, tc);
- if (!tc_node)
- goto exit_set_tc_node_bw;
- if (bw == ICE_SCHED_DFLT_BW)
- status = ice_sched_set_node_bw_dflt_lmt(pi, tc_node, rl_type);
- else
- status = ice_sched_set_node_bw_lmt(pi, tc_node, rl_type, bw);
- if (!status)
- status = ice_sched_save_tc_node_bw(pi, tc, rl_type, bw);
-
-exit_set_tc_node_bw:
- ice_release_lock(&pi->sched_lock);
- return status;
-}
-
-/**
- * ice_cfg_tc_node_bw_lmt - configure TC node BW limit
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw: bandwidth in Kbps
- *
- * This function configures BW limit of TC node.
- * Note: The minimum guaranteed reservation is done via DCBX.
- */
-enum ice_status
-ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type, u32 bw)
-{
- return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, bw);
-}
-
-/**
- * ice_cfg_tc_node_bw_dflt_lmt - configure TC node BW default limit
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- *
- * This function configures BW default limit of TC node.
- */
-enum ice_status
-ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type)
-{
- return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, ICE_SCHED_DFLT_BW);
-}
-
-/**
- * ice_sched_save_tc_node_bw_alloc - save TC node's BW alloc information
- * @pi: port information structure
- * @tc: traffic class
- * @rl_type: rate limit type min or max
- * @bw_alloc: Bandwidth allocation information
- *
- * Save BW alloc information of TC type node for post replay use.
- */
-static enum ice_status
-ice_sched_save_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type, u16 bw_alloc)
-{
- if (tc >= ICE_MAX_TRAFFIC_CLASS)
- return ICE_ERR_PARAM;
- switch (rl_type) {
- case ICE_MIN_BW:
- ice_set_clear_cir_bw_alloc(&pi->tc_node_bw_t_info[tc],
- bw_alloc);
- break;
- case ICE_MAX_BW:
- ice_set_clear_eir_bw_alloc(&pi->tc_node_bw_t_info[tc],
- bw_alloc);
- break;
- default:
- return ICE_ERR_PARAM;
- }
- return ICE_SUCCESS;
-}
-
-/**
- * ice_sched_set_tc_node_bw_alloc - set TC node BW alloc
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw_alloc: bandwidth alloc
- *
- * This function configures bandwidth alloc of TC node, also saves the
- * changed settings for replay purpose, and returns success if it succeeds
- * in modifying bandwidth alloc setting.
- */
-static enum ice_status
-ice_sched_set_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type, u8 bw_alloc)
-{
- enum ice_status status = ICE_ERR_PARAM;
- struct ice_sched_node *tc_node;
-
- if (tc >= ICE_MAX_TRAFFIC_CLASS)
- return status;
- ice_acquire_lock(&pi->sched_lock);
- tc_node = ice_sched_get_tc_node(pi, tc);
- if (!tc_node)
- goto exit_set_tc_node_bw_alloc;
- status = ice_sched_cfg_node_bw_alloc(pi->hw, tc_node, rl_type,
- bw_alloc);
- if (status)
- goto exit_set_tc_node_bw_alloc;
- status = ice_sched_save_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
-
-exit_set_tc_node_bw_alloc:
- ice_release_lock(&pi->sched_lock);
- return status;
-}
-
-/**
- * ice_cfg_tc_node_bw_alloc - configure TC node BW alloc
- * @pi: port information structure
- * @tc: TC number
- * @rl_type: min or max
- * @bw_alloc: bandwidth alloc
- *
- * This function configures BW allocation of TC node.
- * Note: The minimum guaranteed reservation is done via DCBX.
- */
-enum ice_status
-ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type, u8 bw_alloc)
-{
- return ice_sched_set_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
-}
-
-/**
- * ice_sched_set_agg_bw_dflt_lmt - set aggregator node's BW limit to default
- * @pi: port information structure
- * @vsi_handle: software VSI handle
- *
- * This function retrieves the aggregator ID based on VSI ID and TC,
- * and sets node's BW limit to default. This function needs to be
- * called with the scheduler lock held.
- */
-enum ice_status
-ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle)
-{
- struct ice_vsi_ctx *vsi_ctx;
- enum ice_status status = ICE_SUCCESS;
- u8 tc;
-
- if (!ice_is_vsi_valid(pi->hw, vsi_handle))
- return ICE_ERR_PARAM;
- vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
- if (!vsi_ctx)
- return ICE_ERR_PARAM;
-
- ice_for_each_traffic_class(tc) {
- struct ice_sched_node *node;
-
- node = vsi_ctx->sched.ag_node[tc];
- if (!node)
- continue;
-
- /* Set min profile to default */
- status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MIN_BW);
- if (status)
- break;
-
- /* Set max profile to default */
- status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MAX_BW);
- if (status)
- break;
-
- /* Remove shared profile, if there is one */
- status = ice_sched_set_node_bw_dflt_lmt(pi, node,
- ICE_SHARED_BW);
- if (status)
- break;
- }
-
- return status;
-}
-
/**
* ice_sched_get_node_by_id_type - get node from ID type
* @pi: port information structure
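
The removed per-TC limit wrappers were thin shims over the retained
ice_sched_set_node_bw_lmt_per_tc(); callers can invoke it directly. A minimal
sketch equivalent to the removed ice_cfg_vsi_bw_lmt_per_tc() for the max-BW
case (hypothetical name; the replay-save step of the removed shim is
intentionally omitted, since its saver is also gone):

static enum ice_status
set_vsi_max_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u32 bw_kbps)
{
	/* VSI handles pass through the generic u32 id parameter */
	return ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
						ICE_AGG_TYPE_VSI, tc,
						ICE_MAX_BW, bw_kbps);
}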
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 8b275637a4..cd8b0c065a 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -74,14 +74,6 @@ struct ice_sched_agg_info {
/* FW AQ command calls */
enum ice_status
-ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
- struct ice_aqc_rl_profile_elem *buf, u16 buf_size,
- struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_nodes,
- struct ice_aqc_cfg_l2_node_cgd_elem *buf, u16 buf_size,
- struct ice_sq_cd *cd);
-enum ice_status
ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
struct ice_aqc_txsched_elem_data *buf, u16 buf_size,
u16 *elems_ret, struct ice_sq_cd *cd);
@@ -110,83 +102,16 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
enum ice_status
ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
u8 owner, bool enable);
-enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
struct ice_sched_node *
ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
u16 vsi_handle);
bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node);
-enum ice_status
-ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
- struct ice_aqc_txsched_elem_data *buf, u16 buf_size,
- struct ice_sq_cd *cd);
/* Tx scheduler rate limiter functions */
-enum ice_status
-ice_cfg_agg(struct ice_port_info *pi, u32 agg_id,
- enum ice_agg_type agg_type, u8 tc_bitmap);
-enum ice_status
-ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
- u8 tc_bitmap);
-enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id);
-enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- u16 q_handle, enum ice_rl_type rl_type, u32 bw);
-enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- u16 q_handle, enum ice_rl_type rl_type);
-enum ice_status
-ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type, u32 bw);
-enum ice_status
-ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type);
-enum ice_status
-ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- enum ice_rl_type rl_type, u32 bw);
-enum ice_status
-ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
- enum ice_rl_type rl_type);
-enum ice_status
-ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
- enum ice_rl_type rl_type, u32 bw);
-enum ice_status
-ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
- enum ice_rl_type rl_type);
-enum ice_status
-ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw,
- u32 max_bw, u32 shared_bw);
-enum ice_status
-ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle);
-enum ice_status
-ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw,
- u32 max_bw, u32 shared_bw);
-enum ice_status
-ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id);
-enum ice_status
-ice_cfg_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
- u32 min_bw, u32 max_bw, u32 shared_bw);
-enum ice_status
-ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
- u8 tc);
-enum ice_status
-ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
- u8 *q_prio);
-enum ice_status
-ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
- enum ice_rl_type rl_type, u8 *bw_alloc);
-enum ice_status
-ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
- u16 num_vsis, u16 *vsi_handle_arr,
- u8 *node_prio, u8 tc);
-enum ice_status
-ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
- enum ice_rl_type rl_type, u8 *bw_alloc);
bool
ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
struct ice_sched_node *node);
enum ice_status
-ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle);
-enum ice_status
ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
enum ice_agg_type agg_type, u8 tc,
enum ice_rl_type rl_type, u32 bw);
@@ -203,9 +128,6 @@ ice_sched_set_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
enum ice_status
ice_sched_cfg_sibl_node_prio(struct ice_port_info *pi,
struct ice_sched_node *node, u8 priority);
-enum ice_status
-ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
- enum ice_rl_type rl_type, u8 bw_alloc);
enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
void ice_sched_replay_agg(struct ice_hw *hw);
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index dc55d7e3ce..45ebf3c136 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -1848,219 +1848,6 @@ ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp_elem *buf,
return status;
}
-/**
- * ice_alloc_rss_global_lut - allocate a RSS global LUT
- * @hw: pointer to the HW struct
- * @shared_res: true to allocate as a shared resource and false to allocate as a dedicated resource
- * @global_lut_id: output parameter for the RSS global LUT's ID
- */
-enum ice_status ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id)
-{
- struct ice_aqc_alloc_free_res_elem *sw_buf;
- enum ice_status status;
- u16 buf_len;
-
- buf_len = ice_struct_size(sw_buf, elem, 1);
- sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
- if (!sw_buf)
- return ICE_ERR_NO_MEMORY;
-
- sw_buf->num_elems = CPU_TO_LE16(1);
- sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_GLOBAL_RSS_HASH |
- (shared_res ? ICE_AQC_RES_TYPE_FLAG_SHARED :
- ICE_AQC_RES_TYPE_FLAG_DEDICATED));
-
- status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, ice_aqc_opc_alloc_res, NULL);
- if (status) {
- ice_debug(hw, ICE_DBG_RES, "Failed to allocate %s RSS global LUT, status %d\n",
- shared_res ? "shared" : "dedicated", status);
- goto ice_alloc_global_lut_exit;
- }
-
- *global_lut_id = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp);
-
-ice_alloc_global_lut_exit:
- ice_free(hw, sw_buf);
- return status;
-}
-
-/**
- * ice_free_rss_global_lut - free a RSS global LUT
- * @hw: pointer to the HW struct
- * @global_lut_id: ID of the RSS global LUT to free
- */
-enum ice_status ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id)
-{
- struct ice_aqc_alloc_free_res_elem *sw_buf;
- u16 buf_len, num_elems = 1;
- enum ice_status status;
-
- buf_len = ice_struct_size(sw_buf, elem, num_elems);
- sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
- if (!sw_buf)
- return ICE_ERR_NO_MEMORY;
-
- sw_buf->num_elems = CPU_TO_LE16(num_elems);
- sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_GLOBAL_RSS_HASH);
- sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(global_lut_id);
-
- status = ice_aq_alloc_free_res(hw, num_elems, sw_buf, buf_len, ice_aqc_opc_free_res, NULL);
- if (status)
- ice_debug(hw, ICE_DBG_RES, "Failed to free RSS global LUT %d, status %d\n",
- global_lut_id, status);
-
- ice_free(hw, sw_buf);
- return status;
-}
-
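
The pair above was meant to be used symmetrically. A minimal sketch of the intended alloc/use/free cycle (hypothetical -- there is no in-tree caller, which is why the pair can go), assuming the ice base headers:

    static enum ice_status example_rss_lut_cycle(struct ice_hw *hw)
    {
            enum ice_status status;
            u16 lut_id;

            /* ask FW for a dedicated RSS global LUT and remember its ID */
            status = ice_alloc_rss_global_lut(hw, false, &lut_id);
            if (status)
                    return status;
            /* ... program the LUT ... */
            /* hand the resource back to the pool when done */
            return ice_free_rss_global_lut(hw, lut_id);
    }
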
-/**
- * ice_alloc_sw - allocate resources specific to switch
- * @hw: pointer to the HW struct
- * @ena_stats: true to turn on VEB stats
- * @shared_res: true for shared resource, false for dedicated resource
- * @sw_id: switch ID returned
- * @counter_id: VEB counter ID returned
- *
- * allocates switch resources (SWID and VEB counter) (0x0208)
- */
-enum ice_status
-ice_alloc_sw(struct ice_hw *hw, bool ena_stats, bool shared_res, u16 *sw_id,
- u16 *counter_id)
-{
- struct ice_aqc_alloc_free_res_elem *sw_buf;
- struct ice_aqc_res_elem *sw_ele;
- enum ice_status status;
- u16 buf_len;
-
- buf_len = ice_struct_size(sw_buf, elem, 1);
- sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
- if (!sw_buf)
- return ICE_ERR_NO_MEMORY;
-
- /* Prepare buffer for switch ID.
- * The number of resource entries in buffer is passed as 1 since only a
- * single switch/VEB instance is allocated, and hence a single sw_id
- * is requested.
- */
- sw_buf->num_elems = CPU_TO_LE16(1);
- sw_buf->res_type =
- CPU_TO_LE16(ICE_AQC_RES_TYPE_SWID |
- (shared_res ? ICE_AQC_RES_TYPE_FLAG_SHARED :
- ICE_AQC_RES_TYPE_FLAG_DEDICATED));
-
- status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len,
- ice_aqc_opc_alloc_res, NULL);
-
- if (status)
- goto ice_alloc_sw_exit;
-
- sw_ele = &sw_buf->elem[0];
- *sw_id = LE16_TO_CPU(sw_ele->e.sw_resp);
-
- if (ena_stats) {
- /* Prepare buffer for VEB Counter */
- enum ice_adminq_opc opc = ice_aqc_opc_alloc_res;
- struct ice_aqc_alloc_free_res_elem *counter_buf;
- struct ice_aqc_res_elem *counter_ele;
-
- counter_buf = (struct ice_aqc_alloc_free_res_elem *)
- ice_malloc(hw, buf_len);
- if (!counter_buf) {
- status = ICE_ERR_NO_MEMORY;
- goto ice_alloc_sw_exit;
- }
-
- /* The number of resource entries in buffer is passed as 1 since
- * only a single switch/VEB instance is allocated, and hence a
- * single VEB counter is requested.
- */
- counter_buf->num_elems = CPU_TO_LE16(1);
- counter_buf->res_type =
- CPU_TO_LE16(ICE_AQC_RES_TYPE_VEB_COUNTER |
- ICE_AQC_RES_TYPE_FLAG_DEDICATED);
- status = ice_aq_alloc_free_res(hw, 1, counter_buf, buf_len,
- opc, NULL);
-
- if (status) {
- ice_free(hw, counter_buf);
- goto ice_alloc_sw_exit;
- }
- counter_ele = &counter_buf->elem[0];
- *counter_id = LE16_TO_CPU(counter_ele->e.sw_resp);
- ice_free(hw, counter_buf);
- }
-
-ice_alloc_sw_exit:
- ice_free(hw, sw_buf);
- return status;
-}
-
-/**
- * ice_free_sw - free resources specific to switch
- * @hw: pointer to the HW struct
- * @sw_id: switch ID returned
- * @counter_id: VEB counter ID returned
- *
- * free switch resources (SWID and VEB counter) (0x0209)
- *
- * NOTE: This function frees multiple resources. It continues
- * releasing other resources even after it encounters an error.
- * The error code returned is the last error it encountered.
- */
-enum ice_status ice_free_sw(struct ice_hw *hw, u16 sw_id, u16 counter_id)
-{
- struct ice_aqc_alloc_free_res_elem *sw_buf, *counter_buf;
- enum ice_status status, ret_status;
- u16 buf_len;
-
- buf_len = ice_struct_size(sw_buf, elem, 1);
- sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
- if (!sw_buf)
- return ICE_ERR_NO_MEMORY;
-
- /* Prepare buffer to free for switch ID res.
- * The number of resource entries in buffer is passed as 1 since only a
- * single switch/VEB instance is freed, and hence a single sw_id
- * is released.
- */
- sw_buf->num_elems = CPU_TO_LE16(1);
- sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_SWID);
- sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(sw_id);
-
- ret_status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len,
- ice_aqc_opc_free_res, NULL);
-
- if (ret_status)
- ice_debug(hw, ICE_DBG_SW, "CQ CMD Buffer:\n");
-
- /* Prepare buffer to free for VEB Counter resource */
- counter_buf = (struct ice_aqc_alloc_free_res_elem *)
- ice_malloc(hw, buf_len);
- if (!counter_buf) {
- ice_free(hw, sw_buf);
- return ICE_ERR_NO_MEMORY;
- }
-
- /* The number of resource entries in buffer is passed as 1 since only a
- * single switch/VEB instance is freed, and hence a single VEB counter
- * is released
- */
- counter_buf->num_elems = CPU_TO_LE16(1);
- counter_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_VEB_COUNTER);
- counter_buf->elem[0].e.sw_resp = CPU_TO_LE16(counter_id);
-
- status = ice_aq_alloc_free_res(hw, 1, counter_buf, buf_len,
- ice_aqc_opc_free_res, NULL);
- if (status) {
- ice_debug(hw, ICE_DBG_SW, "VEB counter resource could not be freed\n");
- ret_status = status;
- }
-
- ice_free(hw, counter_buf);
- ice_free(hw, sw_buf);
- return ret_status;
-}
-
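
ice_alloc_sw()/ice_free_sw() followed the same discipline for the SWID/VEB-counter pair; a minimal sketch of a caller (hypothetical):

    static enum ice_status example_sw_cycle(struct ice_hw *hw)
    {
            u16 sw_id, veb_counter_id;
            enum ice_status status;

            /* dedicated SWID plus a VEB counter (ena_stats = true) */
            status = ice_alloc_sw(hw, true, false, &sw_id, &veb_counter_id);
            if (status)
                    return status;
            /* ... bring up the VEB ... */
            /* frees both resources; returns the last error it hit, if any */
            return ice_free_sw(hw, sw_id, veb_counter_id);
    }
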
/**
* ice_aq_add_vsi
* @hw: pointer to the HW struct
@@ -2366,173 +2153,6 @@ ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
return ice_aq_update_vsi(hw, vsi_ctx, cd);
}
-/**
- * ice_aq_get_vsi_params
- * @hw: pointer to the HW struct
- * @vsi_ctx: pointer to a VSI context struct
- * @cd: pointer to command details structure or NULL
- *
- * Get VSI context info from hardware (0x0212)
- */
-enum ice_status
-ice_aq_get_vsi_params(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
- struct ice_sq_cd *cd)
-{
- struct ice_aqc_add_get_update_free_vsi *cmd;
- struct ice_aqc_get_vsi_resp *resp;
- struct ice_aq_desc desc;
- enum ice_status status;
-
- cmd = &desc.params.vsi_cmd;
- resp = &desc.params.get_vsi_resp;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_vsi_params);
-
- cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
-
- status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
- sizeof(vsi_ctx->info), cd);
- if (!status) {
- vsi_ctx->vsi_num = LE16_TO_CPU(resp->vsi_num) &
- ICE_AQ_VSI_NUM_M;
- vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
- vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
- }
-
- return status;
-}
-
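
For reference, reading a VSI context back from FW looked like this (a sketch; vsi_num is assumed to be a valid HW VSI number):

    static enum ice_status example_query_vsi(struct ice_hw *hw, u16 vsi_num)
    {
            struct ice_vsi_ctx ctx = { 0 };
            enum ice_status status;

            ctx.vsi_num = vsi_num;
            status = ice_aq_get_vsi_params(hw, &ctx, NULL);
            /* on success ctx.info holds the context read back from FW,
             * plus the used/free VSI counters
             */
            return status;
    }
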
-/**
- * ice_aq_add_update_mir_rule - add/update a mirror rule
- * @hw: pointer to the HW struct
- * @rule_type: Rule Type
- * @dest_vsi: VSI number to which packets will be mirrored
- * @count: length of the list
- * @mr_buf: buffer for list of mirrored VSI numbers
- * @cd: pointer to command details structure or NULL
- * @rule_id: Rule ID
- *
- * Add/Update Mirror Rule (0x260).
- */
-enum ice_status
-ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi,
- u16 count, struct ice_mir_rule_buf *mr_buf,
- struct ice_sq_cd *cd, u16 *rule_id)
-{
- struct ice_aqc_add_update_mir_rule *cmd;
- struct ice_aq_desc desc;
- enum ice_status status;
- __le16 *mr_list = NULL;
- u16 buf_size = 0;
-
- switch (rule_type) {
- case ICE_AQC_RULE_TYPE_VPORT_INGRESS:
- case ICE_AQC_RULE_TYPE_VPORT_EGRESS:
- /* Make sure count and mr_buf are set for these rule_types */
- if (!(count && mr_buf))
- return ICE_ERR_PARAM;
-
- buf_size = count * sizeof(__le16);
- mr_list = (_FORCE_ __le16 *)ice_malloc(hw, buf_size);
- if (!mr_list)
- return ICE_ERR_NO_MEMORY;
- break;
- case ICE_AQC_RULE_TYPE_PPORT_INGRESS:
- case ICE_AQC_RULE_TYPE_PPORT_EGRESS:
- /* Make sure count and mr_buf are not set for these
- * rule_types
- */
- if (count || mr_buf)
- return ICE_ERR_PARAM;
- break;
- default:
- ice_debug(hw, ICE_DBG_SW, "Error due to unsupported rule_type %u\n", rule_type);
- return ICE_ERR_OUT_OF_RANGE;
- }
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_update_mir_rule);
-
- /* Pre-process 'mr_buf' items for add/update of virtual port
- * ingress/egress mirroring (but not physical port ingress/egress
- * mirroring)
- */
- if (mr_buf) {
- int i;
-
- for (i = 0; i < count; i++) {
- u16 id;
-
- id = mr_buf[i].vsi_idx & ICE_AQC_RULE_MIRRORED_VSI_M;
-
- /* Validate specified VSI number, make sure it is less
- * than ICE_MAX_VSI, if not return with error.
- */
- if (id >= ICE_MAX_VSI) {
- ice_debug(hw, ICE_DBG_SW, "Error VSI index (%u) out-of-range\n",
- id);
- ice_free(hw, mr_list);
- return ICE_ERR_OUT_OF_RANGE;
- }
-
- /* add VSI to mirror rule */
- if (mr_buf[i].add)
- mr_list[i] =
- CPU_TO_LE16(id | ICE_AQC_RULE_ACT_M);
- else /* remove VSI from mirror rule */
- mr_list[i] = CPU_TO_LE16(id);
- }
- }
-
- cmd = &desc.params.add_update_rule;
- if ((*rule_id) != ICE_INVAL_MIRROR_RULE_ID)
- cmd->rule_id = CPU_TO_LE16(((*rule_id) & ICE_AQC_RULE_ID_M) |
- ICE_AQC_RULE_ID_VALID_M);
- cmd->rule_type = CPU_TO_LE16(rule_type & ICE_AQC_RULE_TYPE_M);
- cmd->num_entries = CPU_TO_LE16(count);
- cmd->dest = CPU_TO_LE16(dest_vsi);
-
- status = ice_aq_send_cmd(hw, &desc, mr_list, buf_size, cd);
- if (!status)
- *rule_id = LE16_TO_CPU(cmd->rule_id) & ICE_AQC_RULE_ID_M;
-
- ice_free(hw, mr_list);
-
- return status;
-}
-
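
Note the in/out semantics of rule_id above: ICE_INVAL_MIRROR_RULE_ID requests a new rule, any other value updates an existing one. A minimal sketch of mirroring one VSI's ingress traffic (hypothetical; dest_vsi and the mirrored VSI index are assumed valid):

    static enum ice_status example_mirror(struct ice_hw *hw, u16 dest_vsi)
    {
            struct ice_mir_rule_buf buf[1] = { { .vsi_idx = 5, .add = true } };
            u16 rule_id = ICE_INVAL_MIRROR_RULE_ID; /* create, not update */
            enum ice_status status;

            status = ice_aq_add_update_mir_rule(hw,
                                                ICE_AQC_RULE_TYPE_VPORT_INGRESS,
                                                dest_vsi, 1, buf, NULL,
                                                &rule_id);
            if (status)
                    return status;
            /* rule_id now holds the HW-assigned ID; tear it down again */
            return ice_aq_delete_mir_rule(hw, rule_id, false, NULL);
    }
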
-/**
- * ice_aq_delete_mir_rule - delete a mirror rule
- * @hw: pointer to the HW struct
- * @rule_id: Mirror rule ID (to be deleted)
- * @keep_allocd: if set, the VSI stays part of the PF allocated res,
- * otherwise it is returned to the shared pool
- * @cd: pointer to command details structure or NULL
- *
- * Delete Mirror Rule (0x261).
- */
-enum ice_status
-ice_aq_delete_mir_rule(struct ice_hw *hw, u16 rule_id, bool keep_allocd,
- struct ice_sq_cd *cd)
-{
- struct ice_aqc_delete_mir_rule *cmd;
- struct ice_aq_desc desc;
-
- /* rule_id should be in the range 0...63 */
- if (rule_id >= ICE_MAX_NUM_MIRROR_RULES)
- return ICE_ERR_OUT_OF_RANGE;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_del_mir_rule);
-
- cmd = &desc.params.del_rule;
- rule_id |= ICE_AQC_RULE_ID_VALID_M;
- cmd->rule_id = CPU_TO_LE16(rule_id);
-
- if (keep_allocd)
- cmd->flags = CPU_TO_LE16(ICE_AQC_FLAG_KEEP_ALLOCD_M);
-
- return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
/**
* ice_aq_alloc_free_vsi_list
* @hw: pointer to the HW struct
@@ -2591,68 +2211,6 @@ ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
return status;
}
-/**
- * ice_aq_set_storm_ctrl - Sets storm control configuration
- * @hw: pointer to the HW struct
- * @bcast_thresh: represents the upper threshold for broadcast storm control
- * @mcast_thresh: represents the upper threshold for multicast storm control
- * @ctl_bitmask: storm control knobs
- *
- * Sets the storm control configuration (0x0280)
- */
-enum ice_status
-ice_aq_set_storm_ctrl(struct ice_hw *hw, u32 bcast_thresh, u32 mcast_thresh,
- u32 ctl_bitmask)
-{
- struct ice_aqc_storm_cfg *cmd;
- struct ice_aq_desc desc;
-
- cmd = &desc.params.storm_conf;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_storm_cfg);
-
- cmd->bcast_thresh_size = CPU_TO_LE32(bcast_thresh & ICE_AQ_THRESHOLD_M);
- cmd->mcast_thresh_size = CPU_TO_LE32(mcast_thresh & ICE_AQ_THRESHOLD_M);
- cmd->storm_ctrl_ctrl = CPU_TO_LE32(ctl_bitmask);
-
- return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
-}
-
-/**
- * ice_aq_get_storm_ctrl - gets storm control configuration
- * @hw: pointer to the HW struct
- * @bcast_thresh: represents the upper threshold for broadcast storm control
- * @mcast_thresh: represents the upper threshold for multicast storm control
- * @ctl_bitmask: storm control knobs
- *
- * Gets the storm control configuration (0x0281)
- */
-enum ice_status
-ice_aq_get_storm_ctrl(struct ice_hw *hw, u32 *bcast_thresh, u32 *mcast_thresh,
- u32 *ctl_bitmask)
-{
- enum ice_status status;
- struct ice_aq_desc desc;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_storm_cfg);
-
- status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
- if (!status) {
- struct ice_aqc_storm_cfg *resp = &desc.params.storm_conf;
-
- if (bcast_thresh)
- *bcast_thresh = LE32_TO_CPU(resp->bcast_thresh_size) &
- ICE_AQ_THRESHOLD_M;
- if (mcast_thresh)
- *mcast_thresh = LE32_TO_CPU(resp->mcast_thresh_size) &
- ICE_AQ_THRESHOLD_M;
- if (ctl_bitmask)
- *ctl_bitmask = LE32_TO_CPU(resp->storm_ctrl_ctrl);
- }
-
- return status;
-}
-
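
Because the get/set pair above was symmetric, a read-modify-write of a single knob was straightforward; a minimal sketch:

    static enum ice_status example_storm_ctrl(struct ice_hw *hw)
    {
            u32 bcast_thresh, mcast_thresh, ctl;
            enum ice_status status;

            status = ice_aq_get_storm_ctrl(hw, &bcast_thresh,
                                           &mcast_thresh, &ctl);
            if (status)
                    return status;
            /* tighten only the broadcast threshold, keep the rest as-is */
            return ice_aq_set_storm_ctrl(hw, bcast_thresh / 2,
                                         mcast_thresh, ctl);
    }
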
/**
* ice_aq_sw_rules - add/update/remove switch rules
* @hw: pointer to the HW struct
@@ -3261,119 +2819,31 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
}
/**
- * ice_add_counter_act - add/update filter rule with counter action
+ * ice_create_vsi_list_map
* @hw: pointer to the hardware structure
- * @m_ent: the management entry for which counter needs to be added
- * @counter_id: VLAN counter ID returned as part of allocate resource
- * @l_id: large action resource ID
+ * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list ID generated as part of allocate resource
+ *
+ * Helper function to create a new entry of VSI list ID to VSI mapping
+ * using the given VSI list ID
*/
-static enum ice_status
-ice_add_counter_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
- u16 counter_id, u16 l_id)
+static struct ice_vsi_list_map_info *
+ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+ u16 vsi_list_id)
{
- struct ice_aqc_sw_rules_elem *lg_act;
- struct ice_aqc_sw_rules_elem *rx_tx;
- enum ice_status status;
- /* 2 actions will be added while adding a large action counter */
- const int num_acts = 2;
- u16 lg_act_size;
- u16 rules_size;
- u16 f_rule_id;
- u32 act;
- u16 id;
+ struct ice_switch_info *sw = hw->switch_info;
+ struct ice_vsi_list_map_info *v_map;
+ int i;
- if (m_ent->fltr_info.lkup_type != ICE_SW_LKUP_MAC)
- return ICE_ERR_PARAM;
+ v_map = (struct ice_vsi_list_map_info *)ice_malloc(hw, sizeof(*v_map));
+ if (!v_map)
+ return NULL;
- /* Create two back-to-back switch rules and submit them to the HW using
- * one memory buffer:
- * 1. Large Action
- * 2. Look up Tx Rx
- */
- lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_acts);
- rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
- lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
- if (!lg_act)
- return ICE_ERR_NO_MEMORY;
-
- rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size);
-
- /* Fill in the first switch rule i.e. large action */
- lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
- lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id);
- lg_act->pdata.lg_act.size = CPU_TO_LE16(num_acts);
-
- /* First action VSI forwarding or VSI list forwarding depending on how
- * many VSIs
- */
- id = (m_ent->vsi_count > 1) ? m_ent->fltr_info.fwd_id.vsi_list_id :
- m_ent->fltr_info.fwd_id.hw_vsi_id;
-
- act = ICE_LG_ACT_VSI_FORWARDING | ICE_LG_ACT_VALID_BIT;
- act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) &
- ICE_LG_ACT_VSI_LIST_ID_M;
- if (m_ent->vsi_count > 1)
- act |= ICE_LG_ACT_VSI_LIST;
- lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act);
-
- /* Second action counter ID */
- act = ICE_LG_ACT_STAT_COUNT;
- act |= (counter_id << ICE_LG_ACT_STAT_COUNT_S) &
- ICE_LG_ACT_STAT_COUNT_M;
- lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act);
-
- /* call the fill switch rule to fill the lookup Tx Rx structure */
- ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx,
- ice_aqc_opc_update_sw_rules);
-
- act = ICE_SINGLE_ACT_PTR;
- act |= (l_id << ICE_SINGLE_ACT_PTR_VAL_S) & ICE_SINGLE_ACT_PTR_VAL_M;
- rx_tx->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
-
- /* Use the filter rule ID of the previously created rule with single
- * act. Once the update happens, hardware will treat this as large
- * action
- */
- f_rule_id = m_ent->fltr_info.fltr_rule_id;
- rx_tx->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_rule_id);
-
- status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
- ice_aqc_opc_update_sw_rules, NULL);
- if (!status) {
- m_ent->lg_act_idx = l_id;
- m_ent->counter_index = counter_id;
- }
-
- ice_free(hw, lg_act);
- return status;
-}
-
-/**
- * ice_create_vsi_list_map
- * @hw: pointer to the hardware structure
- * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
- * @num_vsi: number of VSI handles in the array
- * @vsi_list_id: VSI list ID generated as part of allocate resource
- *
- * Helper function to create a new entry of VSI list ID to VSI mapping
- * using the given VSI list ID
- */
-static struct ice_vsi_list_map_info *
-ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
- u16 vsi_list_id)
-{
- struct ice_switch_info *sw = hw->switch_info;
- struct ice_vsi_list_map_info *v_map;
- int i;
-
- v_map = (struct ice_vsi_list_map_info *)ice_malloc(hw, sizeof(*v_map));
- if (!v_map)
- return NULL;
-
- v_map->vsi_list_id = vsi_list_id;
- v_map->ref_cnt = 1;
- for (i = 0; i < num_vsi; i++)
- ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
+ v_map->vsi_list_id = vsi_list_id;
+ v_map->ref_cnt = 1;
+ for (i = 0; i < num_vsi; i++)
+ ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
LIST_ADD(&v_map->list_entry, &sw->vsi_list_map_head);
return v_map;
@@ -3564,48 +3034,6 @@ ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info)
return status;
}
-/**
- * ice_update_sw_rule_bridge_mode
- * @hw: pointer to the HW struct
- *
- * Updates unicast switch filter rules based on VEB/VEPA mode
- */
-enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw)
-{
- struct ice_switch_info *sw = hw->switch_info;
- struct ice_fltr_mgmt_list_entry *fm_entry;
- enum ice_status status = ICE_SUCCESS;
- struct LIST_HEAD_TYPE *rule_head;
- struct ice_lock *rule_lock; /* Lock to protect filter rule list */
-
- rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
- rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
-
- ice_acquire_lock(rule_lock);
- LIST_FOR_EACH_ENTRY(fm_entry, rule_head, ice_fltr_mgmt_list_entry,
- list_entry) {
- struct ice_fltr_info *fi = &fm_entry->fltr_info;
- u8 *addr = fi->l_data.mac.mac_addr;
-
- /* Update unicast Tx rules to reflect the selected
- * VEB/VEPA mode
- */
- if ((fi->flag & ICE_FLTR_TX) && IS_UNICAST_ETHER_ADDR(addr) &&
- (fi->fltr_act == ICE_FWD_TO_VSI ||
- fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
- fi->fltr_act == ICE_FWD_TO_Q ||
- fi->fltr_act == ICE_FWD_TO_QGRP)) {
- status = ice_update_pkt_fwd_rule(hw, fi);
- if (status)
- break;
- }
- }
-
- ice_release_lock(rule_lock);
-
- return status;
-}
-
/**
* ice_add_update_vsi_list
* @hw: pointer to the hardware structure
@@ -4049,88 +3477,6 @@ ice_remove_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list,
return status;
}
-/**
- * ice_aq_get_res_alloc - get allocated resources
- * @hw: pointer to the HW struct
- * @num_entries: pointer to u16 to store the number of resource entries returned
- * @buf: pointer to buffer
- * @buf_size: size of buf
- * @cd: pointer to command details structure or NULL
- *
- * The caller-supplied buffer must be large enough to store the resource
- * information for all resource types. Each resource type is an
- * ice_aqc_get_res_resp_elem structure.
- */
-enum ice_status
-ice_aq_get_res_alloc(struct ice_hw *hw, u16 *num_entries,
- struct ice_aqc_get_res_resp_elem *buf, u16 buf_size,
- struct ice_sq_cd *cd)
-{
- struct ice_aqc_get_res_alloc *resp;
- enum ice_status status;
- struct ice_aq_desc desc;
-
- if (!buf)
- return ICE_ERR_BAD_PTR;
-
- if (buf_size < ICE_AQ_GET_RES_ALLOC_BUF_LEN)
- return ICE_ERR_INVAL_SIZE;
-
- resp = &desc.params.get_res;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_res_alloc);
- status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
-
- if (!status && num_entries)
- *num_entries = LE16_TO_CPU(resp->resp_elem_num);
-
- return status;
-}
-
-/**
- * ice_aq_get_res_descs - get allocated resource descriptors
- * @hw: pointer to the hardware structure
- * @num_entries: number of resource entries in buffer
- * @buf: structure to hold response data buffer
- * @buf_size: size of buffer
- * @res_type: resource type
- * @res_shared: is resource shared
- * @desc_id: input - first desc ID to start; output - next desc ID
- * @cd: pointer to command details structure or NULL
- */
-enum ice_status
-ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries,
- struct ice_aqc_res_elem *buf, u16 buf_size, u16 res_type,
- bool res_shared, u16 *desc_id, struct ice_sq_cd *cd)
-{
- struct ice_aqc_get_allocd_res_desc *cmd;
- struct ice_aq_desc desc;
- enum ice_status status;
-
- ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
-
- cmd = &desc.params.get_res_desc;
-
- if (!buf)
- return ICE_ERR_PARAM;
-
- if (buf_size != (num_entries * sizeof(*buf)))
- return ICE_ERR_PARAM;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_allocd_res_desc);
-
- cmd->ops.cmd.res = CPU_TO_LE16(((res_type << ICE_AQC_RES_TYPE_S) &
- ICE_AQC_RES_TYPE_M) | (res_shared ?
- ICE_AQC_RES_TYPE_FLAG_SHARED : 0));
- cmd->ops.cmd.first_desc = CPU_TO_LE16(*desc_id);
-
- status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
- if (!status)
- *desc_id = LE16_TO_CPU(cmd->ops.resp.next_desc);
-
- return status;
-}
-
/**
* ice_add_mac_rule - Add a MAC address based filter rule
* @hw: pointer to the hardware structure
@@ -4499,63 +3845,6 @@ enum ice_status ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
return ice_add_vlan_rule(hw, v_list, hw->switch_info);
}
-/**
- * ice_add_mac_vlan_rule - Add MAC and VLAN pair based filter rule
- * @hw: pointer to the hardware structure
- * @mv_list: list of MAC and VLAN filters
- * @sw: pointer to switch info struct for which the function adds the rule
- * @lport: logic port number on which function add rule
- *
- * If the VSI on which the MAC-VLAN pair has to be added has Rx and Tx VLAN
- * pruning bits enabled, then it is the responsibility of the caller to make
- * sure to add a VLAN only filter on the same VSI. Packets belonging to that
- * VLAN won't be received on that VSI otherwise.
- */
-static enum ice_status
-ice_add_mac_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list,
- struct ice_switch_info *sw, u8 lport)
-{
- struct ice_fltr_list_entry *mv_list_itr;
- struct ice_sw_recipe *recp_list;
-
- if (!mv_list || !hw)
- return ICE_ERR_PARAM;
-
- recp_list = &sw->recp_list[ICE_SW_LKUP_MAC_VLAN];
- LIST_FOR_EACH_ENTRY(mv_list_itr, mv_list, ice_fltr_list_entry,
- list_entry) {
- enum ice_sw_lkup_type l_type =
- mv_list_itr->fltr_info.lkup_type;
-
- if (l_type != ICE_SW_LKUP_MAC_VLAN)
- return ICE_ERR_PARAM;
- mv_list_itr->fltr_info.flag = ICE_FLTR_TX;
- mv_list_itr->status =
- ice_add_rule_internal(hw, recp_list, lport,
- mv_list_itr);
- if (mv_list_itr->status)
- return mv_list_itr->status;
- }
- return ICE_SUCCESS;
-}
-
-/**
- * ice_add_mac_vlan - Add a MAC VLAN address based filter rule
- * @hw: pointer to the hardware structure
- * @mv_list: list of MAC VLAN addresses and forwarding information
- *
- * Adds a MAC VLAN rule for the logical port from the HW struct
- */
-enum ice_status
-ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
-{
- if (!mv_list || !hw)
- return ICE_ERR_PARAM;
-
- return ice_add_mac_vlan_rule(hw, mv_list, hw->switch_info,
- hw->port_info->lport);
-}
-
/**
* ice_add_eth_mac_rule - Add ethertype and MAC based filter rule
* @hw: pointer to the hardware structure
@@ -4700,118 +3989,6 @@ ice_rem_adv_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
}
}
-/**
- * ice_rem_all_sw_rules_info
- * @hw: pointer to the hardware structure
- */
-void ice_rem_all_sw_rules_info(struct ice_hw *hw)
-{
- struct ice_switch_info *sw = hw->switch_info;
- u8 i;
-
- for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
- struct LIST_HEAD_TYPE *rule_head;
-
- rule_head = &sw->recp_list[i].filt_rules;
- if (!sw->recp_list[i].adv_rule)
- ice_rem_sw_rule_info(hw, rule_head);
- else
- ice_rem_adv_rule_info(hw, rule_head);
- if (sw->recp_list[i].adv_rule &&
- LIST_EMPTY(&sw->recp_list[i].filt_rules))
- sw->recp_list[i].adv_rule = false;
- }
-}
-
-/**
- * ice_cfg_dflt_vsi - change state of VSI to set/clear default
- * @pi: pointer to the port_info structure
- * @vsi_handle: VSI handle to set as default
- * @set: true to add the above mentioned switch rule, false to remove it
- * @direction: ICE_FLTR_RX or ICE_FLTR_TX
- *
- * add filter rule to set/unset given VSI as default VSI for the switch
- * (represented by swid)
- */
-enum ice_status
-ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
- u8 direction)
-{
- struct ice_aqc_sw_rules_elem *s_rule;
- struct ice_fltr_info f_info;
- struct ice_hw *hw = pi->hw;
- enum ice_adminq_opc opcode;
- enum ice_status status;
- u16 s_rule_size;
- u16 hw_vsi_id;
-
- if (!ice_is_vsi_valid(hw, vsi_handle))
- return ICE_ERR_PARAM;
- hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
-
- s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE :
- ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
-
- s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
- if (!s_rule)
- return ICE_ERR_NO_MEMORY;
-
- ice_memset(&f_info, 0, sizeof(f_info), ICE_NONDMA_MEM);
-
- f_info.lkup_type = ICE_SW_LKUP_DFLT;
- f_info.flag = direction;
- f_info.fltr_act = ICE_FWD_TO_VSI;
- f_info.fwd_id.hw_vsi_id = hw_vsi_id;
-
- if (f_info.flag & ICE_FLTR_RX) {
- f_info.src = pi->lport;
- f_info.src_id = ICE_SRC_ID_LPORT;
- if (!set)
- f_info.fltr_rule_id =
- pi->dflt_rx_vsi_rule_id;
- } else if (f_info.flag & ICE_FLTR_TX) {
- f_info.src_id = ICE_SRC_ID_VSI;
- f_info.src = hw_vsi_id;
- if (!set)
- f_info.fltr_rule_id =
- pi->dflt_tx_vsi_rule_id;
- }
-
- if (set)
- opcode = ice_aqc_opc_add_sw_rules;
- else
- opcode = ice_aqc_opc_remove_sw_rules;
-
- ice_fill_sw_rule(hw, &f_info, s_rule, opcode);
-
- status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opcode, NULL);
- if (status || !(f_info.flag & ICE_FLTR_TX_RX))
- goto out;
- if (set) {
- u16 index = LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
-
- if (f_info.flag & ICE_FLTR_TX) {
- pi->dflt_tx_vsi_num = hw_vsi_id;
- pi->dflt_tx_vsi_rule_id = index;
- } else if (f_info.flag & ICE_FLTR_RX) {
- pi->dflt_rx_vsi_num = hw_vsi_id;
- pi->dflt_rx_vsi_rule_id = index;
- }
- } else {
- if (f_info.flag & ICE_FLTR_TX) {
- pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
- pi->dflt_tx_vsi_rule_id = ICE_INVAL_ACT;
- } else if (f_info.flag & ICE_FLTR_RX) {
- pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
- pi->dflt_rx_vsi_rule_id = ICE_INVAL_ACT;
- }
- }
-
-out:
- ice_free(hw, s_rule);
- return status;
-}
-
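
A minimal sketch of the set/clear pairing for the default-VSI rule (hypothetical caller; pi and vsi_handle are assumed valid):

    static enum ice_status
    example_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle)
    {
            enum ice_status status;

            /* route otherwise-unmatched Rx traffic to this VSI */
            status = ice_cfg_dflt_vsi(pi, vsi_handle, true, ICE_FLTR_RX);
            if (status)
                    return status;
            /* ... */
            /* and remove the rule again */
            return ice_cfg_dflt_vsi(pi, vsi_handle, false, ICE_FLTR_RX);
    }
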
/**
* ice_find_ucast_rule_entry - Search for a unicast MAC filter rule entry
* @list_head: head of rule list
@@ -5063,47 +4240,6 @@ ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
return ICE_SUCCESS;
}
-/**
- * ice_add_to_vsi_fltr_list - Add VSI filters to the list
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to remove filters from
- * @lkup_list_head: pointer to the list that has certain lookup type filters
- * @vsi_list_head: pointer to the list pertaining to VSI with vsi_handle
- *
- * Locates all filters in lkup_list_head that are used by the given VSI,
- * and adds COPIES of those entries to vsi_list_head (intended to be used
- * to remove the listed filters).
- * Note that this means all entries in vsi_list_head must be explicitly
- * deallocated by the caller when done with list.
- */
-static enum ice_status
-ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
- struct LIST_HEAD_TYPE *lkup_list_head,
- struct LIST_HEAD_TYPE *vsi_list_head)
-{
- struct ice_fltr_mgmt_list_entry *fm_entry;
- enum ice_status status = ICE_SUCCESS;
-
- /* check to make sure VSI ID is valid and within boundary */
- if (!ice_is_vsi_valid(hw, vsi_handle))
- return ICE_ERR_PARAM;
-
- LIST_FOR_EACH_ENTRY(fm_entry, lkup_list_head,
- ice_fltr_mgmt_list_entry, list_entry) {
- struct ice_fltr_info *fi;
-
- fi = &fm_entry->fltr_info;
- if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
- continue;
-
- status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
- vsi_list_head, fi);
- if (status)
- return status;
- }
- return status;
-}
-
/**
* ice_determine_promisc_mask
* @fi: filter info to parse
@@ -5137,116 +4273,6 @@ static u8 ice_determine_promisc_mask(struct ice_fltr_info *fi)
return promisc_mask;
}
-/**
- * _ice_get_vsi_promisc - get promiscuous mode of given VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to retrieve info from
- * @promisc_mask: pointer to mask to be filled in
- * @vid: VLAN ID of promisc VLAN VSI
- * @sw: pointer to switch info struct for which the function adds the rule
- */
-static enum ice_status
-_ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
- u16 *vid, struct ice_switch_info *sw)
-{
- struct ice_fltr_mgmt_list_entry *itr;
- struct LIST_HEAD_TYPE *rule_head;
- struct ice_lock *rule_lock; /* Lock to protect filter rule list */
-
- if (!ice_is_vsi_valid(hw, vsi_handle))
- return ICE_ERR_PARAM;
-
- *vid = 0;
- *promisc_mask = 0;
- rule_head = &sw->recp_list[ICE_SW_LKUP_PROMISC].filt_rules;
- rule_lock = &sw->recp_list[ICE_SW_LKUP_PROMISC].filt_rule_lock;
-
- ice_acquire_lock(rule_lock);
- LIST_FOR_EACH_ENTRY(itr, rule_head,
- ice_fltr_mgmt_list_entry, list_entry) {
- /* Continue if this filter doesn't apply to this VSI or the
- * VSI ID is not in the VSI map for this filter
- */
- if (!ice_vsi_uses_fltr(itr, vsi_handle))
- continue;
-
- *promisc_mask |= ice_determine_promisc_mask(&itr->fltr_info);
- }
- ice_release_lock(rule_lock);
-
- return ICE_SUCCESS;
-}
-
-/**
- * ice_get_vsi_promisc - get promiscuous mode of given VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to retrieve info from
- * @promisc_mask: pointer to mask to be filled in
- * @vid: VLAN ID of promisc VLAN VSI
- */
-enum ice_status
-ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
- u16 *vid)
-{
- return _ice_get_vsi_promisc(hw, vsi_handle, promisc_mask,
- vid, hw->switch_info);
-}
-
-/**
- * _ice_get_vsi_vlan_promisc - get VLAN promiscuous mode of given VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to retrieve info from
- * @promisc_mask: pointer to mask to be filled in
- * @vid: VLAN ID of promisc VLAN VSI
- * @sw: pointer to switch info struct for which the function adds the rule
- */
-static enum ice_status
-_ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
- u16 *vid, struct ice_switch_info *sw)
-{
- struct ice_fltr_mgmt_list_entry *itr;
- struct LIST_HEAD_TYPE *rule_head;
- struct ice_lock *rule_lock; /* Lock to protect filter rule list */
-
- if (!ice_is_vsi_valid(hw, vsi_handle))
- return ICE_ERR_PARAM;
-
- *vid = 0;
- *promisc_mask = 0;
- rule_head = &sw->recp_list[ICE_SW_LKUP_PROMISC_VLAN].filt_rules;
- rule_lock = &sw->recp_list[ICE_SW_LKUP_PROMISC_VLAN].filt_rule_lock;
-
- ice_acquire_lock(rule_lock);
- LIST_FOR_EACH_ENTRY(itr, rule_head, ice_fltr_mgmt_list_entry,
- list_entry) {
- /* Continue if this filter doesn't apply to this VSI or the
- * VSI ID is not in the VSI map for this filter
- */
- if (!ice_vsi_uses_fltr(itr, vsi_handle))
- continue;
-
- *promisc_mask |= ice_determine_promisc_mask(&itr->fltr_info);
- }
- ice_release_lock(rule_lock);
-
- return ICE_SUCCESS;
-}
-
-/**
- * ice_get_vsi_vlan_promisc - get VLAN promiscuous mode of given VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to retrieve info from
- * @promisc_mask: pointer to mask to be filled in
- * @vid: VLAN ID of promisc VLAN VSI
- */
-enum ice_status
-ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
- u16 *vid)
-{
- return _ice_get_vsi_vlan_promisc(hw, vsi_handle, promisc_mask,
- vid, hw->switch_info);
-}
-
/**
* ice_remove_promisc - Remove promisc based filter rules
* @hw: pointer to the hardware structure
@@ -5460,219 +4486,42 @@ _ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
new_fltr.flag = 0;
if (is_tx_fltr) {
new_fltr.flag |= ICE_FLTR_TX;
- new_fltr.src = hw_vsi_id;
- } else {
- new_fltr.flag |= ICE_FLTR_RX;
- new_fltr.src = lport;
- }
-
- new_fltr.fltr_act = ICE_FWD_TO_VSI;
- new_fltr.vsi_handle = vsi_handle;
- new_fltr.fwd_id.hw_vsi_id = hw_vsi_id;
- f_list_entry.fltr_info = new_fltr;
- recp_list = &sw->recp_list[recipe_id];
-
- status = ice_add_rule_internal(hw, recp_list, lport,
- &f_list_entry);
- if (status != ICE_SUCCESS)
- goto set_promisc_exit;
- }
-
-set_promisc_exit:
- return status;
-}
-
-/**
- * ice_set_vsi_promisc - set given VSI to given promiscuous mode(s)
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to configure
- * @promisc_mask: mask of promiscuous config bits
- * @vid: VLAN ID to set VLAN promiscuous
- */
-enum ice_status
-ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
- u16 vid)
-{
- return _ice_set_vsi_promisc(hw, vsi_handle, promisc_mask, vid,
- hw->port_info->lport,
- hw->switch_info);
-}
-
-/**
- * _ice_set_vlan_vsi_promisc
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to configure
- * @promisc_mask: mask of promiscuous config bits
- * @rm_vlan_promisc: Clear VLANs VSI promisc mode
- * @lport: logical port number to configure promisc mode
- * @sw: pointer to switch info struct for which the function adds the rule
- *
- * Configure VSI with all associated VLANs to given promiscuous mode(s)
- */
-static enum ice_status
-_ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
- bool rm_vlan_promisc, u8 lport,
- struct ice_switch_info *sw)
-{
- struct ice_fltr_list_entry *list_itr, *tmp;
- struct LIST_HEAD_TYPE vsi_list_head;
- struct LIST_HEAD_TYPE *vlan_head;
- struct ice_lock *vlan_lock; /* Lock to protect filter rule list */
- enum ice_status status;
- u16 vlan_id;
-
- INIT_LIST_HEAD(&vsi_list_head);
- vlan_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
- vlan_head = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rules;
- ice_acquire_lock(vlan_lock);
- status = ice_add_to_vsi_fltr_list(hw, vsi_handle, vlan_head,
- &vsi_list_head);
- ice_release_lock(vlan_lock);
- if (status)
- goto free_fltr_list;
-
- LIST_FOR_EACH_ENTRY(list_itr, &vsi_list_head, ice_fltr_list_entry,
- list_entry) {
- vlan_id = list_itr->fltr_info.l_data.vlan.vlan_id;
- if (rm_vlan_promisc)
- status = _ice_clear_vsi_promisc(hw, vsi_handle,
- promisc_mask,
- vlan_id, sw);
- else
- status = _ice_set_vsi_promisc(hw, vsi_handle,
- promisc_mask, vlan_id,
- lport, sw);
- if (status)
- break;
- }
-
-free_fltr_list:
- LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, &vsi_list_head,
- ice_fltr_list_entry, list_entry) {
- LIST_DEL(&list_itr->list_entry);
- ice_free(hw, list_itr);
- }
- return status;
-}
-
-/**
- * ice_set_vlan_vsi_promisc
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to configure
- * @promisc_mask: mask of promiscuous config bits
- * @rm_vlan_promisc: Clear VLANs VSI promisc mode
- *
- * Configure VSI with all associated VLANs to given promiscuous mode(s)
- */
-enum ice_status
-ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
- bool rm_vlan_promisc)
-{
- return _ice_set_vlan_vsi_promisc(hw, vsi_handle, promisc_mask,
- rm_vlan_promisc, hw->port_info->lport,
- hw->switch_info);
-}
-
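
A minimal sketch of putting a VSI and all of its VLANs into Rx promiscuous mode (the ICE_PROMISC_* mask bits are assumed from ice_switch.h):

    static enum ice_status
    example_vlan_promisc(struct ice_hw *hw, u16 vsi_handle)
    {
            /* unicast + multicast Rx promisc on the VSI and its VLANs */
            return ice_set_vlan_vsi_promisc(hw, vsi_handle,
                                            ICE_PROMISC_UCAST_RX |
                                            ICE_PROMISC_MCAST_RX,
                                            false /* rm_vlan_promisc */);
    }
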
-/**
- * ice_remove_vsi_lkup_fltr - Remove lookup type filters for a VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to remove filters from
- * @recp_list: recipe list from which function remove fltr
- * @lkup: switch rule filter lookup type
- */
-static void
-ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
- struct ice_sw_recipe *recp_list,
- enum ice_sw_lkup_type lkup)
-{
- struct ice_fltr_list_entry *fm_entry;
- struct LIST_HEAD_TYPE remove_list_head;
- struct LIST_HEAD_TYPE *rule_head;
- struct ice_fltr_list_entry *tmp;
- struct ice_lock *rule_lock; /* Lock to protect filter rule list */
- enum ice_status status;
-
- INIT_LIST_HEAD(&remove_list_head);
- rule_lock = &recp_list[lkup].filt_rule_lock;
- rule_head = &recp_list[lkup].filt_rules;
- ice_acquire_lock(rule_lock);
- status = ice_add_to_vsi_fltr_list(hw, vsi_handle, rule_head,
- &remove_list_head);
- ice_release_lock(rule_lock);
- if (status)
- return;
+ new_fltr.src = hw_vsi_id;
+ } else {
+ new_fltr.flag |= ICE_FLTR_RX;
+ new_fltr.src = lport;
+ }
- switch (lkup) {
- case ICE_SW_LKUP_MAC:
- ice_remove_mac_rule(hw, &remove_list_head, &recp_list[lkup]);
- break;
- case ICE_SW_LKUP_VLAN:
- ice_remove_vlan_rule(hw, &remove_list_head, &recp_list[lkup]);
- break;
- case ICE_SW_LKUP_PROMISC:
- case ICE_SW_LKUP_PROMISC_VLAN:
- ice_remove_promisc(hw, lkup, &remove_list_head);
- break;
- case ICE_SW_LKUP_MAC_VLAN:
- ice_remove_mac_vlan(hw, &remove_list_head);
- break;
- case ICE_SW_LKUP_ETHERTYPE:
- case ICE_SW_LKUP_ETHERTYPE_MAC:
- ice_remove_eth_mac(hw, &remove_list_head);
- break;
- case ICE_SW_LKUP_DFLT:
- ice_debug(hw, ICE_DBG_SW, "Remove filters for this lookup type hasn't been implemented yet\n");
- break;
- case ICE_SW_LKUP_LAST:
- ice_debug(hw, ICE_DBG_SW, "Unsupported lookup type\n");
- break;
- }
+ new_fltr.fltr_act = ICE_FWD_TO_VSI;
+ new_fltr.vsi_handle = vsi_handle;
+ new_fltr.fwd_id.hw_vsi_id = hw_vsi_id;
+ f_list_entry.fltr_info = new_fltr;
+ recp_list = &sw->recp_list[recipe_id];
- LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
- ice_fltr_list_entry, list_entry) {
- LIST_DEL(&fm_entry->list_entry);
- ice_free(hw, fm_entry);
+ status = ice_add_rule_internal(hw, recp_list, lport,
+ &f_list_entry);
+ if (status != ICE_SUCCESS)
+ goto set_promisc_exit;
}
-}
-
-/**
- * ice_remove_vsi_fltr_rule - Remove all filters for a VSI
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to remove filters from
- * @sw: pointer to switch info struct
- */
-static void
-ice_remove_vsi_fltr_rule(struct ice_hw *hw, u16 vsi_handle,
- struct ice_switch_info *sw)
-{
- ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
- ice_remove_vsi_lkup_fltr(hw, vsi_handle,
- sw->recp_list, ICE_SW_LKUP_MAC);
- ice_remove_vsi_lkup_fltr(hw, vsi_handle,
- sw->recp_list, ICE_SW_LKUP_MAC_VLAN);
- ice_remove_vsi_lkup_fltr(hw, vsi_handle,
- sw->recp_list, ICE_SW_LKUP_PROMISC);
- ice_remove_vsi_lkup_fltr(hw, vsi_handle,
- sw->recp_list, ICE_SW_LKUP_VLAN);
- ice_remove_vsi_lkup_fltr(hw, vsi_handle,
- sw->recp_list, ICE_SW_LKUP_DFLT);
- ice_remove_vsi_lkup_fltr(hw, vsi_handle,
- sw->recp_list, ICE_SW_LKUP_ETHERTYPE);
- ice_remove_vsi_lkup_fltr(hw, vsi_handle,
- sw->recp_list, ICE_SW_LKUP_ETHERTYPE_MAC);
- ice_remove_vsi_lkup_fltr(hw, vsi_handle,
- sw->recp_list, ICE_SW_LKUP_PROMISC_VLAN);
+set_promisc_exit:
+ return status;
}
/**
- * ice_remove_vsi_fltr - Remove all filters for a VSI
+ * ice_set_vsi_promisc - set given VSI to given promiscuous mode(s)
* @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle to remove filters from
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @vid: VLAN ID to set VLAN promiscuous
*/
-void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle)
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ u16 vid)
{
- ice_remove_vsi_fltr_rule(hw, vsi_handle, hw->switch_info);
+ return _ice_set_vsi_promisc(hw, vsi_handle, promisc_mask, vid,
+ hw->port_info->lport,
+ hw->switch_info);
}
/**
@@ -5761,260 +4610,6 @@ enum ice_status ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id)
counter_id);
}
-/**
- * ice_free_vlan_res_counter - Free counter resource for VLAN type
- * @hw: pointer to the hardware structure
- * @counter_id: counter index to be freed
- */
-enum ice_status ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id)
-{
- return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_VLAN_COUNTER,
- ICE_AQC_RES_TYPE_FLAG_DEDICATED, 1,
- counter_id);
-}
-
-/**
- * ice_alloc_res_lg_act - add large action resource
- * @hw: pointer to the hardware structure
- * @l_id: large action ID to fill it in
- * @num_acts: number of actions to hold with a large action entry
- */
-static enum ice_status
-ice_alloc_res_lg_act(struct ice_hw *hw, u16 *l_id, u16 num_acts)
-{
- struct ice_aqc_alloc_free_res_elem *sw_buf;
- enum ice_status status;
- u16 buf_len;
-
- if (num_acts > ICE_MAX_LG_ACT || num_acts == 0)
- return ICE_ERR_PARAM;
-
- /* Allocate resource for large action */
- buf_len = ice_struct_size(sw_buf, elem, 1);
- sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
- if (!sw_buf)
- return ICE_ERR_NO_MEMORY;
-
- sw_buf->num_elems = CPU_TO_LE16(1);
-
- /* If num_acts is 1, use ICE_AQC_RES_TYPE_WIDE_TABLE_1.
- * If num_acts is 2, use ICE_AQC_RES_TYPE_WIDE_TABLE_2.
- * If num_acts is greater than 2, then use
- * ICE_AQC_RES_TYPE_WIDE_TABLE_4.
- * The num_acts cannot exceed 4. This was ensured at the
- * beginning of the function.
- */
- if (num_acts == 1)
- sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_WIDE_TABLE_1);
- else if (num_acts == 2)
- sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_WIDE_TABLE_2);
- else
- sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_WIDE_TABLE_4);
-
- status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len,
- ice_aqc_opc_alloc_res, NULL);
- if (!status)
- *l_id = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp);
-
- ice_free(hw, sw_buf);
- return status;
-}
-
-/**
- * ice_add_mac_with_sw_marker - add filter with sw marker
- * @hw: pointer to the hardware structure
- * @f_info: filter info structure containing the MAC filter information
- * @sw_marker: sw marker to tag the Rx descriptor with
- */
-enum ice_status
-ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info,
- u16 sw_marker)
-{
- struct ice_fltr_mgmt_list_entry *m_entry;
- struct ice_fltr_list_entry fl_info;
- struct ice_sw_recipe *recp_list;
- struct LIST_HEAD_TYPE l_head;
- struct ice_lock *rule_lock; /* Lock to protect filter rule list */
- enum ice_status ret;
- bool entry_exists;
- u16 lg_act_id;
-
- if (f_info->fltr_act != ICE_FWD_TO_VSI)
- return ICE_ERR_PARAM;
-
- if (f_info->lkup_type != ICE_SW_LKUP_MAC)
- return ICE_ERR_PARAM;
-
- if (sw_marker == ICE_INVAL_SW_MARKER_ID)
- return ICE_ERR_PARAM;
-
- if (!ice_is_vsi_valid(hw, f_info->vsi_handle))
- return ICE_ERR_PARAM;
- f_info->fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, f_info->vsi_handle);
-
- /* Add filter if it doesn't exist so then the adding of large
- * action always results in update
- */
-
- INIT_LIST_HEAD(&l_head);
- fl_info.fltr_info = *f_info;
- LIST_ADD(&fl_info.list_entry, &l_head);
-
- entry_exists = false;
- ret = ice_add_mac_rule(hw, &l_head, hw->switch_info,
- hw->port_info->lport);
- if (ret == ICE_ERR_ALREADY_EXISTS)
- entry_exists = true;
- else if (ret)
- return ret;
-
- recp_list = &hw->switch_info->recp_list[ICE_SW_LKUP_MAC];
- rule_lock = &recp_list->filt_rule_lock;
- ice_acquire_lock(rule_lock);
- /* Get the book keeping entry for the filter */
- m_entry = ice_find_rule_entry(&recp_list->filt_rules, f_info);
- if (!m_entry)
- goto exit_error;
-
- /* If counter action was enabled for this rule then don't enable
- * sw marker large action
- */
- if (m_entry->counter_index != ICE_INVAL_COUNTER_ID) {
- ret = ICE_ERR_PARAM;
- goto exit_error;
- }
-
- /* if same marker was added before */
- if (m_entry->sw_marker_id == sw_marker) {
- ret = ICE_ERR_ALREADY_EXISTS;
- goto exit_error;
- }
-
- /* Allocate a hardware table entry to hold large act. Three actions
- * for marker based large action
- */
- ret = ice_alloc_res_lg_act(hw, &lg_act_id, 3);
- if (ret)
- goto exit_error;
-
- if (lg_act_id == ICE_INVAL_LG_ACT_INDEX)
- goto exit_error;
-
- /* Update the switch rule to add the marker action */
- ret = ice_add_marker_act(hw, m_entry, sw_marker, lg_act_id);
- if (!ret) {
- ice_release_lock(rule_lock);
- return ret;
- }
-
-exit_error:
- ice_release_lock(rule_lock);
- /* only remove entry if it did not exist previously */
- if (!entry_exists)
- ret = ice_remove_mac(hw, &l_head);
-
- return ret;
-}
-
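
A minimal sketch of tagging a unicast MAC filter with a software marker (the marker value is hypothetical, and the MAC address is assumed to be filled in by the caller):

    static enum ice_status
    example_sw_marker(struct ice_hw *hw, u16 vsi_handle)
    {
            struct ice_fltr_info f = { 0 };

            f.lkup_type = ICE_SW_LKUP_MAC;
            f.fltr_act = ICE_FWD_TO_VSI;
            f.vsi_handle = vsi_handle;
            /* f.l_data.mac.mac_addr must hold the unicast address */
            return ice_add_mac_with_sw_marker(hw, &f, 0x10 /* sw_marker */);
    }
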
-/**
- * ice_add_mac_with_counter - add filter with counter enabled
- * @hw: pointer to the hardware structure
- * @f_info: pointer to filter info structure containing the MAC filter
- * information
- */
-enum ice_status
-ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info)
-{
- struct ice_fltr_mgmt_list_entry *m_entry;
- struct ice_fltr_list_entry fl_info;
- struct ice_sw_recipe *recp_list;
- struct LIST_HEAD_TYPE l_head;
- struct ice_lock *rule_lock; /* Lock to protect filter rule list */
- enum ice_status ret;
- bool entry_exist;
- u16 counter_id;
- u16 lg_act_id;
-
- if (f_info->fltr_act != ICE_FWD_TO_VSI)
- return ICE_ERR_PARAM;
-
- if (f_info->lkup_type != ICE_SW_LKUP_MAC)
- return ICE_ERR_PARAM;
-
- if (!ice_is_vsi_valid(hw, f_info->vsi_handle))
- return ICE_ERR_PARAM;
- f_info->fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, f_info->vsi_handle);
- recp_list = &hw->switch_info->recp_list[ICE_SW_LKUP_MAC];
-
- entry_exist = false;
-
- rule_lock = &recp_list->filt_rule_lock;
-
- /* Add filter if it doesn't exist so then the adding of large
- * action always results in update
- */
- INIT_LIST_HEAD(&l_head);
-
- fl_info.fltr_info = *f_info;
- LIST_ADD(&fl_info.list_entry, &l_head);
-
- ret = ice_add_mac_rule(hw, &l_head, hw->switch_info,
- hw->port_info->lport);
- if (ret == ICE_ERR_ALREADY_EXISTS)
- entry_exist = true;
- else if (ret)
- return ret;
-
- ice_acquire_lock(rule_lock);
- m_entry = ice_find_rule_entry(&recp_list->filt_rules, f_info);
- if (!m_entry) {
- ret = ICE_ERR_BAD_PTR;
- goto exit_error;
- }
-
- /* Don't enable counter for a filter for which sw marker was enabled */
- if (m_entry->sw_marker_id != ICE_INVAL_SW_MARKER_ID) {
- ret = ICE_ERR_PARAM;
- goto exit_error;
- }
-
- /* If a counter was already enabled then don't need to add again */
- if (m_entry->counter_index != ICE_INVAL_COUNTER_ID) {
- ret = ICE_ERR_ALREADY_EXISTS;
- goto exit_error;
- }
-
- /* Allocate a hardware table entry to VLAN counter */
- ret = ice_alloc_vlan_res_counter(hw, &counter_id);
- if (ret)
- goto exit_error;
-
- /* Allocate a hardware table entry to hold large act. Two actions for
- * counter based large action
- */
- ret = ice_alloc_res_lg_act(hw, &lg_act_id, 2);
- if (ret)
- goto exit_error;
-
- if (lg_act_id == ICE_INVAL_LG_ACT_INDEX)
- goto exit_error;
-
- /* Update the switch rule to add the counter action */
- ret = ice_add_counter_act(hw, m_entry, counter_id, lg_act_id);
- if (!ret) {
- ice_release_lock(rule_lock);
- return ret;
- }
-
-exit_error:
- ice_release_lock(rule_lock);
- /* only remove entry if it did not exist previously */
- if (!entry_exist)
- ret = ice_remove_mac(hw, &l_head);
-
- return ret;
-}
-
/* This is mapping table entry that maps every word within a given protocol
* structure to the real byte offset as per the specification of that
* protocol header.
@@ -8374,155 +6969,6 @@ ice_rem_adv_rule_by_id(struct ice_hw *hw,
return ICE_ERR_DOES_NOT_EXIST;
}
-/**
- * ice_rem_adv_rule_for_vsi - removes existing advanced switch rules for a
- * given VSI handle
- * @hw: pointer to the hardware structure
- * @vsi_handle: VSI handle for which we are supposed to remove all the rules.
- *
- * This function is used to remove all the rules for a given VSI and as soon
- * as removing a rule fails, it will return immediately with the error code,
- * else it will return ICE_SUCCESS
- */
-enum ice_status ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle)
-{
- struct ice_adv_fltr_mgmt_list_entry *list_itr, *tmp_entry;
- struct ice_vsi_list_map_info *map_info;
- struct LIST_HEAD_TYPE *list_head;
- struct ice_adv_rule_info rinfo;
- struct ice_switch_info *sw;
- enum ice_status status;
- u8 rid;
-
- sw = hw->switch_info;
- for (rid = 0; rid < ICE_MAX_NUM_RECIPES; rid++) {
- if (!sw->recp_list[rid].recp_created)
- continue;
- if (!sw->recp_list[rid].adv_rule)
- continue;
-
- list_head = &sw->recp_list[rid].filt_rules;
- LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp_entry, list_head,
- ice_adv_fltr_mgmt_list_entry,
- list_entry) {
- rinfo = list_itr->rule_info;
-
- if (rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST) {
- map_info = list_itr->vsi_list_info;
- if (!map_info)
- continue;
-
- if (!ice_is_bit_set(map_info->vsi_map,
- vsi_handle))
- continue;
- } else if (rinfo.sw_act.vsi_handle != vsi_handle) {
- continue;
- }
-
- rinfo.sw_act.vsi_handle = vsi_handle;
- status = ice_rem_adv_rule(hw, list_itr->lkups,
- list_itr->lkups_cnt, &rinfo);
-
- if (status)
- return status;
- }
- }
- return ICE_SUCCESS;
-}
-
-/**
- * ice_replay_fltr - Replay all the filters stored by a specific list head
- * @hw: pointer to the hardware structure
- * @list_head: list for which filters needs to be replayed
- * @recp_id: Recipe ID for which rules need to be replayed
- */
-static enum ice_status
-ice_replay_fltr(struct ice_hw *hw, u8 recp_id, struct LIST_HEAD_TYPE *list_head)
-{
- struct ice_fltr_mgmt_list_entry *itr;
- enum ice_status status = ICE_SUCCESS;
- struct ice_sw_recipe *recp_list;
- u8 lport = hw->port_info->lport;
- struct LIST_HEAD_TYPE l_head;
-
- if (LIST_EMPTY(list_head))
- return status;
-
- recp_list = &hw->switch_info->recp_list[recp_id];
- /* Move entries from the given list_head to a temporary l_head so that
- * they can be replayed. Otherwise when trying to re-add the same
- * filter, the function will return already exists
- */
- LIST_REPLACE_INIT(list_head, &l_head);
-
- /* Mark the given list_head empty by reinitializing it so filters
- * could be added again by *handler
- */
- LIST_FOR_EACH_ENTRY(itr, &l_head, ice_fltr_mgmt_list_entry,
- list_entry) {
- struct ice_fltr_list_entry f_entry;
- u16 vsi_handle;
-
- f_entry.fltr_info = itr->fltr_info;
- if (itr->vsi_count < 2 && recp_id != ICE_SW_LKUP_VLAN) {
- status = ice_add_rule_internal(hw, recp_list, lport,
- &f_entry);
- if (status != ICE_SUCCESS)
- goto end;
- continue;
- }
-
- /* Add a filter per VSI separately */
- ice_for_each_set_bit(vsi_handle, itr->vsi_list_info->vsi_map,
- ICE_MAX_VSI) {
- if (!ice_is_vsi_valid(hw, vsi_handle))
- break;
-
- ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
- f_entry.fltr_info.vsi_handle = vsi_handle;
- f_entry.fltr_info.fwd_id.hw_vsi_id =
- ice_get_hw_vsi_num(hw, vsi_handle);
- f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
- if (recp_id == ICE_SW_LKUP_VLAN)
- status = ice_add_vlan_internal(hw, recp_list,
- &f_entry);
- else
- status = ice_add_rule_internal(hw, recp_list,
- lport,
- &f_entry);
- if (status != ICE_SUCCESS)
- goto end;
- }
- }
-end:
- /* Clear the filter management list */
- ice_rem_sw_rule_info(hw, &l_head);
- return status;
-}
-
-/**
- * ice_replay_all_fltr - replay all filters stored in bookkeeping lists
- * @hw: pointer to the hardware structure
- *
- * NOTE: This function does not clean up partially added filters on error.
- * It is up to caller of the function to issue a reset or fail early.
- */
-enum ice_status ice_replay_all_fltr(struct ice_hw *hw)
-{
- struct ice_switch_info *sw = hw->switch_info;
- enum ice_status status = ICE_SUCCESS;
- u8 i;
-
- for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
- struct LIST_HEAD_TYPE *head = &sw->recp_list[i].filt_rules;
-
- status = ice_replay_fltr(hw, i, head);
- if (status != ICE_SUCCESS)
- return status;
- }
- return status;
-}
-
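
ice_replay_all_fltr() was the whole-switch replay entry point; a minimal sketch of the reset-recovery pattern its NOTE implies:

    static enum ice_status example_replay(struct ice_hw *hw)
    {
            enum ice_status status;

            /* re-program every cached filter after a reset */
            status = ice_replay_all_fltr(hw);
            /* on failure, partially re-added filters are NOT cleaned up;
             * the caller is expected to fail early or reset again
             */
            return status;
    }
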
/**
* ice_replay_vsi_fltr - Replay filters for requested VSI
* @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index be9b74fd4c..680f8dad38 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -386,30 +386,12 @@ ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
struct ice_sq_cd *cd);
struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
void ice_clear_all_vsi_ctx(struct ice_hw *hw);
-enum ice_status
-ice_aq_get_vsi_params(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
- struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi,
- u16 count, struct ice_mir_rule_buf *mr_buf,
- struct ice_sq_cd *cd, u16 *rule_id);
-enum ice_status
-ice_aq_delete_mir_rule(struct ice_hw *hw, u16 rule_id, bool keep_allocd,
- struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_get_storm_ctrl(struct ice_hw *hw, u32 *bcast_thresh, u32 *mcast_thresh,
- u32 *ctl_bitmask);
-enum ice_status
-ice_aq_set_storm_ctrl(struct ice_hw *hw, u32 bcast_thresh, u32 mcast_thresh,
- u32 ctl_bitmask);
/* Switch config */
enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw);
enum ice_status
ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id);
enum ice_status
-ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id);
-enum ice_status
ice_alloc_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
u16 *counter_id);
enum ice_status
@@ -417,27 +399,10 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
u16 counter_id);
/* Switch/bridge related commands */
-enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw);
-enum ice_status ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id);
-enum ice_status ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id);
-enum ice_status
-ice_alloc_sw(struct ice_hw *hw, bool ena_stats, bool shared_res, u16 *sw_id,
- u16 *counter_id);
-enum ice_status
-ice_free_sw(struct ice_hw *hw, u16 sw_id, u16 counter_id);
-enum ice_status
-ice_aq_get_res_alloc(struct ice_hw *hw, u16 *num_entries,
- struct ice_aqc_get_res_resp_elem *buf, u16 buf_size,
- struct ice_sq_cd *cd);
-enum ice_status
-ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries,
- struct ice_aqc_res_elem *buf, u16 buf_size, u16 res_type,
- bool res_shared, u16 *desc_id, struct ice_sq_cd *cd);
enum ice_status
ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
enum ice_status
ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
-void ice_rem_all_sw_rules_info(struct ice_hw *hw);
enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
enum ice_status
@@ -445,38 +410,15 @@ ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
enum ice_status
ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
enum ice_status
-ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
-enum ice_status
ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
-enum ice_status
-ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info,
- u16 sw_marker);
-enum ice_status
-ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info);
-void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle);
-
/* Promisc/defport setup for VSIs */
enum ice_status
-ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
- u8 direction);
-enum ice_status
ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
u16 vid);
enum ice_status
ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
u16 vid);
-enum ice_status
-ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
- bool rm_vlan_promisc);
-
-/* Get VSIs Promisc/defport settings */
-enum ice_status
-ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
- u16 *vid);
-enum ice_status
-ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
- u16 *vid);
enum ice_status
ice_aq_add_recipe(struct ice_hw *hw,
@@ -501,16 +443,12 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
struct ice_rule_query_data *added_entry);
enum ice_status
-ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle);
-enum ice_status
ice_rem_adv_rule_by_id(struct ice_hw *hw,
struct ice_rule_query_data *remove_entry);
enum ice_status
ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
u16 lkups_cnt, struct ice_adv_rule_info *rinfo);
-enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
-
enum ice_status
ice_init_def_sw_recp(struct ice_hw *hw, struct ice_sw_recipe **recp_list);
u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle);
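For context: the prototypes removed above go away together with their
definitions in ice_switch.c, so any straggling in-tree caller is caught at
compile time rather than at link time. A minimal sketch of what would now
fail to build -- the caller below is hypothetical, and its signature simply
reuses the removed ice_aq_get_vsi_params() prototype:

    #include "ice_switch.h"

    /* Hypothetical leftover caller; with the prototype gone this is an
     * implicit declaration, so the compiler flags it immediately.
     */
    static enum ice_status stale_caller(struct ice_hw *hw,
                                        struct ice_vsi_ctx *vsi_ctx)
    {
            return ice_aq_get_vsi_params(hw, vsi_ctx, NULL);
    }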
diff --git a/drivers/net/igc/base/igc_api.c b/drivers/net/igc/base/igc_api.c
index 2f8c0753cb..efa7a8dd2b 100644
--- a/drivers/net/igc/base/igc_api.c
+++ b/drivers/net/igc/base/igc_api.c
@@ -317,35 +317,6 @@ static s32 igc_get_i2c_ack(struct igc_hw *hw)
return status;
}
-/**
- * igc_set_i2c_bb - Enable I2C bit-bang
- * @hw: pointer to the HW structure
- *
- * Enable I2C bit-bang interface
- *
- **/
-s32 igc_set_i2c_bb(struct igc_hw *hw)
-{
- s32 ret_val = IGC_SUCCESS;
- u32 ctrl_ext, i2cparams;
-
- DEBUGFUNC("igc_set_i2c_bb");
-
- ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
- ctrl_ext |= IGC_CTRL_I2C_ENA;
- IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext);
- IGC_WRITE_FLUSH(hw);
-
- i2cparams = IGC_READ_REG(hw, IGC_I2CPARAMS);
- i2cparams |= IGC_I2CBB_EN;
- i2cparams |= IGC_I2C_DATA_OE_N;
- i2cparams |= IGC_I2C_CLK_OE_N;
- IGC_WRITE_REG(hw, IGC_I2CPARAMS, i2cparams);
- IGC_WRITE_FLUSH(hw);
-
- return ret_val;
-}
-
/**
* igc_read_i2c_byte_generic - Reads 8 bit word over I2C
* @hw: pointer to hardware structure
@@ -622,32 +593,6 @@ s32 igc_init_phy_params(struct igc_hw *hw)
return ret_val;
}
-/**
- * igc_init_mbx_params - Initialize mailbox function pointers
- * @hw: pointer to the HW structure
- *
- * This function initializes the function pointers for the PHY
- * set of functions. Called by drivers or by igc_setup_init_funcs.
- **/
-s32 igc_init_mbx_params(struct igc_hw *hw)
-{
- s32 ret_val = IGC_SUCCESS;
-
- if (hw->mbx.ops.init_params) {
- ret_val = hw->mbx.ops.init_params(hw);
- if (ret_val) {
- DEBUGOUT("Mailbox Initialization Error\n");
- goto out;
- }
- } else {
- DEBUGOUT("mbx.init_mbx_params was NULL\n");
- ret_val = -IGC_ERR_CONFIG;
- }
-
-out:
- return ret_val;
-}
-
/**
* igc_set_mac_type - Sets MAC type
* @hw: pointer to the HW structure
@@ -998,34 +943,6 @@ s32 igc_get_bus_info(struct igc_hw *hw)
return IGC_SUCCESS;
}
-/**
- * igc_clear_vfta - Clear VLAN filter table
- * @hw: pointer to the HW structure
- *
- * This clears the VLAN filter table on the adapter. This is a function
- * pointer entry point called by drivers.
- **/
-void igc_clear_vfta(struct igc_hw *hw)
-{
- if (hw->mac.ops.clear_vfta)
- hw->mac.ops.clear_vfta(hw);
-}
-
-/**
- * igc_write_vfta - Write value to VLAN filter table
- * @hw: pointer to the HW structure
- * @offset: the 32-bit offset at which to write the value.
- * @value: the 32-bit value to write at location offset.
- *
- * This writes a 32-bit value to a 32-bit offset in the VLAN filter
- * table. This is a function pointer entry point called by drivers.
- **/
-void igc_write_vfta(struct igc_hw *hw, u32 offset, u32 value)
-{
- if (hw->mac.ops.write_vfta)
- hw->mac.ops.write_vfta(hw, offset, value);
-}
-
/**
* igc_update_mc_addr_list - Update Multicast addresses
* @hw: pointer to the HW structure
@@ -1043,19 +960,6 @@ void igc_update_mc_addr_list(struct igc_hw *hw, u8 *mc_addr_list,
mc_addr_count);
}
-/**
- * igc_force_mac_fc - Force MAC flow control
- * @hw: pointer to the HW structure
- *
- * Force the MAC's flow control settings. Currently no func pointer exists
- * and all implementations are handled in the generic version of this
- * function.
- **/
-s32 igc_force_mac_fc(struct igc_hw *hw)
-{
- return igc_force_mac_fc_generic(hw);
-}
-
/**
* igc_check_for_link - Check/Store link connection
* @hw: pointer to the HW structure
@@ -1072,34 +976,6 @@ s32 igc_check_for_link(struct igc_hw *hw)
return -IGC_ERR_CONFIG;
}
-/**
- * igc_check_mng_mode - Check management mode
- * @hw: pointer to the HW structure
- *
- * This checks if the adapter has manageability enabled.
- * This is a function pointer entry point called by drivers.
- **/
-bool igc_check_mng_mode(struct igc_hw *hw)
-{
- if (hw->mac.ops.check_mng_mode)
- return hw->mac.ops.check_mng_mode(hw);
-
- return false;
-}
-
-/**
- * igc_mng_write_dhcp_info - Writes DHCP info to host interface
- * @hw: pointer to the HW structure
- * @buffer: pointer to the host interface
- * @length: size of the buffer
- *
- * Writes the DHCP information to the host interface.
- **/
-s32 igc_mng_write_dhcp_info(struct igc_hw *hw, u8 *buffer, u16 length)
-{
- return igc_mng_write_dhcp_info_generic(hw, buffer, length);
-}
-
/**
* igc_reset_hw - Reset hardware
* @hw: pointer to the HW structure
@@ -1146,86 +1022,6 @@ s32 igc_setup_link(struct igc_hw *hw)
return -IGC_ERR_CONFIG;
}
-/**
- * igc_get_speed_and_duplex - Returns current speed and duplex
- * @hw: pointer to the HW structure
- * @speed: pointer to a 16-bit value to store the speed
- * @duplex: pointer to a 16-bit value to store the duplex.
- *
- * This returns the speed and duplex of the adapter in the two 'out'
- * variables passed in. This is a function pointer entry point called
- * by drivers.
- **/
-s32 igc_get_speed_and_duplex(struct igc_hw *hw, u16 *speed, u16 *duplex)
-{
- if (hw->mac.ops.get_link_up_info)
- return hw->mac.ops.get_link_up_info(hw, speed, duplex);
-
- return -IGC_ERR_CONFIG;
-}
-
-/**
- * igc_setup_led - Configures SW controllable LED
- * @hw: pointer to the HW structure
- *
- * This prepares the SW controllable LED for use and saves the current state
- * of the LED so it can be later restored. This is a function pointer entry
- * point called by drivers.
- **/
-s32 igc_setup_led(struct igc_hw *hw)
-{
- if (hw->mac.ops.setup_led)
- return hw->mac.ops.setup_led(hw);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_cleanup_led - Restores SW controllable LED
- * @hw: pointer to the HW structure
- *
- * This restores the SW controllable LED to the value saved off by
- * igc_setup_led. This is a function pointer entry point called by drivers.
- **/
-s32 igc_cleanup_led(struct igc_hw *hw)
-{
- if (hw->mac.ops.cleanup_led)
- return hw->mac.ops.cleanup_led(hw);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_blink_led - Blink SW controllable LED
- * @hw: pointer to the HW structure
- *
- * This starts the adapter LED blinking. Request the LED to be setup first
- * and cleaned up after. This is a function pointer entry point called by
- * drivers.
- **/
-s32 igc_blink_led(struct igc_hw *hw)
-{
- if (hw->mac.ops.blink_led)
- return hw->mac.ops.blink_led(hw);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_id_led_init - store LED configurations in SW
- * @hw: pointer to the HW structure
- *
- * Initializes the LED config in SW. This is a function pointer entry point
- * called by drivers.
- **/
-s32 igc_id_led_init(struct igc_hw *hw)
-{
- if (hw->mac.ops.id_led_init)
- return hw->mac.ops.id_led_init(hw);
-
- return IGC_SUCCESS;
-}
-
/**
* igc_led_on - Turn on SW controllable LED
* @hw: pointer to the HW structure
@@ -1256,43 +1052,6 @@ s32 igc_led_off(struct igc_hw *hw)
return IGC_SUCCESS;
}
-/**
- * igc_reset_adaptive - Reset adaptive IFS
- * @hw: pointer to the HW structure
- *
- * Resets the adaptive IFS. Currently no func pointer exists and all
- * implementations are handled in the generic version of this function.
- **/
-void igc_reset_adaptive(struct igc_hw *hw)
-{
- igc_reset_adaptive_generic(hw);
-}
-
-/**
- * igc_update_adaptive - Update adaptive IFS
- * @hw: pointer to the HW structure
- *
- * Updates adapter IFS. Currently no func pointer exists and all
- * implementations are handled in the generic version of this function.
- **/
-void igc_update_adaptive(struct igc_hw *hw)
-{
- igc_update_adaptive_generic(hw);
-}
-
-/**
- * igc_disable_pcie_master - Disable PCI-Express master access
- * @hw: pointer to the HW structure
- *
- * Disables PCI-Express master access and verifies there are no pending
- * requests. Currently no func pointer exists and all implementations are
- * handled in the generic version of this function.
- **/
-s32 igc_disable_pcie_master(struct igc_hw *hw)
-{
- return igc_disable_pcie_master_generic(hw);
-}
-
/**
* igc_config_collision_dist - Configure collision distance
* @hw: pointer to the HW structure
@@ -1322,94 +1081,6 @@ int igc_rar_set(struct igc_hw *hw, u8 *addr, u32 index)
return IGC_SUCCESS;
}
-/**
- * igc_validate_mdi_setting - Ensures valid MDI/MDIX SW state
- * @hw: pointer to the HW structure
- *
- * Ensures that the MDI/MDIX SW state is valid.
- **/
-s32 igc_validate_mdi_setting(struct igc_hw *hw)
-{
- if (hw->mac.ops.validate_mdi_setting)
- return hw->mac.ops.validate_mdi_setting(hw);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_hash_mc_addr - Determines address location in multicast table
- * @hw: pointer to the HW structure
- * @mc_addr: Multicast address to hash.
- *
- * This hashes an address to determine its location in the multicast
- * table. Currently no func pointer exists and all implementations
- * are handled in the generic version of this function.
- **/
-u32 igc_hash_mc_addr(struct igc_hw *hw, u8 *mc_addr)
-{
- return igc_hash_mc_addr_generic(hw, mc_addr);
-}
-
-/**
- * igc_enable_tx_pkt_filtering - Enable packet filtering on TX
- * @hw: pointer to the HW structure
- *
- * Enables packet filtering on transmit packets if manageability is enabled
- * and host interface is enabled.
- * Currently no func pointer exists and all implementations are handled in the
- * generic version of this function.
- **/
-bool igc_enable_tx_pkt_filtering(struct igc_hw *hw)
-{
- return igc_enable_tx_pkt_filtering_generic(hw);
-}
-
-/**
- * igc_mng_host_if_write - Writes to the manageability host interface
- * @hw: pointer to the HW structure
- * @buffer: pointer to the host interface buffer
- * @length: size of the buffer
- * @offset: location in the buffer to write to
- * @sum: sum of the data (not checksum)
- *
- * This function writes the buffer content at the given offset over the host
- * interface. It also handles alignment so the writes are done in the most
- * efficient way, and accumulates the sum of the buffer in the *sum parameter.
- **/
-s32 igc_mng_host_if_write(struct igc_hw *hw, u8 *buffer, u16 length,
- u16 offset, u8 *sum)
-{
- return igc_mng_host_if_write_generic(hw, buffer, length, offset, sum);
-}
-
-/**
- * igc_mng_write_cmd_header - Writes manageability command header
- * @hw: pointer to the HW structure
- * @hdr: pointer to the host interface command header
- *
- * Writes the command header after does the checksum calculation.
- **/
-s32 igc_mng_write_cmd_header(struct igc_hw *hw,
- struct igc_host_mng_command_header *hdr)
-{
- return igc_mng_write_cmd_header_generic(hw, hdr);
-}
-
-/**
- * igc_mng_enable_host_if - Checks host interface is enabled
- * @hw: pointer to the HW structure
- *
- * Returns IGC_SUCCESS upon success, else IGC_ERR_HOST_INTERFACE_COMMAND
- *
- * This function checks whether the HOST IF is enabled for command operation
- * and also checks whether the previous command is completed. It busy-waits
- * if the previous command has not yet completed.
- **/
-s32 igc_mng_enable_host_if(struct igc_hw *hw)
-{
- return igc_mng_enable_host_if_generic(hw);
-}
-
/**
* igc_check_reset_block - Verifies PHY can be reset
* @hw: pointer to the HW structure
@@ -1425,126 +1096,6 @@ s32 igc_check_reset_block(struct igc_hw *hw)
return IGC_SUCCESS;
}
-/**
- * igc_read_phy_reg - Reads PHY register
- * @hw: pointer to the HW structure
- * @offset: the register to read
- * @data: the buffer to store the 16-bit read.
- *
- * Reads the PHY register and returns the value in data.
- * This is a function pointer entry point called by drivers.
- **/
-s32 igc_read_phy_reg(struct igc_hw *hw, u32 offset, u16 *data)
-{
- if (hw->phy.ops.read_reg)
- return hw->phy.ops.read_reg(hw, offset, data);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_write_phy_reg - Writes PHY register
- * @hw: pointer to the HW structure
- * @offset: the register to write
- * @data: the value to write.
- *
- * Writes the PHY register at offset with the value in data.
- * This is a function pointer entry point called by drivers.
- **/
-s32 igc_write_phy_reg(struct igc_hw *hw, u32 offset, u16 data)
-{
- if (hw->phy.ops.write_reg)
- return hw->phy.ops.write_reg(hw, offset, data);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_release_phy - Generic release PHY
- * @hw: pointer to the HW structure
- *
- * Return if silicon family does not require a semaphore when accessing the
- * PHY.
- **/
-void igc_release_phy(struct igc_hw *hw)
-{
- if (hw->phy.ops.release)
- hw->phy.ops.release(hw);
-}
-
-/**
- * igc_acquire_phy - Generic acquire PHY
- * @hw: pointer to the HW structure
- *
- * Return success if silicon family does not require a semaphore when
- * accessing the PHY.
- **/
-s32 igc_acquire_phy(struct igc_hw *hw)
-{
- if (hw->phy.ops.acquire)
- return hw->phy.ops.acquire(hw);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_cfg_on_link_up - Configure PHY upon link up
- * @hw: pointer to the HW structure
- **/
-s32 igc_cfg_on_link_up(struct igc_hw *hw)
-{
- if (hw->phy.ops.cfg_on_link_up)
- return hw->phy.ops.cfg_on_link_up(hw);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_read_kmrn_reg - Reads register using Kumeran interface
- * @hw: pointer to the HW structure
- * @offset: the register to read
- * @data: the location to store the 16-bit value read.
- *
- * Reads a register out of the Kumeran interface. Currently no func pointer
- * exists and all implementations are handled in the generic version of
- * this function.
- **/
-s32 igc_read_kmrn_reg(struct igc_hw *hw, u32 offset, u16 *data)
-{
- return igc_read_kmrn_reg_generic(hw, offset, data);
-}
-
-/**
- * igc_write_kmrn_reg - Writes register using Kumeran interface
- * @hw: pointer to the HW structure
- * @offset: the register to write
- * @data: the value to write.
- *
- * Writes a register to the Kumeran interface. Currently no func pointer
- * exists and all implementations are handled in the generic version of
- * this function.
- **/
-s32 igc_write_kmrn_reg(struct igc_hw *hw, u32 offset, u16 data)
-{
- return igc_write_kmrn_reg_generic(hw, offset, data);
-}
-
-/**
- * igc_get_cable_length - Retrieves cable length estimation
- * @hw: pointer to the HW structure
- *
- * This function estimates the cable length and stores them in
- * hw->phy.min_length and hw->phy.max_length. This is a function pointer
- * entry point called by drivers.
- **/
-s32 igc_get_cable_length(struct igc_hw *hw)
-{
- if (hw->phy.ops.get_cable_length)
- return hw->phy.ops.get_cable_length(hw);
-
- return IGC_SUCCESS;
-}
-
/**
* igc_get_phy_info - Retrieves PHY information from registers
* @hw: pointer to the HW structure
@@ -1576,65 +1127,6 @@ s32 igc_phy_hw_reset(struct igc_hw *hw)
return IGC_SUCCESS;
}
-/**
- * igc_phy_commit - Soft PHY reset
- * @hw: pointer to the HW structure
- *
- * Performs a soft PHY reset on those that apply. This is a function pointer
- * entry point called by drivers.
- **/
-s32 igc_phy_commit(struct igc_hw *hw)
-{
- if (hw->phy.ops.commit)
- return hw->phy.ops.commit(hw);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_set_d0_lplu_state - Sets low power link up state for D0
- * @hw: pointer to the HW structure
- * @active: boolean used to enable/disable lplu
- *
- * Success returns 0, Failure returns 1
- *
- * The low power link up (lplu) state is set to the power management level D0
- * and SmartSpeed is disabled when active is true, else clear lplu for D0
- * and enable Smartspeed. LPLU and Smartspeed are mutually exclusive. LPLU
- * is used during Dx states where the power conservation is most important.
- * During driver activity, SmartSpeed should be enabled so performance is
- * maintained. This is a function pointer entry point called by drivers.
- **/
-s32 igc_set_d0_lplu_state(struct igc_hw *hw, bool active)
-{
- if (hw->phy.ops.set_d0_lplu_state)
- return hw->phy.ops.set_d0_lplu_state(hw, active);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_set_d3_lplu_state - Sets low power link up state for D3
- * @hw: pointer to the HW structure
- * @active: boolean used to enable/disable lplu
- *
- * Success returns 0, Failure returns 1
- *
- * The low power link up (lplu) state is set to the power management level D3
- * and SmartSpeed is disabled when active is true, else clear lplu for D3
- * and enable Smartspeed. LPLU and Smartspeed are mutually exclusive. LPLU
- * is used during Dx states where the power conservation is most important.
- * During driver activity, SmartSpeed should be enabled so performance is
- * maintained. This is a function pointer entry point called by drivers.
- **/
-s32 igc_set_d3_lplu_state(struct igc_hw *hw, bool active)
-{
- if (hw->phy.ops.set_d3_lplu_state)
- return hw->phy.ops.set_d3_lplu_state(hw, active);
-
- return IGC_SUCCESS;
-}
-
/**
* igc_read_mac_addr - Reads MAC address
* @hw: pointer to the HW structure
@@ -1651,52 +1143,6 @@ s32 igc_read_mac_addr(struct igc_hw *hw)
return igc_read_mac_addr_generic(hw);
}
-/**
- * igc_read_pba_string - Read device part number string
- * @hw: pointer to the HW structure
- * @pba_num: pointer to device part number
- * @pba_num_size: size of part number buffer
- *
- * Reads the product board assembly (PBA) number from the EEPROM and stores
- * the value in pba_num.
- * Currently no func pointer exists and all implementations are handled in the
- * generic version of this function.
- **/
-s32 igc_read_pba_string(struct igc_hw *hw, u8 *pba_num, u32 pba_num_size)
-{
- return igc_read_pba_string_generic(hw, pba_num, pba_num_size);
-}
-
-/**
- * igc_read_pba_length - Read device part number string length
- * @hw: pointer to the HW structure
- * @pba_num_size: size of part number buffer
- *
- * Reads the product board assembly (PBA) number length from the EEPROM and
- * stores the value in pba_num_size.
- * Currently no func pointer exists and all implementations are handled in the
- * generic version of this function.
- **/
-s32 igc_read_pba_length(struct igc_hw *hw, u32 *pba_num_size)
-{
- return igc_read_pba_length_generic(hw, pba_num_size);
-}
-
-/**
- * igc_read_pba_num - Read device part number
- * @hw: pointer to the HW structure
- * @pba_num: pointer to device part number
- *
- * Reads the product board assembly (PBA) number from the EEPROM and stores
- * the value in pba_num.
- * Currently no func pointer exists and all implementations are handled in the
- * generic version of this function.
- **/
-s32 igc_read_pba_num(struct igc_hw *hw, u32 *pba_num)
-{
- return igc_read_pba_num_generic(hw, pba_num);
-}
-
/**
* igc_validate_nvm_checksum - Verifies NVM (EEPROM) checksum
* @hw: pointer to the HW structure
@@ -1712,34 +1158,6 @@ s32 igc_validate_nvm_checksum(struct igc_hw *hw)
return -IGC_ERR_CONFIG;
}
-/**
- * igc_update_nvm_checksum - Updates NVM (EEPROM) checksum
- * @hw: pointer to the HW structure
- *
- * Updates the NVM checksum. Currently no func pointer exists and all
- * implementations are handled in the generic version of this function.
- **/
-s32 igc_update_nvm_checksum(struct igc_hw *hw)
-{
- if (hw->nvm.ops.update)
- return hw->nvm.ops.update(hw);
-
- return -IGC_ERR_CONFIG;
-}
-
-/**
- * igc_reload_nvm - Reloads EEPROM
- * @hw: pointer to the HW structure
- *
- * Reloads the EEPROM by setting the "Reinitialize from EEPROM" bit in the
- * extended control register.
- **/
-void igc_reload_nvm(struct igc_hw *hw)
-{
- if (hw->nvm.ops.reload)
- hw->nvm.ops.reload(hw);
-}
-
/**
* igc_read_nvm - Reads NVM (EEPROM)
* @hw: pointer to the HW structure
@@ -1776,22 +1194,6 @@ s32 igc_write_nvm(struct igc_hw *hw, u16 offset, u16 words, u16 *data)
return IGC_SUCCESS;
}
-/**
- * igc_write_8bit_ctrl_reg - Writes 8bit Control register
- * @hw: pointer to the HW structure
- * @reg: 32bit register offset
- * @offset: the register to write
- * @data: the value to write.
- *
- * Writes the PHY register at offset with the value in data.
- * This is a function pointer entry point called by drivers.
- **/
-s32 igc_write_8bit_ctrl_reg(struct igc_hw *hw, u32 reg, u32 offset,
- u8 data)
-{
- return igc_write_8bit_ctrl_reg_generic(hw, reg, offset, data);
-}
-
/**
* igc_power_up_phy - Restores link in case of PHY power down
* @hw: pointer to the HW structure
diff --git a/drivers/net/igc/base/igc_api.h b/drivers/net/igc/base/igc_api.h
index 00681ee4f8..6bb22912dd 100644
--- a/drivers/net/igc/base/igc_api.h
+++ b/drivers/net/igc/base/igc_api.h
@@ -19,7 +19,6 @@
#define IGC_I2C_T_SU_STO 4
#define IGC_I2C_T_BUF 5
-s32 igc_set_i2c_bb(struct igc_hw *hw);
s32 igc_read_i2c_byte_generic(struct igc_hw *hw, u8 byte_offset,
u8 dev_addr, u8 *data);
s32 igc_write_i2c_byte_generic(struct igc_hw *hw, u8 byte_offset,
@@ -46,66 +45,26 @@ s32 igc_setup_init_funcs(struct igc_hw *hw, bool init_device);
s32 igc_init_mac_params(struct igc_hw *hw);
s32 igc_init_nvm_params(struct igc_hw *hw);
s32 igc_init_phy_params(struct igc_hw *hw);
-s32 igc_init_mbx_params(struct igc_hw *hw);
s32 igc_get_bus_info(struct igc_hw *hw);
-void igc_clear_vfta(struct igc_hw *hw);
-void igc_write_vfta(struct igc_hw *hw, u32 offset, u32 value);
-s32 igc_force_mac_fc(struct igc_hw *hw);
s32 igc_check_for_link(struct igc_hw *hw);
s32 igc_reset_hw(struct igc_hw *hw);
s32 igc_init_hw(struct igc_hw *hw);
s32 igc_setup_link(struct igc_hw *hw);
-s32 igc_get_speed_and_duplex(struct igc_hw *hw, u16 *speed, u16 *duplex);
-s32 igc_disable_pcie_master(struct igc_hw *hw);
void igc_config_collision_dist(struct igc_hw *hw);
int igc_rar_set(struct igc_hw *hw, u8 *addr, u32 index);
-u32 igc_hash_mc_addr(struct igc_hw *hw, u8 *mc_addr);
void igc_update_mc_addr_list(struct igc_hw *hw, u8 *mc_addr_list,
u32 mc_addr_count);
-s32 igc_setup_led(struct igc_hw *hw);
-s32 igc_cleanup_led(struct igc_hw *hw);
s32 igc_check_reset_block(struct igc_hw *hw);
-s32 igc_blink_led(struct igc_hw *hw);
s32 igc_led_on(struct igc_hw *hw);
s32 igc_led_off(struct igc_hw *hw);
-s32 igc_id_led_init(struct igc_hw *hw);
-void igc_reset_adaptive(struct igc_hw *hw);
-void igc_update_adaptive(struct igc_hw *hw);
-s32 igc_get_cable_length(struct igc_hw *hw);
-s32 igc_validate_mdi_setting(struct igc_hw *hw);
-s32 igc_read_phy_reg(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_write_phy_reg(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_write_8bit_ctrl_reg(struct igc_hw *hw, u32 reg, u32 offset,
- u8 data);
s32 igc_get_phy_info(struct igc_hw *hw);
-void igc_release_phy(struct igc_hw *hw);
-s32 igc_acquire_phy(struct igc_hw *hw);
-s32 igc_cfg_on_link_up(struct igc_hw *hw);
s32 igc_phy_hw_reset(struct igc_hw *hw);
-s32 igc_phy_commit(struct igc_hw *hw);
void igc_power_up_phy(struct igc_hw *hw);
void igc_power_down_phy(struct igc_hw *hw);
s32 igc_read_mac_addr(struct igc_hw *hw);
-s32 igc_read_pba_num(struct igc_hw *hw, u32 *part_num);
-s32 igc_read_pba_string(struct igc_hw *hw, u8 *pba_num, u32 pba_num_size);
-s32 igc_read_pba_length(struct igc_hw *hw, u32 *pba_num_size);
-void igc_reload_nvm(struct igc_hw *hw);
-s32 igc_update_nvm_checksum(struct igc_hw *hw);
s32 igc_validate_nvm_checksum(struct igc_hw *hw);
s32 igc_read_nvm(struct igc_hw *hw, u16 offset, u16 words, u16 *data);
-s32 igc_read_kmrn_reg(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_write_kmrn_reg(struct igc_hw *hw, u32 offset, u16 data);
s32 igc_write_nvm(struct igc_hw *hw, u16 offset, u16 words, u16 *data);
-s32 igc_set_d3_lplu_state(struct igc_hw *hw, bool active);
-s32 igc_set_d0_lplu_state(struct igc_hw *hw, bool active);
-bool igc_check_mng_mode(struct igc_hw *hw);
-bool igc_enable_tx_pkt_filtering(struct igc_hw *hw);
-s32 igc_mng_enable_host_if(struct igc_hw *hw);
-s32 igc_mng_host_if_write(struct igc_hw *hw, u8 *buffer, u16 length,
- u16 offset, u8 *sum);
-s32 igc_mng_write_cmd_header(struct igc_hw *hw,
- struct igc_host_mng_command_header *hdr);
-s32 igc_mng_write_dhcp_info(struct igc_hw *hw, u8 *buffer, u16 length);
u32 igc_translate_register_82542(u32 reg);
#endif /* _IGC_API_H_ */
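For context: most of the functions dropped from igc_api.c above were
one-line trampolines over the hw ops tables, so equivalent behaviour is a
direct dispatch. A minimal sketch, assuming the base driver headers are in
scope and using PHY_CONTROL as an example register offset (assumed from the
base defines, not shown in this patch):

    /* Equivalent of the removed igc_read_phy_reg() wrapper: guard the
     * function pointer, then call through the PHY ops table.
     */
    static s32 example_phy_read(struct igc_hw *hw, u16 *phy_data)
    {
            if (hw->phy.ops.read_reg)
                    return hw->phy.ops.read_reg(hw, PHY_CONTROL, phy_data);

            return IGC_SUCCESS;
    }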
diff --git a/drivers/net/igc/base/igc_base.c b/drivers/net/igc/base/igc_base.c
index 1e8b908902..55aca5ad63 100644
--- a/drivers/net/igc/base/igc_base.c
+++ b/drivers/net/igc/base/igc_base.c
@@ -110,81 +110,3 @@ void igc_power_down_phy_copper_base(struct igc_hw *hw)
if (!phy->ops.check_reset_block(hw))
igc_power_down_phy_copper(hw);
}
-
-/**
- * igc_rx_fifo_flush_base - Clean Rx FIFO after Rx enable
- * @hw: pointer to the HW structure
- *
- * After Rx enable, if manageability is enabled then there is likely some
- * bad data at the start of the FIFO and possibly in the DMA FIFO. This
- * function clears the FIFOs and flushes any packets that came in as Rx was
- * being enabled.
- **/
-void igc_rx_fifo_flush_base(struct igc_hw *hw)
-{
- u32 rctl, rlpml, rxdctl[4], rfctl, temp_rctl, rx_enabled;
- int i, ms_wait;
-
- DEBUGFUNC("igc_rx_fifo_flush_base");
-
- /* disable IPv6 options as per hardware errata */
- rfctl = IGC_READ_REG(hw, IGC_RFCTL);
- rfctl |= IGC_RFCTL_IPV6_EX_DIS;
- IGC_WRITE_REG(hw, IGC_RFCTL, rfctl);
-
- if (!(IGC_READ_REG(hw, IGC_MANC) & IGC_MANC_RCV_TCO_EN))
- return;
-
- /* Disable all Rx queues */
- for (i = 0; i < 4; i++) {
- rxdctl[i] = IGC_READ_REG(hw, IGC_RXDCTL(i));
- IGC_WRITE_REG(hw, IGC_RXDCTL(i),
- rxdctl[i] & ~IGC_RXDCTL_QUEUE_ENABLE);
- }
- /* Poll all queues to verify they have shut down */
- for (ms_wait = 0; ms_wait < 10; ms_wait++) {
- msec_delay(1);
- rx_enabled = 0;
- for (i = 0; i < 4; i++)
- rx_enabled |= IGC_READ_REG(hw, IGC_RXDCTL(i));
- if (!(rx_enabled & IGC_RXDCTL_QUEUE_ENABLE))
- break;
- }
-
- if (ms_wait == 10)
- DEBUGOUT("Queue disable timed out after 10ms\n");
-
- /* Clear RLPML, RCTL.SBP, RFCTL.LEF, and set RCTL.LPE so that all
- * incoming packets are rejected. Set enable and wait 2 ms so that
- * any packet that was arriving while RCTL.EN was set is flushed
- */
- IGC_WRITE_REG(hw, IGC_RFCTL, rfctl & ~IGC_RFCTL_LEF);
-
- rlpml = IGC_READ_REG(hw, IGC_RLPML);
- IGC_WRITE_REG(hw, IGC_RLPML, 0);
-
- rctl = IGC_READ_REG(hw, IGC_RCTL);
- temp_rctl = rctl & ~(IGC_RCTL_EN | IGC_RCTL_SBP);
- temp_rctl |= IGC_RCTL_LPE;
-
- IGC_WRITE_REG(hw, IGC_RCTL, temp_rctl);
- IGC_WRITE_REG(hw, IGC_RCTL, temp_rctl | IGC_RCTL_EN);
- IGC_WRITE_FLUSH(hw);
- msec_delay(2);
-
- /* Enable Rx queues that were previously enabled and restore our
- * previous state
- */
- for (i = 0; i < 4; i++)
- IGC_WRITE_REG(hw, IGC_RXDCTL(i), rxdctl[i]);
- IGC_WRITE_REG(hw, IGC_RCTL, rctl);
- IGC_WRITE_FLUSH(hw);
-
- IGC_WRITE_REG(hw, IGC_RLPML, rlpml);
- IGC_WRITE_REG(hw, IGC_RFCTL, rfctl);
-
- /* Flush receive errors generated by workaround */
- IGC_READ_REG(hw, IGC_ROC);
- IGC_READ_REG(hw, IGC_RNBC);
- IGC_READ_REG(hw, IGC_MPC);
-}
diff --git a/drivers/net/igc/base/igc_base.h b/drivers/net/igc/base/igc_base.h
index 5f342af7ee..19b549ae45 100644
--- a/drivers/net/igc/base/igc_base.h
+++ b/drivers/net/igc/base/igc_base.h
@@ -8,7 +8,6 @@
/* forward declaration */
s32 igc_init_hw_base(struct igc_hw *hw);
void igc_power_down_phy_copper_base(struct igc_hw *hw);
-void igc_rx_fifo_flush_base(struct igc_hw *hw);
s32 igc_acquire_phy_base(struct igc_hw *hw);
void igc_release_phy_base(struct igc_hw *hw);
diff --git a/drivers/net/igc/base/igc_hw.h b/drivers/net/igc/base/igc_hw.h
index be38fafa5f..55d63b211c 100644
--- a/drivers/net/igc/base/igc_hw.h
+++ b/drivers/net/igc/base/igc_hw.h
@@ -1041,10 +1041,7 @@ struct igc_hw {
#include "igc_base.h"
/* These functions must be implemented by drivers */
-void igc_pci_clear_mwi(struct igc_hw *hw);
-void igc_pci_set_mwi(struct igc_hw *hw);
s32 igc_read_pcie_cap_reg(struct igc_hw *hw, u32 reg, u16 *value);
-s32 igc_write_pcie_cap_reg(struct igc_hw *hw, u32 reg, u16 *value);
void igc_read_pci_cfg(struct igc_hw *hw, u32 reg, u16 *value);
void igc_write_pci_cfg(struct igc_hw *hw, u32 reg, u16 *value);
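For context: the comment above this hunk marks these prototypes as OS glue
that each port must implement; the patch only trims the unused ones. A
placeholder sketch of one surviving hook -- the body is illustrative, not
the DPDK osdep implementation:

    /* Illustrative stub only: a real port reads the PCIe capability
     * register from PCI config space here.
     */
    s32 igc_read_pcie_cap_reg(struct igc_hw *hw, u32 reg, u16 *value)
    {
            (void)hw;
            (void)reg;
            *value = 0;
            return IGC_SUCCESS;
    }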
diff --git a/drivers/net/igc/base/igc_i225.c b/drivers/net/igc/base/igc_i225.c
index 060b2f8f93..01d2c7487d 100644
--- a/drivers/net/igc/base/igc_i225.c
+++ b/drivers/net/igc/base/igc_i225.c
@@ -590,102 +590,6 @@ static s32 __igc_write_nvm_srwr(struct igc_hw *hw, u16 offset, u16 words,
return ret_val;
}
-/* igc_read_invm_version_i225 - Reads iNVM version and image type
- * @hw: pointer to the HW structure
- * @invm_ver: version structure for the version read
- *
- * Reads iNVM version and image type.
- */
-s32 igc_read_invm_version_i225(struct igc_hw *hw,
- struct igc_fw_version *invm_ver)
-{
- u32 *record = NULL;
- u32 *next_record = NULL;
- u32 i = 0;
- u32 invm_dword = 0;
- u32 invm_blocks = IGC_INVM_SIZE - (IGC_INVM_ULT_BYTES_SIZE /
- IGC_INVM_RECORD_SIZE_IN_BYTES);
- u32 buffer[IGC_INVM_SIZE];
- s32 status = -IGC_ERR_INVM_VALUE_NOT_FOUND;
- u16 version = 0;
-
- DEBUGFUNC("igc_read_invm_version_i225");
-
- /* Read iNVM memory */
- for (i = 0; i < IGC_INVM_SIZE; i++) {
- invm_dword = IGC_READ_REG(hw, IGC_INVM_DATA_REG(i));
- buffer[i] = invm_dword;
- }
-
- /* Read version number */
- for (i = 1; i < invm_blocks; i++) {
- record = &buffer[invm_blocks - i];
- next_record = &buffer[invm_blocks - i + 1];
-
- /* Check if we have first version location used */
- if (i == 1 && (*record & IGC_INVM_VER_FIELD_ONE) == 0) {
- version = 0;
- status = IGC_SUCCESS;
- break;
- }
- /* Check if we have second version location used */
- else if ((i == 1) &&
- ((*record & IGC_INVM_VER_FIELD_TWO) == 0)) {
- version = (*record & IGC_INVM_VER_FIELD_ONE) >> 3;
- status = IGC_SUCCESS;
- break;
- }
- /* Check if we have odd version location
- * used and it is the last one used
- */
- else if ((((*record & IGC_INVM_VER_FIELD_ONE) == 0) &&
- ((*record & 0x3) == 0)) || (((*record & 0x3) != 0) &&
- (i != 1))) {
- version = (*next_record & IGC_INVM_VER_FIELD_TWO)
- >> 13;
- status = IGC_SUCCESS;
- break;
- }
- /* Check if we have even version location
- * used and it is the last one used
- */
- else if (((*record & IGC_INVM_VER_FIELD_TWO) == 0) &&
- ((*record & 0x3) == 0)) {
- version = (*record & IGC_INVM_VER_FIELD_ONE) >> 3;
- status = IGC_SUCCESS;
- break;
- }
- }
-
- if (status == IGC_SUCCESS) {
- invm_ver->invm_major = (version & IGC_INVM_MAJOR_MASK)
- >> IGC_INVM_MAJOR_SHIFT;
- invm_ver->invm_minor = version & IGC_INVM_MINOR_MASK;
- }
- /* Read Image Type */
- for (i = 1; i < invm_blocks; i++) {
- record = &buffer[invm_blocks - i];
- next_record = &buffer[invm_blocks - i + 1];
-
- /* Check if we have image type in first location used */
- if (i == 1 && (*record & IGC_INVM_IMGTYPE_FIELD) == 0) {
- invm_ver->invm_img_type = 0;
- status = IGC_SUCCESS;
- break;
- }
- /* Check if we have image type in the last location used */
- else if ((((*record & 0x3) == 0) &&
- ((*record & IGC_INVM_IMGTYPE_FIELD) == 0)) ||
- ((((*record & 0x3) != 0) && (i != 1)))) {
- invm_ver->invm_img_type =
- (*next_record & IGC_INVM_IMGTYPE_FIELD) >> 23;
- status = IGC_SUCCESS;
- break;
- }
- }
- return status;
-}
-
/* igc_validate_nvm_checksum_i225 - Validate EEPROM checksum
* @hw: pointer to the HW structure
*
@@ -1313,66 +1217,3 @@ s32 igc_set_d3_lplu_state_i225(struct igc_hw *hw, bool active)
IGC_WRITE_REG(hw, IGC_I225_PHPM, data);
return IGC_SUCCESS;
}
-
-/**
- * igc_set_eee_i225 - Enable/disable EEE support
- * @hw: pointer to the HW structure
- * @adv2p5G: boolean flag enabling 2.5G EEE advertisement
- * @adv1G: boolean flag enabling 1G EEE advertisement
- * @adv100M: boolean flag enabling 100M EEE advertisement
- *
- * Enable/disable EEE based on setting in dev_spec structure.
- *
- **/
-s32 igc_set_eee_i225(struct igc_hw *hw, bool adv2p5G, bool adv1G,
- bool adv100M)
-{
- u32 ipcnfg, eeer;
-
- DEBUGFUNC("igc_set_eee_i225");
-
- if (hw->mac.type != igc_i225 ||
- hw->phy.media_type != igc_media_type_copper)
- goto out;
- ipcnfg = IGC_READ_REG(hw, IGC_IPCNFG);
- eeer = IGC_READ_REG(hw, IGC_EEER);
-
- /* enable or disable per user setting */
- if (!(hw->dev_spec._i225.eee_disable)) {
- u32 eee_su = IGC_READ_REG(hw, IGC_EEE_SU);
-
- if (adv100M)
- ipcnfg |= IGC_IPCNFG_EEE_100M_AN;
- else
- ipcnfg &= ~IGC_IPCNFG_EEE_100M_AN;
-
- if (adv1G)
- ipcnfg |= IGC_IPCNFG_EEE_1G_AN;
- else
- ipcnfg &= ~IGC_IPCNFG_EEE_1G_AN;
-
- if (adv2p5G)
- ipcnfg |= IGC_IPCNFG_EEE_2_5G_AN;
- else
- ipcnfg &= ~IGC_IPCNFG_EEE_2_5G_AN;
-
- eeer |= (IGC_EEER_TX_LPI_EN | IGC_EEER_RX_LPI_EN |
- IGC_EEER_LPI_FC);
-
- /* This bit should not be set in normal operation. */
- if (eee_su & IGC_EEE_SU_LPI_CLK_STP)
- DEBUGOUT("LPI Clock Stop Bit should not be set!\n");
- } else {
- ipcnfg &= ~(IGC_IPCNFG_EEE_2_5G_AN | IGC_IPCNFG_EEE_1G_AN |
- IGC_IPCNFG_EEE_100M_AN);
- eeer &= ~(IGC_EEER_TX_LPI_EN | IGC_EEER_RX_LPI_EN |
- IGC_EEER_LPI_FC);
- }
- IGC_WRITE_REG(hw, IGC_IPCNFG, ipcnfg);
- IGC_WRITE_REG(hw, IGC_EEER, eeer);
- IGC_READ_REG(hw, IGC_IPCNFG);
- IGC_READ_REG(hw, IGC_EEER);
-out:
-
- return IGC_SUCCESS;
-}
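For context: the removed igc_set_eee_i225() boils down to translating the
three advertise flags into IPCNFG bits and toggling the LPI enables in
EEER. A condensed sketch of just the flag-to-bit mapping -- the helper name
is hypothetical, the macros are those used in the removed code:

    /* Build the IPCNFG EEE advertisement mask from the per-speed flags. */
    static u32 igc_eee_adv_bits(bool adv2p5G, bool adv1G, bool adv100M)
    {
            u32 ipcnfg = 0;

            if (adv100M)
                    ipcnfg |= IGC_IPCNFG_EEE_100M_AN;
            if (adv1G)
                    ipcnfg |= IGC_IPCNFG_EEE_1G_AN;
            if (adv2p5G)
                    ipcnfg |= IGC_IPCNFG_EEE_2_5G_AN;

            return ipcnfg;
    }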
diff --git a/drivers/net/igc/base/igc_i225.h b/drivers/net/igc/base/igc_i225.h
index c61ece0e82..ff17a2a9c9 100644
--- a/drivers/net/igc/base/igc_i225.h
+++ b/drivers/net/igc/base/igc_i225.h
@@ -13,8 +13,6 @@ s32 igc_write_nvm_srwr_i225(struct igc_hw *hw, u16 offset,
u16 words, u16 *data);
s32 igc_read_nvm_srrd_i225(struct igc_hw *hw, u16 offset,
u16 words, u16 *data);
-s32 igc_read_invm_version_i225(struct igc_hw *hw,
- struct igc_fw_version *invm_ver);
s32 igc_set_flsw_flash_burst_counter_i225(struct igc_hw *hw,
u32 burst_counter);
s32 igc_write_erase_flash_command_i225(struct igc_hw *hw, u32 opcode,
@@ -26,8 +24,6 @@ s32 igc_init_hw_i225(struct igc_hw *hw);
s32 igc_setup_copper_link_i225(struct igc_hw *hw);
s32 igc_set_d0_lplu_state_i225(struct igc_hw *hw, bool active);
s32 igc_set_d3_lplu_state_i225(struct igc_hw *hw, bool active);
-s32 igc_set_eee_i225(struct igc_hw *hw, bool adv2p5G, bool adv1G,
- bool adv100M);
#define ID_LED_DEFAULT_I225 ((ID_LED_OFF1_ON2 << 8) | \
(ID_LED_DEF1_DEF2 << 4) | \
diff --git a/drivers/net/igc/base/igc_mac.c b/drivers/net/igc/base/igc_mac.c
index 3cd6506e5e..cef85d0b17 100644
--- a/drivers/net/igc/base/igc_mac.c
+++ b/drivers/net/igc/base/igc_mac.c
@@ -122,121 +122,6 @@ void igc_null_write_vfta(struct igc_hw IGC_UNUSEDARG * hw,
UNREFERENCED_3PARAMETER(hw, a, b);
}
-/**
- * igc_null_rar_set - No-op function, return 0
- * @hw: pointer to the HW structure
- * @h: dummy variable
- * @a: dummy variable
- **/
-int igc_null_rar_set(struct igc_hw IGC_UNUSEDARG * hw,
- u8 IGC_UNUSEDARG * h, u32 IGC_UNUSEDARG a)
-{
- DEBUGFUNC("igc_null_rar_set");
- UNREFERENCED_3PARAMETER(hw, h, a);
- return IGC_SUCCESS;
-}
-
-/**
- * igc_get_bus_info_pci_generic - Get PCI(x) bus information
- * @hw: pointer to the HW structure
- *
- * Determines and stores the system bus information for a particular
- * network interface. The following bus information is determined and stored:
- * bus speed, bus width, type (PCI/PCIx), and PCI(-x) function.
- **/
-s32 igc_get_bus_info_pci_generic(struct igc_hw *hw)
-{
- struct igc_mac_info *mac = &hw->mac;
- struct igc_bus_info *bus = &hw->bus;
- u32 status = IGC_READ_REG(hw, IGC_STATUS);
- s32 ret_val = IGC_SUCCESS;
-
- DEBUGFUNC("igc_get_bus_info_pci_generic");
-
- /* PCI or PCI-X? */
- bus->type = (status & IGC_STATUS_PCIX_MODE)
- ? igc_bus_type_pcix
- : igc_bus_type_pci;
-
- /* Bus speed */
- if (bus->type == igc_bus_type_pci) {
- bus->speed = (status & IGC_STATUS_PCI66)
- ? igc_bus_speed_66
- : igc_bus_speed_33;
- } else {
- switch (status & IGC_STATUS_PCIX_SPEED) {
- case IGC_STATUS_PCIX_SPEED_66:
- bus->speed = igc_bus_speed_66;
- break;
- case IGC_STATUS_PCIX_SPEED_100:
- bus->speed = igc_bus_speed_100;
- break;
- case IGC_STATUS_PCIX_SPEED_133:
- bus->speed = igc_bus_speed_133;
- break;
- default:
- bus->speed = igc_bus_speed_reserved;
- break;
- }
- }
-
- /* Bus width */
- bus->width = (status & IGC_STATUS_BUS64)
- ? igc_bus_width_64
- : igc_bus_width_32;
-
- /* Which PCI(-X) function? */
- mac->ops.set_lan_id(hw);
-
- return ret_val;
-}
-
-/**
- * igc_get_bus_info_pcie_generic - Get PCIe bus information
- * @hw: pointer to the HW structure
- *
- * Determines and stores the system bus information for a particular
- * network interface. The following bus information is determined and stored:
- * bus speed, bus width, type (PCIe), and PCIe function.
- **/
-s32 igc_get_bus_info_pcie_generic(struct igc_hw *hw)
-{
- struct igc_mac_info *mac = &hw->mac;
- struct igc_bus_info *bus = &hw->bus;
- s32 ret_val;
- u16 pcie_link_status;
-
- DEBUGFUNC("igc_get_bus_info_pcie_generic");
-
- bus->type = igc_bus_type_pci_express;
-
- ret_val = igc_read_pcie_cap_reg(hw, PCIE_LINK_STATUS,
- &pcie_link_status);
- if (ret_val) {
- bus->width = igc_bus_width_unknown;
- bus->speed = igc_bus_speed_unknown;
- } else {
- switch (pcie_link_status & PCIE_LINK_SPEED_MASK) {
- case PCIE_LINK_SPEED_2500:
- bus->speed = igc_bus_speed_2500;
- break;
- case PCIE_LINK_SPEED_5000:
- bus->speed = igc_bus_speed_5000;
- break;
- default:
- bus->speed = igc_bus_speed_unknown;
- break;
- }
-
- bus->width = (enum igc_bus_width)((pcie_link_status &
- PCIE_LINK_WIDTH_MASK) >> PCIE_LINK_WIDTH_SHIFT);
- }
-
- mac->ops.set_lan_id(hw);
-
- return IGC_SUCCESS;
-}
-
/**
* igc_set_lan_id_multi_port_pcie - Set LAN id for PCIe multiple port devices
*
@@ -257,60 +142,6 @@ static void igc_set_lan_id_multi_port_pcie(struct igc_hw *hw)
bus->func = (reg & IGC_STATUS_FUNC_MASK) >> IGC_STATUS_FUNC_SHIFT;
}
-/**
- * igc_set_lan_id_multi_port_pci - Set LAN id for PCI multiple port devices
- * @hw: pointer to the HW structure
- *
- * Determines the LAN function id by reading PCI config space.
- **/
-void igc_set_lan_id_multi_port_pci(struct igc_hw *hw)
-{
- struct igc_bus_info *bus = &hw->bus;
- u16 pci_header_type;
- u32 status;
-
- igc_read_pci_cfg(hw, PCI_HEADER_TYPE_REGISTER, &pci_header_type);
- if (pci_header_type & PCI_HEADER_TYPE_MULTIFUNC) {
- status = IGC_READ_REG(hw, IGC_STATUS);
- bus->func = (status & IGC_STATUS_FUNC_MASK)
- >> IGC_STATUS_FUNC_SHIFT;
- } else {
- bus->func = 0;
- }
-}
-
-/**
- * igc_set_lan_id_single_port - Set LAN id for a single port device
- * @hw: pointer to the HW structure
- *
- * Sets the LAN function id to zero for a single port device.
- **/
-void igc_set_lan_id_single_port(struct igc_hw *hw)
-{
- struct igc_bus_info *bus = &hw->bus;
-
- bus->func = 0;
-}
-
-/**
- * igc_clear_vfta_generic - Clear VLAN filter table
- * @hw: pointer to the HW structure
- *
- * Clears the register array which contains the VLAN filter table by
- * setting all the values to 0.
- **/
-void igc_clear_vfta_generic(struct igc_hw *hw)
-{
- u32 offset;
-
- DEBUGFUNC("igc_clear_vfta_generic");
-
- for (offset = 0; offset < IGC_VLAN_FILTER_TBL_SIZE; offset++) {
- IGC_WRITE_REG_ARRAY(hw, IGC_VFTA, offset, 0);
- IGC_WRITE_FLUSH(hw);
- }
-}
-
/**
* igc_write_vfta_generic - Write value to VLAN filter table
* @hw: pointer to the HW structure
@@ -582,43 +413,6 @@ void igc_update_mc_addr_list_generic(struct igc_hw *hw,
IGC_WRITE_FLUSH(hw);
}
-/**
- * igc_pcix_mmrbc_workaround_generic - Fix incorrect MMRBC value
- * @hw: pointer to the HW structure
- *
- * In certain situations, a system BIOS may report that the PCIx maximum
- * memory read byte count (MMRBC) value is higher than the actual
- * value. We check the PCIx command register against the current PCIx
- * status register.
- **/
-void igc_pcix_mmrbc_workaround_generic(struct igc_hw *hw)
-{
- u16 cmd_mmrbc;
- u16 pcix_cmd;
- u16 pcix_stat_hi_word;
- u16 stat_mmrbc;
-
- DEBUGFUNC("igc_pcix_mmrbc_workaround_generic");
-
- /* Workaround for PCI-X issue when BIOS sets MMRBC incorrectly */
- if (hw->bus.type != igc_bus_type_pcix)
- return;
-
- igc_read_pci_cfg(hw, PCIX_COMMAND_REGISTER, &pcix_cmd);
- igc_read_pci_cfg(hw, PCIX_STATUS_REGISTER_HI, &pcix_stat_hi_word);
- cmd_mmrbc = (pcix_cmd & PCIX_COMMAND_MMRBC_MASK) >>
- PCIX_COMMAND_MMRBC_SHIFT;
- stat_mmrbc = (pcix_stat_hi_word & PCIX_STATUS_HI_MMRBC_MASK) >>
- PCIX_STATUS_HI_MMRBC_SHIFT;
- if (stat_mmrbc == PCIX_STATUS_HI_MMRBC_4K)
- stat_mmrbc = PCIX_STATUS_HI_MMRBC_2K;
- if (cmd_mmrbc > stat_mmrbc) {
- pcix_cmd &= ~PCIX_COMMAND_MMRBC_MASK;
- pcix_cmd |= stat_mmrbc << PCIX_COMMAND_MMRBC_SHIFT;
- igc_write_pci_cfg(hw, PCIX_COMMAND_REGISTER, &pcix_cmd);
- }
-}
-
/**
* igc_clear_hw_cntrs_base_generic - Clear base hardware counters
* @hw: pointer to the HW structure
@@ -668,296 +462,6 @@ void igc_clear_hw_cntrs_base_generic(struct igc_hw *hw)
IGC_READ_REG(hw, IGC_BPTC);
}
-/**
- * igc_check_for_copper_link_generic - Check for link (Copper)
- * @hw: pointer to the HW structure
- *
- * Checks to see if the link status of the hardware has changed. If a
- * change in link status has been detected, then we read the PHY registers
- * to get the current speed/duplex if link exists.
- **/
-s32 igc_check_for_copper_link_generic(struct igc_hw *hw)
-{
- struct igc_mac_info *mac = &hw->mac;
- s32 ret_val;
- bool link;
-
- DEBUGFUNC("igc_check_for_copper_link");
-
- /* We only want to go out to the PHY registers to see if Auto-Neg
- * has completed and/or if our link status has changed. The
- * get_link_status flag is set upon receiving a Link Status
- * Change or Rx Sequence Error interrupt.
- */
- if (!mac->get_link_status)
- return IGC_SUCCESS;
-
- /* First we want to see if the MII Status Register reports
- * link. If so, then we want to get the current speed/duplex
- * of the PHY.
- */
- ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
- if (ret_val)
- return ret_val;
-
- if (!link)
- return IGC_SUCCESS; /* No link detected */
-
- mac->get_link_status = false;
-
- /* Check if there was DownShift, must be checked
- * immediately after link-up
- */
- igc_check_downshift_generic(hw);
-
- /* If we are forcing speed/duplex, then we simply return since
- * we have already determined whether we have link or not.
- */
- if (!mac->autoneg)
- return -IGC_ERR_CONFIG;
-
- /* Auto-Neg is enabled. Auto Speed Detection takes care
- * of MAC speed/duplex configuration. So we only need to
- * configure Collision Distance in the MAC.
- */
- mac->ops.config_collision_dist(hw);
-
- /* Configure Flow Control now that Auto-Neg has completed.
- * First, we need to restore the desired flow control
- * settings because we may have had to re-autoneg with a
- * different link partner.
- */
- ret_val = igc_config_fc_after_link_up_generic(hw);
- if (ret_val)
- DEBUGOUT("Error configuring flow control\n");
-
- return ret_val;
-}
-
-/**
- * igc_check_for_fiber_link_generic - Check for link (Fiber)
- * @hw: pointer to the HW structure
- *
- * Checks for link up on the hardware. If link is not up and we have
- * a signal, then we need to force link up.
- **/
-s32 igc_check_for_fiber_link_generic(struct igc_hw *hw)
-{
- struct igc_mac_info *mac = &hw->mac;
- u32 rxcw;
- u32 ctrl;
- u32 status;
- s32 ret_val;
-
- DEBUGFUNC("igc_check_for_fiber_link_generic");
-
- ctrl = IGC_READ_REG(hw, IGC_CTRL);
- status = IGC_READ_REG(hw, IGC_STATUS);
- rxcw = IGC_READ_REG(hw, IGC_RXCW);
-
- /* If we don't have link (auto-negotiation failed or link partner
- * cannot auto-negotiate), the cable is plugged in (we have signal),
- * and our link partner is not trying to auto-negotiate with us (we
- * are receiving idles or data), we need to force link up. We also
- * need to give auto-negotiation time to complete, in case the cable
- * was just plugged in. The autoneg_failed flag does this.
- */
- /* (ctrl & IGC_CTRL_SWDPIN1) == 1 == have signal */
- if ((ctrl & IGC_CTRL_SWDPIN1) && !(status & IGC_STATUS_LU) &&
- !(rxcw & IGC_RXCW_C)) {
- if (!mac->autoneg_failed) {
- mac->autoneg_failed = true;
- return IGC_SUCCESS;
- }
- DEBUGOUT("NOT Rx'ing /C/, disable AutoNeg and force link.\n");
-
- /* Disable auto-negotiation in the TXCW register */
- IGC_WRITE_REG(hw, IGC_TXCW, (mac->txcw & ~IGC_TXCW_ANE));
-
- /* Force link-up and also force full-duplex. */
- ctrl = IGC_READ_REG(hw, IGC_CTRL);
- ctrl |= (IGC_CTRL_SLU | IGC_CTRL_FD);
- IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
-
- /* Configure Flow Control after forcing link up. */
- ret_val = igc_config_fc_after_link_up_generic(hw);
- if (ret_val) {
- DEBUGOUT("Error configuring flow control\n");
- return ret_val;
- }
- } else if ((ctrl & IGC_CTRL_SLU) && (rxcw & IGC_RXCW_C)) {
- /* If we are forcing link and we are receiving /C/ ordered
- * sets, re-enable auto-negotiation in the TXCW register
- * and disable forced link in the Device Control register
- * in an attempt to auto-negotiate with our link partner.
- */
- DEBUGOUT("Rx'ing /C/, enable AutoNeg and stop forcing link.\n");
- IGC_WRITE_REG(hw, IGC_TXCW, mac->txcw);
- IGC_WRITE_REG(hw, IGC_CTRL, (ctrl & ~IGC_CTRL_SLU));
-
- mac->serdes_has_link = true;
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_check_for_serdes_link_generic - Check for link (Serdes)
- * @hw: pointer to the HW structure
- *
- * Checks for link up on the hardware. If link is not up and we have
- * a signal, then we need to force link up.
- **/
-s32 igc_check_for_serdes_link_generic(struct igc_hw *hw)
-{
- struct igc_mac_info *mac = &hw->mac;
- u32 rxcw;
- u32 ctrl;
- u32 status;
- s32 ret_val;
-
- DEBUGFUNC("igc_check_for_serdes_link_generic");
-
- ctrl = IGC_READ_REG(hw, IGC_CTRL);
- status = IGC_READ_REG(hw, IGC_STATUS);
- rxcw = IGC_READ_REG(hw, IGC_RXCW);
-
- /* If we don't have link (auto-negotiation failed or link partner
- * cannot auto-negotiate), and our link partner is not trying to
- * auto-negotiate with us (we are receiving idles or data),
- * we need to force link up. We also need to give auto-negotiation
- * time to complete.
- */
- /* (ctrl & IGC_CTRL_SWDPIN1) == 1 == have signal */
- if (!(status & IGC_STATUS_LU) && !(rxcw & IGC_RXCW_C)) {
- if (!mac->autoneg_failed) {
- mac->autoneg_failed = true;
- return IGC_SUCCESS;
- }
- DEBUGOUT("NOT Rx'ing /C/, disable AutoNeg and force link.\n");
-
- /* Disable auto-negotiation in the TXCW register */
- IGC_WRITE_REG(hw, IGC_TXCW, (mac->txcw & ~IGC_TXCW_ANE));
-
- /* Force link-up and also force full-duplex. */
- ctrl = IGC_READ_REG(hw, IGC_CTRL);
- ctrl |= (IGC_CTRL_SLU | IGC_CTRL_FD);
- IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
-
- /* Configure Flow Control after forcing link up. */
- ret_val = igc_config_fc_after_link_up_generic(hw);
- if (ret_val) {
- DEBUGOUT("Error configuring flow control\n");
- return ret_val;
- }
- } else if ((ctrl & IGC_CTRL_SLU) && (rxcw & IGC_RXCW_C)) {
- /* If we are forcing link and we are receiving /C/ ordered
- * sets, re-enable auto-negotiation in the TXCW register
- * and disable forced link in the Device Control register
- * in an attempt to auto-negotiate with our link partner.
- */
- DEBUGOUT("Rx'ing /C/, enable AutoNeg and stop forcing link.\n");
- IGC_WRITE_REG(hw, IGC_TXCW, mac->txcw);
- IGC_WRITE_REG(hw, IGC_CTRL, (ctrl & ~IGC_CTRL_SLU));
-
- mac->serdes_has_link = true;
- } else if (!(IGC_TXCW_ANE & IGC_READ_REG(hw, IGC_TXCW))) {
- /* If we force link for non-auto-negotiation switch, check
- * link status based on MAC synchronization for internal
- * serdes media type.
- */
- /* SYNCH bit and IV bit are sticky. */
- usec_delay(10);
- rxcw = IGC_READ_REG(hw, IGC_RXCW);
- if (rxcw & IGC_RXCW_SYNCH) {
- if (!(rxcw & IGC_RXCW_IV)) {
- mac->serdes_has_link = true;
- DEBUGOUT("SERDES: Link up - forced.\n");
- }
- } else {
- mac->serdes_has_link = false;
- DEBUGOUT("SERDES: Link down - force failed.\n");
- }
- }
-
- if (IGC_TXCW_ANE & IGC_READ_REG(hw, IGC_TXCW)) {
- status = IGC_READ_REG(hw, IGC_STATUS);
- if (status & IGC_STATUS_LU) {
- /* SYNCH bit and IV bit are sticky, so reread rxcw. */
- usec_delay(10);
- rxcw = IGC_READ_REG(hw, IGC_RXCW);
- if (rxcw & IGC_RXCW_SYNCH) {
- if (!(rxcw & IGC_RXCW_IV)) {
- mac->serdes_has_link = true;
- DEBUGOUT("SERDES: Link up - autoneg completed successfully.\n");
- } else {
- mac->serdes_has_link = false;
- DEBUGOUT("SERDES: Link down - invalid codewords detected in autoneg.\n");
- }
- } else {
- mac->serdes_has_link = false;
- DEBUGOUT("SERDES: Link down - no sync.\n");
- }
- } else {
- mac->serdes_has_link = false;
- DEBUGOUT("SERDES: Link down - autoneg failed\n");
- }
- }
-
- return IGC_SUCCESS;
-}
-
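For context: the link decision in the removed serdes helper reduces to two
cases. When forcing link (TXCW.ANE clear), SYNCH set with IV clear in RXCW
means link up; when auto-negotiating (TXCW.ANE set), RXCW is only inspected
once STATUS.LU is set. A fragment-level sketch, assuming hw and mac are in
scope as in the removed function:

    u32 rxcw;

    /* SYNCH and IV are sticky, so settle briefly and re-read RXCW. */
    usec_delay(10);
    rxcw = IGC_READ_REG(hw, IGC_RXCW);
    mac->serdes_has_link = (rxcw & IGC_RXCW_SYNCH) &&
                           !(rxcw & IGC_RXCW_IV);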
-/**
- * igc_set_default_fc_generic - Set flow control default values
- * @hw: pointer to the HW structure
- *
- * Read the EEPROM for the default values for flow control and store the
- * values.
- **/
-s32 igc_set_default_fc_generic(struct igc_hw *hw)
-{
- s32 ret_val;
- u16 nvm_data;
- u16 nvm_offset = 0;
-
- DEBUGFUNC("igc_set_default_fc_generic");
-
- /* Read and store word 0x0F of the EEPROM. This word contains bits
- * that determine the hardware's default PAUSE (flow control) mode,
- * a bit that determines whether the HW defaults to enabling or
- * disabling auto-negotiation, and the direction of the
- * SW defined pins. If there is no SW over-ride of the flow
- * control setting, then the variable hw->fc will
- * be initialized based on a value in the EEPROM.
- */
- if (hw->mac.type == igc_i350) {
- nvm_offset = NVM_82580_LAN_FUNC_OFFSET(hw->bus.func);
- ret_val = hw->nvm.ops.read(hw,
- NVM_INIT_CONTROL2_REG +
- nvm_offset,
- 1, &nvm_data);
- } else {
- ret_val = hw->nvm.ops.read(hw,
- NVM_INIT_CONTROL2_REG,
- 1, &nvm_data);
- }
-
- if (ret_val) {
- DEBUGOUT("NVM Read Error\n");
- return ret_val;
- }
-
- if (!(nvm_data & NVM_WORD0F_PAUSE_MASK))
- hw->fc.requested_mode = igc_fc_none;
- else if ((nvm_data & NVM_WORD0F_PAUSE_MASK) ==
- NVM_WORD0F_ASM_DIR)
- hw->fc.requested_mode = igc_fc_tx_pause;
- else
- hw->fc.requested_mode = igc_fc_full;
-
- return IGC_SUCCESS;
-}
-
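For context: the NVM word 0x0F decode in the removed
igc_set_default_fc_generic() maps onto exactly three outcomes. A sketch
restating it -- the helper name is hypothetical and the enum type name is
assumed from the base headers:

    /* Hypothetical helper restating the removed decode of NVM word 0x0F. */
    static enum igc_fc_mode fc_from_nvm_word(u16 nvm_data)
    {
            if (!(nvm_data & NVM_WORD0F_PAUSE_MASK))
                    return igc_fc_none;     /* no PAUSE bits set */
            if ((nvm_data & NVM_WORD0F_PAUSE_MASK) == NVM_WORD0F_ASM_DIR)
                    return igc_fc_tx_pause; /* ASM_DIR bit only */
            return igc_fc_full;             /* both directions enabled */
    }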
/**
* igc_setup_link_generic - Setup flow control and link settings
* @hw: pointer to the HW structure
@@ -1131,57 +635,6 @@ s32 igc_poll_fiber_serdes_link_generic(struct igc_hw *hw)
return IGC_SUCCESS;
}
-/**
- * igc_setup_fiber_serdes_link_generic - Setup link for fiber/serdes
- * @hw: pointer to the HW structure
- *
- * Configures collision distance and flow control for fiber and serdes
- * links. Upon successful setup, poll for link.
- **/
-s32 igc_setup_fiber_serdes_link_generic(struct igc_hw *hw)
-{
- u32 ctrl;
- s32 ret_val;
-
- DEBUGFUNC("igc_setup_fiber_serdes_link_generic");
-
- ctrl = IGC_READ_REG(hw, IGC_CTRL);
-
- /* Take the link out of reset */
- ctrl &= ~IGC_CTRL_LRST;
-
- hw->mac.ops.config_collision_dist(hw);
-
- ret_val = igc_commit_fc_settings_generic(hw);
- if (ret_val)
- return ret_val;
-
- /* Since auto-negotiation is enabled, take the link out of reset (the
- * link will be in reset, because we previously reset the chip). This
- * will restart auto-negotiation. If auto-negotiation is successful
- * then the link-up status bit will be set and the flow control enable
- * bits (RFCE and TFCE) will be set according to their negotiated value.
- */
- DEBUGOUT("Auto-negotiation enabled\n");
-
- IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
- IGC_WRITE_FLUSH(hw);
- msec_delay(1);
-
- /* For these adapters, the SW definable pin 1 is set when the optics
- * detect a signal. If we have a signal, then poll for a "Link-Up"
- * indication.
- */
- if (hw->phy.media_type == igc_media_type_internal_serdes ||
- (IGC_READ_REG(hw, IGC_CTRL) & IGC_CTRL_SWDPIN1)) {
- ret_val = igc_poll_fiber_serdes_link_generic(hw);
- } else {
- DEBUGOUT("No signal detected\n");
- }
-
- return ret_val;
-}
-
/**
* igc_config_collision_dist_generic - Configure collision distance
* @hw: pointer to the HW structure
@@ -1532,28 +985,6 @@ s32 igc_get_speed_and_duplex_copper_generic(struct igc_hw *hw, u16 *speed,
return IGC_SUCCESS;
}
-/**
- * igc_get_speed_and_duplex_fiber_serdes_generic - Retrieve current speed/duplex
- * @hw: pointer to the HW structure
- * @speed: stores the current speed
- * @duplex: stores the current duplex
- *
- * Sets the speed and duplex to gigabit full duplex (the only possible option)
- * for fiber/serdes links.
- **/
-s32
-igc_get_speed_and_duplex_fiber_serdes_generic(struct igc_hw *hw,
- u16 *speed, u16 *duplex)
-{
- DEBUGFUNC("igc_get_speed_and_duplex_fiber_serdes_generic");
- UNREFERENCED_1PARAMETER(hw);
-
- *speed = SPEED_1000;
- *duplex = FULL_DUPLEX;
-
- return IGC_SUCCESS;
-}
-
/**
* igc_get_hw_semaphore_generic - Acquire hardware semaphore
* @hw: pointer to the HW structure
@@ -1651,274 +1082,6 @@ s32 igc_get_auto_rd_done_generic(struct igc_hw *hw)
return IGC_SUCCESS;
}
-/**
- * igc_valid_led_default_generic - Verify a valid default LED config
- * @hw: pointer to the HW structure
- * @data: pointer to the NVM (EEPROM)
- *
- * Read the EEPROM for the current default LED configuration. If the
- * LED configuration is not valid, set to a valid LED configuration.
- **/
-s32 igc_valid_led_default_generic(struct igc_hw *hw, u16 *data)
-{
- s32 ret_val;
-
- DEBUGFUNC("igc_valid_led_default_generic");
-
- ret_val = hw->nvm.ops.read(hw, NVM_ID_LED_SETTINGS, 1, data);
- if (ret_val) {
- DEBUGOUT("NVM Read Error\n");
- return ret_val;
- }
-
- if (*data == ID_LED_RESERVED_0000 || *data == ID_LED_RESERVED_FFFF)
- *data = ID_LED_DEFAULT;
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_id_led_init_generic - Store LED configurations in SW
- * @hw: pointer to the HW structure
- *
- **/
-s32 igc_id_led_init_generic(struct igc_hw *hw)
-{
- struct igc_mac_info *mac = &hw->mac;
- s32 ret_val;
- const u32 ledctl_mask = 0x000000FF;
- const u32 ledctl_on = IGC_LEDCTL_MODE_LED_ON;
- const u32 ledctl_off = IGC_LEDCTL_MODE_LED_OFF;
- u16 data, i, temp;
- const u16 led_mask = 0x0F;
-
- DEBUGFUNC("igc_id_led_init_generic");
-
- ret_val = hw->nvm.ops.valid_led_default(hw, &data);
- if (ret_val)
- return ret_val;
-
- mac->ledctl_default = IGC_READ_REG(hw, IGC_LEDCTL);
- mac->ledctl_mode1 = mac->ledctl_default;
- mac->ledctl_mode2 = mac->ledctl_default;
-
- for (i = 0; i < 4; i++) {
- temp = (data >> (i << 2)) & led_mask;
- switch (temp) {
- case ID_LED_ON1_DEF2:
- case ID_LED_ON1_ON2:
- case ID_LED_ON1_OFF2:
- mac->ledctl_mode1 &= ~(ledctl_mask << (i << 3));
- mac->ledctl_mode1 |= ledctl_on << (i << 3);
- break;
- case ID_LED_OFF1_DEF2:
- case ID_LED_OFF1_ON2:
- case ID_LED_OFF1_OFF2:
- mac->ledctl_mode1 &= ~(ledctl_mask << (i << 3));
- mac->ledctl_mode1 |= ledctl_off << (i << 3);
- break;
- default:
- /* Do nothing */
- break;
- }
- switch (temp) {
- case ID_LED_DEF1_ON2:
- case ID_LED_ON1_ON2:
- case ID_LED_OFF1_ON2:
- mac->ledctl_mode2 &= ~(ledctl_mask << (i << 3));
- mac->ledctl_mode2 |= ledctl_on << (i << 3);
- break;
- case ID_LED_DEF1_OFF2:
- case ID_LED_ON1_OFF2:
- case ID_LED_OFF1_OFF2:
- mac->ledctl_mode2 &= ~(ledctl_mask << (i << 3));
- mac->ledctl_mode2 |= ledctl_off << (i << 3);
- break;
- default:
- /* Do nothing */
- break;
- }
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_setup_led_generic - Configures SW controllable LED
- * @hw: pointer to the HW structure
- *
- * This prepares the SW controllable LED for use and saves the current state
- * of the LED so it can be later restored.
- **/
-s32 igc_setup_led_generic(struct igc_hw *hw)
-{
- u32 ledctl;
-
- DEBUGFUNC("igc_setup_led_generic");
-
- if (hw->mac.ops.setup_led != igc_setup_led_generic)
- return -IGC_ERR_CONFIG;
-
- if (hw->phy.media_type == igc_media_type_fiber) {
- ledctl = IGC_READ_REG(hw, IGC_LEDCTL);
- hw->mac.ledctl_default = ledctl;
- /* Turn off LED0 */
- ledctl &= ~(IGC_LEDCTL_LED0_IVRT | IGC_LEDCTL_LED0_BLINK |
- IGC_LEDCTL_LED0_MODE_MASK);
- ledctl |= (IGC_LEDCTL_MODE_LED_OFF <<
- IGC_LEDCTL_LED0_MODE_SHIFT);
- IGC_WRITE_REG(hw, IGC_LEDCTL, ledctl);
- } else if (hw->phy.media_type == igc_media_type_copper) {
- IGC_WRITE_REG(hw, IGC_LEDCTL, hw->mac.ledctl_mode1);
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_cleanup_led_generic - Set LED config to default operation
- * @hw: pointer to the HW structure
- *
- * Remove the current LED configuration and set the LED configuration
- * to the default value, saved from the EEPROM.
- **/
-s32 igc_cleanup_led_generic(struct igc_hw *hw)
-{
- DEBUGFUNC("igc_cleanup_led_generic");
-
- IGC_WRITE_REG(hw, IGC_LEDCTL, hw->mac.ledctl_default);
- return IGC_SUCCESS;
-}
-
-/**
- * igc_blink_led_generic - Blink LED
- * @hw: pointer to the HW structure
- *
- * Blink the LEDs which are set to be on.
- **/
-s32 igc_blink_led_generic(struct igc_hw *hw)
-{
- u32 ledctl_blink = 0;
- u32 i;
-
- DEBUGFUNC("igc_blink_led_generic");
-
- if (hw->phy.media_type == igc_media_type_fiber) {
- /* always blink LED0 for PCI-E fiber */
- ledctl_blink = IGC_LEDCTL_LED0_BLINK |
- (IGC_LEDCTL_MODE_LED_ON << IGC_LEDCTL_LED0_MODE_SHIFT);
- } else {
- /* Set the blink bit for each LED that's "on" (0x0E)
- * (or "off" if inverted) in ledctl_mode2. The blink
- * logic in hardware only works when mode is set to "on"
- * so it must be changed accordingly when the mode is
- * "off" and inverted.
- */
- ledctl_blink = hw->mac.ledctl_mode2;
- for (i = 0; i < 32; i += 8) {
- u32 mode = (hw->mac.ledctl_mode2 >> i) &
- IGC_LEDCTL_LED0_MODE_MASK;
- u32 led_default = hw->mac.ledctl_default >> i;
-
- if ((!(led_default & IGC_LEDCTL_LED0_IVRT) &&
- mode == IGC_LEDCTL_MODE_LED_ON) ||
- ((led_default & IGC_LEDCTL_LED0_IVRT) &&
- mode == IGC_LEDCTL_MODE_LED_OFF)) {
- ledctl_blink &=
- ~(IGC_LEDCTL_LED0_MODE_MASK << i);
- ledctl_blink |= (IGC_LEDCTL_LED0_BLINK |
- IGC_LEDCTL_MODE_LED_ON) << i;
- }
- }
- }
-
- IGC_WRITE_REG(hw, IGC_LEDCTL, ledctl_blink);
-
- return IGC_SUCCESS;
-}
-
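The blink path walks the four LED byte lanes of LEDCTL and rewrites every lane
whose effective mode is "on" to "blink + on". The per-lane read-modify-write in
standalone form, using illustrative mask values rather than the driver's
IGC_LEDCTL_* definitions:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const uint32_t mode_mask = 0x0F, led_on = 0x0E, blink = 0x80;
	uint32_t ledctl = 0x0E0F0E0E;	/* made-up LEDCTL contents */
	unsigned int i;

	for (i = 0; i < 32; i += 8) {
		if (((ledctl >> i) & mode_mask) == led_on) {
			ledctl &= ~(mode_mask << i);	/* clear the lane */
			ledctl |= (blink | led_on) << i;	/* blink + on */
		}
	}
	printf("LEDCTL with blink bits: 0x%08X\n", (unsigned int)ledctl);
	return 0;
}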
-/**
- * igc_led_on_generic - Turn LED on
- * @hw: pointer to the HW structure
- *
- * Turn LED on.
- **/
-s32 igc_led_on_generic(struct igc_hw *hw)
-{
- u32 ctrl;
-
- DEBUGFUNC("igc_led_on_generic");
-
- switch (hw->phy.media_type) {
- case igc_media_type_fiber:
- ctrl = IGC_READ_REG(hw, IGC_CTRL);
- ctrl &= ~IGC_CTRL_SWDPIN0;
- ctrl |= IGC_CTRL_SWDPIO0;
- IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
- break;
- case igc_media_type_copper:
- IGC_WRITE_REG(hw, IGC_LEDCTL, hw->mac.ledctl_mode2);
- break;
- default:
- break;
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_led_off_generic - Turn LED off
- * @hw: pointer to the HW structure
- *
- * Turn LED off.
- **/
-s32 igc_led_off_generic(struct igc_hw *hw)
-{
- u32 ctrl;
-
- DEBUGFUNC("igc_led_off_generic");
-
- switch (hw->phy.media_type) {
- case igc_media_type_fiber:
- ctrl = IGC_READ_REG(hw, IGC_CTRL);
- ctrl |= IGC_CTRL_SWDPIN0;
- ctrl |= IGC_CTRL_SWDPIO0;
- IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
- break;
- case igc_media_type_copper:
- IGC_WRITE_REG(hw, IGC_LEDCTL, hw->mac.ledctl_mode1);
- break;
- default:
- break;
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_set_pcie_no_snoop_generic - Set PCI-express capabilities
- * @hw: pointer to the HW structure
- * @no_snoop: bitmap of snoop events
- *
- * Set the PCI-express register to snoop for events enabled in 'no_snoop'.
- **/
-void igc_set_pcie_no_snoop_generic(struct igc_hw *hw, u32 no_snoop)
-{
- u32 gcr;
-
- DEBUGFUNC("igc_set_pcie_no_snoop_generic");
-
- if (hw->bus.type != igc_bus_type_pci_express)
- return;
-
- if (no_snoop) {
- gcr = IGC_READ_REG(hw, IGC_GCR);
- gcr &= ~(PCIE_NO_SNOOP_ALL);
- gcr |= no_snoop;
- IGC_WRITE_REG(hw, IGC_GCR, gcr);
- }
-}
-
/**
* igc_disable_pcie_master_generic - Disables PCI-express master access
* @hw: pointer to the HW structure
@@ -2046,22 +1209,6 @@ static s32 igc_validate_mdi_setting_generic(struct igc_hw *hw)
return IGC_SUCCESS;
}
-/**
- * igc_validate_mdi_setting_crossover_generic - Verify MDI/MDIx settings
- * @hw: pointer to the HW structure
- *
- * Validate the MDI/MDIx setting, allowing for auto-crossover during forced
- * operation.
- **/
-s32
-igc_validate_mdi_setting_crossover_generic(struct igc_hw IGC_UNUSEDARG * hw)
-{
- DEBUGFUNC("igc_validate_mdi_setting_crossover_generic");
- UNREFERENCED_1PARAMETER(hw);
-
- return IGC_SUCCESS;
-}
-
/**
* igc_write_8bit_ctrl_reg_generic - Write an 8-bit CTRL register
* @hw: pointer to the HW structure
diff --git a/drivers/net/igc/base/igc_mac.h b/drivers/net/igc/base/igc_mac.h
index 035a371e1e..26a88c2014 100644
--- a/drivers/net/igc/base/igc_mac.h
+++ b/drivers/net/igc/base/igc_mac.h
@@ -13,51 +13,29 @@ s32 igc_null_link_info(struct igc_hw *hw, u16 *s, u16 *d);
bool igc_null_mng_mode(struct igc_hw *hw);
void igc_null_update_mc(struct igc_hw *hw, u8 *h, u32 a);
void igc_null_write_vfta(struct igc_hw *hw, u32 a, u32 b);
-int igc_null_rar_set(struct igc_hw *hw, u8 *h, u32 a);
-s32 igc_blink_led_generic(struct igc_hw *hw);
-s32 igc_check_for_copper_link_generic(struct igc_hw *hw);
-s32 igc_check_for_fiber_link_generic(struct igc_hw *hw);
-s32 igc_check_for_serdes_link_generic(struct igc_hw *hw);
-s32 igc_cleanup_led_generic(struct igc_hw *hw);
s32 igc_commit_fc_settings_generic(struct igc_hw *hw);
s32 igc_poll_fiber_serdes_link_generic(struct igc_hw *hw);
s32 igc_config_fc_after_link_up_generic(struct igc_hw *hw);
s32 igc_disable_pcie_master_generic(struct igc_hw *hw);
s32 igc_force_mac_fc_generic(struct igc_hw *hw);
s32 igc_get_auto_rd_done_generic(struct igc_hw *hw);
-s32 igc_get_bus_info_pci_generic(struct igc_hw *hw);
-s32 igc_get_bus_info_pcie_generic(struct igc_hw *hw);
-void igc_set_lan_id_single_port(struct igc_hw *hw);
-void igc_set_lan_id_multi_port_pci(struct igc_hw *hw);
s32 igc_get_hw_semaphore_generic(struct igc_hw *hw);
s32 igc_get_speed_and_duplex_copper_generic(struct igc_hw *hw, u16 *speed,
u16 *duplex);
-s32 igc_get_speed_and_duplex_fiber_serdes_generic(struct igc_hw *hw,
- u16 *speed, u16 *duplex);
-s32 igc_id_led_init_generic(struct igc_hw *hw);
-s32 igc_led_on_generic(struct igc_hw *hw);
-s32 igc_led_off_generic(struct igc_hw *hw);
void igc_update_mc_addr_list_generic(struct igc_hw *hw,
u8 *mc_addr_list, u32 mc_addr_count);
-s32 igc_set_default_fc_generic(struct igc_hw *hw);
s32 igc_set_fc_watermarks_generic(struct igc_hw *hw);
-s32 igc_setup_fiber_serdes_link_generic(struct igc_hw *hw);
-s32 igc_setup_led_generic(struct igc_hw *hw);
s32 igc_setup_link_generic(struct igc_hw *hw);
-s32 igc_validate_mdi_setting_crossover_generic(struct igc_hw *hw);
s32 igc_write_8bit_ctrl_reg_generic(struct igc_hw *hw, u32 reg,
u32 offset, u8 data);
u32 igc_hash_mc_addr_generic(struct igc_hw *hw, u8 *mc_addr);
void igc_clear_hw_cntrs_base_generic(struct igc_hw *hw);
-void igc_clear_vfta_generic(struct igc_hw *hw);
void igc_init_rx_addrs_generic(struct igc_hw *hw, u16 rar_count);
-void igc_pcix_mmrbc_workaround_generic(struct igc_hw *hw);
void igc_put_hw_semaphore_generic(struct igc_hw *hw);
s32 igc_check_alt_mac_addr_generic(struct igc_hw *hw);
void igc_reset_adaptive_generic(struct igc_hw *hw);
-void igc_set_pcie_no_snoop_generic(struct igc_hw *hw, u32 no_snoop);
void igc_update_adaptive_generic(struct igc_hw *hw);
void igc_write_vfta_generic(struct igc_hw *hw, u32 offset, u32 value);
diff --git a/drivers/net/igc/base/igc_manage.c b/drivers/net/igc/base/igc_manage.c
index 563ab81603..aa68174031 100644
--- a/drivers/net/igc/base/igc_manage.c
+++ b/drivers/net/igc/base/igc_manage.c
@@ -73,24 +73,6 @@ s32 igc_mng_enable_host_if_generic(struct igc_hw *hw)
return IGC_SUCCESS;
}
-/**
- * igc_check_mng_mode_generic - Generic check of the management mode
- * @hw: pointer to the HW structure
- *
- * Reads the firmware semaphore register and returns true (>0) if
- * manageability is enabled, else false (0).
- **/
-bool igc_check_mng_mode_generic(struct igc_hw *hw)
-{
- u32 fwsm = IGC_READ_REG(hw, IGC_FWSM);
-
- DEBUGFUNC("igc_check_mng_mode_generic");
-
-
- return (fwsm & IGC_FWSM_MODE_MASK) ==
- (IGC_MNG_IAMT_MODE << IGC_FWSM_MODE_SHIFT);
-}
-
/**
* igc_enable_tx_pkt_filtering_generic - Enable packet filtering on Tx
* @hw: pointer to the HW structure
@@ -301,247 +283,3 @@ s32 igc_mng_write_dhcp_info_generic(struct igc_hw *hw, u8 *buffer,
return IGC_SUCCESS;
}
-
-/**
- * igc_enable_mng_pass_thru - Check if management passthrough is needed
- * @hw: pointer to the HW structure
- *
- * Verifies the hardware needs to leave interface enabled so that frames can
- * be directed to and from the management interface.
- **/
-bool igc_enable_mng_pass_thru(struct igc_hw *hw)
-{
- u32 manc;
- u32 fwsm, factps;
-
- DEBUGFUNC("igc_enable_mng_pass_thru");
-
- if (!hw->mac.asf_firmware_present)
- return false;
-
- manc = IGC_READ_REG(hw, IGC_MANC);
-
- if (!(manc & IGC_MANC_RCV_TCO_EN))
- return false;
-
- if (hw->mac.has_fwsm) {
- fwsm = IGC_READ_REG(hw, IGC_FWSM);
- factps = IGC_READ_REG(hw, IGC_FACTPS);
-
- if (!(factps & IGC_FACTPS_MNGCG) &&
- ((fwsm & IGC_FWSM_MODE_MASK) ==
- (igc_mng_mode_pt << IGC_FWSM_MODE_SHIFT)))
- return true;
- } else if ((hw->mac.type == igc_82574) ||
- (hw->mac.type == igc_82583)) {
- u16 data;
- s32 ret_val;
-
- factps = IGC_READ_REG(hw, IGC_FACTPS);
- ret_val = igc_read_nvm(hw, NVM_INIT_CONTROL2_REG, 1, &data);
- if (ret_val)
- return false;
-
- if (!(factps & IGC_FACTPS_MNGCG) &&
- ((data & IGC_NVM_INIT_CTRL2_MNGM) ==
- (igc_mng_mode_pt << 13)))
- return true;
- } else if ((manc & IGC_MANC_SMBUS_EN) &&
- !(manc & IGC_MANC_ASF_EN)) {
- return true;
- }
-
- return false;
-}
-
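The pass-through decision above reduces to bit-field tests on MANC, FWSM and
FACTPS. A sketch of the FWSM mode-field comparison, using hypothetical mask and
shift values rather than the driver's definitions:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical layout: a mode field starting at bit 1; the real
	 * FWSM mask/shift values live in the base driver headers. */
	const uint32_t mode_mask = 0xE, mode_shift = 1, mode_pt = 3;
	uint32_t fwsm = (mode_pt << mode_shift) | 0x1;	/* made-up register */

	if ((fwsm & mode_mask) == (mode_pt << mode_shift))
		printf("firmware reports pass-through mode\n");
	return 0;
}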
-/**
- * igc_host_interface_command - Writes buffer to host interface
- * @hw: pointer to the HW structure
- * @buffer: contains a command to write
- * @length: the byte length of the buffer, must be multiple of 4 bytes
- *
- * Writes a buffer to the Host Interface. Upon success, returns IGC_SUCCESS
- * else returns IGC_ERR_HOST_INTERFACE_COMMAND.
- **/
-s32 igc_host_interface_command(struct igc_hw *hw, u8 *buffer, u32 length)
-{
- u32 hicr, i;
-
- DEBUGFUNC("igc_host_interface_command");
-
- if (!(hw->mac.arc_subsystem_valid)) {
- DEBUGOUT("Hardware doesn't support host interface command.\n");
- return IGC_SUCCESS;
- }
-
- if (!hw->mac.asf_firmware_present) {
- DEBUGOUT("Firmware is not present.\n");
- return IGC_SUCCESS;
- }
-
- if (length == 0 || length & 0x3 ||
- length > IGC_HI_MAX_BLOCK_BYTE_LENGTH) {
- DEBUGOUT("Buffer length failure.\n");
- return -IGC_ERR_HOST_INTERFACE_COMMAND;
- }
-
- /* Check that the host interface is enabled. */
- hicr = IGC_READ_REG(hw, IGC_HICR);
- if (!(hicr & IGC_HICR_EN)) {
- DEBUGOUT("IGC_HOST_EN bit disabled.\n");
- return -IGC_ERR_HOST_INTERFACE_COMMAND;
- }
-
- /* Calculate length in DWORDs */
- length >>= 2;
-
- /* The device driver writes the relevant command block
- * into the ram area.
- */
- for (i = 0; i < length; i++)
- IGC_WRITE_REG_ARRAY_DWORD(hw, IGC_HOST_IF, i,
- *((u32 *)buffer + i));
-
- /* Setting this bit tells the ARC that a new command is pending. */
- IGC_WRITE_REG(hw, IGC_HICR, hicr | IGC_HICR_C);
-
- for (i = 0; i < IGC_HI_COMMAND_TIMEOUT; i++) {
- hicr = IGC_READ_REG(hw, IGC_HICR);
- if (!(hicr & IGC_HICR_C))
- break;
- msec_delay(1);
- }
-
- /* Check command successful completion. */
- if (i == IGC_HI_COMMAND_TIMEOUT ||
- (!(IGC_READ_REG(hw, IGC_HICR) & IGC_HICR_SV))) {
- DEBUGOUT("Command has failed with no status valid.\n");
- return -IGC_ERR_HOST_INTERFACE_COMMAND;
- }
-
- for (i = 0; i < length; i++)
- *((u32 *)buffer + i) = IGC_READ_REG_ARRAY_DWORD(hw,
- IGC_HOST_IF,
- i);
-
- return IGC_SUCCESS;
-}
-
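The host-interface handshake used above is: write the DWORD-aligned command
block into IGC_HOST_IF, set the HICR command bit, poll until hardware clears
it, then verify the status-valid bit. A toy model of the poll-until-clear step
(simulated register; not driver code):

#include <stdio.h>
#include <stdint.h>

#define HICR_C 0x1u	/* illustrative stand-in for the command bit */

static uint32_t fake_hicr = HICR_C;	/* simulated HICR register */

static uint32_t read_hicr(void)
{
	static int polls;

	if (++polls >= 3)
		fake_hicr &= ~HICR_C;	/* "firmware" consumed the command */
	return fake_hicr;
}

int main(void)
{
	const unsigned int timeout = 500;	/* poll budget, ~1 ms apart */
	unsigned int i;

	for (i = 0; i < timeout; i++) {
		if (!(read_hicr() & HICR_C))
			break;	/* command bit cleared: done */
		/* the driver calls msec_delay(1) here */
	}

	if (i == timeout)
		printf("timed out\n");
	else
		printf("completed after %u polls\n", i + 1);
	return 0;
}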
-/**
- * igc_load_firmware - Writes proxy FW code buffer to host interface
- * and executes it.
- * @hw: pointer to the HW structure
- * @buffer: contains a firmware to write
- * @length: the byte length of the buffer, must be multiple of 4 bytes
- *
- * Upon success returns IGC_SUCCESS; returns IGC_ERR_CONFIG if not enabled
- * in HW, else IGC_ERR_HOST_INTERFACE_COMMAND.
- **/
-s32 igc_load_firmware(struct igc_hw *hw, u8 *buffer, u32 length)
-{
- u32 hicr, hibba, fwsm, icr, i;
-
- DEBUGFUNC("igc_load_firmware");
-
- if (hw->mac.type < igc_i210) {
- DEBUGOUT("Hardware doesn't support loading FW by the driver\n");
- return -IGC_ERR_CONFIG;
- }
-
- /* Check that the host interface is enabled. */
- hicr = IGC_READ_REG(hw, IGC_HICR);
- if (!(hicr & IGC_HICR_EN)) {
- DEBUGOUT("IGC_HOST_EN bit disabled.\n");
- return -IGC_ERR_CONFIG;
- }
- if (!(hicr & IGC_HICR_MEMORY_BASE_EN)) {
- DEBUGOUT("IGC_HICR_MEMORY_BASE_EN bit disabled.\n");
- return -IGC_ERR_CONFIG;
- }
-
- if (length == 0 || length & 0x3 || length > IGC_HI_FW_MAX_LENGTH) {
- DEBUGOUT("Buffer length failure.\n");
- return -IGC_ERR_INVALID_ARGUMENT;
- }
-
- /* Clear notification from ROM-FW by reading ICR register */
- icr = IGC_READ_REG(hw, IGC_ICR_V2);
-
- /* Reset ROM-FW */
- hicr = IGC_READ_REG(hw, IGC_HICR);
- hicr |= IGC_HICR_FW_RESET_ENABLE;
- IGC_WRITE_REG(hw, IGC_HICR, hicr);
- hicr |= IGC_HICR_FW_RESET;
- IGC_WRITE_REG(hw, IGC_HICR, hicr);
- IGC_WRITE_FLUSH(hw);
-
- /* Wait till MAC notifies about its readiness after ROM-FW reset */
- for (i = 0; i < (IGC_HI_COMMAND_TIMEOUT * 2); i++) {
- icr = IGC_READ_REG(hw, IGC_ICR_V2);
- if (icr & IGC_ICR_MNG)
- break;
- msec_delay(1);
- }
-
- /* Check for timeout */
- if (i == IGC_HI_COMMAND_TIMEOUT) {
- DEBUGOUT("FW reset failed.\n");
- return -IGC_ERR_HOST_INTERFACE_COMMAND;
- }
-
- /* Wait till MAC is ready to accept new FW code */
- for (i = 0; i < IGC_HI_COMMAND_TIMEOUT; i++) {
- fwsm = IGC_READ_REG(hw, IGC_FWSM);
- if ((fwsm & IGC_FWSM_FW_VALID) &&
- ((fwsm & IGC_FWSM_MODE_MASK) >> IGC_FWSM_MODE_SHIFT ==
- IGC_FWSM_HI_EN_ONLY_MODE))
- break;
- msec_delay(1);
- }
-
- /* Check for timeout */
- if (i == IGC_HI_COMMAND_TIMEOUT) {
- DEBUGOUT("FW reset failed.\n");
- return -IGC_ERR_HOST_INTERFACE_COMMAND;
- }
-
- /* Calculate length in DWORDs */
- length >>= 2;
-
- /* The device driver writes the relevant FW code block
- * into the ram area in DWORDs via 1kB ram addressing window.
- */
- for (i = 0; i < length; i++) {
- if (!(i % IGC_HI_FW_BLOCK_DWORD_LENGTH)) {
- /* Point to correct 1kB ram window */
- hibba = IGC_HI_FW_BASE_ADDRESS +
- ((IGC_HI_FW_BLOCK_DWORD_LENGTH << 2) *
- (i / IGC_HI_FW_BLOCK_DWORD_LENGTH));
-
- IGC_WRITE_REG(hw, IGC_HIBBA, hibba);
- }
-
- IGC_WRITE_REG_ARRAY_DWORD(hw, IGC_HOST_IF,
- i % IGC_HI_FW_BLOCK_DWORD_LENGTH,
- *((u32 *)buffer + i));
- }
-
- /* Setting this bit tells the ARC that a new FW is ready to execute. */
- hicr = IGC_READ_REG(hw, IGC_HICR);
- IGC_WRITE_REG(hw, IGC_HICR, hicr | IGC_HICR_C);
-
- for (i = 0; i < IGC_HI_COMMAND_TIMEOUT; i++) {
- hicr = IGC_READ_REG(hw, IGC_HICR);
- if (!(hicr & IGC_HICR_C))
- break;
- msec_delay(1);
- }
-
- /* Check for successful FW start. */
- if (i == IGC_HI_COMMAND_TIMEOUT) {
- DEBUGOUT("New FW did not start within timeout period.\n");
- return -IGC_ERR_HOST_INTERFACE_COMMAND;
- }
-
- return IGC_SUCCESS;
-}
diff --git a/drivers/net/igc/base/igc_manage.h b/drivers/net/igc/base/igc_manage.h
index 10cae6d7f8..7070de54df 100644
--- a/drivers/net/igc/base/igc_manage.h
+++ b/drivers/net/igc/base/igc_manage.h
@@ -5,7 +5,6 @@
#ifndef _IGC_MANAGE_H_
#define _IGC_MANAGE_H_
-bool igc_check_mng_mode_generic(struct igc_hw *hw);
bool igc_enable_tx_pkt_filtering_generic(struct igc_hw *hw);
s32 igc_mng_enable_host_if_generic(struct igc_hw *hw);
s32 igc_mng_host_if_write_generic(struct igc_hw *hw, u8 *buffer,
@@ -14,10 +13,7 @@ s32 igc_mng_write_cmd_header_generic(struct igc_hw *hw,
struct igc_host_mng_command_header *hdr);
s32 igc_mng_write_dhcp_info_generic(struct igc_hw *hw,
u8 *buffer, u16 length);
-bool igc_enable_mng_pass_thru(struct igc_hw *hw);
u8 igc_calculate_checksum(u8 *buffer, u32 length);
-s32 igc_host_interface_command(struct igc_hw *hw, u8 *buffer, u32 length);
-s32 igc_load_firmware(struct igc_hw *hw, u8 *buffer, u32 length);
enum igc_mng_mode {
igc_mng_mode_none = 0,
diff --git a/drivers/net/igc/base/igc_nvm.c b/drivers/net/igc/base/igc_nvm.c
index a7c901ab56..1583c232e7 100644
--- a/drivers/net/igc/base/igc_nvm.c
+++ b/drivers/net/igc/base/igc_nvm.c
@@ -114,91 +114,6 @@ static void igc_lower_eec_clk(struct igc_hw *hw, u32 *eecd)
usec_delay(hw->nvm.delay_usec);
}
-/**
- * igc_shift_out_eec_bits - Shift data bits out to the EEPROM
- * @hw: pointer to the HW structure
- * @data: data to send to the EEPROM
- * @count: number of bits to shift out
- *
- * We need to shift 'count' bits out to the EEPROM. So, the value in the
- * "data" parameter will be shifted out to the EEPROM one bit at a time.
- * In order to do this, "data" must be broken down into bits.
- **/
-static void igc_shift_out_eec_bits(struct igc_hw *hw, u16 data, u16 count)
-{
- struct igc_nvm_info *nvm = &hw->nvm;
- u32 eecd = IGC_READ_REG(hw, IGC_EECD);
- u32 mask;
-
- DEBUGFUNC("igc_shift_out_eec_bits");
-
- mask = 0x01 << (count - 1);
- if (nvm->type == igc_nvm_eeprom_microwire)
- eecd &= ~IGC_EECD_DO;
- else if (nvm->type == igc_nvm_eeprom_spi)
- eecd |= IGC_EECD_DO;
-
- do {
- eecd &= ~IGC_EECD_DI;
-
- if (data & mask)
- eecd |= IGC_EECD_DI;
-
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
- IGC_WRITE_FLUSH(hw);
-
- usec_delay(nvm->delay_usec);
-
- igc_raise_eec_clk(hw, &eecd);
- igc_lower_eec_clk(hw, &eecd);
-
- mask >>= 1;
- } while (mask);
-
- eecd &= ~IGC_EECD_DI;
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
-}
-
-/**
- * igc_shift_in_eec_bits - Shift data bits in from the EEPROM
- * @hw: pointer to the HW structure
- * @count: number of bits to shift in
- *
- * In order to read a register from the EEPROM, we need to shift 'count' bits
- * in from the EEPROM. Bits are "shifted in" by raising the clock input to
- * the EEPROM (setting the SK bit), and then reading the value of the data out
- * "DO" bit. During this "shifting in" process the data in "DI" bit should
- * always be clear.
- **/
-static u16 igc_shift_in_eec_bits(struct igc_hw *hw, u16 count)
-{
- u32 eecd;
- u32 i;
- u16 data;
-
- DEBUGFUNC("igc_shift_in_eec_bits");
-
- eecd = IGC_READ_REG(hw, IGC_EECD);
-
- eecd &= ~(IGC_EECD_DO | IGC_EECD_DI);
- data = 0;
-
- for (i = 0; i < count; i++) {
- data <<= 1;
- igc_raise_eec_clk(hw, &eecd);
-
- eecd = IGC_READ_REG(hw, IGC_EECD);
-
- eecd &= ~IGC_EECD_DI;
- if (eecd & IGC_EECD_DO)
- data |= 1;
-
- igc_lower_eec_clk(hw, &eecd);
- }
-
- return data;
-}
-
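The shift-in routine samples the EEPROM's DO line once per clock cycle and
assembles the result MSB first. The same accumulation in standalone form, with
hypothetical sampled levels:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical DO levels seen on eight clock cycles, MSB first. */
	const int sampled[8] = { 1, 0, 1, 1, 0, 0, 1, 0 };
	uint16_t data = 0;
	unsigned int i;

	for (i = 0; i < 8; i++) {
		data <<= 1;	/* make room for the next bit */
		if (sampled[i])
			data |= 1;	/* DO was high on this cycle */
	}
	printf("shifted-in value: 0x%02X\n", (unsigned int)data);	/* 0xB2 */
	return 0;
}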
/**
* igc_poll_eerd_eewr_done - Poll for EEPROM read/write completion
* @hw: pointer to the HW structure
@@ -229,83 +144,6 @@ s32 igc_poll_eerd_eewr_done(struct igc_hw *hw, int ee_reg)
return -IGC_ERR_NVM;
}
-/**
- * igc_acquire_nvm_generic - Generic request for access to EEPROM
- * @hw: pointer to the HW structure
- *
- * Set the EEPROM access request bit and wait for EEPROM access grant bit.
- * Return successful if access grant bit set, else clear the request for
- * EEPROM access and return -IGC_ERR_NVM (-1).
- **/
-s32 igc_acquire_nvm_generic(struct igc_hw *hw)
-{
- u32 eecd = IGC_READ_REG(hw, IGC_EECD);
- s32 timeout = IGC_NVM_GRANT_ATTEMPTS;
-
- DEBUGFUNC("igc_acquire_nvm_generic");
-
- IGC_WRITE_REG(hw, IGC_EECD, eecd | IGC_EECD_REQ);
- eecd = IGC_READ_REG(hw, IGC_EECD);
-
- while (timeout) {
- if (eecd & IGC_EECD_GNT)
- break;
- usec_delay(5);
- eecd = IGC_READ_REG(hw, IGC_EECD);
- timeout--;
- }
-
- if (!timeout) {
- eecd &= ~IGC_EECD_REQ;
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
- DEBUGOUT("Could not acquire NVM grant\n");
- return -IGC_ERR_NVM;
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_standby_nvm - Return EEPROM to standby state
- * @hw: pointer to the HW structure
- *
- * Return the EEPROM to a standby state.
- **/
-static void igc_standby_nvm(struct igc_hw *hw)
-{
- struct igc_nvm_info *nvm = &hw->nvm;
- u32 eecd = IGC_READ_REG(hw, IGC_EECD);
-
- DEBUGFUNC("igc_standby_nvm");
-
- if (nvm->type == igc_nvm_eeprom_microwire) {
- eecd &= ~(IGC_EECD_CS | IGC_EECD_SK);
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
- IGC_WRITE_FLUSH(hw);
- usec_delay(nvm->delay_usec);
-
- igc_raise_eec_clk(hw, &eecd);
-
- /* Select EEPROM */
- eecd |= IGC_EECD_CS;
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
- IGC_WRITE_FLUSH(hw);
- usec_delay(nvm->delay_usec);
-
- igc_lower_eec_clk(hw, &eecd);
- } else if (nvm->type == igc_nvm_eeprom_spi) {
- /* Toggle CS to flush commands */
- eecd |= IGC_EECD_CS;
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
- IGC_WRITE_FLUSH(hw);
- usec_delay(nvm->delay_usec);
- eecd &= ~IGC_EECD_CS;
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
- IGC_WRITE_FLUSH(hw);
- usec_delay(nvm->delay_usec);
- }
-}
-
/**
* igc_stop_nvm - Terminate EEPROM command
* @hw: pointer to the HW structure
@@ -332,196 +170,6 @@ void igc_stop_nvm(struct igc_hw *hw)
}
}
-/**
- * igc_release_nvm_generic - Release exclusive access to EEPROM
- * @hw: pointer to the HW structure
- *
- * Stop any current commands to the EEPROM and clear the EEPROM request bit.
- **/
-void igc_release_nvm_generic(struct igc_hw *hw)
-{
- u32 eecd;
-
- DEBUGFUNC("igc_release_nvm_generic");
-
- igc_stop_nvm(hw);
-
- eecd = IGC_READ_REG(hw, IGC_EECD);
- eecd &= ~IGC_EECD_REQ;
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
-}
-
-/**
- * igc_ready_nvm_eeprom - Prepares EEPROM for read/write
- * @hw: pointer to the HW structure
- *
- * Sets up the EEPROM for reading and writing.
- **/
-static s32 igc_ready_nvm_eeprom(struct igc_hw *hw)
-{
- struct igc_nvm_info *nvm = &hw->nvm;
- u32 eecd = IGC_READ_REG(hw, IGC_EECD);
- u8 spi_stat_reg;
-
- DEBUGFUNC("igc_ready_nvm_eeprom");
-
- if (nvm->type == igc_nvm_eeprom_microwire) {
- /* Clear SK and DI */
- eecd &= ~(IGC_EECD_DI | IGC_EECD_SK);
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
- /* Set CS */
- eecd |= IGC_EECD_CS;
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
- } else if (nvm->type == igc_nvm_eeprom_spi) {
- u16 timeout = NVM_MAX_RETRY_SPI;
-
- /* Clear SK and CS */
- eecd &= ~(IGC_EECD_CS | IGC_EECD_SK);
- IGC_WRITE_REG(hw, IGC_EECD, eecd);
- IGC_WRITE_FLUSH(hw);
- usec_delay(1);
-
- /* Read "Status Register" repeatedly until the LSB is cleared.
- * The EEPROM will signal that the command has been completed
- * by clearing bit 0 of the internal status register. If it's
- * not cleared within 'timeout', then error out.
- */
- while (timeout) {
- igc_shift_out_eec_bits(hw, NVM_RDSR_OPCODE_SPI,
- hw->nvm.opcode_bits);
- spi_stat_reg = (u8)igc_shift_in_eec_bits(hw, 8);
- if (!(spi_stat_reg & NVM_STATUS_RDY_SPI))
- break;
-
- usec_delay(5);
- igc_standby_nvm(hw);
- timeout--;
- }
-
- if (!timeout) {
- DEBUGOUT("SPI NVM Status error\n");
- return -IGC_ERR_NVM;
- }
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_read_nvm_spi - Read EEPROM using SPI
- * @hw: pointer to the HW structure
- * @offset: offset of word in the EEPROM to read
- * @words: number of words to read
- * @data: word read from the EEPROM
- *
- * Reads a 16 bit word from the EEPROM.
- **/
-s32 igc_read_nvm_spi(struct igc_hw *hw, u16 offset, u16 words, u16 *data)
-{
- struct igc_nvm_info *nvm = &hw->nvm;
- u32 i = 0;
- s32 ret_val;
- u16 word_in;
- u8 read_opcode = NVM_READ_OPCODE_SPI;
-
- DEBUGFUNC("igc_read_nvm_spi");
-
- /* A check for invalid values: offset too large, too many words,
- * and not enough words.
- */
- if (offset >= nvm->word_size || words > (nvm->word_size - offset) ||
- words == 0) {
- DEBUGOUT("nvm parameter(s) out of bounds\n");
- return -IGC_ERR_NVM;
- }
-
- ret_val = nvm->ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = igc_ready_nvm_eeprom(hw);
- if (ret_val)
- goto release;
-
- igc_standby_nvm(hw);
-
- if (nvm->address_bits == 8 && offset >= 128)
- read_opcode |= NVM_A8_OPCODE_SPI;
-
- /* Send the READ command (opcode + addr) */
- igc_shift_out_eec_bits(hw, read_opcode, nvm->opcode_bits);
- igc_shift_out_eec_bits(hw, (u16)(offset * 2), nvm->address_bits);
-
- /* Read the data. SPI NVMs increment the address with each byte
- * read and will roll over if reading beyond the end. This allows
- * us to read the whole NVM from any offset
- */
- for (i = 0; i < words; i++) {
- word_in = igc_shift_in_eec_bits(hw, 16);
- data[i] = (word_in >> 8) | (word_in << 8);
- }
-
-release:
- nvm->ops.release(hw);
-
- return ret_val;
-}
-
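SPI serial EEPROMs return each 16-bit word high byte first, hence the byte
swap applied to word_in above. The swap in isolation:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint16_t word_in = 0x1234;	/* hypothetical word as shifted in */
	uint16_t host = (uint16_t)((word_in >> 8) | (word_in << 8));

	/* prints 0x1234 -> 0x3412 */
	printf("0x%04X -> 0x%04X\n", (unsigned int)word_in, (unsigned int)host);
	return 0;
}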
-/**
- * igc_read_nvm_microwire - Reads EEPROM using microwire
- * @hw: pointer to the HW structure
- * @offset: offset of word in the EEPROM to read
- * @words: number of words to read
- * @data: word read from the EEPROM
- *
- * Reads a 16 bit word from the EEPROM.
- **/
-s32 igc_read_nvm_microwire(struct igc_hw *hw, u16 offset, u16 words,
- u16 *data)
-{
- struct igc_nvm_info *nvm = &hw->nvm;
- u32 i = 0;
- s32 ret_val;
- u8 read_opcode = NVM_READ_OPCODE_MICROWIRE;
-
- DEBUGFUNC("igc_read_nvm_microwire");
-
- /* A check for invalid values: offset too large, too many words,
- * and not enough words.
- */
- if (offset >= nvm->word_size || words > (nvm->word_size - offset) ||
- words == 0) {
- DEBUGOUT("nvm parameter(s) out of bounds\n");
- return -IGC_ERR_NVM;
- }
-
- ret_val = nvm->ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = igc_ready_nvm_eeprom(hw);
- if (ret_val)
- goto release;
-
- for (i = 0; i < words; i++) {
- /* Send the READ command (opcode + addr) */
- igc_shift_out_eec_bits(hw, read_opcode, nvm->opcode_bits);
- igc_shift_out_eec_bits(hw, (u16)(offset + i),
- nvm->address_bits);
-
- /* Read the data. For microwire, each word requires the
- * overhead of setup and tear-down.
- */
- data[i] = igc_shift_in_eec_bits(hw, 16);
- igc_standby_nvm(hw);
- }
-
-release:
- nvm->ops.release(hw);
-
- return ret_val;
-}
-
/**
* igc_read_nvm_eerd - Reads EEPROM using EERD register
* @hw: pointer to the HW structure
@@ -567,173 +215,6 @@ s32 igc_read_nvm_eerd(struct igc_hw *hw, u16 offset, u16 words, u16 *data)
return ret_val;
}
-/**
- * igc_write_nvm_spi - Write to EEPROM using SPI
- * @hw: pointer to the HW structure
- * @offset: offset within the EEPROM to be written to
- * @words: number of words to write
- * @data: 16 bit word(s) to be written to the EEPROM
- *
- * Writes data to EEPROM at offset using SPI interface.
- *
- * If igc_update_nvm_checksum is not called after this function, the
- * EEPROM will most likely contain an invalid checksum.
- **/
-s32 igc_write_nvm_spi(struct igc_hw *hw, u16 offset, u16 words, u16 *data)
-{
- struct igc_nvm_info *nvm = &hw->nvm;
- s32 ret_val = -IGC_ERR_NVM;
- u16 widx = 0;
-
- DEBUGFUNC("igc_write_nvm_spi");
-
- /* A check for invalid values: offset too large, too many words,
- * and not enough words.
- */
- if (offset >= nvm->word_size || words > (nvm->word_size - offset) ||
- words == 0) {
- DEBUGOUT("nvm parameter(s) out of bounds\n");
- return -IGC_ERR_NVM;
- }
-
- while (widx < words) {
- u8 write_opcode = NVM_WRITE_OPCODE_SPI;
-
- ret_val = nvm->ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = igc_ready_nvm_eeprom(hw);
- if (ret_val) {
- nvm->ops.release(hw);
- return ret_val;
- }
-
- igc_standby_nvm(hw);
-
- /* Send the WRITE ENABLE command (8 bit opcode) */
- igc_shift_out_eec_bits(hw, NVM_WREN_OPCODE_SPI,
- nvm->opcode_bits);
-
- igc_standby_nvm(hw);
-
- /* Some SPI eeproms use the 8th address bit embedded in the
- * opcode
- */
- if (nvm->address_bits == 8 && offset >= 128)
- write_opcode |= NVM_A8_OPCODE_SPI;
-
- /* Send the Write command (8-bit opcode + addr) */
- igc_shift_out_eec_bits(hw, write_opcode, nvm->opcode_bits);
- igc_shift_out_eec_bits(hw, (u16)((offset + widx) * 2),
- nvm->address_bits);
-
- /* Loop to allow for up to whole page write of eeprom */
- while (widx < words) {
- u16 word_out = data[widx];
- word_out = (word_out >> 8) | (word_out << 8);
- igc_shift_out_eec_bits(hw, word_out, 16);
- widx++;
-
- if ((((offset + widx) * 2) % nvm->page_size) == 0) {
- igc_standby_nvm(hw);
- break;
- }
- }
- msec_delay(10);
- nvm->ops.release(hw);
- }
-
- return ret_val;
-}
-
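The inner write loop must issue a fresh WRITE command at every EEPROM page
boundary; offsets count 16-bit words, so the byte address is
(offset + widx) * 2. A worked example assuming a 32-byte page:

#include <stdio.h>

int main(void)
{
	const unsigned int page_size = 32;	/* hypothetical page size */
	const unsigned int offset = 14;	/* made-up starting word offset */
	unsigned int widx;

	for (widx = 1; widx <= 6; widx++) {
		if ((((offset + widx) * 2) % page_size) == 0)
			printf("new WRITE command needed at word %u\n",
			       offset + widx);	/* prints word 16 */
	}
	return 0;
}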
-/**
- * igc_write_nvm_microwire - Writes EEPROM using microwire
- * @hw: pointer to the HW structure
- * @offset: offset within the EEPROM to be written to
- * @words: number of words to write
- * @data: 16 bit word(s) to be written to the EEPROM
- *
- * Writes data to EEPROM at offset using microwire interface.
- *
- * If igc_update_nvm_checksum is not called after this function, the
- * EEPROM will most likely contain an invalid checksum.
- **/
-s32 igc_write_nvm_microwire(struct igc_hw *hw, u16 offset, u16 words,
- u16 *data)
-{
- struct igc_nvm_info *nvm = &hw->nvm;
- s32 ret_val;
- u32 eecd;
- u16 words_written = 0;
- u16 widx = 0;
-
- DEBUGFUNC("igc_write_nvm_microwire");
-
- /* A check for invalid values: offset too large, too many words,
- * and not enough words.
- */
- if (offset >= nvm->word_size || words > (nvm->word_size - offset) ||
- words == 0) {
- DEBUGOUT("nvm parameter(s) out of bounds\n");
- return -IGC_ERR_NVM;
- }
-
- ret_val = nvm->ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = igc_ready_nvm_eeprom(hw);
- if (ret_val)
- goto release;
-
- igc_shift_out_eec_bits(hw, NVM_EWEN_OPCODE_MICROWIRE,
- (u16)(nvm->opcode_bits + 2));
-
- igc_shift_out_eec_bits(hw, 0, (u16)(nvm->address_bits - 2));
-
- igc_standby_nvm(hw);
-
- while (words_written < words) {
- igc_shift_out_eec_bits(hw, NVM_WRITE_OPCODE_MICROWIRE,
- nvm->opcode_bits);
-
- igc_shift_out_eec_bits(hw, (u16)(offset + words_written),
- nvm->address_bits);
-
- igc_shift_out_eec_bits(hw, data[words_written], 16);
-
- igc_standby_nvm(hw);
-
- for (widx = 0; widx < 200; widx++) {
- eecd = IGC_READ_REG(hw, IGC_EECD);
- if (eecd & IGC_EECD_DO)
- break;
- usec_delay(50);
- }
-
- if (widx == 200) {
- DEBUGOUT("NVM Write did not complete\n");
- ret_val = -IGC_ERR_NVM;
- goto release;
- }
-
- igc_standby_nvm(hw);
-
- words_written++;
- }
-
- igc_shift_out_eec_bits(hw, NVM_EWDS_OPCODE_MICROWIRE,
- (u16)(nvm->opcode_bits + 2));
-
- igc_shift_out_eec_bits(hw, 0, (u16)(nvm->address_bits - 2));
-
-release:
- nvm->ops.release(hw);
-
- return ret_val;
-}
-
/**
* igc_read_pba_string_generic - Read device part number
* @hw: pointer to the HW structure
@@ -939,134 +420,6 @@ s32 igc_read_pba_num_generic(struct igc_hw *hw, u32 *pba_num)
}
-/**
- * igc_read_pba_raw - Read raw PBA block
- * @hw: pointer to the HW structure
- * @eeprom_buf: optional pointer to EEPROM image
- * @eeprom_buf_size: size of EEPROM image in words
- * @max_pba_block_size: PBA block size limit
- * @pba: pointer to output PBA structure
- *
- * Reads PBA from EEPROM image when eeprom_buf is not NULL.
- * Reads PBA from physical EEPROM device when eeprom_buf is NULL.
- *
- **/
-s32 igc_read_pba_raw(struct igc_hw *hw, u16 *eeprom_buf,
- u32 eeprom_buf_size, u16 max_pba_block_size,
- struct igc_pba *pba)
-{
- s32 ret_val;
- u16 pba_block_size;
-
- if (pba == NULL)
- return -IGC_ERR_PARAM;
-
- if (eeprom_buf == NULL) {
- ret_val = igc_read_nvm(hw, NVM_PBA_OFFSET_0, 2,
- &pba->word[0]);
- if (ret_val)
- return ret_val;
- } else {
- if (eeprom_buf_size > NVM_PBA_OFFSET_1) {
- pba->word[0] = eeprom_buf[NVM_PBA_OFFSET_0];
- pba->word[1] = eeprom_buf[NVM_PBA_OFFSET_1];
- } else {
- return -IGC_ERR_PARAM;
- }
- }
-
- if (pba->word[0] == NVM_PBA_PTR_GUARD) {
- if (pba->pba_block == NULL)
- return -IGC_ERR_PARAM;
-
- ret_val = igc_get_pba_block_size(hw, eeprom_buf,
- eeprom_buf_size,
- &pba_block_size);
- if (ret_val)
- return ret_val;
-
- if (pba_block_size > max_pba_block_size)
- return -IGC_ERR_PARAM;
-
- if (eeprom_buf == NULL) {
- ret_val = igc_read_nvm(hw, pba->word[1],
- pba_block_size,
- pba->pba_block);
- if (ret_val)
- return ret_val;
- } else {
- if (eeprom_buf_size > (u32)(pba->word[1] +
- pba_block_size)) {
- memcpy(pba->pba_block,
- &eeprom_buf[pba->word[1]],
- pba_block_size * sizeof(u16));
- } else {
- return -IGC_ERR_PARAM;
- }
- }
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_write_pba_raw - Write raw PBA block
- * @hw: pointer to the HW structure
- * @eeprom_buf: optional pointer to EEPROM image
- * @eeprom_buf_size: size of EEPROM image in words
- * @pba: pointer to PBA structure
- *
- * Writes PBA to EEPROM image when eeprom_buf is not NULL.
- * Writes PBA to physical EEPROM device when eeprom_buf is NULL.
- *
- **/
-s32 igc_write_pba_raw(struct igc_hw *hw, u16 *eeprom_buf,
- u32 eeprom_buf_size, struct igc_pba *pba)
-{
- s32 ret_val;
-
- if (pba == NULL)
- return -IGC_ERR_PARAM;
-
- if (eeprom_buf == NULL) {
- ret_val = igc_write_nvm(hw, NVM_PBA_OFFSET_0, 2,
- &pba->word[0]);
- if (ret_val)
- return ret_val;
- } else {
- if (eeprom_buf_size > NVM_PBA_OFFSET_1) {
- eeprom_buf[NVM_PBA_OFFSET_0] = pba->word[0];
- eeprom_buf[NVM_PBA_OFFSET_1] = pba->word[1];
- } else {
- return -IGC_ERR_PARAM;
- }
- }
-
- if (pba->word[0] == NVM_PBA_PTR_GUARD) {
- if (pba->pba_block == NULL)
- return -IGC_ERR_PARAM;
-
- if (eeprom_buf == NULL) {
- ret_val = igc_write_nvm(hw, pba->word[1],
- pba->pba_block[0],
- pba->pba_block);
- if (ret_val)
- return ret_val;
- } else {
- if (eeprom_buf_size > (u32)(pba->word[1] +
- pba->pba_block[0])) {
- memcpy(&eeprom_buf[pba->word[1]],
- pba->pba_block,
- pba->pba_block[0] * sizeof(u16));
- } else {
- return -IGC_ERR_PARAM;
- }
- }
- }
-
- return IGC_SUCCESS;
-}
-
/**
* igc_get_pba_block_size
* @hw: pointer to the HW structure
@@ -1188,38 +541,6 @@ s32 igc_validate_nvm_checksum_generic(struct igc_hw *hw)
return IGC_SUCCESS;
}
-/**
- * igc_update_nvm_checksum_generic - Update EEPROM checksum
- * @hw: pointer to the HW structure
- *
- * Updates the EEPROM checksum by reading/adding each word of the EEPROM
- * up to the checksum. Then calculates the EEPROM checksum and writes the
- * value to the EEPROM.
- **/
-s32 igc_update_nvm_checksum_generic(struct igc_hw *hw)
-{
- s32 ret_val;
- u16 checksum = 0;
- u16 i, nvm_data;
-
- DEBUGFUNC("igc_update_nvm_checksum");
-
- for (i = 0; i < NVM_CHECKSUM_REG; i++) {
- ret_val = hw->nvm.ops.read(hw, i, 1, &nvm_data);
- if (ret_val) {
- DEBUGOUT("NVM Read Error while updating checksum.\n");
- return ret_val;
- }
- checksum += nvm_data;
- }
- checksum = (u16)NVM_SUM - checksum;
- ret_val = hw->nvm.ops.write(hw, NVM_CHECKSUM_REG, 1, &checksum);
- if (ret_val)
- DEBUGOUT("NVM Write Error while updating checksum.\n");
-
- return ret_val;
-}
-
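The checksum convention used here: the words from offset 0 up to
NVM_CHECKSUM_REG must sum, modulo 2^16, to NVM_SUM (0xBABA in this driver
family), so the stored checksum is NVM_SUM minus the running sum. The
arithmetic with made-up word values:

#include <stdio.h>
#include <stdint.h>

#define NVM_SUM 0xBABA	/* target 16-bit sum for a valid image */

int main(void)
{
	uint16_t words[3] = { 0x1111, 0x2222, 0x0333 };	/* made-up NVM words */
	uint16_t sum = 0, checksum;
	unsigned int i;

	for (i = 0; i < 3; i++)
		sum += words[i];

	checksum = (uint16_t)(NVM_SUM - sum);	/* sum + checksum == 0xBABA */
	printf("stored checksum: 0x%04X\n", (unsigned int)checksum);
	return 0;
}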
/**
* igc_reload_nvm_generic - Reloads EEPROM
* @hw: pointer to the HW structure
diff --git a/drivers/net/igc/base/igc_nvm.h b/drivers/net/igc/base/igc_nvm.h
index 0eee5e4571..e4c1c15f9f 100644
--- a/drivers/net/igc/base/igc_nvm.h
+++ b/drivers/net/igc/base/igc_nvm.h
@@ -32,7 +32,6 @@ s32 igc_null_read_nvm(struct igc_hw *hw, u16 a, u16 b, u16 *c);
void igc_null_nvm_generic(struct igc_hw *hw);
s32 igc_null_led_default(struct igc_hw *hw, u16 *data);
s32 igc_null_write_nvm(struct igc_hw *hw, u16 a, u16 b, u16 *c);
-s32 igc_acquire_nvm_generic(struct igc_hw *hw);
s32 igc_poll_eerd_eewr_done(struct igc_hw *hw, int ee_reg);
s32 igc_read_mac_addr_generic(struct igc_hw *hw);
@@ -40,27 +39,12 @@ s32 igc_read_pba_num_generic(struct igc_hw *hw, u32 *pba_num);
s32 igc_read_pba_string_generic(struct igc_hw *hw, u8 *pba_num,
u32 pba_num_size);
s32 igc_read_pba_length_generic(struct igc_hw *hw, u32 *pba_num_size);
-s32 igc_read_pba_raw(struct igc_hw *hw, u16 *eeprom_buf,
- u32 eeprom_buf_size, u16 max_pba_block_size,
- struct igc_pba *pba);
-s32 igc_write_pba_raw(struct igc_hw *hw, u16 *eeprom_buf,
- u32 eeprom_buf_size, struct igc_pba *pba);
s32 igc_get_pba_block_size(struct igc_hw *hw, u16 *eeprom_buf,
u32 eeprom_buf_size, u16 *pba_block_size);
-s32 igc_read_nvm_spi(struct igc_hw *hw, u16 offset, u16 words, u16 *data);
-s32 igc_read_nvm_microwire(struct igc_hw *hw, u16 offset,
- u16 words, u16 *data);
s32 igc_read_nvm_eerd(struct igc_hw *hw, u16 offset, u16 words,
u16 *data);
-s32 igc_valid_led_default_generic(struct igc_hw *hw, u16 *data);
s32 igc_validate_nvm_checksum_generic(struct igc_hw *hw);
-s32 igc_write_nvm_microwire(struct igc_hw *hw, u16 offset,
- u16 words, u16 *data);
-s32 igc_write_nvm_spi(struct igc_hw *hw, u16 offset, u16 words,
- u16 *data);
-s32 igc_update_nvm_checksum_generic(struct igc_hw *hw);
void igc_stop_nvm(struct igc_hw *hw);
-void igc_release_nvm_generic(struct igc_hw *hw);
void igc_get_fw_version(struct igc_hw *hw,
struct igc_fw_version *fw_vers);
diff --git a/drivers/net/igc/base/igc_osdep.c b/drivers/net/igc/base/igc_osdep.c
index 508f2e07ad..22e9471c79 100644
--- a/drivers/net/igc/base/igc_osdep.c
+++ b/drivers/net/igc/base/igc_osdep.c
@@ -26,18 +26,6 @@ igc_read_pci_cfg(struct igc_hw *hw, u32 reg, u16 *value)
*value = 0;
}
-void
-igc_pci_set_mwi(struct igc_hw *hw)
-{
- (void)hw;
-}
-
-void
-igc_pci_clear_mwi(struct igc_hw *hw)
-{
- (void)hw;
-}
-
/*
* Read the PCI Express capabilities
*/
@@ -49,16 +37,3 @@ igc_read_pcie_cap_reg(struct igc_hw *hw, u32 reg, u16 *value)
(void)value;
return IGC_NOT_IMPLEMENTED;
}
-
-/*
- * Write the PCI Express capabilities
- */
-int32_t
-igc_write_pcie_cap_reg(struct igc_hw *hw, u32 reg, u16 *value)
-{
- (void)hw;
- (void)reg;
- (void)value;
-
- return IGC_NOT_IMPLEMENTED;
-}
diff --git a/drivers/net/igc/base/igc_phy.c b/drivers/net/igc/base/igc_phy.c
index 43bbe69bca..ffcb0bb67e 100644
--- a/drivers/net/igc/base/igc_phy.c
+++ b/drivers/net/igc/base/igc_phy.c
@@ -5,31 +5,6 @@
#include "igc_api.h"
static s32 igc_wait_autoneg(struct igc_hw *hw);
-static s32 igc_access_phy_wakeup_reg_bm(struct igc_hw *hw, u32 offset,
- u16 *data, bool read, bool page_set);
-static u32 igc_get_phy_addr_for_hv_page(u32 page);
-static s32 igc_access_phy_debug_regs_hv(struct igc_hw *hw, u32 offset,
- u16 *data, bool read);
-
-/* Cable length tables */
-static const u16 igc_m88_cable_length_table[] = {
- 0, 50, 80, 110, 140, 140, IGC_CABLE_LENGTH_UNDEFINED };
-#define M88IGC_CABLE_LENGTH_TABLE_SIZE \
- (sizeof(igc_m88_cable_length_table) / \
- sizeof(igc_m88_cable_length_table[0]))
-
-static const u16 igc_igp_2_cable_length_table[] = {
- 0, 0, 0, 0, 0, 0, 0, 0, 3, 5, 8, 11, 13, 16, 18, 21, 0, 0, 0, 3,
- 6, 10, 13, 16, 19, 23, 26, 29, 32, 35, 38, 41, 6, 10, 14, 18, 22,
- 26, 30, 33, 37, 41, 44, 48, 51, 54, 58, 61, 21, 26, 31, 35, 40,
- 44, 49, 53, 57, 61, 65, 68, 72, 75, 79, 82, 40, 45, 51, 56, 61,
- 66, 70, 75, 79, 83, 87, 91, 94, 98, 101, 104, 60, 66, 72, 77, 82,
- 87, 92, 96, 100, 104, 108, 111, 114, 117, 119, 121, 83, 89, 95,
- 100, 105, 109, 113, 116, 119, 122, 124, 104, 109, 114, 118, 121,
- 124};
-#define IGP02IGC_CABLE_LENGTH_TABLE_SIZE \
- (sizeof(igc_igp_2_cable_length_table) / \
- sizeof(igc_igp_2_cable_length_table[0]))
/**
* igc_init_phy_ops_generic - Initialize PHY function pointers
@@ -385,299 +360,6 @@ s32 igc_write_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 data)
return IGC_SUCCESS;
}
-/**
- * igc_read_phy_reg_i2c - Read PHY register using i2c
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- *
- * Reads the PHY register at offset using the i2c interface and stores the
- * retrieved information in data.
- **/
-s32 igc_read_phy_reg_i2c(struct igc_hw *hw, u32 offset, u16 *data)
-{
- struct igc_phy_info *phy = &hw->phy;
- u32 i, i2ccmd = 0;
-
- DEBUGFUNC("igc_read_phy_reg_i2c");
-
- /* Set up Op-code, Phy Address, and register address in the I2CCMD
- * register. The MAC will take care of interfacing with the
- * PHY to retrieve the desired data.
- */
- i2ccmd = ((offset << IGC_I2CCMD_REG_ADDR_SHIFT) |
- (phy->addr << IGC_I2CCMD_PHY_ADDR_SHIFT) |
- (IGC_I2CCMD_OPCODE_READ));
-
- IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
-
- /* Poll the ready bit to see if the I2C read completed */
- for (i = 0; i < IGC_I2CCMD_PHY_TIMEOUT; i++) {
- usec_delay(50);
- i2ccmd = IGC_READ_REG(hw, IGC_I2CCMD);
- if (i2ccmd & IGC_I2CCMD_READY)
- break;
- }
- if (!(i2ccmd & IGC_I2CCMD_READY)) {
- DEBUGOUT("I2CCMD Read did not complete\n");
- return -IGC_ERR_PHY;
- }
- if (i2ccmd & IGC_I2CCMD_ERROR) {
- DEBUGOUT("I2CCMD Error bit set\n");
- return -IGC_ERR_PHY;
- }
-
- /* Need to byte-swap the 16-bit value. */
- *data = ((i2ccmd >> 8) & 0x00FF) | ((i2ccmd << 8) & 0xFF00);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_write_phy_reg_i2c - Write PHY register using i2c
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Writes the data to PHY register at the offset using the i2c interface.
- **/
-s32 igc_write_phy_reg_i2c(struct igc_hw *hw, u32 offset, u16 data)
-{
- struct igc_phy_info *phy = &hw->phy;
- u32 i, i2ccmd = 0;
- u16 phy_data_swapped;
-
- DEBUGFUNC("igc_write_phy_reg_i2c");
-
- /* Prevent overwriting SFP I2C EEPROM which is at A0 address. */
- if (hw->phy.addr == 0 || hw->phy.addr > 7) {
- DEBUGOUT1("PHY I2C Address %d is out of range.\n",
- hw->phy.addr);
- return -IGC_ERR_CONFIG;
- }
-
- /* Swap the data bytes for the I2C interface */
- phy_data_swapped = ((data >> 8) & 0x00FF) | ((data << 8) & 0xFF00);
-
- /* Set up Op-code, Phy Address, and register address in the I2CCMD
- * register. The MAC will take care of interfacing with the
- * PHY to write the desired data.
- */
- i2ccmd = ((offset << IGC_I2CCMD_REG_ADDR_SHIFT) |
- (phy->addr << IGC_I2CCMD_PHY_ADDR_SHIFT) |
- IGC_I2CCMD_OPCODE_WRITE |
- phy_data_swapped);
-
- IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
-
- /* Poll the ready bit to see if the I2C write completed */
- for (i = 0; i < IGC_I2CCMD_PHY_TIMEOUT; i++) {
- usec_delay(50);
- i2ccmd = IGC_READ_REG(hw, IGC_I2CCMD);
- if (i2ccmd & IGC_I2CCMD_READY)
- break;
- }
- if (!(i2ccmd & IGC_I2CCMD_READY)) {
- DEBUGOUT("I2CCMD Write did not complete\n");
- return -IGC_ERR_PHY;
- }
- if (i2ccmd & IGC_I2CCMD_ERROR) {
- DEBUGOUT("I2CCMD Error bit set\n");
- return -IGC_ERR_PHY;
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_read_sfp_data_byte - Reads SFP module data.
- * @hw: pointer to the HW structure
- * @offset: byte location offset to be read
- * @data: read data buffer pointer
- *
- * Reads one byte of SFP module data stored in the EEPROM
- * resident on the SFP, or from the SFP diagnostic area.
- * Function should be called with
- * IGC_I2CCMD_SFP_DATA_ADDR(<byte offset>) for SFP module database access
- * IGC_I2CCMD_SFP_DIAG_ADDR(<byte offset>) for SFP diagnostics parameters
- * access
- **/
-s32 igc_read_sfp_data_byte(struct igc_hw *hw, u16 offset, u8 *data)
-{
- u32 i = 0;
- u32 i2ccmd = 0;
- u32 data_local = 0;
-
- DEBUGFUNC("igc_read_sfp_data_byte");
-
- if (offset > IGC_I2CCMD_SFP_DIAG_ADDR(255)) {
- DEBUGOUT("I2CCMD command address exceeds upper limit\n");
- return -IGC_ERR_PHY;
- }
-
- /* Set up Op-code, EEPROM Address, in the I2CCMD
- * register. The MAC will take care of interfacing with the
- * EEPROM to retrieve the desired data.
- */
- i2ccmd = ((offset << IGC_I2CCMD_REG_ADDR_SHIFT) |
- IGC_I2CCMD_OPCODE_READ);
-
- IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
-
- /* Poll the ready bit to see if the I2C read completed */
- for (i = 0; i < IGC_I2CCMD_PHY_TIMEOUT; i++) {
- usec_delay(50);
- data_local = IGC_READ_REG(hw, IGC_I2CCMD);
- if (data_local & IGC_I2CCMD_READY)
- break;
- }
- if (!(data_local & IGC_I2CCMD_READY)) {
- DEBUGOUT("I2CCMD Read did not complete\n");
- return -IGC_ERR_PHY;
- }
- if (data_local & IGC_I2CCMD_ERROR) {
- DEBUGOUT("I2CCMD Error bit set\n");
- return -IGC_ERR_PHY;
- }
- *data = (u8)data_local & 0xFF;
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_write_sfp_data_byte - Writes SFP module data.
- * @hw: pointer to the HW structure
- * @offset: byte location offset to write to
- * @data: data to write
- *
- * Writes one byte to the SFP module data stored in the EEPROM
- * resident on the SFP, or to the SFP diagnostic area.
- * Function should be called with
- * IGC_I2CCMD_SFP_DATA_ADDR(<byte offset>) for SFP module database access
- * IGC_I2CCMD_SFP_DIAG_ADDR(<byte offset>) for SFP diagnostics parameters
- * access
- **/
-s32 igc_write_sfp_data_byte(struct igc_hw *hw, u16 offset, u8 data)
-{
- u32 i = 0;
- u32 i2ccmd = 0;
- u32 data_local = 0;
-
- DEBUGFUNC("igc_write_sfp_data_byte");
-
- if (offset > IGC_I2CCMD_SFP_DIAG_ADDR(255)) {
- DEBUGOUT("I2CCMD command address exceeds upper limit\n");
- return -IGC_ERR_PHY;
- }
- /* The programming interface is 16 bits wide
- * so we need to read the whole word first
- * then update appropriate byte lane and write
- * the updated word back.
- */
- /* Set up Op-code, EEPROM Address, in the I2CCMD
- * register. The MAC will take care of interfacing
- * with an EEPROM to write the data given.
- */
- i2ccmd = ((offset << IGC_I2CCMD_REG_ADDR_SHIFT) |
- IGC_I2CCMD_OPCODE_READ);
- /* Set a command to read single word */
- IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
- for (i = 0; i < IGC_I2CCMD_PHY_TIMEOUT; i++) {
- usec_delay(50);
- /* Poll the ready bit to see if the last
- * launched I2C operation has completed
- */
- i2ccmd = IGC_READ_REG(hw, IGC_I2CCMD);
- if (i2ccmd & IGC_I2CCMD_READY) {
- /* Check if this is READ or WRITE phase */
- if ((i2ccmd & IGC_I2CCMD_OPCODE_READ) ==
- IGC_I2CCMD_OPCODE_READ) {
- /* Write the selected byte
- * lane and update whole word
- */
- data_local = i2ccmd & 0xFF00;
- data_local |= (u32)data;
- i2ccmd = ((offset <<
- IGC_I2CCMD_REG_ADDR_SHIFT) |
- IGC_I2CCMD_OPCODE_WRITE | data_local);
- IGC_WRITE_REG(hw, IGC_I2CCMD, i2ccmd);
- } else {
- break;
- }
- }
- }
- if (!(i2ccmd & IGC_I2CCMD_READY)) {
- DEBUGOUT("I2CCMD Write did not complete\n");
- return -IGC_ERR_PHY;
- }
- if (i2ccmd & IGC_I2CCMD_ERROR) {
- DEBUGOUT("I2CCMD Error bit set\n");
- return -IGC_ERR_PHY;
- }
- return IGC_SUCCESS;
-}
-
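Because the programming interface is 16 bits wide, the SFP write path reads
back the whole word and replaces only the low byte lane before writing it out.
That read-modify-write in isolation, with made-up values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint16_t word = 0xAB12;	/* hypothetical word read back over I2C */
	uint8_t data = 0x7F;	/* byte to store in the low lane */
	uint16_t out = (uint16_t)((word & 0xFF00) | data);

	/* prints 0xAB12 -> 0xAB7F */
	printf("0x%04X -> 0x%04X\n", (unsigned int)word, (unsigned int)out);
	return 0;
}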
-/**
- * igc_read_phy_reg_m88 - Read m88 PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- *
- * Acquires semaphore, if necessary, then reads the PHY register at offset
- * and stores the retrieved information in data. Release any acquired
- * semaphores before exiting.
- **/
-s32 igc_read_phy_reg_m88(struct igc_hw *hw, u32 offset, u16 *data)
-{
- s32 ret_val;
-
- DEBUGFUNC("igc_read_phy_reg_m88");
-
- if (!hw->phy.ops.acquire)
- return IGC_SUCCESS;
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = igc_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
- data);
-
- hw->phy.ops.release(hw);
-
- return ret_val;
-}
-
-/**
- * igc_write_phy_reg_m88 - Write m88 PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Acquires semaphore, if necessary, then writes the data to PHY register
- * at the offset. Release any acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_m88(struct igc_hw *hw, u32 offset, u16 data)
-{
- s32 ret_val;
-
- DEBUGFUNC("igc_write_phy_reg_m88");
-
- if (!hw->phy.ops.acquire)
- return IGC_SUCCESS;
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
- data);
-
- hw->phy.ops.release(hw);
-
- return ret_val;
-}
-
/**
* igc_set_page_igp - Set page as on IGP-like PHY(s)
* @hw: pointer to the HW structure
@@ -698,144 +380,6 @@ s32 igc_set_page_igp(struct igc_hw *hw, u16 page)
return igc_write_phy_reg_mdic(hw, IGP01IGC_PHY_PAGE_SELECT, page);
}
-/**
- * __igc_read_phy_reg_igp - Read igp PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- * @locked: semaphore has already been acquired or not
- *
- * Acquires semaphore, if necessary, then reads the PHY register at offset
- * and stores the retrieved information in data. Release any acquired
- * semaphores before exiting.
- **/
-static s32 __igc_read_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 *data,
- bool locked)
-{
- s32 ret_val = IGC_SUCCESS;
-
- DEBUGFUNC("__igc_read_phy_reg_igp");
-
- if (!locked) {
- if (!hw->phy.ops.acquire)
- return IGC_SUCCESS;
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
- }
-
- if (offset > MAX_PHY_MULTI_PAGE_REG)
- ret_val = igc_write_phy_reg_mdic(hw,
- IGP01IGC_PHY_PAGE_SELECT,
- (u16)offset);
- if (!ret_val)
- ret_val = igc_read_phy_reg_mdic(hw,
- MAX_PHY_REG_ADDRESS & offset,
- data);
- if (!locked)
- hw->phy.ops.release(hw);
-
- return ret_val;
-}
-
-/**
- * igc_read_phy_reg_igp - Read igp PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- *
- * Acquires semaphore then reads the PHY register at offset and stores the
- * retrieved information in data.
- * Release the acquired semaphore before exiting.
- **/
-s32 igc_read_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 *data)
-{
- return __igc_read_phy_reg_igp(hw, offset, data, false);
-}
-
-/**
- * igc_read_phy_reg_igp_locked - Read igp PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- *
- * Reads the PHY register at offset and stores the retrieved information
- * in data. Assumes semaphore already acquired.
- **/
-s32 igc_read_phy_reg_igp_locked(struct igc_hw *hw, u32 offset, u16 *data)
-{
- return __igc_read_phy_reg_igp(hw, offset, data, true);
-}
-
-/**
- * igc_write_phy_reg_igp - Write igp PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- * @locked: semaphore has already been acquired or not
- *
- * Acquires semaphore, if necessary, then writes the data to PHY register
- * at the offset. Release any acquired semaphores before exiting.
- **/
-static s32 __igc_write_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 data,
- bool locked)
-{
- s32 ret_val = IGC_SUCCESS;
-
- DEBUGFUNC("igc_write_phy_reg_igp");
-
- if (!locked) {
- if (!hw->phy.ops.acquire)
- return IGC_SUCCESS;
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
- }
-
- if (offset > MAX_PHY_MULTI_PAGE_REG)
- ret_val = igc_write_phy_reg_mdic(hw,
- IGP01IGC_PHY_PAGE_SELECT,
- (u16)offset);
- if (!ret_val)
- ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS &
- offset,
- data);
- if (!locked)
- hw->phy.ops.release(hw);
-
- return ret_val;
-}
-
-/**
- * igc_write_phy_reg_igp - Write igp PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Acquires semaphore then writes the data to PHY register
- * at the offset. Release any acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 data)
-{
- return __igc_write_phy_reg_igp(hw, offset, data, false);
-}
-
-/**
- * igc_write_phy_reg_igp_locked - Write igp PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Writes the data to PHY register at the offset.
- * Assumes semaphore already acquired.
- **/
-s32 igc_write_phy_reg_igp_locked(struct igc_hw *hw, u32 offset, u16 data)
-{
- return __igc_write_phy_reg_igp(hw, offset, data, true);
-}
-
/**
* __igc_read_kmrn_reg - Read kumeran register
* @hw: pointer to the HW structure
@@ -896,21 +440,6 @@ s32 igc_read_kmrn_reg_generic(struct igc_hw *hw, u32 offset, u16 *data)
return __igc_read_kmrn_reg(hw, offset, data, false);
}
-/**
- * igc_read_kmrn_reg_locked - Read kumeran register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- *
- * Reads the PHY register at offset using the kumeran interface. The
- * information retrieved is stored in data.
- * Assumes semaphore already acquired.
- **/
-s32 igc_read_kmrn_reg_locked(struct igc_hw *hw, u32 offset, u16 *data)
-{
- return __igc_read_kmrn_reg(hw, offset, data, true);
-}
-
/**
* __igc_write_kmrn_reg - Write kumeran register
* @hw: pointer to the HW structure
@@ -968,490 +497,17 @@ s32 igc_write_kmrn_reg_generic(struct igc_hw *hw, u32 offset, u16 data)
}
/**
- * igc_write_kmrn_reg_locked - Write kumeran register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Write the data to PHY register at the offset using the kumeran interface.
- * Assumes semaphore already acquired.
- **/
-s32 igc_write_kmrn_reg_locked(struct igc_hw *hw, u32 offset, u16 data)
-{
- return __igc_write_kmrn_reg(hw, offset, data, true);
-}
-
-/**
- * igc_set_master_slave_mode - Setup PHY for Master/slave mode
+ * igc_phy_setup_autoneg - Configure PHY for auto-negotiation
* @hw: pointer to the HW structure
*
- * Sets up Master/slave mode
+ * Reads the MII auto-neg advertisement register and/or the 1000T control
+ * register and, if the PHY is already set up for auto-negotiation, returns
+ * success. Otherwise, sets up advertisement and flow control to the
+ * appropriate values for the desired auto-negotiation.
**/
-static s32 igc_set_master_slave_mode(struct igc_hw *hw)
+s32 igc_phy_setup_autoneg(struct igc_hw *hw)
{
- s32 ret_val;
- u16 phy_data;
-
- /* Resolve Master/Slave mode */
- ret_val = hw->phy.ops.read_reg(hw, PHY_1000T_CTRL, &phy_data);
- if (ret_val)
- return ret_val;
-
- /* load defaults for future use */
- hw->phy.original_ms_type = (phy_data & CR_1000T_MS_ENABLE) ?
- ((phy_data & CR_1000T_MS_VALUE) ?
- igc_ms_force_master :
- igc_ms_force_slave) : igc_ms_auto;
-
- switch (hw->phy.ms_type) {
- case igc_ms_force_master:
- phy_data |= (CR_1000T_MS_ENABLE | CR_1000T_MS_VALUE);
- break;
- case igc_ms_force_slave:
- phy_data |= CR_1000T_MS_ENABLE;
- phy_data &= ~(CR_1000T_MS_VALUE);
- break;
- case igc_ms_auto:
- phy_data &= ~CR_1000T_MS_ENABLE;
- /* fall-through */
- default:
- break;
- }
-
- return hw->phy.ops.write_reg(hw, PHY_1000T_CTRL, phy_data);
-}
-
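Master/slave resolution toggles the manual-configuration bits of the standard
MII 1000BASE-T control register (bit 12 enables manual selection, bit 11 picks
master when set). A small sketch of the force-master case using those standard
bit positions:

#include <stdio.h>
#include <stdint.h>

#define MS_ENABLE 0x1000	/* bit 12: manual master/slave enable */
#define MS_VALUE  0x0800	/* bit 11: master when set */

int main(void)
{
	uint16_t phy_data = 0x0200;	/* made-up 1000T_CTRL contents */

	phy_data |= MS_ENABLE | MS_VALUE;	/* force master */
	printf("1000T_CTRL: 0x%04X\n", (unsigned int)phy_data);
	return 0;
}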
-/**
- * igc_copper_link_setup_82577 - Setup 82577 PHY for copper link
- * @hw: pointer to the HW structure
- *
- * Sets up Carrier-sense on Transmit and downshift values.
- **/
-s32 igc_copper_link_setup_82577(struct igc_hw *hw)
-{
- s32 ret_val;
- u16 phy_data;
-
- DEBUGFUNC("igc_copper_link_setup_82577");
-
- if (hw->phy.type == igc_phy_82580) {
- ret_val = hw->phy.ops.reset(hw);
- if (ret_val) {
- DEBUGOUT("Error resetting the PHY.\n");
- return ret_val;
- }
- }
-
- /* Enable CRS on Tx. This must be set for half-duplex operation. */
- ret_val = hw->phy.ops.read_reg(hw, I82577_CFG_REG, &phy_data);
- if (ret_val)
- return ret_val;
-
- phy_data |= I82577_CFG_ASSERT_CRS_ON_TX;
-
- /* Enable downshift */
- phy_data |= I82577_CFG_ENABLE_DOWNSHIFT;
-
- ret_val = hw->phy.ops.write_reg(hw, I82577_CFG_REG, phy_data);
- if (ret_val)
- return ret_val;
-
- /* Set MDI/MDIX mode */
- ret_val = hw->phy.ops.read_reg(hw, I82577_PHY_CTRL_2, &phy_data);
- if (ret_val)
- return ret_val;
- phy_data &= ~I82577_PHY_CTRL2_MDIX_CFG_MASK;
- /* Options:
- * 0 - Auto (default)
- * 1 - MDI mode
- * 2 - MDI-X mode
- */
- switch (hw->phy.mdix) {
- case 1:
- break;
- case 2:
- phy_data |= I82577_PHY_CTRL2_MANUAL_MDIX;
- break;
- case 0:
- default:
- phy_data |= I82577_PHY_CTRL2_AUTO_MDI_MDIX;
- break;
- }
- ret_val = hw->phy.ops.write_reg(hw, I82577_PHY_CTRL_2, phy_data);
- if (ret_val)
- return ret_val;
-
- return igc_set_master_slave_mode(hw);
-}
-
-/**
- * igc_copper_link_setup_m88 - Setup m88 PHYs for copper link
- * @hw: pointer to the HW structure
- *
- * Sets up MDI/MDI-X and polarity for m88 PHYs. If necessary, the transmit
- * clock and downshift values are also set.
- **/
-s32 igc_copper_link_setup_m88(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 phy_data;
-
- DEBUGFUNC("igc_copper_link_setup_m88");
-
-
- /* Enable CRS on Tx. This must be set for half-duplex operation. */
- ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL, &phy_data);
- if (ret_val)
- return ret_val;
-
- /* For BM PHY this bit is downshift enable */
- if (phy->type != igc_phy_bm)
- phy_data |= M88IGC_PSCR_ASSERT_CRS_ON_TX;
-
- /* Options:
- * MDI/MDI-X = 0 (default)
- * 0 - Auto for all speeds
- * 1 - MDI mode
- * 2 - MDI-X mode
- * 3 - Auto for 1000Base-T only (MDI-X for 10/100Base-T modes)
- */
- phy_data &= ~M88IGC_PSCR_AUTO_X_MODE;
-
- switch (phy->mdix) {
- case 1:
- phy_data |= M88IGC_PSCR_MDI_MANUAL_MODE;
- break;
- case 2:
- phy_data |= M88IGC_PSCR_MDIX_MANUAL_MODE;
- break;
- case 3:
- phy_data |= M88IGC_PSCR_AUTO_X_1000T;
- break;
- case 0:
- default:
- phy_data |= M88IGC_PSCR_AUTO_X_MODE;
- break;
- }
-
- /* Options:
- * disable_polarity_correction = 0 (default)
- * Automatic Correction for Reversed Cable Polarity
- * 0 - Disabled
- * 1 - Enabled
- */
- phy_data &= ~M88IGC_PSCR_POLARITY_REVERSAL;
- if (phy->disable_polarity_correction)
- phy_data |= M88IGC_PSCR_POLARITY_REVERSAL;
-
- /* Enable downshift on BM (disabled by default) */
- if (phy->type == igc_phy_bm) {
- /* For 82574/82583, first disable then enable downshift */
- if (phy->id == BMIGC_E_PHY_ID_R2) {
- phy_data &= ~BMIGC_PSCR_ENABLE_DOWNSHIFT;
- ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL,
- phy_data);
- if (ret_val)
- return ret_val;
- /* Commit the changes. */
- ret_val = phy->ops.commit(hw);
- if (ret_val) {
- DEBUGOUT("Error committing the PHY changes\n");
- return ret_val;
- }
- }
-
- phy_data |= BMIGC_PSCR_ENABLE_DOWNSHIFT;
- }
-
- ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL, phy_data);
- if (ret_val)
- return ret_val;
-
- if (phy->type == igc_phy_m88 && phy->revision < IGC_REVISION_4 &&
- phy->id != BMIGC_E_PHY_ID_R2) {
- /* Force TX_CLK in the Extended PHY Specific Control Register
- * to 25MHz clock.
- */
- ret_val = phy->ops.read_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL,
- &phy_data);
- if (ret_val)
- return ret_val;
-
- phy_data |= M88IGC_EPSCR_TX_CLK_25;
-
- if (phy->revision == IGC_REVISION_2 &&
- phy->id == M88E1111_I_PHY_ID) {
- /* 82573L PHY - set the downshift counter to 5x. */
- phy_data &= ~M88EC018_EPSCR_DOWNSHIFT_COUNTER_MASK;
- phy_data |= M88EC018_EPSCR_DOWNSHIFT_COUNTER_5X;
- } else {
- /* Configure Master and Slave downshift values */
- phy_data &= ~(M88IGC_EPSCR_MASTER_DOWNSHIFT_MASK |
- M88IGC_EPSCR_SLAVE_DOWNSHIFT_MASK);
- phy_data |= (M88IGC_EPSCR_MASTER_DOWNSHIFT_1X |
- M88IGC_EPSCR_SLAVE_DOWNSHIFT_1X);
- }
- ret_val = phy->ops.write_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL,
- phy_data);
- if (ret_val)
- return ret_val;
- }
-
- if (phy->type == igc_phy_bm && phy->id == BMIGC_E_PHY_ID_R2) {
- /* Set PHY page 0, register 29 to 0x0003 */
- ret_val = phy->ops.write_reg(hw, 29, 0x0003);
- if (ret_val)
- return ret_val;
-
- /* Set PHY page 0, register 30 to 0x0000 */
- ret_val = phy->ops.write_reg(hw, 30, 0x0000);
- if (ret_val)
- return ret_val;
- }
-
- /* Commit the changes. */
- ret_val = phy->ops.commit(hw);
- if (ret_val) {
- DEBUGOUT("Error committing the PHY changes\n");
- return ret_val;
- }
-
- if (phy->type == igc_phy_82578) {
- ret_val = phy->ops.read_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL,
- &phy_data);
- if (ret_val)
- return ret_val;
-
- /* 82578 PHY - set the downshift count to 1x. */
- phy_data |= I82578_EPSCR_DOWNSHIFT_ENABLE;
- phy_data &= ~I82578_EPSCR_DOWNSHIFT_COUNTER_MASK;
- ret_val = phy->ops.write_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL,
- phy_data);
- if (ret_val)
- return ret_val;
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_copper_link_setup_m88_gen2 - Setup m88 PHY's for copper link
- * @hw: pointer to the HW structure
- *
- * Sets up MDI/MDI-X and polarity for i347-AT4, m88e1322 and m88e1112 PHY's.
- * Also enables and sets the downshift parameters.
- **/
-s32 igc_copper_link_setup_m88_gen2(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 phy_data;
-
- DEBUGFUNC("igc_copper_link_setup_m88_gen2");
-
-
- /* Enable CRS on Tx. This must be set for half-duplex operation. */
- ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL, &phy_data);
- if (ret_val)
- return ret_val;
-
- /* Options:
- * MDI/MDI-X = 0 (default)
- * 0 - Auto for all speeds
- * 1 - MDI mode
- * 2 - MDI-X mode
- * 3 - Auto for 1000Base-T only (MDI-X for 10/100Base-T modes)
- */
- phy_data &= ~M88IGC_PSCR_AUTO_X_MODE;
-
- switch (phy->mdix) {
- case 1:
- phy_data |= M88IGC_PSCR_MDI_MANUAL_MODE;
- break;
- case 2:
- phy_data |= M88IGC_PSCR_MDIX_MANUAL_MODE;
- break;
- case 3:
- /* M88E1112 does not support this mode */
- if (phy->id != M88E1112_E_PHY_ID) {
- phy_data |= M88IGC_PSCR_AUTO_X_1000T;
- break;
- }
- /* Fall through */
- case 0:
- default:
- phy_data |= M88IGC_PSCR_AUTO_X_MODE;
- break;
- }
-
- /* Options:
- * disable_polarity_correction = 0 (default)
- * Automatic Correction for Reversed Cable Polarity
- * 0 - Disabled
- * 1 - Enabled
- */
- phy_data &= ~M88IGC_PSCR_POLARITY_REVERSAL;
- if (phy->disable_polarity_correction)
- phy_data |= M88IGC_PSCR_POLARITY_REVERSAL;
-
- /* Enable downshift and setting it to X6 */
- if (phy->id == M88E1543_E_PHY_ID) {
- phy_data &= ~I347AT4_PSCR_DOWNSHIFT_ENABLE;
- ret_val =
- phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL, phy_data);
- if (ret_val)
- return ret_val;
-
- ret_val = phy->ops.commit(hw);
- if (ret_val) {
- DEBUGOUT("Error committing the PHY changes\n");
- return ret_val;
- }
- }
-
- phy_data &= ~I347AT4_PSCR_DOWNSHIFT_MASK;
- phy_data |= I347AT4_PSCR_DOWNSHIFT_6X;
- phy_data |= I347AT4_PSCR_DOWNSHIFT_ENABLE;
-
- ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL, phy_data);
- if (ret_val)
- return ret_val;
-
- /* Commit the changes. */
- ret_val = phy->ops.commit(hw);
- if (ret_val) {
- DEBUGOUT("Error committing the PHY changes\n");
- return ret_val;
- }
-
- ret_val = igc_set_master_slave_mode(hw);
- if (ret_val)
- return ret_val;
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_copper_link_setup_igp - Setup igp PHY's for copper link
- * @hw: pointer to the HW structure
- *
- * Sets up LPLU, MDI/MDI-X, polarity, Smartspeed and Master/Slave config for
- * igp PHY's.
- **/
-s32 igc_copper_link_setup_igp(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 data;
-
- DEBUGFUNC("igc_copper_link_setup_igp");
-
-
- ret_val = hw->phy.ops.reset(hw);
- if (ret_val) {
- DEBUGOUT("Error resetting the PHY.\n");
- return ret_val;
- }
-
- /* Wait 100ms for MAC to configure PHY from NVM settings, to avoid
- * timeout issues when LFS is enabled.
- */
- msec_delay(100);
-
- /* The NVM settings will configure LPLU in D3 for
- * non-IGP1 PHYs.
- */
- if (phy->type == igc_phy_igp) {
- /* disable lplu d3 during driver init */
- ret_val = hw->phy.ops.set_d3_lplu_state(hw, false);
- if (ret_val) {
- DEBUGOUT("Error Disabling LPLU D3\n");
- return ret_val;
- }
- }
-
- /* disable lplu d0 during driver init */
- if (hw->phy.ops.set_d0_lplu_state) {
- ret_val = hw->phy.ops.set_d0_lplu_state(hw, false);
- if (ret_val) {
- DEBUGOUT("Error Disabling LPLU D0\n");
- return ret_val;
- }
- }
- /* Configure mdi-mdix settings */
- ret_val = phy->ops.read_reg(hw, IGP01IGC_PHY_PORT_CTRL, &data);
- if (ret_val)
- return ret_val;
-
- data &= ~IGP01IGC_PSCR_AUTO_MDIX;
-
- switch (phy->mdix) {
- case 1:
- data &= ~IGP01IGC_PSCR_FORCE_MDI_MDIX;
- break;
- case 2:
- data |= IGP01IGC_PSCR_FORCE_MDI_MDIX;
- break;
- case 0:
- default:
- data |= IGP01IGC_PSCR_AUTO_MDIX;
- break;
- }
- ret_val = phy->ops.write_reg(hw, IGP01IGC_PHY_PORT_CTRL, data);
- if (ret_val)
- return ret_val;
-
- /* set auto-master slave resolution settings */
- if (hw->mac.autoneg) {
- /* when autonegotiation advertisement is only 1000Mbps then we
- * should disable SmartSpeed and enable Auto MasterSlave
- * resolution as hardware default.
- */
- if (phy->autoneg_advertised == ADVERTISE_1000_FULL) {
- /* Disable SmartSpeed */
- ret_val = phy->ops.read_reg(hw,
- IGP01IGC_PHY_PORT_CONFIG,
- &data);
- if (ret_val)
- return ret_val;
-
- data &= ~IGP01IGC_PSCFR_SMART_SPEED;
- ret_val = phy->ops.write_reg(hw,
- IGP01IGC_PHY_PORT_CONFIG,
- data);
- if (ret_val)
- return ret_val;
-
- /* Set auto Master/Slave resolution process */
- ret_val = phy->ops.read_reg(hw, PHY_1000T_CTRL, &data);
- if (ret_val)
- return ret_val;
-
- data &= ~CR_1000T_MS_ENABLE;
- ret_val = phy->ops.write_reg(hw, PHY_1000T_CTRL, data);
- if (ret_val)
- return ret_val;
- }
-
- ret_val = igc_set_master_slave_mode(hw);
- }
-
- return ret_val;
-}
-
-/**
- * igc_phy_setup_autoneg - Configure PHY for auto-negotiation
- * @hw: pointer to the HW structure
- *
- * Reads the MII auto-neg advertisement register and/or the 1000T control
- * register and if the PHY is already setup for auto-negotiation, then
- * return successful. Otherwise, setup advertisement and flow control to
- * the appropriate values for the wanted auto-negotiation.
- **/
-s32 igc_phy_setup_autoneg(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
+ struct igc_phy_info *phy = &hw->phy;
s32 ret_val;
u16 mii_autoneg_adv_reg;
u16 mii_1000t_ctrl_reg = 0;
@@ -1745,321 +801,48 @@ s32 igc_setup_copper_link_generic(struct igc_hw *hw)
}
/**
- * igc_phy_force_speed_duplex_igp - Force speed/duplex for igp PHY
+ * igc_phy_force_speed_duplex_setup - Configure forced PHY speed/duplex
* @hw: pointer to the HW structure
+ * @phy_ctrl: pointer to current value of PHY_CONTROL
*
- * Calls the PHY setup function to force speed and duplex. Clears the
- * auto-crossover to force MDI manually. Waits for link and returns
- * successful if link up is successful, else -IGC_ERR_PHY (-2).
+ * Forces speed and duplex on the PHY by doing the following: disable flow
+ * control, force speed/duplex on the MAC, disable auto speed detection,
+ * disable auto-negotiation, configure duplex, configure speed, configure
+ * the collision distance, write configuration to CTRL register. The
+ * caller must write to the PHY_CONTROL register for these settings to
+ * take effect.
**/
-s32 igc_phy_force_speed_duplex_igp(struct igc_hw *hw)
+void igc_phy_force_speed_duplex_setup(struct igc_hw *hw, u16 *phy_ctrl)
{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 phy_data;
- bool link;
+ struct igc_mac_info *mac = &hw->mac;
+ u32 ctrl;
- DEBUGFUNC("igc_phy_force_speed_duplex_igp");
+ DEBUGFUNC("igc_phy_force_speed_duplex_setup");
- ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &phy_data);
- if (ret_val)
- return ret_val;
+ /* Turn off flow control when forcing speed/duplex */
+ hw->fc.current_mode = igc_fc_none;
- igc_phy_force_speed_duplex_setup(hw, &phy_data);
+ /* Force speed/duplex on the mac */
+ ctrl = IGC_READ_REG(hw, IGC_CTRL);
+ ctrl |= (IGC_CTRL_FRCSPD | IGC_CTRL_FRCDPX);
+ ctrl &= ~IGC_CTRL_SPD_SEL;
- ret_val = phy->ops.write_reg(hw, PHY_CONTROL, phy_data);
- if (ret_val)
- return ret_val;
+ /* Disable Auto Speed Detection */
+ ctrl &= ~IGC_CTRL_ASDE;
- /* Clear Auto-Crossover to force MDI manually. IGP requires MDI
- * forced whenever speed and duplex are forced.
- */
- ret_val = phy->ops.read_reg(hw, IGP01IGC_PHY_PORT_CTRL, &phy_data);
- if (ret_val)
- return ret_val;
+ /* Disable autoneg on the phy */
+ *phy_ctrl &= ~MII_CR_AUTO_NEG_EN;
- phy_data &= ~IGP01IGC_PSCR_AUTO_MDIX;
- phy_data &= ~IGP01IGC_PSCR_FORCE_MDI_MDIX;
-
- ret_val = phy->ops.write_reg(hw, IGP01IGC_PHY_PORT_CTRL, phy_data);
- if (ret_val)
- return ret_val;
-
- DEBUGOUT1("IGP PSCR: %X\n", phy_data);
-
- usec_delay(1);
-
- if (phy->autoneg_wait_to_complete) {
- DEBUGOUT("Waiting for forced speed/duplex link on IGP phy.\n");
-
- ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
- 100000, &link);
- if (ret_val)
- return ret_val;
-
- if (!link)
- DEBUGOUT("Link taking longer than expected.\n");
-
- /* Try once more */
- ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
- 100000, &link);
- }
-
- return ret_val;
-}
-
-/**
- * igc_phy_force_speed_duplex_m88 - Force speed/duplex for m88 PHY
- * @hw: pointer to the HW structure
- *
- * Calls the PHY setup function to force speed and duplex. Clears the
- * auto-crossover to force MDI manually. Resets the PHY to commit the
- * changes. If time expires while waiting for link up, we reset the DSP.
- * After reset, TX_CLK and CRS on Tx must be set. Return successful upon
- * successful completion, else return corresponding error code.
- **/
-s32 igc_phy_force_speed_duplex_m88(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 phy_data;
- bool link;
-
- DEBUGFUNC("igc_phy_force_speed_duplex_m88");
-
- /* I210 and I211 devices support Auto-Crossover in forced operation. */
- if (phy->type != igc_phy_i210) {
- /* Clear Auto-Crossover to force MDI manually. M88E1000
- * requires MDI forced whenever speed and duplex are forced.
- */
- ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL,
- &phy_data);
- if (ret_val)
- return ret_val;
-
- phy_data &= ~M88IGC_PSCR_AUTO_X_MODE;
- ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL,
- phy_data);
- if (ret_val)
- return ret_val;
-
- DEBUGOUT1("M88E1000 PSCR: %X\n", phy_data);
- }
-
- ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &phy_data);
- if (ret_val)
- return ret_val;
-
- igc_phy_force_speed_duplex_setup(hw, &phy_data);
-
- ret_val = phy->ops.write_reg(hw, PHY_CONTROL, phy_data);
- if (ret_val)
- return ret_val;
-
- /* Reset the phy to commit changes. */
- ret_val = hw->phy.ops.commit(hw);
- if (ret_val)
- return ret_val;
-
- if (phy->autoneg_wait_to_complete) {
- DEBUGOUT("Waiting for forced speed/duplex link on M88 phy.\n");
-
- ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
- 100000, &link);
- if (ret_val)
- return ret_val;
-
- if (!link) {
- bool reset_dsp = true;
-
- switch (hw->phy.id) {
- case I347AT4_E_PHY_ID:
- case M88E1340M_E_PHY_ID:
- case M88E1112_E_PHY_ID:
- case M88E1543_E_PHY_ID:
- case M88E1512_E_PHY_ID:
- case I210_I_PHY_ID:
- /* fall-through */
- case I225_I_PHY_ID:
- /* fall-through */
- reset_dsp = false;
- break;
- default:
- if (hw->phy.type != igc_phy_m88)
- reset_dsp = false;
- break;
- }
-
- if (!reset_dsp) {
- DEBUGOUT("Link taking longer than expected.\n");
- } else {
- /* We didn't get link.
- * Reset the DSP and cross our fingers.
- */
- ret_val = phy->ops.write_reg(hw,
- M88IGC_PHY_PAGE_SELECT,
- 0x001d);
- if (ret_val)
- return ret_val;
- ret_val = igc_phy_reset_dsp_generic(hw);
- if (ret_val)
- return ret_val;
- }
- }
-
- /* Try once more */
- ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
- 100000, &link);
- if (ret_val)
- return ret_val;
- }
-
- if (hw->phy.type != igc_phy_m88)
- return IGC_SUCCESS;
-
- if (hw->phy.id == I347AT4_E_PHY_ID ||
- hw->phy.id == M88E1340M_E_PHY_ID ||
- hw->phy.id == M88E1112_E_PHY_ID)
- return IGC_SUCCESS;
- if (hw->phy.id == I210_I_PHY_ID)
- return IGC_SUCCESS;
- if (hw->phy.id == I225_I_PHY_ID)
- return IGC_SUCCESS;
- if (hw->phy.id == M88E1543_E_PHY_ID || hw->phy.id == M88E1512_E_PHY_ID)
- return IGC_SUCCESS;
- ret_val = phy->ops.read_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL, &phy_data);
- if (ret_val)
- return ret_val;
-
- /* Resetting the phy means we need to re-force TX_CLK in the
- * Extended PHY Specific Control Register to 25MHz clock from
- * the reset value of 2.5MHz.
- */
- phy_data |= M88IGC_EPSCR_TX_CLK_25;
- ret_val = phy->ops.write_reg(hw, M88IGC_EXT_PHY_SPEC_CTRL, phy_data);
- if (ret_val)
- return ret_val;
-
- /* In addition, we must re-enable CRS on Tx for both half and full
- * duplex.
- */
- ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL, &phy_data);
- if (ret_val)
- return ret_val;
-
- phy_data |= M88IGC_PSCR_ASSERT_CRS_ON_TX;
- ret_val = phy->ops.write_reg(hw, M88IGC_PHY_SPEC_CTRL, phy_data);
-
- return ret_val;
-}
-
-/**
- * igc_phy_force_speed_duplex_ife - Force PHY speed & duplex
- * @hw: pointer to the HW structure
- *
- * Forces the speed and duplex settings of the PHY.
- * This is a function pointer entry point only called by
- * PHY setup routines.
- **/
-s32 igc_phy_force_speed_duplex_ife(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 data;
- bool link;
-
- DEBUGFUNC("igc_phy_force_speed_duplex_ife");
-
- ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &data);
- if (ret_val)
- return ret_val;
-
- igc_phy_force_speed_duplex_setup(hw, &data);
-
- ret_val = phy->ops.write_reg(hw, PHY_CONTROL, data);
- if (ret_val)
- return ret_val;
-
- /* Disable MDI-X support for 10/100 */
- ret_val = phy->ops.read_reg(hw, IFE_PHY_MDIX_CONTROL, &data);
- if (ret_val)
- return ret_val;
-
- data &= ~IFE_PMC_AUTO_MDIX;
- data &= ~IFE_PMC_FORCE_MDIX;
-
- ret_val = phy->ops.write_reg(hw, IFE_PHY_MDIX_CONTROL, data);
- if (ret_val)
- return ret_val;
-
- DEBUGOUT1("IFE PMC: %X\n", data);
-
- usec_delay(1);
-
- if (phy->autoneg_wait_to_complete) {
- DEBUGOUT("Waiting for forced speed/duplex link on IFE phy.\n");
-
- ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
- 100000, &link);
- if (ret_val)
- return ret_val;
-
- if (!link)
- DEBUGOUT("Link taking longer than expected.\n");
-
- /* Try once more */
- ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
- 100000, &link);
- if (ret_val)
- return ret_val;
- }
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_phy_force_speed_duplex_setup - Configure forced PHY speed/duplex
- * @hw: pointer to the HW structure
- * @phy_ctrl: pointer to current value of PHY_CONTROL
- *
- * Forces speed and duplex on the PHY by doing the following: disable flow
- * control, force speed/duplex on the MAC, disable auto speed detection,
- * disable auto-negotiation, configure duplex, configure speed, configure
- * the collision distance, write configuration to CTRL register. The
- * caller must write to the PHY_CONTROL register for these settings to
- * take effect.
- **/
-void igc_phy_force_speed_duplex_setup(struct igc_hw *hw, u16 *phy_ctrl)
-{
- struct igc_mac_info *mac = &hw->mac;
- u32 ctrl;
-
- DEBUGFUNC("igc_phy_force_speed_duplex_setup");
-
- /* Turn off flow control when forcing speed/duplex */
- hw->fc.current_mode = igc_fc_none;
-
- /* Force speed/duplex on the mac */
- ctrl = IGC_READ_REG(hw, IGC_CTRL);
- ctrl |= (IGC_CTRL_FRCSPD | IGC_CTRL_FRCDPX);
- ctrl &= ~IGC_CTRL_SPD_SEL;
-
- /* Disable Auto Speed Detection */
- ctrl &= ~IGC_CTRL_ASDE;
-
- /* Disable autoneg on the phy */
- *phy_ctrl &= ~MII_CR_AUTO_NEG_EN;
-
- /* Forcing Full or Half Duplex? */
- if (mac->forced_speed_duplex & IGC_ALL_HALF_DUPLEX) {
- ctrl &= ~IGC_CTRL_FD;
- *phy_ctrl &= ~MII_CR_FULL_DUPLEX;
- DEBUGOUT("Half Duplex\n");
- } else {
- ctrl |= IGC_CTRL_FD;
- *phy_ctrl |= MII_CR_FULL_DUPLEX;
- DEBUGOUT("Full Duplex\n");
- }
+ /* Forcing Full or Half Duplex? */
+ if (mac->forced_speed_duplex & IGC_ALL_HALF_DUPLEX) {
+ ctrl &= ~IGC_CTRL_FD;
+ *phy_ctrl &= ~MII_CR_FULL_DUPLEX;
+ DEBUGOUT("Half Duplex\n");
+ } else {
+ ctrl |= IGC_CTRL_FD;
+ *phy_ctrl |= MII_CR_FULL_DUPLEX;
+ DEBUGOUT("Full Duplex\n");
+ }
/* Forcing 10mb or 100mb? */
if (mac->forced_speed_duplex & IGC_ALL_100_SPEED) {
@@ -2078,96 +861,6 @@ void igc_phy_force_speed_duplex_setup(struct igc_hw *hw, u16 *phy_ctrl)
IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
}
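Each of the removed per-PHY force routines (igp, m88, ife) wraps this helper
in the same read/modify/write sequence on PHY_CONTROL. A minimal sketch of
that calling pattern, reconstructed from the removed variants:

    ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &phy_data);
    if (ret_val)
        return ret_val;

    /* updates MAC CTRL and clears the autoneg bit in phy_data */
    igc_phy_force_speed_duplex_setup(hw, &phy_data);

    /* the settings only take effect once PHY_CONTROL is written back */
    ret_val = phy->ops.write_reg(hw, PHY_CONTROL, phy_data);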
-/**
- * igc_set_d3_lplu_state_generic - Sets low power link up state for D3
- * @hw: pointer to the HW structure
- * @active: boolean used to enable/disable lplu
- *
- * Success returns 0, Failure returns 1
- *
- * The low power link up (lplu) state is set to the power management level D3
- * and SmartSpeed is disabled when active is true, else clear lplu for D3
- * and enable Smartspeed. LPLU and Smartspeed are mutually exclusive. LPLU
- * is used during Dx states where the power conservation is most important.
- * During driver activity, SmartSpeed should be enabled so performance is
- * maintained.
- **/
-s32 igc_set_d3_lplu_state_generic(struct igc_hw *hw, bool active)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 data;
-
- DEBUGFUNC("igc_set_d3_lplu_state_generic");
-
- if (!hw->phy.ops.read_reg)
- return IGC_SUCCESS;
-
- ret_val = phy->ops.read_reg(hw, IGP02IGC_PHY_POWER_MGMT, &data);
- if (ret_val)
- return ret_val;
-
- if (!active) {
- data &= ~IGP02IGC_PM_D3_LPLU;
- ret_val = phy->ops.write_reg(hw, IGP02IGC_PHY_POWER_MGMT,
- data);
- if (ret_val)
- return ret_val;
- /* LPLU and SmartSpeed are mutually exclusive. LPLU is used
- * during Dx states where the power conservation is most
- * important. During driver activity we should enable
- * SmartSpeed, so performance is maintained.
- */
- if (phy->smart_speed == igc_smart_speed_on) {
- ret_val = phy->ops.read_reg(hw,
- IGP01IGC_PHY_PORT_CONFIG,
- &data);
- if (ret_val)
- return ret_val;
-
- data |= IGP01IGC_PSCFR_SMART_SPEED;
- ret_val = phy->ops.write_reg(hw,
- IGP01IGC_PHY_PORT_CONFIG,
- data);
- if (ret_val)
- return ret_val;
- } else if (phy->smart_speed == igc_smart_speed_off) {
- ret_val = phy->ops.read_reg(hw,
- IGP01IGC_PHY_PORT_CONFIG,
- &data);
- if (ret_val)
- return ret_val;
-
- data &= ~IGP01IGC_PSCFR_SMART_SPEED;
- ret_val = phy->ops.write_reg(hw,
- IGP01IGC_PHY_PORT_CONFIG,
- data);
- if (ret_val)
- return ret_val;
- }
- } else if ((phy->autoneg_advertised == IGC_ALL_SPEED_DUPLEX) ||
- (phy->autoneg_advertised == IGC_ALL_NOT_GIG) ||
- (phy->autoneg_advertised == IGC_ALL_10_SPEED)) {
- data |= IGP02IGC_PM_D3_LPLU;
- ret_val = phy->ops.write_reg(hw, IGP02IGC_PHY_POWER_MGMT,
- data);
- if (ret_val)
- return ret_val;
-
- /* When LPLU is enabled, we should disable SmartSpeed */
- ret_val = phy->ops.read_reg(hw, IGP01IGC_PHY_PORT_CONFIG,
- &data);
- if (ret_val)
- return ret_val;
-
- data &= ~IGP01IGC_PSCFR_SMART_SPEED;
- ret_val = phy->ops.write_reg(hw, IGP01IGC_PHY_PORT_CONFIG,
- data);
- }
-
- return ret_val;
-}
-
/**
* igc_check_downshift_generic - Checks whether a downshift in speed occurred
* @hw: pointer to the HW structure
@@ -2408,624 +1101,57 @@ s32 igc_phy_has_link_generic(struct igc_hw *hw, u32 iterations,
}
/**
- * igc_get_cable_length_m88 - Determine cable length for m88 PHY
+ * igc_phy_sw_reset_generic - PHY software reset
* @hw: pointer to the HW structure
*
- * Reads the PHY specific status register to retrieve the cable length
- * information. The cable length is determined by averaging the minimum and
- * maximum values to get the "average" cable length. The m88 PHY has four
- * possible cable length values, which are:
- * Register Value Cable Length
- * 0 < 50 meters
- * 1 50 - 80 meters
- * 2 80 - 110 meters
- * 3 110 - 140 meters
- * 4 > 140 meters
+ * Does a software reset of the PHY by reading the PHY control register,
+ * setting the reset bit, and writing the register back to the PHY.
**/
-s32 igc_get_cable_length_m88(struct igc_hw *hw)
+s32 igc_phy_sw_reset_generic(struct igc_hw *hw)
{
- struct igc_phy_info *phy = &hw->phy;
s32 ret_val;
- u16 phy_data, index;
+ u16 phy_ctrl;
- DEBUGFUNC("igc_get_cable_length_m88");
+ DEBUGFUNC("igc_phy_sw_reset_generic");
+
+ if (!hw->phy.ops.read_reg)
+ return IGC_SUCCESS;
- ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_STATUS, &phy_data);
+ ret_val = hw->phy.ops.read_reg(hw, PHY_CONTROL, &phy_ctrl);
if (ret_val)
return ret_val;
- index = ((phy_data & M88IGC_PSSR_CABLE_LENGTH) >>
- M88IGC_PSSR_CABLE_LENGTH_SHIFT);
-
- if (index >= M88IGC_CABLE_LENGTH_TABLE_SIZE - 1)
- return -IGC_ERR_PHY;
-
- phy->min_cable_length = igc_m88_cable_length_table[index];
- phy->max_cable_length = igc_m88_cable_length_table[index + 1];
+ phy_ctrl |= MII_CR_RESET;
+ ret_val = hw->phy.ops.write_reg(hw, PHY_CONTROL, phy_ctrl);
+ if (ret_val)
+ return ret_val;
- phy->cable_length = (phy->min_cable_length + phy->max_cable_length) / 2;
+ usec_delay(1);
- return IGC_SUCCESS;
+ return ret_val;
}
-s32 igc_get_cable_length_m88_gen2(struct igc_hw *hw)
+/**
+ * igc_get_phy_type_from_id - Get PHY type from id
+ * @phy_id: phy_id read from the phy
+ *
+ * Returns the phy type from the id.
+ **/
+enum igc_phy_type igc_get_phy_type_from_id(u32 phy_id)
{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val = 0;
- u16 phy_data, phy_data2, is_cm;
- u16 index, default_page;
-
- DEBUGFUNC("igc_get_cable_length_m88_gen2");
-
- switch (hw->phy.id) {
- case I210_I_PHY_ID:
- /* Get cable length from PHY Cable Diagnostics Control Reg */
- ret_val = phy->ops.read_reg(hw, (0x7 << GS40G_PAGE_SHIFT) +
- (I347AT4_PCDL + phy->addr),
- &phy_data);
- if (ret_val)
- return ret_val;
-
- /* Check if the unit of cable length is meters or cm */
- ret_val = phy->ops.read_reg(hw, (0x7 << GS40G_PAGE_SHIFT) +
- I347AT4_PCDC, &phy_data2);
- if (ret_val)
- return ret_val;
-
- is_cm = !(phy_data2 & I347AT4_PCDC_CABLE_LENGTH_UNIT);
+ enum igc_phy_type phy_type = igc_phy_unknown;
- /* Populate the phy structure with cable length in meters */
- phy->min_cable_length = phy_data / (is_cm ? 100 : 1);
- phy->max_cable_length = phy_data / (is_cm ? 100 : 1);
- phy->cable_length = phy_data / (is_cm ? 100 : 1);
- break;
- case I225_I_PHY_ID:
- if (ret_val)
- return ret_val;
- /* TODO - complete with Foxville data */
- break;
+ switch (phy_id) {
+ case M88IGC_I_PHY_ID:
+ case M88IGC_E_PHY_ID:
+ case M88E1111_I_PHY_ID:
+ case M88E1011_I_PHY_ID:
case M88E1543_E_PHY_ID:
case M88E1512_E_PHY_ID:
- case M88E1340M_E_PHY_ID:
case I347AT4_E_PHY_ID:
- /* Remember the original page select and set it to 7 */
- ret_val = phy->ops.read_reg(hw, I347AT4_PAGE_SELECT,
- &default_page);
- if (ret_val)
- return ret_val;
-
- ret_val = phy->ops.write_reg(hw, I347AT4_PAGE_SELECT, 0x07);
- if (ret_val)
- return ret_val;
-
- /* Get cable length from PHY Cable Diagnostics Control Reg */
- ret_val = phy->ops.read_reg(hw, (I347AT4_PCDL + phy->addr),
- &phy_data);
- if (ret_val)
- return ret_val;
-
- /* Check if the unit of cable length is meters or cm */
- ret_val = phy->ops.read_reg(hw, I347AT4_PCDC, &phy_data2);
- if (ret_val)
- return ret_val;
-
- is_cm = !(phy_data2 & I347AT4_PCDC_CABLE_LENGTH_UNIT);
-
- /* Populate the phy structure with cable length in meters */
- phy->min_cable_length = phy_data / (is_cm ? 100 : 1);
- phy->max_cable_length = phy_data / (is_cm ? 100 : 1);
- phy->cable_length = phy_data / (is_cm ? 100 : 1);
-
- /* Reset the page select to its original value */
- ret_val = phy->ops.write_reg(hw, I347AT4_PAGE_SELECT,
- default_page);
- if (ret_val)
- return ret_val;
- break;
-
case M88E1112_E_PHY_ID:
- /* Remember the original page select and set it to 5 */
- ret_val = phy->ops.read_reg(hw, I347AT4_PAGE_SELECT,
- &default_page);
- if (ret_val)
- return ret_val;
-
- ret_val = phy->ops.write_reg(hw, I347AT4_PAGE_SELECT, 0x05);
- if (ret_val)
- return ret_val;
-
- ret_val = phy->ops.read_reg(hw, M88E1112_VCT_DSP_DISTANCE,
- &phy_data);
- if (ret_val)
- return ret_val;
-
- index = (phy_data & M88IGC_PSSR_CABLE_LENGTH) >>
- M88IGC_PSSR_CABLE_LENGTH_SHIFT;
-
- if (index >= M88IGC_CABLE_LENGTH_TABLE_SIZE - 1)
- return -IGC_ERR_PHY;
-
- phy->min_cable_length = igc_m88_cable_length_table[index];
- phy->max_cable_length = igc_m88_cable_length_table[index + 1];
-
- phy->cable_length = (phy->min_cable_length +
- phy->max_cable_length) / 2;
-
- /* Reset the page select to its original value */
- ret_val = phy->ops.write_reg(hw, I347AT4_PAGE_SELECT,
- default_page);
- if (ret_val)
- return ret_val;
-
- break;
- default:
- return -IGC_ERR_PHY;
- }
-
- return ret_val;
-}
-
-/**
- * igc_get_cable_length_igp_2 - Determine cable length for igp2 PHY
- * @hw: pointer to the HW structure
- *
- * The automatic gain control (agc) normalizes the amplitude of the
- * received signal, adjusting for the attenuation produced by the
- * cable. By reading the AGC registers, which represent the
- * combination of coarse and fine gain value, the value can be put
- * into a lookup table to obtain the approximate cable length
- * for each channel.
- **/
-s32 igc_get_cable_length_igp_2(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 phy_data, i, agc_value = 0;
- u16 cur_agc_index, max_agc_index = 0;
- u16 min_agc_index = IGP02IGC_CABLE_LENGTH_TABLE_SIZE - 1;
- static const u16 agc_reg_array[IGP02IGC_PHY_CHANNEL_NUM] = {
- IGP02IGC_PHY_AGC_A,
- IGP02IGC_PHY_AGC_B,
- IGP02IGC_PHY_AGC_C,
- IGP02IGC_PHY_AGC_D
- };
-
- DEBUGFUNC("igc_get_cable_length_igp_2");
-
- /* Read the AGC registers for all channels */
- for (i = 0; i < IGP02IGC_PHY_CHANNEL_NUM; i++) {
- ret_val = phy->ops.read_reg(hw, agc_reg_array[i], &phy_data);
- if (ret_val)
- return ret_val;
-
- /* Getting bits 15:9, which represent the combination of
- * coarse and fine gain values. The result is a number
- * that can be put into the lookup table to obtain the
- * approximate cable length.
- */
- cur_agc_index = ((phy_data >> IGP02IGC_AGC_LENGTH_SHIFT) &
- IGP02IGC_AGC_LENGTH_MASK);
-
- /* Array index bound check. */
- if (cur_agc_index >= IGP02IGC_CABLE_LENGTH_TABLE_SIZE ||
- cur_agc_index == 0)
- return -IGC_ERR_PHY;
-
- /* Remove min & max AGC values from calculation. */
- if (igc_igp_2_cable_length_table[min_agc_index] >
- igc_igp_2_cable_length_table[cur_agc_index])
- min_agc_index = cur_agc_index;
- if (igc_igp_2_cable_length_table[max_agc_index] <
- igc_igp_2_cable_length_table[cur_agc_index])
- max_agc_index = cur_agc_index;
-
- agc_value += igc_igp_2_cable_length_table[cur_agc_index];
- }
-
- agc_value -= (igc_igp_2_cable_length_table[min_agc_index] +
- igc_igp_2_cable_length_table[max_agc_index]);
- agc_value /= (IGP02IGC_PHY_CHANNEL_NUM - 2);
-
- /* Calculate cable length with the error range of +/- 10 meters. */
- phy->min_cable_length = (((agc_value - IGP02IGC_AGC_RANGE) > 0) ?
- (agc_value - IGP02IGC_AGC_RANGE) : 0);
- phy->max_cable_length = agc_value + IGP02IGC_AGC_RANGE;
-
- phy->cable_length = (phy->min_cable_length + phy->max_cable_length) / 2;
-
- return IGC_SUCCESS;
-}
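The trimmed mean above drops the single smallest and largest of the four
per-channel table values and averages the remaining two before applying the
+/- IGP02IGC_AGC_RANGE bound. A sketch of the arithmetic, with len_a..len_d
as hypothetical names for the four channel lookups:

    /* IGP02IGC_PHY_CHANNEL_NUM - 2 == 2 readings remain after trimming */
    agc_value = (len_a + len_b + len_c + len_d - len_min - len_max) / 2;
    phy->min_cable_length = (agc_value > IGP02IGC_AGC_RANGE) ?
                            (agc_value - IGP02IGC_AGC_RANGE) : 0;
    phy->max_cable_length = agc_value + IGP02IGC_AGC_RANGE;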
-
-/**
- * igc_get_phy_info_m88 - Retrieve PHY information
- * @hw: pointer to the HW structure
- *
- * Valid for only copper links. Read the PHY status register (sticky read)
- * to verify that link is up. Read the PHY special control register to
- * determine the polarity and 10base-T extended distance. Read the PHY
- * special status register to determine MDI/MDIx and current speed. If
- * speed is 1000, then determine cable length, local and remote receiver.
- **/
-s32 igc_get_phy_info_m88(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 phy_data;
- bool link;
-
- DEBUGFUNC("igc_get_phy_info_m88");
-
- if (phy->media_type != igc_media_type_copper) {
- DEBUGOUT("Phy info is only valid for copper media\n");
- return -IGC_ERR_CONFIG;
- }
-
- ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
- if (ret_val)
- return ret_val;
-
- if (!link) {
- DEBUGOUT("Phy info is only valid if link is up\n");
- return -IGC_ERR_CONFIG;
- }
-
- ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_CTRL, &phy_data);
- if (ret_val)
- return ret_val;
-
- phy->polarity_correction = !!(phy_data &
- M88IGC_PSCR_POLARITY_REVERSAL);
-
- ret_val = igc_check_polarity_m88(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = phy->ops.read_reg(hw, M88IGC_PHY_SPEC_STATUS, &phy_data);
- if (ret_val)
- return ret_val;
-
- phy->is_mdix = !!(phy_data & M88IGC_PSSR_MDIX);
-
- if ((phy_data & M88IGC_PSSR_SPEED) == M88IGC_PSSR_1000MBS) {
- ret_val = hw->phy.ops.get_cable_length(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = phy->ops.read_reg(hw, PHY_1000T_STATUS, &phy_data);
- if (ret_val)
- return ret_val;
-
- phy->local_rx = (phy_data & SR_1000T_LOCAL_RX_STATUS)
- ? igc_1000t_rx_status_ok
- : igc_1000t_rx_status_not_ok;
-
- phy->remote_rx = (phy_data & SR_1000T_REMOTE_RX_STATUS)
- ? igc_1000t_rx_status_ok
- : igc_1000t_rx_status_not_ok;
- } else {
- /* Set values to "undefined" */
- phy->cable_length = IGC_CABLE_LENGTH_UNDEFINED;
- phy->local_rx = igc_1000t_rx_status_undefined;
- phy->remote_rx = igc_1000t_rx_status_undefined;
- }
-
- return ret_val;
-}
-
-/**
- * igc_get_phy_info_igp - Retrieve igp PHY information
- * @hw: pointer to the HW structure
- *
- * Read PHY status to determine if link is up. If link is up, then
- * set/determine 10base-T extended distance and polarity correction. Read
- * PHY port status to determine MDI/MDIx and speed. Based on the speed,
- * determine on the cable length, local and remote receiver.
- **/
-s32 igc_get_phy_info_igp(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 data;
- bool link;
-
- DEBUGFUNC("igc_get_phy_info_igp");
-
- ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
- if (ret_val)
- return ret_val;
-
- if (!link) {
- DEBUGOUT("Phy info is only valid if link is up\n");
- return -IGC_ERR_CONFIG;
- }
-
- phy->polarity_correction = true;
-
- ret_val = igc_check_polarity_igp(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = phy->ops.read_reg(hw, IGP01IGC_PHY_PORT_STATUS, &data);
- if (ret_val)
- return ret_val;
-
- phy->is_mdix = !!(data & IGP01IGC_PSSR_MDIX);
-
- if ((data & IGP01IGC_PSSR_SPEED_MASK) ==
- IGP01IGC_PSSR_SPEED_1000MBPS) {
- ret_val = phy->ops.get_cable_length(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = phy->ops.read_reg(hw, PHY_1000T_STATUS, &data);
- if (ret_val)
- return ret_val;
-
- phy->local_rx = (data & SR_1000T_LOCAL_RX_STATUS)
- ? igc_1000t_rx_status_ok
- : igc_1000t_rx_status_not_ok;
-
- phy->remote_rx = (data & SR_1000T_REMOTE_RX_STATUS)
- ? igc_1000t_rx_status_ok
- : igc_1000t_rx_status_not_ok;
- } else {
- phy->cable_length = IGC_CABLE_LENGTH_UNDEFINED;
- phy->local_rx = igc_1000t_rx_status_undefined;
- phy->remote_rx = igc_1000t_rx_status_undefined;
- }
-
- return ret_val;
-}
-
-/**
- * igc_get_phy_info_ife - Retrieves various IFE PHY states
- * @hw: pointer to the HW structure
- *
- * Populates "phy" structure with various feature states.
- **/
-s32 igc_get_phy_info_ife(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 data;
- bool link;
-
- DEBUGFUNC("igc_get_phy_info_ife");
-
- ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
- if (ret_val)
- return ret_val;
-
- if (!link) {
- DEBUGOUT("Phy info is only valid if link is up\n");
- return -IGC_ERR_CONFIG;
- }
-
- ret_val = phy->ops.read_reg(hw, IFE_PHY_SPECIAL_CONTROL, &data);
- if (ret_val)
- return ret_val;
- phy->polarity_correction = !(data & IFE_PSC_AUTO_POLARITY_DISABLE);
-
- if (phy->polarity_correction) {
- ret_val = igc_check_polarity_ife(hw);
- if (ret_val)
- return ret_val;
- } else {
- /* Polarity is forced */
- phy->cable_polarity = ((data & IFE_PSC_FORCE_POLARITY)
- ? igc_rev_polarity_reversed
- : igc_rev_polarity_normal);
- }
-
- ret_val = phy->ops.read_reg(hw, IFE_PHY_MDIX_CONTROL, &data);
- if (ret_val)
- return ret_val;
-
- phy->is_mdix = !!(data & IFE_PMC_MDIX_STATUS);
-
- /* The following parameters are undefined for 10/100 operation. */
- phy->cable_length = IGC_CABLE_LENGTH_UNDEFINED;
- phy->local_rx = igc_1000t_rx_status_undefined;
- phy->remote_rx = igc_1000t_rx_status_undefined;
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_phy_sw_reset_generic - PHY software reset
- * @hw: pointer to the HW structure
- *
- * Does a software reset of the PHY by reading the PHY control register,
- * setting the reset bit, and writing the register back to the PHY.
- **/
-s32 igc_phy_sw_reset_generic(struct igc_hw *hw)
-{
- s32 ret_val;
- u16 phy_ctrl;
-
- DEBUGFUNC("igc_phy_sw_reset_generic");
-
- if (!hw->phy.ops.read_reg)
- return IGC_SUCCESS;
-
- ret_val = hw->phy.ops.read_reg(hw, PHY_CONTROL, &phy_ctrl);
- if (ret_val)
- return ret_val;
-
- phy_ctrl |= MII_CR_RESET;
- ret_val = hw->phy.ops.write_reg(hw, PHY_CONTROL, phy_ctrl);
- if (ret_val)
- return ret_val;
-
- usec_delay(1);
-
- return ret_val;
-}
-
-/**
- * igc_phy_hw_reset_generic - PHY hardware reset
- * @hw: pointer to the HW structure
- *
- * Verify the reset block is not blocking us from resetting. Acquire
- * semaphore (if necessary) and read/set/write the device control reset
- * bit in the PHY. Wait the appropriate delay time for the device to
- * reset and release the semaphore (if necessary).
- **/
-s32 igc_phy_hw_reset_generic(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u32 ctrl;
-
- DEBUGFUNC("igc_phy_hw_reset_generic");
-
- if (phy->ops.check_reset_block) {
- ret_val = phy->ops.check_reset_block(hw);
- if (ret_val)
- return IGC_SUCCESS;
- }
-
- ret_val = phy->ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- ctrl = IGC_READ_REG(hw, IGC_CTRL);
- IGC_WRITE_REG(hw, IGC_CTRL, ctrl | IGC_CTRL_PHY_RST);
- IGC_WRITE_FLUSH(hw);
-
- usec_delay(phy->reset_delay_us);
-
- IGC_WRITE_REG(hw, IGC_CTRL, ctrl);
- IGC_WRITE_FLUSH(hw);
-
- usec_delay(150);
-
- phy->ops.release(hw);
-
- return ret_val;
-}
-
-/**
- * igc_get_cfg_done_generic - Generic configuration done
- * @hw: pointer to the HW structure
- *
- * Generic function to wait 10 milli-seconds for configuration to complete
- * and return success.
- **/
-s32 igc_get_cfg_done_generic(struct igc_hw IGC_UNUSEDARG * hw)
-{
- DEBUGFUNC("igc_get_cfg_done_generic");
- UNREFERENCED_1PARAMETER(hw);
-
- msec_delay_irq(10);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_phy_init_script_igp3 - Inits the IGP3 PHY
- * @hw: pointer to the HW structure
- *
- * Initializes an Intel Gigabit PHY3 when an EEPROM is not present.
- **/
-s32 igc_phy_init_script_igp3(struct igc_hw *hw)
-{
- DEBUGOUT("Running IGP 3 PHY init script\n");
-
- /* PHY init IGP 3 */
- /* Enable rise/fall, 10-mode work in class-A */
- hw->phy.ops.write_reg(hw, 0x2F5B, 0x9018);
- /* Remove all caps from Replica path filter */
- hw->phy.ops.write_reg(hw, 0x2F52, 0x0000);
- /* Bias trimming for ADC, AFE and Driver (Default) */
- hw->phy.ops.write_reg(hw, 0x2FB1, 0x8B24);
- /* Increase Hybrid poly bias */
- hw->phy.ops.write_reg(hw, 0x2FB2, 0xF8F0);
- /* Add 4% to Tx amplitude in Gig mode */
- hw->phy.ops.write_reg(hw, 0x2010, 0x10B0);
- /* Disable trimming (TTT) */
- hw->phy.ops.write_reg(hw, 0x2011, 0x0000);
- /* Poly DC correction to 94.6% + 2% for all channels */
- hw->phy.ops.write_reg(hw, 0x20DD, 0x249A);
- /* ABS DC correction to 95.9% */
- hw->phy.ops.write_reg(hw, 0x20DE, 0x00D3);
- /* BG temp curve trim */
- hw->phy.ops.write_reg(hw, 0x28B4, 0x04CE);
- /* Increasing ADC OPAMP stage 1 currents to max */
- hw->phy.ops.write_reg(hw, 0x2F70, 0x29E4);
- /* Force 1000 (required for enabling PHY regs configuration) */
- hw->phy.ops.write_reg(hw, 0x0000, 0x0140);
- /* Set upd_freq to 6 */
- hw->phy.ops.write_reg(hw, 0x1F30, 0x1606);
- /* Disable NPDFE */
- hw->phy.ops.write_reg(hw, 0x1F31, 0xB814);
- /* Disable adaptive fixed FFE (Default) */
- hw->phy.ops.write_reg(hw, 0x1F35, 0x002A);
- /* Enable FFE hysteresis */
- hw->phy.ops.write_reg(hw, 0x1F3E, 0x0067);
- /* Fixed FFE for short cable lengths */
- hw->phy.ops.write_reg(hw, 0x1F54, 0x0065);
- /* Fixed FFE for medium cable lengths */
- hw->phy.ops.write_reg(hw, 0x1F55, 0x002A);
- /* Fixed FFE for long cable lengths */
- hw->phy.ops.write_reg(hw, 0x1F56, 0x002A);
- /* Enable Adaptive Clip Threshold */
- hw->phy.ops.write_reg(hw, 0x1F72, 0x3FB0);
- /* AHT reset limit to 1 */
- hw->phy.ops.write_reg(hw, 0x1F76, 0xC0FF);
- /* Set AHT master delay to 127 msec */
- hw->phy.ops.write_reg(hw, 0x1F77, 0x1DEC);
- /* Set scan bits for AHT */
- hw->phy.ops.write_reg(hw, 0x1F78, 0xF9EF);
- /* Set AHT Preset bits */
- hw->phy.ops.write_reg(hw, 0x1F79, 0x0210);
- /* Change integ_factor of channel A to 3 */
- hw->phy.ops.write_reg(hw, 0x1895, 0x0003);
- /* Change prop_factor of channels BCD to 8 */
- hw->phy.ops.write_reg(hw, 0x1796, 0x0008);
- /* Change cg_icount + enable integbp for channels BCD */
- hw->phy.ops.write_reg(hw, 0x1798, 0xD008);
- /* Change cg_icount + enable integbp + change prop_factor_master
- * to 8 for channel A
- */
- hw->phy.ops.write_reg(hw, 0x1898, 0xD918);
- /* Disable AHT in Slave mode on channel A */
- hw->phy.ops.write_reg(hw, 0x187A, 0x0800);
- /* Enable LPLU and disable AN to 1000 in non-D0a states,
- * Enable SPD+B2B
- */
- hw->phy.ops.write_reg(hw, 0x0019, 0x008D);
- /* Enable restart AN on an1000_dis change */
- hw->phy.ops.write_reg(hw, 0x001B, 0x2080);
- /* Enable wh_fifo read clock in 10/100 modes */
- hw->phy.ops.write_reg(hw, 0x0014, 0x0045);
- /* Restart AN, Speed selection is 1000 */
- hw->phy.ops.write_reg(hw, 0x0000, 0x1340);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_get_phy_type_from_id - Get PHY type from id
- * @phy_id: phy_id read from the phy
- *
- * Returns the phy type from the id.
- **/
-enum igc_phy_type igc_get_phy_type_from_id(u32 phy_id)
-{
- enum igc_phy_type phy_type = igc_phy_unknown;
-
- switch (phy_id) {
- case M88IGC_I_PHY_ID:
- case M88IGC_E_PHY_ID:
- case M88E1111_I_PHY_ID:
- case M88E1011_I_PHY_ID:
- case M88E1543_E_PHY_ID:
- case M88E1512_E_PHY_ID:
- case I347AT4_E_PHY_ID:
- case M88E1112_E_PHY_ID:
- case M88E1340M_E_PHY_ID:
- phy_type = igc_phy_m88;
+ case M88E1340M_E_PHY_ID:
+ phy_type = igc_phy_m88;
break;
case IGP01IGC_I_PHY_ID: /* IGP 1 & 2 share this */
phy_type = igc_phy_igp_2;
@@ -3056,1074 +1182,174 @@ enum igc_phy_type igc_get_phy_type_from_id(u32 phy_id)
break;
case I217_E_PHY_ID:
phy_type = igc_phy_i217;
- break;
- case I82580_I_PHY_ID:
- phy_type = igc_phy_82580;
- break;
- case I210_I_PHY_ID:
- phy_type = igc_phy_i210;
- break;
- case I225_I_PHY_ID:
- phy_type = igc_phy_i225;
- break;
- default:
- phy_type = igc_phy_unknown;
- break;
- }
- return phy_type;
-}
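A minimal usage sketch of this mapping, with an id taken from the cases
above:

    enum igc_phy_type t = igc_get_phy_type_from_id(I225_I_PHY_ID);
    /* t == igc_phy_i225; any id not listed falls back to igc_phy_unknown */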
-
-/**
- * igc_determine_phy_address - Determines PHY address.
- * @hw: pointer to the HW structure
- *
- * This uses a trial and error method to loop through possible PHY
- * addresses. It tests each by reading the PHY ID registers and
- * checking for a match.
- **/
-s32 igc_determine_phy_address(struct igc_hw *hw)
-{
- u32 phy_addr = 0;
- u32 i;
- enum igc_phy_type phy_type = igc_phy_unknown;
-
- hw->phy.id = phy_type;
-
- for (phy_addr = 0; phy_addr < IGC_MAX_PHY_ADDR; phy_addr++) {
- hw->phy.addr = phy_addr;
- i = 0;
-
- do {
- igc_get_phy_id(hw);
- phy_type = igc_get_phy_type_from_id(hw->phy.id);
-
- /* If phy_type is valid, break - we found our
- * PHY address
- */
- if (phy_type != igc_phy_unknown)
- return IGC_SUCCESS;
-
- msec_delay(1);
- i++;
- } while (i < 10);
- }
-
- return -IGC_ERR_PHY_TYPE;
-}
-
-/**
- * igc_get_phy_addr_for_bm_page - Retrieve PHY page address
- * @page: page to access
- * @reg: register to access
- *
- * Returns the phy address for the page requested.
- **/
-static u32 igc_get_phy_addr_for_bm_page(u32 page, u32 reg)
-{
- u32 phy_addr = 2;
-
- if (page >= 768 || (page == 0 && reg == 25) || reg == 31)
- phy_addr = 1;
-
- return phy_addr;
-}
-
-/**
- * igc_write_phy_reg_bm - Write BM PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Acquires semaphore, if necessary, then writes the data to PHY register
- * at the offset. Release any acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_bm(struct igc_hw *hw, u32 offset, u16 data)
-{
- s32 ret_val;
- u32 page = offset >> IGP_PAGE_SHIFT;
-
- DEBUGFUNC("igc_write_phy_reg_bm");
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- /* Page 800 works differently than the rest so it has its own func */
- if (page == BM_WUC_PAGE) {
- ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, &data,
- false, false);
- goto release;
- }
-
- hw->phy.addr = igc_get_phy_addr_for_bm_page(page, offset);
-
- if (offset > MAX_PHY_MULTI_PAGE_REG) {
- u32 page_shift, page_select;
-
- /* Page select is register 31 for phy address 1 and 22 for
- * phy address 2 and 3. Page select is shifted only for
- * phy address 1.
- */
- if (hw->phy.addr == 1) {
- page_shift = IGP_PAGE_SHIFT;
- page_select = IGP01IGC_PHY_PAGE_SELECT;
- } else {
- page_shift = 0;
- page_select = BM_PHY_PAGE_SELECT;
- }
-
- /* Page is shifted left, PHY expects (page x 32) */
- ret_val = igc_write_phy_reg_mdic(hw, page_select,
- (page << page_shift));
- if (ret_val)
- goto release;
- }
-
- ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
- data);
-
-release:
- hw->phy.ops.release(hw);
- return ret_val;
-}
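The offset encoding used by these BM accessors packs the page number above
the register number; a sketch of the decomposition as used in the function
above:

    u32 page = offset >> IGP_PAGE_SHIFT;    /* upper bits select the page */
    u32 reg = MAX_PHY_REG_ADDRESS & offset; /* lower bits select the register */

    /* the page-select write shifts the page left because the PHY
     * expects (page x 32)
     */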
-
-/**
- * igc_read_phy_reg_bm - Read BM PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- *
- * Acquires semaphore, if necessary, then reads the PHY register at offset
- * and storing the retrieved information in data. Release any acquired
- * semaphores before exiting.
- **/
-s32 igc_read_phy_reg_bm(struct igc_hw *hw, u32 offset, u16 *data)
-{
- s32 ret_val;
- u32 page = offset >> IGP_PAGE_SHIFT;
-
- DEBUGFUNC("igc_read_phy_reg_bm");
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- /* Page 800 works differently than the rest so it has its own func */
- if (page == BM_WUC_PAGE) {
- ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, data,
- true, false);
- goto release;
- }
-
- hw->phy.addr = igc_get_phy_addr_for_bm_page(page, offset);
-
- if (offset > MAX_PHY_MULTI_PAGE_REG) {
- u32 page_shift, page_select;
-
- /* Page select is register 31 for phy address 1 and 22 for
- * phy address 2 and 3. Page select is shifted only for
- * phy address 1.
- */
- if (hw->phy.addr == 1) {
- page_shift = IGP_PAGE_SHIFT;
- page_select = IGP01IGC_PHY_PAGE_SELECT;
- } else {
- page_shift = 0;
- page_select = BM_PHY_PAGE_SELECT;
- }
-
- /* Page is shifted left, PHY expects (page x 32) */
- ret_val = igc_write_phy_reg_mdic(hw, page_select,
- (page << page_shift));
- if (ret_val)
- goto release;
- }
-
- ret_val = igc_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
- data);
-release:
- hw->phy.ops.release(hw);
- return ret_val;
-}
-
-/**
- * igc_read_phy_reg_bm2 - Read BM PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- *
- * Acquires semaphore, if necessary, then reads the PHY register at offset
- * and storing the retrieved information in data. Release any acquired
- * semaphores before exiting.
- **/
-s32 igc_read_phy_reg_bm2(struct igc_hw *hw, u32 offset, u16 *data)
-{
- s32 ret_val;
- u16 page = (u16)(offset >> IGP_PAGE_SHIFT);
-
- DEBUGFUNC("igc_read_phy_reg_bm2");
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- /* Page 800 works differently than the rest so it has its own func */
- if (page == BM_WUC_PAGE) {
- ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, data,
- true, false);
- goto release;
- }
-
- hw->phy.addr = 1;
-
- if (offset > MAX_PHY_MULTI_PAGE_REG) {
- /* Page is shifted left, PHY expects (page x 32) */
- ret_val = igc_write_phy_reg_mdic(hw, BM_PHY_PAGE_SELECT,
- page);
-
- if (ret_val)
- goto release;
- }
-
- ret_val = igc_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
- data);
-release:
- hw->phy.ops.release(hw);
- return ret_val;
-}
-
-/**
- * igc_write_phy_reg_bm2 - Write BM PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Acquires semaphore, if necessary, then writes the data to PHY register
- * at the offset. Release any acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_bm2(struct igc_hw *hw, u32 offset, u16 data)
-{
- s32 ret_val;
- u16 page = (u16)(offset >> IGP_PAGE_SHIFT);
-
- DEBUGFUNC("igc_write_phy_reg_bm2");
-
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- /* Page 800 works differently than the rest so it has its own func */
- if (page == BM_WUC_PAGE) {
- ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, &data,
- false, false);
- goto release;
- }
-
- hw->phy.addr = 1;
-
- if (offset > MAX_PHY_MULTI_PAGE_REG) {
- /* Page is shifted left, PHY expects (page x 32) */
- ret_val = igc_write_phy_reg_mdic(hw, BM_PHY_PAGE_SELECT,
- page);
-
- if (ret_val)
- goto release;
- }
-
- ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
- data);
-
-release:
- hw->phy.ops.release(hw);
- return ret_val;
-}
-
-/**
- * igc_enable_phy_wakeup_reg_access_bm - enable access to BM wakeup registers
- * @hw: pointer to the HW structure
- * @phy_reg: pointer to store original contents of BM_WUC_ENABLE_REG
- *
- * Assumes semaphore already acquired and phy_reg points to a valid memory
- * address to store contents of the BM_WUC_ENABLE_REG register.
- **/
-s32 igc_enable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg)
-{
- s32 ret_val;
- u16 temp;
-
- DEBUGFUNC("igc_enable_phy_wakeup_reg_access_bm");
-
- if (!phy_reg)
- return -IGC_ERR_PARAM;
-
- /* All page select, port ctrl and wakeup registers use phy address 1 */
- hw->phy.addr = 1;
-
- /* Select Port Control Registers page */
- ret_val = igc_set_page_igp(hw, (BM_PORT_CTRL_PAGE << IGP_PAGE_SHIFT));
- if (ret_val) {
- DEBUGOUT("Could not set Port Control page\n");
- return ret_val;
- }
-
- ret_val = igc_read_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, phy_reg);
- if (ret_val) {
- DEBUGOUT2("Could not read PHY register %d.%d\n",
- BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
- return ret_val;
- }
-
- /* Enable both PHY wakeup mode and Wakeup register page writes.
- * Prevent a power state change by disabling ME and Host PHY wakeup.
- */
- temp = *phy_reg;
- temp |= BM_WUC_ENABLE_BIT;
- temp &= ~(BM_WUC_ME_WU_BIT | BM_WUC_HOST_WU_BIT);
-
- ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, temp);
- if (ret_val) {
- DEBUGOUT2("Could not write PHY register %d.%d\n",
- BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
- return ret_val;
- }
-
- /* Select Host Wakeup Registers page - caller now able to write
- * registers on the Wakeup registers page
- */
- return igc_set_page_igp(hw, (BM_WUC_PAGE << IGP_PAGE_SHIFT));
-}
-
-/**
- * igc_disable_phy_wakeup_reg_access_bm - disable access to BM wakeup regs
- * @hw: pointer to the HW structure
- * @phy_reg: pointer to original contents of BM_WUC_ENABLE_REG
- *
- * Restore BM_WUC_ENABLE_REG to its original value.
- *
- * Assumes semaphore already acquired and *phy_reg is the contents of the
- * BM_WUC_ENABLE_REG before register(s) on BM_WUC_PAGE were accessed by
- * caller.
- **/
-s32 igc_disable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg)
-{
- s32 ret_val;
-
- DEBUGFUNC("igc_disable_phy_wakeup_reg_access_bm");
-
- if (!phy_reg)
- return -IGC_ERR_PARAM;
-
- /* Select Port Control Registers page */
- ret_val = igc_set_page_igp(hw, (BM_PORT_CTRL_PAGE << IGP_PAGE_SHIFT));
- if (ret_val) {
- DEBUGOUT("Could not set Port Control page\n");
- return ret_val;
- }
-
- /* Restore 769.17 to its original value */
- ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, *phy_reg);
- if (ret_val)
- DEBUGOUT2("Could not restore PHY register %d.%d\n",
- BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
-
- return ret_val;
-}
-
-/**
- * igc_access_phy_wakeup_reg_bm - Read/write BM PHY wakeup register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read or written
- * @data: pointer to the data to read or write
- * @read: determines if operation is read or write
- * @page_set: BM_WUC_PAGE already set and access enabled
- *
- * Read the PHY register at offset and store the retrieved information in
- * data, or write data to PHY register at offset. Note the procedure to
- * access the PHY wakeup registers is different than reading the other PHY
- * registers. It works as such:
- * 1) Set 769.17.2 (page 769, register 17, bit 2) = 1
- * 2) Set page to 800 for host (801 if we were manageability)
- * 3) Write the address using the address opcode (0x11)
- * 4) Read or write the data using the data opcode (0x12)
- * 5) Restore 769.17.2 to its original value
- *
- * Steps 1 and 2 are done by igc_enable_phy_wakeup_reg_access_bm() and
- * step 5 is done by igc_disable_phy_wakeup_reg_access_bm().
- *
- * Assumes semaphore is already acquired. When page_set==true, assumes
- * the PHY page is set to BM_WUC_PAGE (i.e. a function in the call stack
- * is responsible for calls to igc_[enable|disable]_phy_wakeup_reg_bm()).
- **/
-static s32 igc_access_phy_wakeup_reg_bm(struct igc_hw *hw, u32 offset,
- u16 *data, bool read, bool page_set)
-{
- s32 ret_val;
- u16 reg = BM_PHY_REG_NUM(offset);
- u16 page = BM_PHY_REG_PAGE(offset);
- u16 phy_reg = 0;
-
- DEBUGFUNC("igc_access_phy_wakeup_reg_bm");
-
- /* Gig must be disabled for MDIO accesses to Host Wakeup reg page */
- if (hw->mac.type == igc_pchlan &&
- !(IGC_READ_REG(hw, IGC_PHY_CTRL) & IGC_PHY_CTRL_GBE_DISABLE))
- DEBUGOUT1("Attempting to access page %d while gig enabled.\n",
- page);
-
- if (!page_set) {
- /* Enable access to PHY wakeup registers */
- ret_val = igc_enable_phy_wakeup_reg_access_bm(hw, &phy_reg);
- if (ret_val) {
- DEBUGOUT("Could not enable PHY wakeup reg access\n");
- return ret_val;
- }
- }
-
- DEBUGOUT2("Accessing PHY page %d reg 0x%x\n", page, reg);
-
- /* Write the Wakeup register page offset value using opcode 0x11 */
- ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ADDRESS_OPCODE, reg);
- if (ret_val) {
- DEBUGOUT1("Could not write address opcode to page %d\n", page);
- return ret_val;
- }
-
- if (read) {
- /* Read the Wakeup register page value using opcode 0x12 */
- ret_val = igc_read_phy_reg_mdic(hw, BM_WUC_DATA_OPCODE,
- data);
- } else {
- /* Write the Wakeup register page value using opcode 0x12 */
- ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_DATA_OPCODE,
- *data);
- }
-
- if (ret_val) {
- DEBUGOUT2("Could not access PHY reg %d.%d\n", page, reg);
- return ret_val;
- }
-
- if (!page_set)
- ret_val = igc_disable_phy_wakeup_reg_access_bm(hw, &phy_reg);
-
- return ret_val;
-}
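When page_set is true the caller owns steps 1, 2 and 5 of the procedure
above; a minimal sketch of that bracketing, using the enable/disable helpers
defined earlier (error handling elided):

    u16 wuc_enable;

    ret_val = igc_enable_phy_wakeup_reg_access_bm(hw, &wuc_enable);
    /* one or more accesses on BM_WUC_PAGE with page_set == true */
    ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, &data, true, true);
    ret_val = igc_disable_phy_wakeup_reg_access_bm(hw, &wuc_enable);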
-
-/**
- * igc_power_up_phy_copper - Restore copper link in case of PHY power down
- * @hw: pointer to the HW structure
- *
- * In the case of a PHY power down to save power, or to turn off link during a
- * driver unload, or wake on lan is not enabled, restore the link to previous
- * settings.
- **/
-void igc_power_up_phy_copper(struct igc_hw *hw)
-{
- u16 mii_reg = 0;
-
- /* The PHY will retain its settings across a power down/up cycle */
- hw->phy.ops.read_reg(hw, PHY_CONTROL, &mii_reg);
- mii_reg &= ~MII_CR_POWER_DOWN;
- hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);
-}
-
-/**
- * igc_power_down_phy_copper - Power down copper PHY
- * @hw: pointer to the HW structure
- *
- * Powers down the PHY to save power, to turn off link during a driver
- * unload, or when wake on lan is not enabled.
- **/
-void igc_power_down_phy_copper(struct igc_hw *hw)
-{
- u16 mii_reg = 0;
-
- /* The PHY will retain its settings across a power down/up cycle */
- hw->phy.ops.read_reg(hw, PHY_CONTROL, &mii_reg);
- mii_reg |= MII_CR_POWER_DOWN;
- hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);
- msec_delay(1);
-}
-
-/**
- * __igc_read_phy_reg_hv - Read HV PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- * @locked: semaphore has already been acquired or not
- * @page_set: BM_WUC_PAGE already set and access enabled
- *
- * Acquires semaphore, if necessary, then reads the PHY register at offset
- * and stores the retrieved information in data. Release any acquired
- * semaphore before exiting.
- **/
-static s32 __igc_read_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 *data,
- bool locked, bool page_set)
-{
- s32 ret_val;
- u16 page = BM_PHY_REG_PAGE(offset);
- u16 reg = BM_PHY_REG_NUM(offset);
- u32 phy_addr = hw->phy.addr = igc_get_phy_addr_for_hv_page(page);
-
- DEBUGFUNC("__igc_read_phy_reg_hv");
-
- if (!locked) {
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
- }
- /* Page 800 works differently than the rest so it has its own func */
- if (page == BM_WUC_PAGE) {
- ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, data,
- true, page_set);
- goto out;
- }
-
- if (page > 0 && page < HV_INTC_FC_PAGE_START) {
- ret_val = igc_access_phy_debug_regs_hv(hw, offset,
- data, true);
- goto out;
- }
-
- if (!page_set) {
- if (page == HV_INTC_FC_PAGE_START)
- page = 0;
-
- if (reg > MAX_PHY_MULTI_PAGE_REG) {
- /* Page is shifted left, PHY expects (page x 32) */
- ret_val = igc_set_page_igp(hw,
- (page << IGP_PAGE_SHIFT));
-
- hw->phy.addr = phy_addr;
-
- if (ret_val)
- goto out;
- }
- }
-
- DEBUGOUT3("reading PHY page %d (or 0x%x shifted) reg 0x%x\n", page,
- page << IGP_PAGE_SHIFT, reg);
-
- ret_val = igc_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & reg,
- data);
-out:
- if (!locked)
- hw->phy.ops.release(hw);
-
- return ret_val;
-}
-
-/**
- * igc_read_phy_reg_hv - Read HV PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- *
- * Acquires semaphore then reads the PHY register at offset and stores
- * the retrieved information in data. Release the acquired semaphore
- * before exiting.
- **/
-s32 igc_read_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 *data)
-{
- return __igc_read_phy_reg_hv(hw, offset, data, false, false);
-}
-
-/**
- * igc_read_phy_reg_hv_locked - Read HV PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to be read
- * @data: pointer to the read data
- *
- * Reads the PHY register at offset and stores the retrieved information
- * in data. Assumes semaphore already acquired.
- **/
-s32 igc_read_phy_reg_hv_locked(struct igc_hw *hw, u32 offset, u16 *data)
-{
- return __igc_read_phy_reg_hv(hw, offset, data, true, false);
-}
-
-/**
- * igc_read_phy_reg_page_hv - Read HV PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Reads the PHY register at offset and stores the retrieved information
- * in data. Assumes semaphore already acquired and page already set.
- **/
-s32 igc_read_phy_reg_page_hv(struct igc_hw *hw, u32 offset, u16 *data)
-{
- return __igc_read_phy_reg_hv(hw, offset, data, true, true);
-}
-
-/**
- * __igc_write_phy_reg_hv - Write HV PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- * @locked: semaphore has already been acquired or not
- * @page_set: BM_WUC_PAGE already set and access enabled
- *
- * Acquires semaphore, if necessary, then writes the data to PHY register
- * at the offset. Release any acquired semaphores before exiting.
- **/
-static s32 __igc_write_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 data,
- bool locked, bool page_set)
-{
- s32 ret_val;
- u16 page = BM_PHY_REG_PAGE(offset);
- u16 reg = BM_PHY_REG_NUM(offset);
- u32 phy_addr = hw->phy.addr = igc_get_phy_addr_for_hv_page(page);
-
- DEBUGFUNC("__igc_write_phy_reg_hv");
-
- if (!locked) {
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
- }
- /* Page 800 works differently than the rest so it has its own func */
- if (page == BM_WUC_PAGE) {
- ret_val = igc_access_phy_wakeup_reg_bm(hw, offset, &data,
- false, page_set);
- goto out;
- }
-
- if (page > 0 && page < HV_INTC_FC_PAGE_START) {
- ret_val = igc_access_phy_debug_regs_hv(hw, offset,
- &data, false);
- goto out;
- }
-
- if (!page_set) {
- if (page == HV_INTC_FC_PAGE_START)
- page = 0;
-
- /*
- * Workaround MDIO accesses being disabled after entering IEEE
- * Power Down (when bit 11 of the PHY Control register is set)
- */
- if (hw->phy.type == igc_phy_82578 &&
- hw->phy.revision >= 1 &&
- hw->phy.addr == 2 &&
- !(MAX_PHY_REG_ADDRESS & reg) &&
- (data & (1 << 11))) {
- u16 data2 = 0x7EFF;
- ret_val = igc_access_phy_debug_regs_hv(hw,
- (1 << 6) | 0x3,
- &data2, false);
- if (ret_val)
- goto out;
- }
-
- if (reg > MAX_PHY_MULTI_PAGE_REG) {
- /* Page is shifted left, PHY expects (page x 32) */
- ret_val = igc_set_page_igp(hw,
- (page << IGP_PAGE_SHIFT));
-
- hw->phy.addr = phy_addr;
-
- if (ret_val)
- goto out;
- }
- }
-
- DEBUGOUT3("writing PHY page %d (or 0x%x shifted) reg 0x%x\n", page,
- page << IGP_PAGE_SHIFT, reg);
-
- ret_val = igc_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & reg,
- data);
-
-out:
- if (!locked)
- hw->phy.ops.release(hw);
-
- return ret_val;
-}
-
-/**
- * igc_write_phy_reg_hv - Write HV PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Acquires semaphore then writes the data to PHY register at the offset.
- * Release the acquired semaphores before exiting.
- **/
-s32 igc_write_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 data)
-{
- return __igc_write_phy_reg_hv(hw, offset, data, false, false);
-}
-
-/**
- * igc_write_phy_reg_hv_locked - Write HV PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Writes the data to PHY register at the offset. Assumes semaphore
- * already acquired.
- **/
-s32 igc_write_phy_reg_hv_locked(struct igc_hw *hw, u32 offset, u16 data)
-{
- return __igc_write_phy_reg_hv(hw, offset, data, true, false);
-}
-
-/**
- * igc_write_phy_reg_page_hv - Write HV PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
- *
- * Writes the data to PHY register at the offset. Assumes semaphore
- * already acquired and page already set.
- **/
-s32 igc_write_phy_reg_page_hv(struct igc_hw *hw, u32 offset, u16 data)
-{
- return __igc_write_phy_reg_hv(hw, offset, data, true, true);
-}
-
-/**
- * igc_get_phy_addr_for_hv_page - Get PHY address based on page
- * @page: page to be accessed
- **/
-static u32 igc_get_phy_addr_for_hv_page(u32 page)
-{
- u32 phy_addr = 2;
-
- if (page >= HV_INTC_FC_PAGE_START)
- phy_addr = 1;
-
- return phy_addr;
-}
-
-/**
- * igc_access_phy_debug_regs_hv - Read HV PHY vendor specific high registers
- * @hw: pointer to the HW structure
- * @offset: register offset to be read or written
- * @data: pointer to the data to be read or written
- * @read: determines if operation is read or write
- *
- * Reads the PHY register at offset and stores the retrieved information
- * in data. Assumes semaphore already acquired. Note that the procedure
- * to access these regs uses the address port and data port to read/write.
- * These accesses are done with PHY address 2 and without using pages.
- **/
-static s32 igc_access_phy_debug_regs_hv(struct igc_hw *hw, u32 offset,
- u16 *data, bool read)
-{
- s32 ret_val;
- u32 addr_reg;
- u32 data_reg;
-
- DEBUGFUNC("igc_access_phy_debug_regs_hv");
-
- /* This takes care of the difference with desktop vs mobile phy */
- addr_reg = ((hw->phy.type == igc_phy_82578) ?
- I82578_ADDR_REG : I82577_ADDR_REG);
- data_reg = addr_reg + 1;
-
- /* All operations in this function are phy address 2 */
- hw->phy.addr = 2;
-
- /* masking with 0x3F to remove the page from offset */
- ret_val = igc_write_phy_reg_mdic(hw, addr_reg, (u16)offset & 0x3F);
- if (ret_val) {
- DEBUGOUT("Could not write the Address Offset port register\n");
- return ret_val;
- }
-
- /* Read or write the data value next */
- if (read)
- ret_val = igc_read_phy_reg_mdic(hw, data_reg, data);
- else
- ret_val = igc_write_phy_reg_mdic(hw, data_reg, *data);
-
- if (ret_val)
- DEBUGOUT("Could not access the Data port register\n");
-
- return ret_val;
-}
-
-/**
- * igc_link_stall_workaround_hv - Si workaround
- * @hw: pointer to the HW structure
- *
- * This function works around a Si bug where the link partner can get
- * a link up indication before the PHY does. If small packets are sent
- * by the link partner they can be placed in the packet buffer without
- * being properly accounted for by the PHY and will stall preventing
- * further packets from being received. The workaround is to clear the
- * packet buffer after the PHY detects link up.
- **/
-s32 igc_link_stall_workaround_hv(struct igc_hw *hw)
-{
- s32 ret_val = IGC_SUCCESS;
- u16 data;
-
- DEBUGFUNC("igc_link_stall_workaround_hv");
-
- if (hw->phy.type != igc_phy_82578)
- return IGC_SUCCESS;
-
- /* Do not apply workaround if in PHY loopback bit 14 set */
- hw->phy.ops.read_reg(hw, PHY_CONTROL, &data);
- if (data & PHY_CONTROL_LB)
- return IGC_SUCCESS;
-
- /* check if link is up and at 1Gbps */
- ret_val = hw->phy.ops.read_reg(hw, BM_CS_STATUS, &data);
- if (ret_val)
- return ret_val;
-
- data &= (BM_CS_STATUS_LINK_UP | BM_CS_STATUS_RESOLVED |
- BM_CS_STATUS_SPEED_MASK);
-
- if (data != (BM_CS_STATUS_LINK_UP | BM_CS_STATUS_RESOLVED |
- BM_CS_STATUS_SPEED_1000))
- return IGC_SUCCESS;
-
- msec_delay(200);
-
- /* flush the packets in the fifo buffer */
- ret_val = hw->phy.ops.write_reg(hw, HV_MUX_DATA_CTRL,
- (HV_MUX_DATA_CTRL_GEN_TO_MAC |
- HV_MUX_DATA_CTRL_FORCE_SPEED));
- if (ret_val)
- return ret_val;
-
- return hw->phy.ops.write_reg(hw, HV_MUX_DATA_CTRL,
- HV_MUX_DATA_CTRL_GEN_TO_MAC);
+ break;
+ case I82580_I_PHY_ID:
+ phy_type = igc_phy_82580;
+ break;
+ case I210_I_PHY_ID:
+ phy_type = igc_phy_i210;
+ break;
+ case I225_I_PHY_ID:
+ phy_type = igc_phy_i225;
+ break;
+ default:
+ phy_type = igc_phy_unknown;
+ break;
+ }
+ return phy_type;
}
/**
- * igc_check_polarity_82577 - Checks the polarity.
+ * igc_enable_phy_wakeup_reg_access_bm - enable access to BM wakeup registers
* @hw: pointer to the HW structure
+ * @phy_reg: pointer to store original contents of BM_WUC_ENABLE_REG
*
- * Success returns 0, Failure returns -IGC_ERR_PHY (-2)
- *
- * Polarity is determined based on the PHY specific status register.
+ * Assumes semaphore already acquired and phy_reg points to a valid memory
+ * address to store contents of the BM_WUC_ENABLE_REG register.
**/
-s32 igc_check_polarity_82577(struct igc_hw *hw)
+s32 igc_enable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg)
{
- struct igc_phy_info *phy = &hw->phy;
s32 ret_val;
- u16 data;
-
- DEBUGFUNC("igc_check_polarity_82577");
-
- ret_val = phy->ops.read_reg(hw, I82577_PHY_STATUS_2, &data);
-
- if (!ret_val)
- phy->cable_polarity = ((data & I82577_PHY_STATUS2_REV_POLARITY)
- ? igc_rev_polarity_reversed
- : igc_rev_polarity_normal);
+ u16 temp;
- return ret_val;
-}
+ DEBUGFUNC("igc_enable_phy_wakeup_reg_access_bm");
-/**
- * igc_phy_force_speed_duplex_82577 - Force speed/duplex for I82577 PHY
- * @hw: pointer to the HW structure
- *
- * Calls the PHY setup function to force speed and duplex.
- **/
-s32 igc_phy_force_speed_duplex_82577(struct igc_hw *hw)
-{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 phy_data;
- bool link = false;
+ if (!phy_reg)
+ return -IGC_ERR_PARAM;
- DEBUGFUNC("igc_phy_force_speed_duplex_82577");
+ /* All page select, port ctrl and wakeup registers use phy address 1 */
+ hw->phy.addr = 1;
- ret_val = phy->ops.read_reg(hw, PHY_CONTROL, &phy_data);
- if (ret_val)
+ /* Select Port Control Registers page */
+ ret_val = igc_set_page_igp(hw, (BM_PORT_CTRL_PAGE << IGP_PAGE_SHIFT));
+ if (ret_val) {
+ DEBUGOUT("Could not set Port Control page\n");
return ret_val;
+ }
- igc_phy_force_speed_duplex_setup(hw, &phy_data);
-
- ret_val = phy->ops.write_reg(hw, PHY_CONTROL, phy_data);
- if (ret_val)
+ ret_val = igc_read_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, phy_reg);
+ if (ret_val) {
+ DEBUGOUT2("Could not read PHY register %d.%d\n",
+ BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
return ret_val;
+ }
- usec_delay(1);
-
- if (phy->autoneg_wait_to_complete) {
- DEBUGOUT("Waiting for forced speed/duplex link on 82577 phy\n");
-
- ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
- 100000, &link);
- if (ret_val)
- return ret_val;
-
- if (!link)
- DEBUGOUT("Link taking longer than expected.\n");
+ /* Enable both PHY wakeup mode and Wakeup register page writes.
+ * Prevent a power state change by disabling ME and Host PHY wakeup.
+ */
+ temp = *phy_reg;
+ temp |= BM_WUC_ENABLE_BIT;
+ temp &= ~(BM_WUC_ME_WU_BIT | BM_WUC_HOST_WU_BIT);
- /* Try once more */
- ret_val = igc_phy_has_link_generic(hw, PHY_FORCE_LIMIT,
- 100000, &link);
+ ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, temp);
+ if (ret_val) {
+ DEBUGOUT2("Could not write PHY register %d.%d\n",
+ BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
+ return ret_val;
}
- return ret_val;
+ /* Select Host Wakeup Registers page - caller now able to write
+ * registers on the Wakeup registers page
+ */
+ return igc_set_page_igp(hw, (BM_WUC_PAGE << IGP_PAGE_SHIFT));
}
/**
- * igc_get_phy_info_82577 - Retrieve I82577 PHY information
+ * igc_disable_phy_wakeup_reg_access_bm - disable access to BM wakeup regs
* @hw: pointer to the HW structure
+ * @phy_reg: pointer to original contents of BM_WUC_ENABLE_REG
+ *
+ * Restore BM_WUC_ENABLE_REG to its original value.
*
- * Read PHY status to determine if link is up. If link is up, then
- * set/determine 10base-T extended distance and polarity correction. Read
- * PHY port status to determine MDI/MDIx and speed. Based on the speed,
- * determine the cable length, local and remote receiver.
+ * Assumes semaphore already acquired and *phy_reg is the contents of the
+ * BM_WUC_ENABLE_REG before register(s) on BM_WUC_PAGE were accessed by
+ * caller.
**/
-s32 igc_get_phy_info_82577(struct igc_hw *hw)
+s32 igc_disable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg)
{
- struct igc_phy_info *phy = &hw->phy;
s32 ret_val;
- u16 data;
- bool link;
-
- DEBUGFUNC("igc_get_phy_info_82577");
-
- ret_val = igc_phy_has_link_generic(hw, 1, 0, &link);
- if (ret_val)
- return ret_val;
- if (!link) {
- DEBUGOUT("Phy info is only valid if link is up\n");
- return -IGC_ERR_CONFIG;
- }
+ DEBUGFUNC("igc_disable_phy_wakeup_reg_access_bm");
- phy->polarity_correction = true;
+ if (!phy_reg)
+ return -IGC_ERR_PARAM;
- ret_val = igc_check_polarity_82577(hw);
- if (ret_val)
+ /* Select Port Control Registers page */
+ ret_val = igc_set_page_igp(hw, (BM_PORT_CTRL_PAGE << IGP_PAGE_SHIFT));
+ if (ret_val) {
+ DEBUGOUT("Could not set Port Control page\n");
return ret_val;
+ }
- ret_val = phy->ops.read_reg(hw, I82577_PHY_STATUS_2, &data);
+ /* Restore 769.17 to its original value */
+ ret_val = igc_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, *phy_reg);
if (ret_val)
- return ret_val;
-
- phy->is_mdix = !!(data & I82577_PHY_STATUS2_MDIX);
-
- if ((data & I82577_PHY_STATUS2_SPEED_MASK) ==
- I82577_PHY_STATUS2_SPEED_1000MBPS) {
- ret_val = hw->phy.ops.get_cable_length(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = phy->ops.read_reg(hw, PHY_1000T_STATUS, &data);
- if (ret_val)
- return ret_val;
-
- phy->local_rx = (data & SR_1000T_LOCAL_RX_STATUS)
- ? igc_1000t_rx_status_ok
- : igc_1000t_rx_status_not_ok;
-
- phy->remote_rx = (data & SR_1000T_REMOTE_RX_STATUS)
- ? igc_1000t_rx_status_ok
- : igc_1000t_rx_status_not_ok;
- } else {
- phy->cable_length = IGC_CABLE_LENGTH_UNDEFINED;
- phy->local_rx = igc_1000t_rx_status_undefined;
- phy->remote_rx = igc_1000t_rx_status_undefined;
- }
+ DEBUGOUT2("Could not restore PHY register %d.%d\n",
+ BM_PORT_CTRL_PAGE, BM_WUC_ENABLE_REG);
- return IGC_SUCCESS;
+ return ret_val;
}
/**
- * igc_get_cable_length_82577 - Determine cable length for 82577 PHY
- * @hw: pointer to the HW structure
+ * igc_power_up_phy_copper - Restore copper link in case of PHY power down
+ * @hw: pointer to the HW structure
*
- * Reads the diagnostic status register and verifies result is valid before
- * placing it in the phy_cable_length field.
+ * In the case of a PHY power down to save power, to turn off the link during
+ * a driver unload, or when wake on LAN is not enabled, restore the link to
+ * its previous settings.
**/
-s32 igc_get_cable_length_82577(struct igc_hw *hw)
+void igc_power_up_phy_copper(struct igc_hw *hw)
{
- struct igc_phy_info *phy = &hw->phy;
- s32 ret_val;
- u16 phy_data, length;
-
- DEBUGFUNC("igc_get_cable_length_82577");
-
- ret_val = phy->ops.read_reg(hw, I82577_PHY_DIAG_STATUS, &phy_data);
- if (ret_val)
- return ret_val;
-
- length = ((phy_data & I82577_DSTATUS_CABLE_LENGTH) >>
- I82577_DSTATUS_CABLE_LENGTH_SHIFT);
-
- if (length == IGC_CABLE_LENGTH_UNDEFINED)
- return -IGC_ERR_PHY;
-
- phy->cable_length = length;
+ u16 mii_reg = 0;
- return IGC_SUCCESS;
+ /* The PHY will retain its settings across a power down/up cycle */
+ hw->phy.ops.read_reg(hw, PHY_CONTROL, &mii_reg);
+ mii_reg &= ~MII_CR_POWER_DOWN;
+ hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);
}
/**
- * igc_write_phy_reg_gs40g - Write GS40G PHY register
- * @hw: pointer to the HW structure
- * @offset: register offset to write to
- * @data: data to write at register offset
+ * igc_power_down_phy_copper - Power down copper PHY
+ * @hw: pointer to the HW structure
*
- * Acquires semaphore, if necessary, then writes the data to PHY register
- * at the offset. Release any acquired semaphores before exiting.
+ * Power down the PHY to save power, to turn off the link during a driver
+ * unload, or when wake on LAN is not enabled. The PHY retains its settings
+ * across the power down/up cycle.
**/
-s32 igc_write_phy_reg_gs40g(struct igc_hw *hw, u32 offset, u16 data)
+void igc_power_down_phy_copper(struct igc_hw *hw)
{
- s32 ret_val;
- u16 page = offset >> GS40G_PAGE_SHIFT;
-
- DEBUGFUNC("igc_write_phy_reg_gs40g");
-
- offset = offset & GS40G_OFFSET_MASK;
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
-
- ret_val = igc_write_phy_reg_mdic(hw, GS40G_PAGE_SELECT, page);
- if (ret_val)
- goto release;
- ret_val = igc_write_phy_reg_mdic(hw, offset, data);
+ u16 mii_reg = 0;
-release:
- hw->phy.ops.release(hw);
- return ret_val;
+ /* The PHY will retain its settings across a power down/up cycle */
+ hw->phy.ops.read_reg(hw, PHY_CONTROL, &mii_reg);
+ mii_reg |= MII_CR_POWER_DOWN;
+ hw->phy.ops.write_reg(hw, PHY_CONTROL, mii_reg);
+ msec_delay(1);
}
/**
- * igc_read_phy_reg_gs40g - Read GS40G PHY register
+ * igc_check_polarity_82577 - Checks the polarity.
* @hw: pointer to the HW structure
- * @offset: lower half is the register offset to read from,
- * upper half is the page to use.
- * @data: pointer to the read data
*
- * Acquires semaphore, if necessary, then reads the data in the PHY register
- * at the offset. Release any acquired semaphores before exiting.
+ * Success returns 0, Failure returns -IGC_ERR_PHY (-2)
+ *
+ * Polarity is determined based on the PHY specific status register.
**/
-s32 igc_read_phy_reg_gs40g(struct igc_hw *hw, u32 offset, u16 *data)
+s32 igc_check_polarity_82577(struct igc_hw *hw)
{
+ struct igc_phy_info *phy = &hw->phy;
s32 ret_val;
- u16 page = offset >> GS40G_PAGE_SHIFT;
+ u16 data;
- DEBUGFUNC("igc_read_phy_reg_gs40g");
+ DEBUGFUNC("igc_check_polarity_82577");
- offset = offset & GS40G_OFFSET_MASK;
- ret_val = hw->phy.ops.acquire(hw);
- if (ret_val)
- return ret_val;
+ ret_val = phy->ops.read_reg(hw, I82577_PHY_STATUS_2, &data);
- ret_val = igc_write_phy_reg_mdic(hw, GS40G_PAGE_SELECT, page);
- if (ret_val)
- goto release;
- ret_val = igc_read_phy_reg_mdic(hw, offset, data);
+ if (!ret_val)
+ phy->cable_polarity = ((data & I82577_PHY_STATUS2_REV_POLARITY)
+ ? igc_rev_polarity_reversed
+ : igc_rev_polarity_normal);
-release:
- hw->phy.ops.release(hw);
return ret_val;
}
@@ -4194,132 +1420,6 @@ s32 igc_read_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 *data)
return ret_val;
}
-/**
- * igc_read_phy_reg_mphy - Read mPHY control register
- * @hw: pointer to the HW structure
- * @address: address to be read
- * @data: pointer to the read data
- *
- * Reads the mPHY control register in the PHY at offset and stores the
- * information read to data.
- **/
-s32 igc_read_phy_reg_mphy(struct igc_hw *hw, u32 address, u32 *data)
-{
- u32 mphy_ctrl = 0;
- bool locked = false;
- bool ready;
-
- DEBUGFUNC("igc_read_phy_reg_mphy");
-
- /* Check if mPHY is ready to read/write operations */
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
-
- /* Check if mPHY access is disabled and enable it if so */
- mphy_ctrl = IGC_READ_REG(hw, IGC_MPHY_ADDR_CTRL);
- if (mphy_ctrl & IGC_MPHY_DIS_ACCESS) {
- locked = true;
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
- mphy_ctrl |= IGC_MPHY_ENA_ACCESS;
- IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL, mphy_ctrl);
- }
-
- /* Set the address that we want to read */
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
-
- /* We mask address, because we want to use only current lane */
- mphy_ctrl = (mphy_ctrl & ~IGC_MPHY_ADDRESS_MASK &
- ~IGC_MPHY_ADDRESS_FNC_OVERRIDE) |
- (address & IGC_MPHY_ADDRESS_MASK);
- IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL, mphy_ctrl);
-
- /* Read data from the address */
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
- *data = IGC_READ_REG(hw, IGC_MPHY_DATA);
-
- /* Disable access to mPHY if it was originally disabled */
- if (locked)
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
- IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL,
- IGC_MPHY_DIS_ACCESS);
-
- return IGC_SUCCESS;
-}
-
-/**
- * igc_write_phy_reg_mphy - Write mPHY control register
- * @hw: pointer to the HW structure
- * @address: address to write to
- * @data: data to write to register at offset
- * @line_override: used when we want to use different line than default one
- *
- * Writes data to mPHY control register.
- **/
-s32 igc_write_phy_reg_mphy(struct igc_hw *hw, u32 address, u32 data,
- bool line_override)
-{
- u32 mphy_ctrl = 0;
- bool locked = false;
- bool ready;
-
- DEBUGFUNC("igc_write_phy_reg_mphy");
-
- /* Check if mPHY is ready to read/write operations */
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
-
- /* Check if mPHY access is disabled and enable it if so */
- mphy_ctrl = IGC_READ_REG(hw, IGC_MPHY_ADDR_CTRL);
- if (mphy_ctrl & IGC_MPHY_DIS_ACCESS) {
- locked = true;
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
- mphy_ctrl |= IGC_MPHY_ENA_ACCESS;
- IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL, mphy_ctrl);
- }
-
- /* Set the address that we want to read */
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
-
- /* We mask address, because we want to use only current lane */
- if (line_override)
- mphy_ctrl |= IGC_MPHY_ADDRESS_FNC_OVERRIDE;
- else
- mphy_ctrl &= ~IGC_MPHY_ADDRESS_FNC_OVERRIDE;
- mphy_ctrl = (mphy_ctrl & ~IGC_MPHY_ADDRESS_MASK) |
- (address & IGC_MPHY_ADDRESS_MASK);
- IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL, mphy_ctrl);
-
- /* Read data from the address */
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
- IGC_WRITE_REG(hw, IGC_MPHY_DATA, data);
-
- /* Disable access to mPHY if it was originally disabled */
- if (locked)
- ready = igc_is_mphy_ready(hw);
- if (!ready)
- return -IGC_ERR_PHY;
- IGC_WRITE_REG(hw, IGC_MPHY_ADDR_CTRL,
- IGC_MPHY_DIS_ACCESS);
-
- return IGC_SUCCESS;
-}
-
/**
* igc_is_mphy_ready - Check if mPHY control register is not busy
* @hw: pointer to the HW structure
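
Side note for reviewers: the wakeup-register helpers kept above are meant to
bracket accesses to registers on BM_WUC_PAGE. A minimal caller sketch,
assuming the usual acquire/release PHY ops; WUC_EXAMPLE_REG is a hypothetical
register on the wakeup page, not a real define:

	static s32 igc_read_wuc_reg_example(struct igc_hw *hw, u16 *data)
	{
		u16 saved_wuc_enable;
		s32 ret_val;

		/* Serialize PHY accesses before touching page selects */
		ret_val = hw->phy.ops.acquire(hw);
		if (ret_val)
			return ret_val;

		/* Save BM_WUC_ENABLE_REG and select the wakeup page */
		ret_val = igc_enable_phy_wakeup_reg_access_bm(hw,
							&saved_wuc_enable);
		if (ret_val)
			goto release;

		/* Plain MDIC accesses now hit BM_WUC_PAGE registers */
		ret_val = igc_read_phy_reg_mdic(hw, WUC_EXAMPLE_REG, data);

		/* Restore the original BM_WUC_ENABLE_REG contents */
		igc_disable_phy_wakeup_reg_access_bm(hw, &saved_wuc_enable);

	release:
		hw->phy.ops.release(hw);
		return ret_val;
	}
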
diff --git a/drivers/net/igc/base/igc_phy.h b/drivers/net/igc/base/igc_phy.h
index fbc0e7cbc9..25f6f9e165 100644
--- a/drivers/net/igc/base/igc_phy.h
+++ b/drivers/net/igc/base/igc_phy.h
@@ -22,75 +22,26 @@ s32 igc_check_polarity_ife(struct igc_hw *hw);
s32 igc_check_reset_block_generic(struct igc_hw *hw);
s32 igc_phy_setup_autoneg(struct igc_hw *hw);
s32 igc_copper_link_autoneg(struct igc_hw *hw);
-s32 igc_copper_link_setup_igp(struct igc_hw *hw);
-s32 igc_copper_link_setup_m88(struct igc_hw *hw);
-s32 igc_copper_link_setup_m88_gen2(struct igc_hw *hw);
-s32 igc_phy_force_speed_duplex_igp(struct igc_hw *hw);
-s32 igc_phy_force_speed_duplex_m88(struct igc_hw *hw);
-s32 igc_phy_force_speed_duplex_ife(struct igc_hw *hw);
-s32 igc_get_cable_length_m88(struct igc_hw *hw);
-s32 igc_get_cable_length_m88_gen2(struct igc_hw *hw);
-s32 igc_get_cable_length_igp_2(struct igc_hw *hw);
-s32 igc_get_cfg_done_generic(struct igc_hw *hw);
s32 igc_get_phy_id(struct igc_hw *hw);
-s32 igc_get_phy_info_igp(struct igc_hw *hw);
-s32 igc_get_phy_info_m88(struct igc_hw *hw);
-s32 igc_get_phy_info_ife(struct igc_hw *hw);
s32 igc_phy_sw_reset_generic(struct igc_hw *hw);
void igc_phy_force_speed_duplex_setup(struct igc_hw *hw, u16 *phy_ctrl);
-s32 igc_phy_hw_reset_generic(struct igc_hw *hw);
s32 igc_phy_reset_dsp_generic(struct igc_hw *hw);
s32 igc_read_kmrn_reg_generic(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_read_kmrn_reg_locked(struct igc_hw *hw, u32 offset, u16 *data);
s32 igc_set_page_igp(struct igc_hw *hw, u16 page);
-s32 igc_read_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_read_phy_reg_igp_locked(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_read_phy_reg_m88(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_set_d3_lplu_state_generic(struct igc_hw *hw, bool active);
s32 igc_setup_copper_link_generic(struct igc_hw *hw);
s32 igc_write_kmrn_reg_generic(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_write_kmrn_reg_locked(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_write_phy_reg_igp(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_write_phy_reg_igp_locked(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_write_phy_reg_m88(struct igc_hw *hw, u32 offset, u16 data);
s32 igc_phy_has_link_generic(struct igc_hw *hw, u32 iterations,
u32 usec_interval, bool *success);
-s32 igc_phy_init_script_igp3(struct igc_hw *hw);
enum igc_phy_type igc_get_phy_type_from_id(u32 phy_id);
-s32 igc_determine_phy_address(struct igc_hw *hw);
-s32 igc_write_phy_reg_bm(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_read_phy_reg_bm(struct igc_hw *hw, u32 offset, u16 *data);
s32 igc_enable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg);
s32 igc_disable_phy_wakeup_reg_access_bm(struct igc_hw *hw, u16 *phy_reg);
-s32 igc_read_phy_reg_bm2(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_write_phy_reg_bm2(struct igc_hw *hw, u32 offset, u16 data);
void igc_power_up_phy_copper(struct igc_hw *hw);
void igc_power_down_phy_copper(struct igc_hw *hw);
s32 igc_read_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 *data);
s32 igc_write_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_read_phy_reg_i2c(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_write_phy_reg_i2c(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_read_sfp_data_byte(struct igc_hw *hw, u16 offset, u8 *data);
-s32 igc_write_sfp_data_byte(struct igc_hw *hw, u16 offset, u8 data);
-s32 igc_read_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_read_phy_reg_hv_locked(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_read_phy_reg_page_hv(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_write_phy_reg_hv(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_write_phy_reg_hv_locked(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_write_phy_reg_page_hv(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_link_stall_workaround_hv(struct igc_hw *hw);
-s32 igc_copper_link_setup_82577(struct igc_hw *hw);
s32 igc_check_polarity_82577(struct igc_hw *hw);
-s32 igc_get_phy_info_82577(struct igc_hw *hw);
-s32 igc_phy_force_speed_duplex_82577(struct igc_hw *hw);
-s32 igc_get_cable_length_82577(struct igc_hw *hw);
-s32 igc_write_phy_reg_gs40g(struct igc_hw *hw, u32 offset, u16 data);
-s32 igc_read_phy_reg_gs40g(struct igc_hw *hw, u32 offset, u16 *data);
s32 igc_write_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 data);
s32 igc_read_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 *data);
-s32 igc_read_phy_reg_mphy(struct igc_hw *hw, u32 address, u32 *data);
-s32 igc_write_phy_reg_mphy(struct igc_hw *hw, u32 address, u32 data,
- bool line_override);
bool igc_is_mphy_ready(struct igc_hw *hw);
s32 igc_read_xmdio_reg(struct igc_hw *hw, u16 addr, u8 dev_addr,
diff --git a/drivers/net/ionic/ionic.h b/drivers/net/ionic/ionic.h
index 1538df3092..3536de39e9 100644
--- a/drivers/net/ionic/ionic.h
+++ b/drivers/net/ionic/ionic.h
@@ -73,10 +73,8 @@ int ionic_setup(struct ionic_adapter *adapter);
int ionic_identify(struct ionic_adapter *adapter);
int ionic_init(struct ionic_adapter *adapter);
-int ionic_reset(struct ionic_adapter *adapter);
int ionic_port_identify(struct ionic_adapter *adapter);
int ionic_port_init(struct ionic_adapter *adapter);
-int ionic_port_reset(struct ionic_adapter *adapter);
#endif /* _IONIC_H_ */
diff --git a/drivers/net/ionic/ionic_dev.c b/drivers/net/ionic/ionic_dev.c
index 5c2820b7a1..3700769aab 100644
--- a/drivers/net/ionic/ionic_dev.c
+++ b/drivers/net/ionic/ionic_dev.c
@@ -206,19 +206,6 @@ ionic_dev_cmd_port_speed(struct ionic_dev *idev, uint32_t speed)
ionic_dev_cmd_go(idev, &cmd);
}
-void
-ionic_dev_cmd_port_mtu(struct ionic_dev *idev, uint32_t mtu)
-{
- union ionic_dev_cmd cmd = {
- .port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
- .port_setattr.index = 0,
- .port_setattr.attr = IONIC_PORT_ATTR_MTU,
- .port_setattr.mtu = mtu,
- };
-
- ionic_dev_cmd_go(idev, &cmd);
-}
-
void
ionic_dev_cmd_port_autoneg(struct ionic_dev *idev, uint8_t an_enable)
{
@@ -232,19 +219,6 @@ ionic_dev_cmd_port_autoneg(struct ionic_dev *idev, uint8_t an_enable)
ionic_dev_cmd_go(idev, &cmd);
}
-void
-ionic_dev_cmd_port_fec(struct ionic_dev *idev, uint8_t fec_type)
-{
- union ionic_dev_cmd cmd = {
- .port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
- .port_setattr.index = 0,
- .port_setattr.attr = IONIC_PORT_ATTR_FEC,
- .port_setattr.fec_type = fec_type,
- };
-
- ionic_dev_cmd_go(idev, &cmd);
-}
-
void
ionic_dev_cmd_port_pause(struct ionic_dev *idev, uint8_t pause_type)
{
@@ -258,19 +232,6 @@ ionic_dev_cmd_port_pause(struct ionic_dev *idev, uint8_t pause_type)
ionic_dev_cmd_go(idev, &cmd);
}
-void
-ionic_dev_cmd_port_loopback(struct ionic_dev *idev, uint8_t loopback_mode)
-{
- union ionic_dev_cmd cmd = {
- .port_setattr.opcode = IONIC_CMD_PORT_SETATTR,
- .port_setattr.index = 0,
- .port_setattr.attr = IONIC_PORT_ATTR_LOOPBACK,
- .port_setattr.loopback_mode = loopback_mode,
- };
-
- ionic_dev_cmd_go(idev, &cmd);
-}
-
/* LIF commands */
void
diff --git a/drivers/net/ionic/ionic_dev.h b/drivers/net/ionic/ionic_dev.h
index 532255a603..dc47f0166a 100644
--- a/drivers/net/ionic/ionic_dev.h
+++ b/drivers/net/ionic/ionic_dev.h
@@ -224,12 +224,8 @@ void ionic_dev_cmd_port_init(struct ionic_dev *idev);
void ionic_dev_cmd_port_reset(struct ionic_dev *idev);
void ionic_dev_cmd_port_state(struct ionic_dev *idev, uint8_t state);
void ionic_dev_cmd_port_speed(struct ionic_dev *idev, uint32_t speed);
-void ionic_dev_cmd_port_mtu(struct ionic_dev *idev, uint32_t mtu);
void ionic_dev_cmd_port_autoneg(struct ionic_dev *idev, uint8_t an_enable);
-void ionic_dev_cmd_port_fec(struct ionic_dev *idev, uint8_t fec_type);
void ionic_dev_cmd_port_pause(struct ionic_dev *idev, uint8_t pause_type);
-void ionic_dev_cmd_port_loopback(struct ionic_dev *idev,
- uint8_t loopback_mode);
void ionic_dev_cmd_lif_identify(struct ionic_dev *idev, uint8_t type,
uint8_t ver);
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index 60a5f3d537..9c36090a94 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -73,17 +73,6 @@ ionic_lif_stop(struct ionic_lif *lif __rte_unused)
return 0;
}
-void
-ionic_lif_reset(struct ionic_lif *lif)
-{
- struct ionic_dev *idev = &lif->adapter->idev;
-
- IONIC_PRINT_CALL();
-
- ionic_dev_cmd_lif_reset(idev, lif->index);
- ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
-}
-
static void
ionic_lif_get_abs_stats(const struct ionic_lif *lif, struct rte_eth_stats *stats)
{
diff --git a/drivers/net/ionic/ionic_lif.h b/drivers/net/ionic/ionic_lif.h
index 425762d652..d66da559f1 100644
--- a/drivers/net/ionic/ionic_lif.h
+++ b/drivers/net/ionic/ionic_lif.h
@@ -131,7 +131,6 @@ int ionic_lif_start(struct ionic_lif *lif);
int ionic_lif_stop(struct ionic_lif *lif);
int ionic_lif_configure(struct ionic_lif *lif);
-void ionic_lif_reset(struct ionic_lif *lif);
int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr);
void ionic_intr_free(struct ionic_lif *lif, struct ionic_intr_info *intr);
diff --git a/drivers/net/ionic/ionic_main.c b/drivers/net/ionic/ionic_main.c
index 2ade213d2d..2853601f9d 100644
--- a/drivers/net/ionic/ionic_main.c
+++ b/drivers/net/ionic/ionic_main.c
@@ -306,17 +306,6 @@ ionic_init(struct ionic_adapter *adapter)
return err;
}
-int
-ionic_reset(struct ionic_adapter *adapter)
-{
- struct ionic_dev *idev = &adapter->idev;
- int err;
-
- ionic_dev_cmd_reset(idev);
- err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
- return err;
-}
-
int
ionic_port_identify(struct ionic_adapter *adapter)
{
@@ -419,25 +408,3 @@ ionic_port_init(struct ionic_adapter *adapter)
return 0;
}
-
-int
-ionic_port_reset(struct ionic_adapter *adapter)
-{
- struct ionic_dev *idev = &adapter->idev;
- int err;
-
- if (!idev->port_info)
- return 0;
-
- ionic_dev_cmd_port_reset(idev);
- err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
- if (err) {
- IONIC_PRINT(ERR, "Failed to reset port");
- return err;
- }
-
- idev->port_info = NULL;
- idev->port_info_pa = 0;
-
- return 0;
-}
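
For context, both removed reset helpers follow the driver's devcmd
convention: issue the command, then poll for completion with a timeout. A
sketch of the pattern with the same helpers the removed code used, should one
of these ever need to come back:

	struct ionic_dev *idev = &adapter->idev;
	int err;

	/* Post the device command, then poll until done or timeout */
	ionic_dev_cmd_reset(idev);
	err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
	if (err)
		IONIC_PRINT(ERR, "Device command failed");
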
diff --git a/drivers/net/ionic/ionic_rx_filter.c b/drivers/net/ionic/ionic_rx_filter.c
index fe624538df..0c2c937a17 100644
--- a/drivers/net/ionic/ionic_rx_filter.c
+++ b/drivers/net/ionic/ionic_rx_filter.c
@@ -18,20 +18,6 @@ ionic_rx_filter_free(struct ionic_rx_filter *f)
rte_free(f);
}
-int
-ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f)
-{
- struct ionic_admin_ctx ctx = {
- .pending_work = true,
- .cmd.rx_filter_del = {
- .opcode = IONIC_CMD_RX_FILTER_DEL,
- .filter_id = f->filter_id,
- },
- };
-
- return ionic_adminq_post(lif, &ctx);
-}
-
int
ionic_rx_filters_init(struct ionic_lif *lif)
{
diff --git a/drivers/net/ionic/ionic_rx_filter.h b/drivers/net/ionic/ionic_rx_filter.h
index 6204a7b535..851a56073b 100644
--- a/drivers/net/ionic/ionic_rx_filter.h
+++ b/drivers/net/ionic/ionic_rx_filter.h
@@ -34,7 +34,6 @@ struct ionic_admin_ctx;
struct ionic_lif;
void ionic_rx_filter_free(struct ionic_rx_filter *f);
-int ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f);
int ionic_rx_filters_init(struct ionic_lif *lif);
void ionic_rx_filters_deinit(struct ionic_lif *lif);
int ionic_rx_filter_save(struct ionic_lif *lif, uint32_t flow_id,
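
Likewise, filter manipulation goes through the admin queue, and the removed
ionic_rx_filter_del() is representative. A sketch of posting an admin command
with the same structures (assuming, as the field name suggests, that
pending_work flags the outstanding request):

	struct ionic_admin_ctx ctx = {
		.pending_work = true,
		.cmd.rx_filter_del = {
			.opcode = IONIC_CMD_RX_FILTER_DEL,
			.filter_id = f->filter_id,
		},
	};
	int err;

	/* Queue the command on the LIF admin queue */
	err = ionic_adminq_post(lif, &ctx);
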
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b0c3a2286d..836798a40c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1041,7 +1041,6 @@ void mlx5_set_min_inline(struct mlx5_dev_spawn_data *spawn,
void mlx5_set_metadata_mask(struct rte_eth_dev *dev);
int mlx5_dev_check_sibling_config(struct mlx5_priv *priv,
struct mlx5_dev_config *config);
-int mlx5_dev_configure(struct rte_eth_dev *dev);
int mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info);
int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size);
int mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 9889437c56..d607cc4b96 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -287,12 +287,6 @@ cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse)
return entry;
}
-struct mlx5_cache_entry *
-mlx5_cache_lookup(struct mlx5_cache_list *list, void *ctx)
-{
- return cache_lookup(list, ctx, false);
-}
-
struct mlx5_cache_entry *
mlx5_cache_register(struct mlx5_cache_list *list, void *ctx)
{
@@ -734,21 +728,6 @@ mlx5_ipool_destroy(struct mlx5_indexed_pool *pool)
return 0;
}
-void
-mlx5_ipool_dump(struct mlx5_indexed_pool *pool)
-{
- printf("Pool %s entry size %u, trunks %u, %d entry per trunk, "
- "total: %d\n",
- pool->cfg.type, pool->cfg.size, pool->n_trunk_valid,
- pool->cfg.trunk_size, pool->n_trunk_valid);
-#ifdef POOL_DEBUG
- printf("Pool %s entry %u, trunk alloc %u, empty: %u, "
- "available %u free %u\n",
- pool->cfg.type, pool->n_entry, pool->trunk_new,
- pool->trunk_empty, pool->trunk_avail, pool->trunk_free);
-#endif
-}
-
struct mlx5_l3t_tbl *
mlx5_l3t_create(enum mlx5_l3t_type type)
{
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index be6e5f67aa..e6cf37c96f 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -562,23 +562,6 @@ int mlx5_cache_list_init(struct mlx5_cache_list *list,
mlx5_cache_match_cb cb_match,
mlx5_cache_remove_cb cb_remove);
-/**
- * Search an entry matching the key.
- *
- * Result returned might be destroyed by other thread, must use
- * this function only in main thread.
- *
- * @param list
- * Pointer to the cache list.
- * @param ctx
- * Common context parameter used by entry callback function.
- *
- * @return
- * Pointer of the cache entry if found, NULL otherwise.
- */
-struct mlx5_cache_entry *mlx5_cache_lookup(struct mlx5_cache_list *list,
- void *ctx);
-
/**
* Reuse or create an entry to the cache list.
*
@@ -717,14 +700,6 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg);
*/
int mlx5_ipool_destroy(struct mlx5_indexed_pool *pool);
-/**
- * This function dumps debug info of pool.
- *
- * @param pool
- * Pointer to indexed memory pool.
- */
-void mlx5_ipool_dump(struct mlx5_indexed_pool *pool);
-
/**
* This function allocates new empty Three-level table.
*
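
With mlx5_cache_lookup() gone, the thread-safe path is mlx5_cache_register(),
which reuses a matching entry or creates one. A minimal sketch, assuming the
match/remove (and creation) callbacks were supplied at mlx5_cache_list_init()
time:

	struct mlx5_cache_entry *entry;

	/* Reuse a matching entry or create a new one; the callbacks
	 * registered at init time do the matching and creation.
	 */
	entry = mlx5_cache_register(list, ctx);
	if (entry == NULL)
		return -ENOMEM;
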
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 2cd73919ce..afda8f2a50 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -862,24 +862,6 @@ mvneta_eth_dev_destroy(struct rte_eth_dev *eth_dev)
rte_eth_dev_release_port(eth_dev);
}
-/**
- * Cleanup previously created device representing Ethernet port.
- *
- * @param name
- * Pointer to the port name.
- */
-static void
-mvneta_eth_dev_destroy_name(const char *name)
-{
- struct rte_eth_dev *eth_dev;
-
- eth_dev = rte_eth_dev_allocated(name);
- if (!eth_dev)
- return;
-
- mvneta_eth_dev_destroy(eth_dev);
-}
-
/**
* DPDK callback to register the virtual device.
*
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 1ce260c89b..beb716f3c9 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -946,37 +946,6 @@ int hn_rndis_get_offload(struct hn_data *hv,
return 0;
}
-uint32_t
-hn_rndis_get_ptypes(struct hn_data *hv)
-{
- struct ndis_offload hwcaps;
- uint32_t ptypes;
- int error;
-
- memset(&hwcaps, 0, sizeof(hwcaps));
-
- error = hn_rndis_query_hwcaps(hv, &hwcaps);
- if (error) {
- PMD_DRV_LOG(ERR, "hwcaps query failed: %d", error);
- return RTE_PTYPE_L2_ETHER;
- }
-
- ptypes = RTE_PTYPE_L2_ETHER;
-
- if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
- ptypes |= RTE_PTYPE_L3_IPV4;
-
- if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) ||
- (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
- ptypes |= RTE_PTYPE_L4_TCP;
-
- if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) ||
- (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
- ptypes |= RTE_PTYPE_L4_UDP;
-
- return ptypes;
-}
-
int
hn_rndis_set_rxfilter(struct hn_data *hv, uint32_t filter)
{
diff --git a/drivers/net/netvsc/hn_rndis.h b/drivers/net/netvsc/hn_rndis.h
index 9a8251fc2f..11b89042dd 100644
--- a/drivers/net/netvsc/hn_rndis.h
+++ b/drivers/net/netvsc/hn_rndis.h
@@ -25,7 +25,6 @@ int hn_rndis_query_rsscaps(struct hn_data *hv,
int hn_rndis_query_rss(struct hn_data *hv,
struct rte_eth_rss_conf *rss_conf);
int hn_rndis_conf_rss(struct hn_data *hv, uint32_t flags);
-uint32_t hn_rndis_get_ptypes(struct hn_data *hv);
#ifdef RTE_LIBRTE_NETVSC_DEBUG_DUMP
void hn_rndis_dump(const void *buf);
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index bd874c6b4d..1fa8a50c1b 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -225,7 +225,6 @@ int hn_vf_configure(struct rte_eth_dev *dev,
const struct rte_eth_conf *dev_conf);
const uint32_t *hn_vf_supported_ptypes(struct rte_eth_dev *dev);
int hn_vf_start(struct rte_eth_dev *dev);
-void hn_vf_reset(struct rte_eth_dev *dev);
int hn_vf_close(struct rte_eth_dev *dev);
int hn_vf_stop(struct rte_eth_dev *dev);
@@ -241,7 +240,6 @@ int hn_vf_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t nb_desc,
unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
-void hn_vf_tx_queue_release(struct hn_data *hv, uint16_t queue_id);
int hn_vf_tx_queue_status(struct hn_data *hv, uint16_t queue_id, uint16_t offset);
int hn_vf_rx_queue_setup(struct rte_eth_dev *dev,
@@ -252,7 +250,6 @@ int hn_vf_rx_queue_setup(struct rte_eth_dev *dev,
void hn_vf_rx_queue_release(struct hn_data *hv, uint16_t queue_id);
int hn_vf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
-int hn_vf_stats_reset(struct rte_eth_dev *dev);
int hn_vf_xstats_get_names(struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names,
unsigned int size);
diff --git a/drivers/net/netvsc/hn_vf.c b/drivers/net/netvsc/hn_vf.c
index d43ebaa69f..996324282b 100644
--- a/drivers/net/netvsc/hn_vf.c
+++ b/drivers/net/netvsc/hn_vf.c
@@ -318,11 +318,6 @@ int hn_vf_stop(struct rte_eth_dev *dev)
return ret; \
}
-void hn_vf_reset(struct rte_eth_dev *dev)
-{
- VF_ETHDEV_FUNC(dev, rte_eth_dev_reset);
-}
-
int hn_vf_close(struct rte_eth_dev *dev)
{
struct hn_data *hv = dev->data->dev_private;
@@ -340,11 +335,6 @@ int hn_vf_close(struct rte_eth_dev *dev)
return ret;
}
-int hn_vf_stats_reset(struct rte_eth_dev *dev)
-{
- VF_ETHDEV_FUNC_RET_STATUS(dev, rte_eth_stats_reset);
-}
-
int hn_vf_allmulticast_enable(struct rte_eth_dev *dev)
{
VF_ETHDEV_FUNC_RET_STATUS(dev, rte_eth_allmulticast_enable);
@@ -401,21 +391,6 @@ int hn_vf_tx_queue_setup(struct rte_eth_dev *dev,
return ret;
}
-void hn_vf_tx_queue_release(struct hn_data *hv, uint16_t queue_id)
-{
- struct rte_eth_dev *vf_dev;
-
- rte_rwlock_read_lock(&hv->vf_lock);
- vf_dev = hn_get_vf_dev(hv);
- if (vf_dev && vf_dev->dev_ops->tx_queue_release) {
- void *subq = vf_dev->data->tx_queues[queue_id];
-
- (*vf_dev->dev_ops->tx_queue_release)(subq);
- }
-
- rte_rwlock_read_unlock(&hv->vf_lock);
-}
-
int hn_vf_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t nb_desc,
unsigned int socket_id,
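
The removed hn_vf_reset()/hn_vf_stats_reset() were thin forwarders built on
the VF_ETHDEV_FUNC macros visible above, which (as the remaining wrappers
show) resolve the VF device under vf_lock and forward the call.
Reintroducing one would be a one-liner; a sketch:

	/* Sketch: forward an ethdev op to the VF device, if present */
	void hn_vf_reset(struct rte_eth_dev *dev)
	{
		VF_ETHDEV_FUNC(dev, rte_eth_dev_reset);
	}
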
diff --git a/drivers/net/nfp/nfpcore/nfp_cpp.h b/drivers/net/nfp/nfpcore/nfp_cpp.h
index 1427954c17..8fe97a37b1 100644
--- a/drivers/net/nfp/nfpcore/nfp_cpp.h
+++ b/drivers/net/nfp/nfpcore/nfp_cpp.h
@@ -283,15 +283,6 @@ uint32_t nfp_cpp_model(struct nfp_cpp *cpp);
*/
uint16_t nfp_cpp_interface(struct nfp_cpp *cpp);
-/*
- * Retrieve the NFP Serial Number (unique per NFP)
- * @param[in] cpp NFP CPP handle
- * @param[out] serial Pointer to reference the serial number array
- *
- * @return size of the NFP6000 serial number, in bytes
- */
-int nfp_cpp_serial(struct nfp_cpp *cpp, const uint8_t **serial);
-
/*
* Allocate a NFP CPP area handle, as an offset into a CPP ID
* @param[in] cpp NFP CPP handle
@@ -366,16 +357,6 @@ void nfp_cpp_area_release_free(struct nfp_cpp_area *area);
uint8_t *nfp_cpp_map_area(struct nfp_cpp *cpp, int domain, int target,
uint64_t addr, unsigned long size,
struct nfp_cpp_area **area);
-/*
- * Return an IO pointer to the beginning of the NFP CPP area handle. The area
- * must be acquired with 'nfp_cpp_area_acquire()' before calling this operation.
- *
- * @param[in] area NFP CPP area handle
- *
- * @return Pointer to IO memory, or NULL on failure (and set errno accordingly).
- */
-void *nfp_cpp_area_mapped(struct nfp_cpp_area *area);
-
/*
* Read from a NFP CPP area handle into a buffer. The area must be acquired with
* 'nfp_cpp_area_acquire()' before calling this operation.
@@ -417,18 +398,6 @@ int nfp_cpp_area_write(struct nfp_cpp_area *area, unsigned long offset,
*/
void *nfp_cpp_area_iomem(struct nfp_cpp_area *area);
-/*
- * Verify that IO can be performed on an offset in an area
- *
- * @param[in] area NFP CPP area handle
- * @param[in] offset Offset into the area
- * @param[in] size Size of region to validate
- *
- * @return 0 on success, -1 on failure (and set errno accordingly).
- */
-int nfp_cpp_area_check_range(struct nfp_cpp_area *area,
- unsigned long long offset, unsigned long size);
-
/*
* Get the NFP CPP handle that is the parent of a NFP CPP area handle
*
@@ -437,14 +406,6 @@ int nfp_cpp_area_check_range(struct nfp_cpp_area *area,
*/
struct nfp_cpp *nfp_cpp_area_cpp(struct nfp_cpp_area *cpp_area);
-/*
- * Get the name passed during allocation of the NFP CPP area handle
- *
- * @param cpp_area NFP CPP area handle
- * @return Pointer to the area's name
- */
-const char *nfp_cpp_area_name(struct nfp_cpp_area *cpp_area);
-
/*
* Read a block of data from a NFP CPP ID
*
@@ -474,89 +435,6 @@ int nfp_cpp_write(struct nfp_cpp *cpp, uint32_t cpp_id,
unsigned long long address, const void *kernel_vaddr,
size_t length);
-
-
-/*
- * Fill a NFP CPP area handle and offset with a value
- *
- * @param[in] area NFP CPP area handle
- * @param[in] offset Offset into the NFP CPP ID address space
- * @param[in] value 32-bit value to fill area with
- * @param[in] length Size of the area to reserve
- *
- * @return bytes written on success, -1 on failure (and set errno accordingly).
- */
-int nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t value, size_t length);
-
-/*
- * Read a single 32-bit value from a NFP CPP area handle
- *
- * @param area NFP CPP area handle
- * @param offset offset into NFP CPP area handle
- * @param value output value
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 32-bit aligned.
- *
- * @return 0 on success, or -1 on error (and set errno accordingly).
- */
-int nfp_cpp_area_readl(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t *value);
-
-/*
- * Write a single 32-bit value to a NFP CPP area handle
- *
- * @param area NFP CPP area handle
- * @param offset offset into NFP CPP area handle
- * @param value value to write
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 32-bit aligned.
- *
- * @return 0 on success, or -1 on error (and set errno accordingly).
- */
-int nfp_cpp_area_writel(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t value);
-
-/*
- * Read a single 64-bit value from a NFP CPP area handle
- *
- * @param area NFP CPP area handle
- * @param offset offset into NFP CPP area handle
- * @param value output value
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 64-bit aligned.
- *
- * @return 0 on success, or -1 on error (and set errno accordingly).
- */
-int nfp_cpp_area_readq(struct nfp_cpp_area *area, unsigned long offset,
- uint64_t *value);
-
-/*
- * Write a single 64-bit value to a NFP CPP area handle
- *
- * @param area NFP CPP area handle
- * @param offset offset into NFP CPP area handle
- * @param value value to write
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 64-bit aligned.
- *
- * @return 0 on success, or -1 on error (and set errno accordingly).
- */
-int nfp_cpp_area_writeq(struct nfp_cpp_area *area, unsigned long offset,
- uint64_t value);
-
/*
* Write a single 32-bit value on the XPB bus
*
@@ -579,33 +457,6 @@ int nfp_xpb_writel(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t value);
*/
int nfp_xpb_readl(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t *value);
-/*
- * Modify bits of a 32-bit value from the XPB bus
- *
- * @param cpp NFP CPP device handle
- * @param xpb_tgt XPB target and address
- * @param mask mask of bits to alter
- * @param value value to modify
- *
- * @return 0 on success, or -1 on failure (and set errno accordingly).
- */
-int nfp_xpb_writelm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
- uint32_t value);
-
-/*
- * Modify bits of a 32-bit value from the XPB bus
- *
- * @param cpp NFP CPP device handle
- * @param xpb_tgt XPB target and address
- * @param mask mask of bits to alter
- * @param value value to monitor for
- * @param timeout_us maximum number of us to wait (-1 for forever)
- *
- * @return >= 0 on success, or -1 on failure (and set errno accordingly).
- */
-int nfp_xpb_waitlm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
- uint32_t value, int timeout_us);
-
/*
* Read a 32-bit word from a NFP CPP ID
*
@@ -659,27 +510,6 @@ int nfp_cpp_readq(struct nfp_cpp *cpp, uint32_t cpp_id,
int nfp_cpp_writeq(struct nfp_cpp *cpp, uint32_t cpp_id,
unsigned long long address, uint64_t value);
-/*
- * Initialize a mutex location
- *
- * The CPP target:address must point to a 64-bit aligned location, and will
- * initialize 64 bits of data at the location.
- *
- * This creates the initial mutex state, as locked by this nfp_cpp_interface().
- *
- * This function should only be called when setting up the initial lock state
- * upon boot-up of the system.
- *
- * @param cpp NFP CPP handle
- * @param target NFP CPP target ID
- * @param address Offset into the address space of the NFP CPP target ID
- * @param key_id Unique 32-bit value for this mutex
- *
- * @return 0 on success, or -1 on failure (and set errno accordingly).
- */
-int nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target,
- unsigned long long address, uint32_t key_id);
-
/*
* Create a mutex handle from an address controlled by a MU Atomic engine
*
@@ -701,49 +531,6 @@ struct nfp_cpp_mutex *nfp_cpp_mutex_alloc(struct nfp_cpp *cpp, int target,
unsigned long long address,
uint32_t key_id);
-/*
- * Get the NFP CPP handle the mutex was created with
- *
- * @param mutex NFP mutex handle
- * @return NFP CPP handle
- */
-struct nfp_cpp *nfp_cpp_mutex_cpp(struct nfp_cpp_mutex *mutex);
-
-/*
- * Get the mutex key
- *
- * @param mutex NFP mutex handle
- * @return Mutex key
- */
-uint32_t nfp_cpp_mutex_key(struct nfp_cpp_mutex *mutex);
-
-/*
- * Get the mutex owner
- *
- * @param mutex NFP mutex handle
- * @return Interface ID of the mutex owner
- *
- * NOTE: This is for debug purposes ONLY - the owner may change at any time,
- * unless it has been locked by this NFP CPP handle.
- */
-uint16_t nfp_cpp_mutex_owner(struct nfp_cpp_mutex *mutex);
-
-/*
- * Get the mutex target
- *
- * @param mutex NFP mutex handle
- * @return Mutex CPP target (ie NFP_CPP_TARGET_MU)
- */
-int nfp_cpp_mutex_target(struct nfp_cpp_mutex *mutex);
-
-/*
- * Get the mutex address
- *
- * @param mutex NFP mutex handle
- * @return Mutex CPP address
- */
-uint64_t nfp_cpp_mutex_address(struct nfp_cpp_mutex *mutex);
-
/*
* Free a mutex handle - does not alter the lock state
*
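
Users of the removed width-specific accessors can open-code them on top of
nfp_cpp_area_read()/nfp_cpp_area_write(), which remain. A sketch of the
32-bit read, mirroring the removed helper:

	static int area_readl_sketch(struct nfp_cpp_area *area,
				     unsigned long offset, uint32_t *value)
	{
		uint32_t tmp = 0;
		int sz;

		/* CPP returns little-endian data; convert to host order */
		sz = nfp_cpp_area_read(area, offset, &tmp, sizeof(tmp));
		*value = rte_le_to_cpu_32(tmp);

		return (sz == sizeof(*value)) ? 0 : -1;
	}
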
diff --git a/drivers/net/nfp/nfpcore/nfp_cppcore.c b/drivers/net/nfp/nfpcore/nfp_cppcore.c
index dec4a8b6d1..10b7f059a7 100644
--- a/drivers/net/nfp/nfpcore/nfp_cppcore.c
+++ b/drivers/net/nfp/nfpcore/nfp_cppcore.c
@@ -61,13 +61,6 @@ nfp_cpp_interface_set(struct nfp_cpp *cpp, uint32_t interface)
cpp->interface = interface;
}
-int
-nfp_cpp_serial(struct nfp_cpp *cpp, const uint8_t **serial)
-{
- *serial = cpp->serial;
- return cpp->serial_len;
-}
-
int
nfp_cpp_serial_set(struct nfp_cpp *cpp, const uint8_t *serial,
size_t serial_len)
@@ -106,12 +99,6 @@ nfp_cpp_area_cpp(struct nfp_cpp_area *cpp_area)
return cpp_area->cpp;
}
-const char *
-nfp_cpp_area_name(struct nfp_cpp_area *cpp_area)
-{
- return cpp_area->name;
-}
-
/*
* nfp_cpp_area_alloc - allocate a new CPP area
* @cpp: CPP handle
@@ -351,34 +338,6 @@ nfp_cpp_area_write(struct nfp_cpp_area *area, unsigned long offset,
return area->cpp->op->area_write(area, kernel_vaddr, offset, length);
}
-void *
-nfp_cpp_area_mapped(struct nfp_cpp_area *area)
-{
- if (area->cpp->op->area_mapped)
- return area->cpp->op->area_mapped(area);
- return NULL;
-}
-
-/*
- * nfp_cpp_area_check_range - check if address range fits in CPP area
- *
- * @area: CPP area handle
- * @offset: offset into CPP area
- * @length: size of address range in bytes
- *
- * Check if address range fits within CPP area. Return 0 if area fits
- * or -1 on error.
- */
-int
-nfp_cpp_area_check_range(struct nfp_cpp_area *area, unsigned long long offset,
- unsigned long length)
-{
- if (((offset + length) > area->size))
- return NFP_ERRNO(EFAULT);
-
- return 0;
-}
-
/*
* Return the correct CPP address, and fixup xpb_addr as needed,
* based upon NFP model.
@@ -423,55 +382,6 @@ nfp_xpb_to_cpp(struct nfp_cpp *cpp, uint32_t *xpb_addr)
return xpb;
}
-int
-nfp_cpp_area_readl(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t *value)
-{
- int sz;
- uint32_t tmp = 0;
-
- sz = nfp_cpp_area_read(area, offset, &tmp, sizeof(tmp));
- *value = rte_le_to_cpu_32(tmp);
-
- return (sz == sizeof(*value)) ? 0 : -1;
-}
-
-int
-nfp_cpp_area_writel(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t value)
-{
- int sz;
-
- value = rte_cpu_to_le_32(value);
- sz = nfp_cpp_area_write(area, offset, &value, sizeof(value));
- return (sz == sizeof(value)) ? 0 : -1;
-}
-
-int
-nfp_cpp_area_readq(struct nfp_cpp_area *area, unsigned long offset,
- uint64_t *value)
-{
- int sz;
- uint64_t tmp = 0;
-
- sz = nfp_cpp_area_read(area, offset, &tmp, sizeof(tmp));
- *value = rte_le_to_cpu_64(tmp);
-
- return (sz == sizeof(*value)) ? 0 : -1;
-}
-
-int
-nfp_cpp_area_writeq(struct nfp_cpp_area *area, unsigned long offset,
- uint64_t value)
-{
- int sz;
-
- value = rte_cpu_to_le_64(value);
- sz = nfp_cpp_area_write(area, offset, &value, sizeof(value));
-
- return (sz == sizeof(value)) ? 0 : -1;
-}
-
int
nfp_cpp_readl(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
uint32_t *value)
@@ -610,77 +520,6 @@ nfp_cpp_from_device_name(struct rte_pci_device *dev, int driver_lock_needed)
return nfp_cpp_alloc(dev, driver_lock_needed);
}
-/*
- * Modify bits of a 32-bit value from the XPB bus
- *
- * @param cpp NFP CPP device handle
- * @param xpb_tgt XPB target and address
- * @param mask mask of bits to alter
- * @param value value to modify
- *
- * @return 0 on success, or -1 on failure (and set errno accordingly).
- */
-int
-nfp_xpb_writelm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
- uint32_t value)
-{
- int err;
- uint32_t tmp;
-
- err = nfp_xpb_readl(cpp, xpb_tgt, &tmp);
- if (err < 0)
- return err;
-
- tmp &= ~mask;
- tmp |= (mask & value);
- return nfp_xpb_writel(cpp, xpb_tgt, tmp);
-}
-
-/*
- * Modify bits of a 32-bit value from the XPB bus
- *
- * @param cpp NFP CPP device handle
- * @param xpb_tgt XPB target and address
- * @param mask mask of bits to alter
- * @param value value to monitor for
- * @param timeout_us maximum number of us to wait (-1 for forever)
- *
- * @return >= 0 on success, or -1 on failure (and set errno accordingly).
- */
-int
-nfp_xpb_waitlm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
- uint32_t value, int timeout_us)
-{
- uint32_t tmp;
- int err;
-
- do {
- err = nfp_xpb_readl(cpp, xpb_tgt, &tmp);
- if (err < 0)
- goto exit;
-
- if ((tmp & mask) == (value & mask)) {
- if (timeout_us < 0)
- timeout_us = 0;
- break;
- }
-
- if (timeout_us < 0)
- continue;
-
- timeout_us -= 100;
- usleep(100);
- } while (timeout_us >= 0);
-
- if (timeout_us < 0)
- err = NFP_ERRNO(ETIMEDOUT);
- else
- err = timeout_us;
-
-exit:
- return err;
-}
-
/*
* nfp_cpp_read - read from CPP target
* @cpp: CPP handle
@@ -734,63 +573,6 @@ nfp_cpp_write(struct nfp_cpp *cpp, uint32_t destination,
return err;
}
-/*
- * nfp_cpp_area_fill - fill a CPP area with a value
- * @area: CPP area
- * @offset: offset into CPP area
- * @value: value to fill with
- * @length: length of area to fill
- */
-int
-nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t value, size_t length)
-{
- int err;
- size_t i;
- uint64_t value64;
-
- value = rte_cpu_to_le_32(value);
- value64 = ((uint64_t)value << 32) | value;
-
- if ((offset + length) > area->size)
- return NFP_ERRNO(EINVAL);
-
- if ((area->offset + offset) & 3)
- return NFP_ERRNO(EINVAL);
-
- if (((area->offset + offset) & 7) == 4 && length >= 4) {
- err = nfp_cpp_area_write(area, offset, &value, sizeof(value));
- if (err < 0)
- return err;
- if (err != sizeof(value))
- return NFP_ERRNO(ENOSPC);
- offset += sizeof(value);
- length -= sizeof(value);
- }
-
- for (i = 0; (i + sizeof(value)) < length; i += sizeof(value64)) {
- err =
- nfp_cpp_area_write(area, offset + i, &value64,
- sizeof(value64));
- if (err < 0)
- return err;
- if (err != sizeof(value64))
- return NFP_ERRNO(ENOSPC);
- }
-
- if ((i + sizeof(value)) <= length) {
- err =
- nfp_cpp_area_write(area, offset + i, &value, sizeof(value));
- if (err < 0)
- return err;
- if (err != sizeof(value))
- return NFP_ERRNO(ENOSPC);
- i += sizeof(value);
- }
-
- return (int)i;
-}
-
/*
* NOTE: This code should not use nfp_xpb_* functions,
* as those are model-specific
diff --git a/drivers/net/nfp/nfpcore/nfp_mip.c b/drivers/net/nfp/nfpcore/nfp_mip.c
index c86966df8b..d67ff220eb 100644
--- a/drivers/net/nfp/nfpcore/nfp_mip.c
+++ b/drivers/net/nfp/nfpcore/nfp_mip.c
@@ -121,12 +121,6 @@ nfp_mip_close(struct nfp_mip *mip)
free(mip);
}
-const char *
-nfp_mip_name(const struct nfp_mip *mip)
-{
- return mip->name;
-}
-
/*
* nfp_mip_symtab() - Get the address and size of the MIP symbol table
* @mip: MIP handle
diff --git a/drivers/net/nfp/nfpcore/nfp_mip.h b/drivers/net/nfp/nfpcore/nfp_mip.h
index d0919b58fe..27300ba9cd 100644
--- a/drivers/net/nfp/nfpcore/nfp_mip.h
+++ b/drivers/net/nfp/nfpcore/nfp_mip.h
@@ -13,7 +13,6 @@ struct nfp_mip;
struct nfp_mip *nfp_mip_open(struct nfp_cpp *cpp);
void nfp_mip_close(struct nfp_mip *mip);
-const char *nfp_mip_name(const struct nfp_mip *mip);
void nfp_mip_symtab(const struct nfp_mip *mip, uint32_t *addr, uint32_t *size);
void nfp_mip_strtab(const struct nfp_mip *mip, uint32_t *addr, uint32_t *size);
int nfp_nffw_info_mip_first(struct nfp_nffw_info *state, uint32_t *cpp_id,
diff --git a/drivers/net/nfp/nfpcore/nfp_mutex.c b/drivers/net/nfp/nfpcore/nfp_mutex.c
index 318c5800d7..9a49635e2b 100644
--- a/drivers/net/nfp/nfpcore/nfp_mutex.c
+++ b/drivers/net/nfp/nfpcore/nfp_mutex.c
@@ -52,51 +52,6 @@ _nfp_cpp_mutex_validate(uint32_t model, int *target, unsigned long long address)
return 0;
}
-/*
- * Initialize a mutex location
- *
- * The CPP target:address must point to a 64-bit aligned location, and
- * will initialize 64 bits of data at the location.
- *
- * This creates the initial mutex state, as locked by this
- * nfp_cpp_interface().
- *
- * This function should only be called when setting up
- * the initial lock state upon boot-up of the system.
- *
- * @param mutex NFP CPP Mutex handle
- * @param target NFP CPP target ID (ie NFP_CPP_TARGET_CLS or
- * NFP_CPP_TARGET_MU)
- * @param address Offset into the address space of the NFP CPP target ID
- * @param key Unique 32-bit value for this mutex
- *
- * @return 0 on success, or -1 on failure (and set errno accordingly).
- */
-int
-nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target, unsigned long long address,
- uint32_t key)
-{
- uint32_t model = nfp_cpp_model(cpp);
- uint32_t muw = NFP_CPP_ID(target, 4, 0); /* atomic_write */
- int err;
-
- err = _nfp_cpp_mutex_validate(model, &target, address);
- if (err < 0)
- return err;
-
- err = nfp_cpp_writel(cpp, muw, address + 4, key);
- if (err < 0)
- return err;
-
- err =
- nfp_cpp_writel(cpp, muw, address + 0,
- MUTEX_LOCKED(nfp_cpp_interface(cpp)));
- if (err < 0)
- return err;
-
- return 0;
-}
-
/*
* Create a mutex handle from an address controlled by a MU Atomic engine
*
@@ -174,54 +129,6 @@ nfp_cpp_mutex_alloc(struct nfp_cpp *cpp, int target,
return mutex;
}
-struct nfp_cpp *
-nfp_cpp_mutex_cpp(struct nfp_cpp_mutex *mutex)
-{
- return mutex->cpp;
-}
-
-uint32_t
-nfp_cpp_mutex_key(struct nfp_cpp_mutex *mutex)
-{
- return mutex->key;
-}
-
-uint16_t
-nfp_cpp_mutex_owner(struct nfp_cpp_mutex *mutex)
-{
- uint32_t mur = NFP_CPP_ID(mutex->target, 3, 0); /* atomic_read */
- uint32_t value, key;
- int err;
-
- err = nfp_cpp_readl(mutex->cpp, mur, mutex->address, &value);
- if (err < 0)
- return err;
-
- err = nfp_cpp_readl(mutex->cpp, mur, mutex->address + 4, &key);
- if (err < 0)
- return err;
-
- if (key != mutex->key)
- return NFP_ERRNO(EPERM);
-
- if (!MUTEX_IS_LOCKED(value))
- return 0;
-
- return MUTEX_INTERFACE(value);
-}
-
-int
-nfp_cpp_mutex_target(struct nfp_cpp_mutex *mutex)
-{
- return mutex->target;
-}
-
-uint64_t
-nfp_cpp_mutex_address(struct nfp_cpp_mutex *mutex)
-{
- return mutex->address;
-}
-
/*
* Free a mutex handle - does not alter the lock state
*
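
Should boot-time code ever need to seed a mutex location again, the removed
sequence is small: after validating the target/address as the removed code
did, write the key word at address + 4 and then the locked-state word at
address + 0 through the MU atomic_write target. A sketch with the same
helpers:

	/* Sketch: seed a 64-bit mutex location as the removed init did */
	uint32_t muw = NFP_CPP_ID(target, 4, 0);	/* atomic_write */
	int err;

	err = nfp_cpp_writel(cpp, muw, address + 4, key_id);
	if (err < 0)
		return err;

	return nfp_cpp_writel(cpp, muw, address,
			      MUTEX_LOCKED(nfp_cpp_interface(cpp)));
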
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.c b/drivers/net/nfp/nfpcore/nfp_nsp.c
index 876a4017c9..63689f2cf7 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.c
@@ -146,12 +146,6 @@ nfp_nsp_close(struct nfp_nsp *state)
free(state);
}
-uint16_t
-nfp_nsp_get_abi_ver_major(struct nfp_nsp *state)
-{
- return state->ver.major;
-}
-
uint16_t
nfp_nsp_get_abi_ver_minor(struct nfp_nsp *state)
{
@@ -348,47 +342,12 @@ nfp_nsp_command_buf(struct nfp_nsp *nsp, uint16_t code, uint32_t option,
return ret;
}
-int
-nfp_nsp_wait(struct nfp_nsp *state)
-{
- struct timespec wait;
- int count;
- int err;
-
- wait.tv_sec = 0;
- wait.tv_nsec = 25000000;
- count = 0;
-
- for (;;) {
- err = nfp_nsp_command(state, SPCODE_NOOP, 0, 0, 0);
- if (err != -EAGAIN)
- break;
-
- nanosleep(&wait, 0);
-
- if (count++ > 1000) {
- err = -ETIMEDOUT;
- break;
- }
- }
- if (err)
- printf("NSP failed to respond %d\n", err);
-
- return err;
-}
-
int
nfp_nsp_device_soft_reset(struct nfp_nsp *state)
{
return nfp_nsp_command(state, SPCODE_SOFT_RESET, 0, 0, 0);
}
-int
-nfp_nsp_mac_reinit(struct nfp_nsp *state)
-{
- return nfp_nsp_command(state, SPCODE_MAC_INIT, 0, 0, 0);
-}
-
int
nfp_nsp_load_fw(struct nfp_nsp *state, void *buf, unsigned int size)
{
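
nfp_nsp_wait() is dropped with no in-tree callers; anything that must wait
for the service processor can poll the no-op command itself. A sketch of the
same 25 ms x 1000-iteration loop the removed helper used (assumes
nfp_nsp_command() is reachable from the call site):

	struct timespec wait = { .tv_sec = 0, .tv_nsec = 25000000 };
	int count = 0;
	int err;

	for (;;) {
		/* NSP returns -EAGAIN while still busy */
		err = nfp_nsp_command(state, SPCODE_NOOP, 0, 0, 0);
		if (err != -EAGAIN)
			break;

		nanosleep(&wait, NULL);

		if (count++ > 1000) {
			err = -ETIMEDOUT;
			break;
		}
	}
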
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.h b/drivers/net/nfp/nfpcore/nfp_nsp.h
index c9c7b0d0fb..66cad416da 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.h
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.h
@@ -106,12 +106,9 @@ struct nfp_nsp {
struct nfp_nsp *nfp_nsp_open(struct nfp_cpp *cpp);
void nfp_nsp_close(struct nfp_nsp *state);
-uint16_t nfp_nsp_get_abi_ver_major(struct nfp_nsp *state);
uint16_t nfp_nsp_get_abi_ver_minor(struct nfp_nsp *state);
-int nfp_nsp_wait(struct nfp_nsp *state);
int nfp_nsp_device_soft_reset(struct nfp_nsp *state);
int nfp_nsp_load_fw(struct nfp_nsp *state, void *buf, unsigned int size);
-int nfp_nsp_mac_reinit(struct nfp_nsp *state);
int nfp_nsp_read_identify(struct nfp_nsp *state, void *buf, unsigned int size);
int nfp_nsp_read_sensors(struct nfp_nsp *state, unsigned int sensor_mask,
void *buf, unsigned int size);
@@ -229,12 +226,8 @@ struct nfp_eth_table {
struct nfp_eth_table *nfp_eth_read_ports(struct nfp_cpp *cpp);
-int nfp_eth_set_mod_enable(struct nfp_cpp *cpp, unsigned int idx, int enable);
int nfp_eth_set_configured(struct nfp_cpp *cpp, unsigned int idx,
int configed);
-int
-nfp_eth_set_fec(struct nfp_cpp *cpp, unsigned int idx, enum nfp_eth_fec mode);
-
int nfp_nsp_read_eth_table(struct nfp_nsp *state, void *buf, unsigned int size);
int nfp_nsp_write_eth_table(struct nfp_nsp *state, const void *buf,
unsigned int size);
@@ -261,10 +254,6 @@ struct nfp_nsp *nfp_eth_config_start(struct nfp_cpp *cpp, unsigned int idx);
int nfp_eth_config_commit_end(struct nfp_nsp *nsp);
void nfp_eth_config_cleanup_end(struct nfp_nsp *nsp);
-int __nfp_eth_set_aneg(struct nfp_nsp *nsp, enum nfp_eth_aneg mode);
-int __nfp_eth_set_speed(struct nfp_nsp *nsp, unsigned int speed);
-int __nfp_eth_set_split(struct nfp_nsp *nsp, unsigned int lanes);
-
/**
* struct nfp_nsp_identify - NSP static information
* @version: opaque version string
@@ -289,8 +278,6 @@ struct nfp_nsp_identify {
uint64_t sensor_mask;
};
-struct nfp_nsp_identify *__nfp_nsp_identify(struct nfp_nsp *nsp);
-
enum nfp_nsp_sensor_id {
NFP_SENSOR_CHIP_TEMPERATURE,
NFP_SENSOR_ASSEMBLY_POWER,
@@ -298,7 +285,4 @@ enum nfp_nsp_sensor_id {
NFP_SENSOR_ASSEMBLY_3V3_POWER,
};
-int nfp_hwmon_read_sensor(struct nfp_cpp *cpp, enum nfp_nsp_sensor_id id,
- long *val);
-
#endif
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
index bfd1eddb3e..276e14bbeb 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
@@ -22,88 +22,9 @@ struct nsp_identify {
uint64_t sensor_mask;
};
-struct nfp_nsp_identify *
-__nfp_nsp_identify(struct nfp_nsp *nsp)
-{
- struct nfp_nsp_identify *nspi = NULL;
- struct nsp_identify *ni;
- int ret;
-
- if (nfp_nsp_get_abi_ver_minor(nsp) < 15)
- return NULL;
-
- ni = malloc(sizeof(*ni));
- if (!ni)
- return NULL;
-
- memset(ni, 0, sizeof(*ni));
- ret = nfp_nsp_read_identify(nsp, ni, sizeof(*ni));
- if (ret < 0) {
- printf("reading bsp version failed %d\n",
- ret);
- goto exit_free;
- }
-
- nspi = malloc(sizeof(*nspi));
- if (!nspi)
- goto exit_free;
-
- memset(nspi, 0, sizeof(*nspi));
- memcpy(nspi->version, ni->version, sizeof(nspi->version));
- nspi->version[sizeof(nspi->version) - 1] = '\0';
- nspi->flags = ni->flags;
- nspi->br_primary = ni->br_primary;
- nspi->br_secondary = ni->br_secondary;
- nspi->br_nsp = ni->br_nsp;
- nspi->primary = rte_le_to_cpu_16(ni->primary);
- nspi->secondary = rte_le_to_cpu_16(ni->secondary);
- nspi->nsp = rte_le_to_cpu_16(ni->nsp);
- nspi->sensor_mask = rte_le_to_cpu_64(ni->sensor_mask);
-
-exit_free:
- free(ni);
- return nspi;
-}
-
struct nfp_sensors {
uint32_t chip_temp;
uint32_t assembly_power;
uint32_t assembly_12v_power;
uint32_t assembly_3v3_power;
};
-
-int
-nfp_hwmon_read_sensor(struct nfp_cpp *cpp, enum nfp_nsp_sensor_id id, long *val)
-{
- struct nfp_sensors s;
- struct nfp_nsp *nsp;
- int ret;
-
- nsp = nfp_nsp_open(cpp);
- if (!nsp)
- return -EIO;
-
- ret = nfp_nsp_read_sensors(nsp, BIT(id), &s, sizeof(s));
- nfp_nsp_close(nsp);
-
- if (ret < 0)
- return ret;
-
- switch (id) {
- case NFP_SENSOR_CHIP_TEMPERATURE:
- *val = rte_le_to_cpu_32(s.chip_temp);
- break;
- case NFP_SENSOR_ASSEMBLY_POWER:
- *val = rte_le_to_cpu_32(s.assembly_power);
- break;
- case NFP_SENSOR_ASSEMBLY_12V_POWER:
- *val = rte_le_to_cpu_32(s.assembly_12v_power);
- break;
- case NFP_SENSOR_ASSEMBLY_3V3_POWER:
- *val = rte_le_to_cpu_32(s.assembly_3v3_power);
- break;
- default:
- return -EINVAL;
- }
- return 0;
-}
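nfp_hwmon_read_sensor() was the only would-be consumer of nfp_nsp_read_sensors(); had it gained a caller, the call site would have looked like this (hypothetical usage, signature as in the removed code, with val receiving the little-endian-converted reading):

long temp;

if (nfp_hwmon_read_sensor(cpp, NFP_SENSOR_CHIP_TEMPERATURE, &temp) == 0)
    printf("chip temperature: %ld\n", temp);
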
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
index 67946891ab..2d0fd1c5cc 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
@@ -145,18 +145,6 @@ nfp_eth_rate2speed(enum nfp_eth_rate rate)
return 0;
}
-static unsigned int
-nfp_eth_speed2rate(unsigned int speed)
-{
- int i;
-
- for (i = 0; i < (int)ARRAY_SIZE(nsp_eth_rate_tbl); i++)
- if (nsp_eth_rate_tbl[i].speed == speed)
- return nsp_eth_rate_tbl[i].rate;
-
- return RATE_INVALID;
-}
-
static void
nfp_eth_copy_mac_reverse(uint8_t *dst, const uint8_t *src)
{
@@ -421,47 +409,6 @@ nfp_eth_config_commit_end(struct nfp_nsp *nsp)
return ret;
}
-/*
- * nfp_eth_set_mod_enable() - set PHY module enable control bit
- * @cpp: NFP CPP handle
- * @idx: NFP chip-wide port index
- * @enable: Desired state
- *
- * Enable or disable PHY module (this usually means setting the TX lanes
- * disable bits).
- *
- * Return:
- * 0 - configuration successful;
- * 1 - no changes were needed;
- * -ERRNO - configuration failed.
- */
-int
-nfp_eth_set_mod_enable(struct nfp_cpp *cpp, unsigned int idx, int enable)
-{
- union eth_table_entry *entries;
- struct nfp_nsp *nsp;
- uint64_t reg;
-
- nsp = nfp_eth_config_start(cpp, idx);
- if (!nsp)
- return -1;
-
- entries = nfp_nsp_config_entries(nsp);
-
- /* Check if we are already in requested state */
- reg = rte_le_to_cpu_64(entries[idx].state);
- if (enable != (int)FIELD_GET(NSP_ETH_CTRL_ENABLED, reg)) {
- reg = rte_le_to_cpu_64(entries[idx].control);
- reg &= ~NSP_ETH_CTRL_ENABLED;
- reg |= FIELD_PREP(NSP_ETH_CTRL_ENABLED, enable);
- entries[idx].control = rte_cpu_to_le_64(reg);
-
- nfp_nsp_config_set_modified(nsp, 1);
- }
-
- return nfp_eth_config_commit_end(nsp);
-}
-
/*
* nfp_eth_set_configured() - set PHY module configured control bit
* @cpp: NFP CPP handle
@@ -510,156 +457,3 @@ nfp_eth_set_configured(struct nfp_cpp *cpp, unsigned int idx, int configed)
return nfp_eth_config_commit_end(nsp);
}
-
-static int
-nfp_eth_set_bit_config(struct nfp_nsp *nsp, unsigned int raw_idx,
- const uint64_t mask, const unsigned int shift,
- unsigned int val, const uint64_t ctrl_bit)
-{
- union eth_table_entry *entries = nfp_nsp_config_entries(nsp);
- unsigned int idx = nfp_nsp_config_idx(nsp);
- uint64_t reg;
-
- /*
- * Note: set features were added in ABI 0.14 but the error
- * codes were initially not populated correctly.
- */
- if (nfp_nsp_get_abi_ver_minor(nsp) < 17) {
- printf("set operations not supported, please update flash\n");
- return -EOPNOTSUPP;
- }
-
- /* Check if we are already in requested state */
- reg = rte_le_to_cpu_64(entries[idx].raw[raw_idx]);
- if (val == (reg & mask) >> shift)
- return 0;
-
- reg &= ~mask;
- reg |= (val << shift) & mask;
- entries[idx].raw[raw_idx] = rte_cpu_to_le_64(reg);
-
- entries[idx].control |= rte_cpu_to_le_64(ctrl_bit);
-
- nfp_nsp_config_set_modified(nsp, 1);
-
- return 0;
-}
-
-#define NFP_ETH_SET_BIT_CONFIG(nsp, raw_idx, mask, val, ctrl_bit) \
- (__extension__ ({ \
- typeof(mask) _x = (mask); \
- nfp_eth_set_bit_config(nsp, raw_idx, _x, __bf_shf(_x), \
- val, ctrl_bit); \
- }))
-
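The macro above is the interesting part of this removal: it derives the shift from the mask at expansion time (__bf_shf() yields the index of the mask's lowest set bit), so callers pass only mask and value. The resulting read-modify-write on the little-endian table word, reduced to plain C with the shift made explicit — a sketch, not the driver helper:

#include <endian.h>
#include <stdint.h>

static uint64_t set_field(uint64_t le_reg, uint64_t mask,
                          unsigned int shift, uint64_t val)
{
    uint64_t reg = le64toh(le_reg);  /* table entries are little-endian */

    reg &= ~mask;
    reg |= (val << shift) & mask;
    return htole64(reg);
}
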
-/*
- * __nfp_eth_set_aneg() - set PHY autonegotiation control bit
- * @nsp: NFP NSP handle returned from nfp_eth_config_start()
- * @mode: Desired autonegotiation mode
- *
- * Allow/disallow PHY module to advertise/perform autonegotiation.
- * Will write to hwinfo overrides in the flash (persistent config).
- *
- * Return: 0 or -ERRNO.
- */
-int
-__nfp_eth_set_aneg(struct nfp_nsp *nsp, enum nfp_eth_aneg mode)
-{
- return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_STATE,
- NSP_ETH_STATE_ANEG, mode,
- NSP_ETH_CTRL_SET_ANEG);
-}
-
-/*
- * __nfp_eth_set_fec() - set PHY forward error correction control bit
- * @nsp: NFP NSP handle returned from nfp_eth_config_start()
- * @mode: Desired fec mode
- *
- * Set the PHY module forward error correction mode.
- * Will write to hwinfo overrides in the flash (persistent config).
- *
- * Return: 0 or -ERRNO.
- */
-static int
-__nfp_eth_set_fec(struct nfp_nsp *nsp, enum nfp_eth_fec mode)
-{
- return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_STATE,
- NSP_ETH_STATE_FEC, mode,
- NSP_ETH_CTRL_SET_FEC);
-}
-
-/*
- * nfp_eth_set_fec() - set PHY forward error correction control mode
- * @cpp: NFP CPP handle
- * @idx: NFP chip-wide port index
- * @mode: Desired fec mode
- *
- * Return:
- * 0 - configuration successful;
- * 1 - no changes were needed;
- * -ERRNO - configuration failed.
- */
-int
-nfp_eth_set_fec(struct nfp_cpp *cpp, unsigned int idx, enum nfp_eth_fec mode)
-{
- struct nfp_nsp *nsp;
- int err;
-
- nsp = nfp_eth_config_start(cpp, idx);
- if (!nsp)
- return -EIO;
-
- err = __nfp_eth_set_fec(nsp, mode);
- if (err) {
- nfp_eth_config_cleanup_end(nsp);
- return err;
- }
-
- return nfp_eth_config_commit_end(nsp);
-}
-
-/*
- * __nfp_eth_set_speed() - set interface speed/rate
- * @nsp: NFP NSP handle returned from nfp_eth_config_start()
- * @speed: Desired speed (per lane)
- *
- * Set lane speed. Provided @speed value should be subport speed divided
- * by number of lanes this subport is spanning (i.e. 10000 for 40G, 25000 for
- * 50G, etc.)
- * Will write to hwinfo overrides in the flash (persistent config).
- *
- * Return: 0 or -ERRNO.
- */
-int
-__nfp_eth_set_speed(struct nfp_nsp *nsp, unsigned int speed)
-{
- enum nfp_eth_rate rate;
-
- rate = nfp_eth_speed2rate(speed);
- if (rate == RATE_INVALID) {
- printf("could not find matching lane rate for speed %u\n",
- speed);
- return -EINVAL;
- }
-
- return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_STATE,
- NSP_ETH_STATE_RATE, rate,
- NSP_ETH_CTRL_SET_RATE);
-}
-
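Worth keeping in mind when reading the doc comment above: the speed argument was per lane, so a 40G port spanning four lanes is configured as speed = 10000 and 50G across two lanes as speed = 25000; nfp_eth_speed2rate() then mapped that per-lane value to the NSP rate enum, returning RATE_INVALID for anything not in nsp_eth_rate_tbl.
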
-/*
- * __nfp_eth_set_split() - set interface lane split
- * @nsp: NFP NSP handle returned from nfp_eth_config_start()
- * @lanes: Desired lanes per port
- *
- * Set number of lanes in the port.
- * Will write to hwinfo overrides in the flash (persistent config).
- *
- * Return: 0 or -ERRNO.
- */
-int
-__nfp_eth_set_split(struct nfp_nsp *nsp, unsigned int lanes)
-{
- return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_PORT, NSP_ETH_PORT_LANES,
- lanes, NSP_ETH_CTRL_SET_LANES);
-}
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.c b/drivers/net/nfp/nfpcore/nfp_resource.c
index dd41fa4de4..2a07a8e411 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.c
+++ b/drivers/net/nfp/nfpcore/nfp_resource.c
@@ -229,18 +229,6 @@ nfp_resource_cpp_id(const struct nfp_resource *res)
return res->cpp_id;
}
-/*
- * nfp_resource_name() - Return the name of a resource handle
- * @res: NFP Resource handle
- *
- * Return: const char pointer to the name of the resource
- */
-const char
-*nfp_resource_name(const struct nfp_resource *res)
-{
- return res->name;
-}
-
/*
* nfp_resource_address() - Return the address of a resource handle
* @res: NFP Resource handle
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.h b/drivers/net/nfp/nfpcore/nfp_resource.h
index 06cc6f74f4..d846402aac 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.h
+++ b/drivers/net/nfp/nfpcore/nfp_resource.h
@@ -33,13 +33,6 @@ void nfp_resource_release(struct nfp_resource *res);
*/
uint32_t nfp_resource_cpp_id(const struct nfp_resource *res);
-/**
- * Return the name of a NFP Resource
- * @param[in] res NFP Resource handle
- * @return Name of the NFP Resource
- */
-const char *nfp_resource_name(const struct nfp_resource *res);
-
/**
* Return the target address of a NFP Resource
* @param[in] res NFP Resource handle
diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.c b/drivers/net/nfp/nfpcore/nfp_rtsym.c
index cb7d83db51..b02063f3b9 100644
--- a/drivers/net/nfp/nfpcore/nfp_rtsym.c
+++ b/drivers/net/nfp/nfpcore/nfp_rtsym.c
@@ -176,40 +176,6 @@ __nfp_rtsym_table_read(struct nfp_cpp *cpp, const struct nfp_mip *mip)
return NULL;
}
-/*
- * nfp_rtsym_count() - Get the number of RTSYM descriptors
- * @rtbl: NFP RTsym table
- *
- * Return: Number of RTSYM descriptors
- */
-int
-nfp_rtsym_count(struct nfp_rtsym_table *rtbl)
-{
- if (!rtbl)
- return -EINVAL;
-
- return rtbl->num;
-}
-
-/*
- * nfp_rtsym_get() - Get the Nth RTSYM descriptor
- * @rtbl: NFP RTsym table
- * @idx: Index (0-based) of the RTSYM descriptor
- *
- * Return: const pointer to a struct nfp_rtsym descriptor, or NULL
- */
-const struct nfp_rtsym *
-nfp_rtsym_get(struct nfp_rtsym_table *rtbl, int idx)
-{
- if (!rtbl)
- return NULL;
-
- if (idx >= rtbl->num)
- return NULL;
-
- return &rtbl->symtab[idx];
-}
-
/*
* nfp_rtsym_lookup() - Return the RTSYM descriptor for a symbol name
* @rtbl: NFP RTsym table
diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.h b/drivers/net/nfp/nfpcore/nfp_rtsym.h
index 8b494211bc..c63bc05fff 100644
--- a/drivers/net/nfp/nfpcore/nfp_rtsym.h
+++ b/drivers/net/nfp/nfpcore/nfp_rtsym.h
@@ -46,10 +46,6 @@ struct nfp_rtsym_table *nfp_rtsym_table_read(struct nfp_cpp *cpp);
struct nfp_rtsym_table *
__nfp_rtsym_table_read(struct nfp_cpp *cpp, const struct nfp_mip *mip);
-int nfp_rtsym_count(struct nfp_rtsym_table *rtbl);
-
-const struct nfp_rtsym *nfp_rtsym_get(struct nfp_rtsym_table *rtbl, int idx);
-
const struct nfp_rtsym *
nfp_rtsym_lookup(struct nfp_rtsym_table *rtbl, const char *name);
diff --git a/drivers/net/octeontx/base/octeontx_bgx.c b/drivers/net/octeontx/base/octeontx_bgx.c
index ac856ff86d..59249dcced 100644
--- a/drivers/net/octeontx/base/octeontx_bgx.c
+++ b/drivers/net/octeontx/base/octeontx_bgx.c
@@ -90,60 +90,6 @@ octeontx_bgx_port_stop(int port)
return res;
}
-int
-octeontx_bgx_port_get_config(int port, octeontx_mbox_bgx_port_conf_t *conf)
-{
- struct octeontx_mbox_hdr hdr;
- octeontx_mbox_bgx_port_conf_t bgx_conf;
- int len = sizeof(octeontx_mbox_bgx_port_conf_t);
- int res;
-
- hdr.coproc = OCTEONTX_BGX_COPROC;
- hdr.msg = MBOX_BGX_PORT_GET_CONFIG;
- hdr.vfid = port;
-
- memset(&bgx_conf, 0, sizeof(octeontx_mbox_bgx_port_conf_t));
- res = octeontx_mbox_send(&hdr, NULL, 0, &bgx_conf, len);
- if (res < 0)
- return -EACCES;
-
- conf->enable = bgx_conf.enable;
- conf->promisc = bgx_conf.promisc;
- conf->bpen = bgx_conf.bpen;
- conf->node = bgx_conf.node;
- conf->base_chan = bgx_conf.base_chan;
- conf->num_chans = bgx_conf.num_chans;
- conf->mtu = bgx_conf.mtu;
- conf->bgx = bgx_conf.bgx;
- conf->lmac = bgx_conf.lmac;
- conf->mode = bgx_conf.mode;
- conf->pkind = bgx_conf.pkind;
- memcpy(conf->macaddr, bgx_conf.macaddr, 6);
-
- return res;
-}
-
-int
-octeontx_bgx_port_status(int port, octeontx_mbox_bgx_port_status_t *stat)
-{
- struct octeontx_mbox_hdr hdr;
- octeontx_mbox_bgx_port_status_t bgx_stat;
- int len = sizeof(octeontx_mbox_bgx_port_status_t);
- int res;
-
- hdr.coproc = OCTEONTX_BGX_COPROC;
- hdr.msg = MBOX_BGX_PORT_GET_STATUS;
- hdr.vfid = port;
-
- res = octeontx_mbox_send(&hdr, NULL, 0, &bgx_stat, len);
- if (res < 0)
- return -EACCES;
-
- stat->link_up = bgx_stat.link_up;
-
- return res;
-}
-
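Both helpers are instances of the one octeontx mailbox pattern: fill struct octeontx_mbox_hdr with coprocessor, message ID and VF index, call octeontx_mbox_send() with optional request and response buffers, and map failures to -EACCES. A condensed sketch of the same shape — bgx_link_up() is illustrative and simply mirrors the removed port_status helper:

static int bgx_link_up(int port)
{
    struct octeontx_mbox_hdr hdr = {
        .coproc = OCTEONTX_BGX_COPROC,
        .msg = MBOX_BGX_PORT_GET_STATUS,
        .vfid = port,
    };
    octeontx_mbox_bgx_port_status_t st;

    /* no request body; response lands in st */
    if (octeontx_mbox_send(&hdr, NULL, 0, &st, sizeof(st)) < 0)
        return -EACCES;

    return st.link_up;
}
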
int
octeontx_bgx_port_stats(int port, octeontx_mbox_bgx_port_stats_t *stats)
{
diff --git a/drivers/net/octeontx/base/octeontx_bgx.h b/drivers/net/octeontx/base/octeontx_bgx.h
index d126a0b7fc..fc61168b62 100644
--- a/drivers/net/octeontx/base/octeontx_bgx.h
+++ b/drivers/net/octeontx/base/octeontx_bgx.h
@@ -147,8 +147,6 @@ int octeontx_bgx_port_open(int port, octeontx_mbox_bgx_port_conf_t *conf);
int octeontx_bgx_port_close(int port);
int octeontx_bgx_port_start(int port);
int octeontx_bgx_port_stop(int port);
-int octeontx_bgx_port_get_config(int port, octeontx_mbox_bgx_port_conf_t *conf);
-int octeontx_bgx_port_status(int port, octeontx_mbox_bgx_port_status_t *stat);
int octeontx_bgx_port_stats(int port, octeontx_mbox_bgx_port_stats_t *stats);
int octeontx_bgx_port_stats_clr(int port);
int octeontx_bgx_port_link_status(int port);
diff --git a/drivers/net/octeontx/base/octeontx_pkivf.c b/drivers/net/octeontx/base/octeontx_pkivf.c
index 0ddff54886..30528c269e 100644
--- a/drivers/net/octeontx/base/octeontx_pkivf.c
+++ b/drivers/net/octeontx/base/octeontx_pkivf.c
@@ -114,28 +114,6 @@ octeontx_pki_port_create_qos(int port, pki_qos_cfg_t *qos_cfg)
return res;
}
-
-int
-octeontx_pki_port_errchk_config(int port, pki_errchk_cfg_t *cfg)
-{
- struct octeontx_mbox_hdr hdr;
- int res;
-
- pki_errchk_cfg_t e_cfg;
- e_cfg = *((pki_errchk_cfg_t *)(cfg));
- int len = sizeof(pki_errchk_cfg_t);
-
- hdr.coproc = OCTEONTX_PKI_COPROC;
- hdr.msg = MBOX_PKI_PORT_ERRCHK_CONFIG;
- hdr.vfid = port;
-
- res = octeontx_mbox_send(&hdr, &e_cfg, len, NULL, 0);
- if (res < 0)
- return -EACCES;
-
- return res;
-}
-
int
octeontx_pki_port_vlan_fltr_config(int port,
pki_port_vlan_filter_config_t *fltr_cfg)
diff --git a/drivers/net/octeontx/base/octeontx_pkivf.h b/drivers/net/octeontx/base/octeontx_pkivf.h
index d41eaa57ed..06c409225f 100644
--- a/drivers/net/octeontx/base/octeontx_pkivf.h
+++ b/drivers/net/octeontx/base/octeontx_pkivf.h
@@ -363,7 +363,6 @@ int octeontx_pki_port_hash_config(int port, pki_hash_cfg_t *hash_cfg);
int octeontx_pki_port_pktbuf_config(int port, pki_pktbuf_cfg_t *buf_cfg);
int octeontx_pki_port_create_qos(int port, pki_qos_cfg_t *qos_cfg);
int octeontx_pki_port_close(int port);
-int octeontx_pki_port_errchk_config(int port, pki_errchk_cfg_t *cfg);
int octeontx_pki_port_vlan_fltr_config(int port,
pki_port_vlan_filter_config_t *fltr_cfg);
int octeontx_pki_port_vlan_fltr_entry_config(int port,
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6cebbe677d..b8f9eb188f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -160,32 +160,6 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
-int
-otx2_cgx_rxtx_start(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_start_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
static int
npc_rx_enable(struct otx2_eth_dev *dev)
{
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 3b9871f4dc..f0ed59d89a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -471,7 +471,6 @@ int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
struct rte_dev_reg_info *regs);
int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
-void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
void otx2_nix_tm_dump(struct otx2_eth_dev *dev);
/* Stats */
@@ -521,8 +520,6 @@ int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
struct rte_eth_rss_conf *rss_conf);
/* CGX */
-int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
-int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
index 6d951bc7e2..dab0b8e3cd 100644
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -480,61 +480,6 @@ otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
return rc;
}
-/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */
-void
-otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
-{
- const struct nix_rx_parse_s *rx =
- (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
-
- nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
- cq->tag, cq->q, cq->node, cq->cqe_type);
-
- nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d",
- rx->chan, rx->desc_sizem1);
- nix_dump("W0: imm_copy \t%d\t\texpress \t%d",
- rx->imm_copy, rx->express);
- nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d",
- rx->wqwd, rx->errlev, rx->errcode);
- nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
- rx->latype, rx->lbtype, rx->lctype);
- nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
- rx->ldtype, rx->letype, rx->lftype);
- nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d",
- rx->lgtype, rx->lhtype);
-
- nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
- nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
- rx->l2m, rx->l2b, rx->l3m, rx->l3b);
- nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d",
- rx->vtag0_valid, rx->vtag0_gone);
- nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d",
- rx->vtag1_valid, rx->vtag1_gone);
- nix_dump("W1: pkind \t%d", rx->pkind);
- nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d",
- rx->vtag0_tci, rx->vtag1_tci);
-
- nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
- rx->laflags, rx->lbflags, rx->lcflags);
- nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
- rx->ldflags, rx->leflags, rx->lfflags);
- nix_dump("W2: lgflags \t%d\t\tlhflags \t%d",
- rx->lgflags, rx->lhflags);
-
- nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
- rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
- nix_dump("W3: match_id \t%d", rx->match_id);
-
- nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d",
- rx->laptr, rx->lbptr, rx->lcptr);
- nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d",
- rx->ldptr, rx->leptr, rx->lfptr);
- nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
-
- nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
- rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
-}
-
static uint8_t
prepare_nix_tm_reg_dump(uint16_t hw_lvl, uint16_t schq, uint16_t link,
uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index 30a823c8a7..e390629b2f 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -360,8 +360,6 @@ int otx2_flow_parse_item_basic(const struct rte_flow_item *item,
struct otx2_flow_item_info *info,
struct rte_flow_error *error);
-void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask);
-
int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow,
struct otx2_mbox *mbox,
struct otx2_parse_state *pst,
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
index 9a0a5f9fb4..79541c86c0 100644
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -432,24 +432,6 @@ otx2_flow_parse_item_basic(const struct rte_flow_item *item,
return 0;
}
-void
-otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
-{
- uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
- int i, j = 0;
-
- for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) {
- if (nibble_mask & (1 << i)) {
- nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
- cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
- j += 1;
- }
- }
-
- data[0] = cdata[0];
- data[1] = cdata[1];
-}
-
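The compression above deserves a worked example, since the index arithmetic is easy to misread: data[] is a 128-bit key viewed as 32 nibbles (16 per u64 word), and every nibble whose index bit is set in nibble_mask is repacked contiguously from output nibble 0. A standalone toy that reproduces it — 32 replaces NPC_MAX_KEY_NIBBLES purely so the sketch compiles on its own:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void keyx_compress(uint64_t data[2], uint32_t nibble_mask)
{
    uint64_t cdata[2] = { 0, 0 }, nibble;
    int i, j = 0;

    for (i = 0; i < 32; i++) {
        if (nibble_mask & (1u << i)) {
            /* pull nibble i of the 128-bit key ... */
            nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
            /* ... and pack it as output nibble j */
            cdata[j / 16] |= nibble << ((j & 0xf) * 4);
            j++;
        }
    }
    data[0] = cdata[0];
    data[1] = cdata[1];
}

int main(void)
{
    uint64_t key[2] = { 0xa0b, 0 };  /* nibble 0 = 0xb, nibble 2 = 0xa */

    keyx_compress(key, 0x5);         /* keep nibbles 0 and 2 */
    printf("%#" PRIx64 "\n", key[0]); /* 0xab: 0xb at nibble 0, 0xa at nibble 1 */
    return 0;
}
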
static int
flow_first_set_bit(uint64_t slab)
{
diff --git a/drivers/net/pfe/base/pfe.h b/drivers/net/pfe/base/pfe.h
index 0a88e98c1b..884694985d 100644
--- a/drivers/net/pfe/base/pfe.h
+++ b/drivers/net/pfe/base/pfe.h
@@ -312,20 +312,16 @@ enum mac_loop {LB_NONE, LB_EXT, LB_LOCAL};
#endif
void gemac_init(void *base, void *config);
-void gemac_disable_rx_checksum_offload(void *base);
void gemac_enable_rx_checksum_offload(void *base);
void gemac_set_mdc_div(void *base, int mdc_div);
void gemac_set_speed(void *base, enum mac_speed gem_speed);
void gemac_set_duplex(void *base, int duplex);
void gemac_set_mode(void *base, int mode);
void gemac_enable(void *base);
-void gemac_tx_disable(void *base);
-void gemac_tx_enable(void *base);
void gemac_disable(void *base);
void gemac_reset(void *base);
void gemac_set_address(void *base, struct spec_addr *addr);
struct spec_addr gemac_get_address(void *base);
-void gemac_set_loop(void *base, enum mac_loop gem_loop);
void gemac_set_laddr1(void *base, struct pfe_mac_addr *address);
void gemac_set_laddr2(void *base, struct pfe_mac_addr *address);
void gemac_set_laddr3(void *base, struct pfe_mac_addr *address);
@@ -336,7 +332,6 @@ void gemac_clear_laddr1(void *base);
void gemac_clear_laddr2(void *base);
void gemac_clear_laddr3(void *base);
void gemac_clear_laddr4(void *base);
-void gemac_clear_laddrN(void *base, unsigned int entry_index);
struct pfe_mac_addr gemac_get_hash(void *base);
void gemac_set_hash(void *base, struct pfe_mac_addr *hash);
struct pfe_mac_addr gem_get_laddr1(void *base);
@@ -346,24 +341,17 @@ struct pfe_mac_addr gem_get_laddr4(void *base);
struct pfe_mac_addr gem_get_laddrN(void *base, unsigned int entry_index);
void gemac_set_config(void *base, struct gemac_cfg *cfg);
void gemac_allow_broadcast(void *base);
-void gemac_no_broadcast(void *base);
void gemac_enable_1536_rx(void *base);
void gemac_disable_1536_rx(void *base);
int gemac_set_rx(void *base, int mtu);
-void gemac_enable_rx_jmb(void *base);
void gemac_disable_rx_jmb(void *base);
void gemac_enable_stacked_vlan(void *base);
void gemac_disable_stacked_vlan(void *base);
void gemac_enable_pause_rx(void *base);
-void gemac_disable_pause_rx(void *base);
-void gemac_enable_pause_tx(void *base);
-void gemac_disable_pause_tx(void *base);
void gemac_enable_copy_all(void *base);
void gemac_disable_copy_all(void *base);
void gemac_set_bus_width(void *base, int width);
-void gemac_set_wol(void *base, u32 wol_conf);
-void gpi_init(void *base, struct gpi_cfg *cfg);
void gpi_reset(void *base);
void gpi_enable(void *base);
void gpi_disable(void *base);
diff --git a/drivers/net/pfe/pfe_hal.c b/drivers/net/pfe/pfe_hal.c
index 0d25ec0523..303308c35b 100644
--- a/drivers/net/pfe/pfe_hal.c
+++ b/drivers/net/pfe/pfe_hal.c
@@ -118,16 +118,6 @@ gemac_enable_rx_checksum_offload(__rte_unused void *base)
/*Do not find configuration to do this */
}
-/* Disable Rx Checksum Engine.
- *
- * @param[in] base GEMAC base address.
- */
-void
-gemac_disable_rx_checksum_offload(__rte_unused void *base)
-{
- /*Do not find configuration to do this */
-}
-
/* GEMAC set speed.
* @param[in] base GEMAC base address
* @param[in] speed GEMAC speed (10, 100 or 1000 Mbps)
@@ -214,23 +204,6 @@ gemac_disable(void *base)
EMAC_ECNTRL_REG);
}
-/* GEMAC TX disable function.
- * @param[in] base GEMAC base address
- */
-void
-gemac_tx_disable(void *base)
-{
- writel(readl(base + EMAC_TCNTRL_REG) | EMAC_TCNTRL_GTS, base +
- EMAC_TCNTRL_REG);
-}
-
-void
-gemac_tx_enable(void *base)
-{
- writel(readl(base + EMAC_TCNTRL_REG) & ~EMAC_TCNTRL_GTS, base +
- EMAC_TCNTRL_REG);
-}
-
/* Sets the hash register of the MAC.
* This register is used for matching unicast and multicast frames.
*
@@ -264,40 +237,6 @@ gemac_set_laddrN(void *base, struct pfe_mac_addr *address,
}
}
-void
-gemac_clear_laddrN(void *base, unsigned int entry_index)
-{
- if (entry_index < 1 || entry_index > EMAC_SPEC_ADDR_MAX)
- return;
-
- entry_index = entry_index - 1;
- if (entry_index < 1) {
- writel(0, base + EMAC_PHY_ADDR_LOW);
- writel(0, base + EMAC_PHY_ADDR_HIGH);
- } else {
- writel(0, base + ((entry_index - 1) * 8) + EMAC_SMAC_0_0);
- writel(0, base + ((entry_index - 1) * 8) + EMAC_SMAC_0_1);
- }
-}
-
-/* Set the loopback mode of the MAC. This can be either no loopback for
- * normal operation, local loopback through MAC internal loopback module or PHY
- * loopback for external loopback through a PHY. This asserts the external
- * loop pin.
- *
- * @param[in] base GEMAC base address.
- * @param[in] gem_loop Loopback mode to be enabled. LB_LOCAL - MAC
- * Loopback,
- * LB_EXT - PHY Loopback.
- */
-void
-gemac_set_loop(void *base, __rte_unused enum mac_loop gem_loop)
-{
- pr_info("%s()\n", __func__);
- writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_LOOP, (base +
- EMAC_RCNTRL_REG));
-}
-
/* GEMAC allow frames
* @param[in] base GEMAC base address
*/
@@ -328,16 +267,6 @@ gemac_allow_broadcast(void *base)
EMAC_RCNTRL_REG);
}
-/* GEMAC no broadcast function.
- * @param[in] base GEMAC base address
- */
-void
-gemac_no_broadcast(void *base)
-{
- writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_BC_REJ, base +
- EMAC_RCNTRL_REG);
-}
-
/* GEMAC enable 1536 rx function.
* @param[in] base GEMAC base address
*/
@@ -373,21 +302,6 @@ gemac_set_rx(void *base, int mtu)
return 0;
}
-/* GEMAC enable jumbo function.
- * @param[in] base GEMAC base address
- */
-void
-gemac_enable_rx_jmb(void *base)
-{
- if (pfe_svr == SVR_LS1012A_REV1) {
- PFE_PMD_ERR("Jumbo not supported on Rev1");
- return;
- }
-
- writel((readl(base + EMAC_RCNTRL_REG) & PFE_MTU_RESET_MASK) |
- (JUMBO_FRAME_SIZE << 16), base + EMAC_RCNTRL_REG);
-}
-
/* GEMAC enable stacked vlan function.
* @param[in] base GEMAC base address
*/
@@ -407,50 +321,6 @@ gemac_enable_pause_rx(void *base)
base + EMAC_RCNTRL_REG);
}
-/* GEMAC disable pause rx function.
- * @param[in] base GEMAC base address
- */
-void
-gemac_disable_pause_rx(void *base)
-{
- writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_FCE,
- base + EMAC_RCNTRL_REG);
-}
-
-/* GEMAC enable pause tx function.
- * @param[in] base GEMAC base address
- */
-void
-gemac_enable_pause_tx(void *base)
-{
- writel(EMAC_RX_SECTION_EMPTY_V, base + EMAC_RX_SECTION_EMPTY);
-}
-
-/* GEMAC disable pause tx function.
- * @param[in] base GEMAC base address
- */
-void
-gemac_disable_pause_tx(void *base)
-{
- writel(0x0, base + EMAC_RX_SECTION_EMPTY);
-}
-
-/* GEMAC wol configuration
- * @param[in] base GEMAC base address
- * @param[in] wol_conf WoL register configuration
- */
-void
-gemac_set_wol(void *base, u32 wol_conf)
-{
- u32 val = readl(base + EMAC_ECNTRL_REG);
-
- if (wol_conf)
- val |= (EMAC_ECNTRL_MAGIC_ENA | EMAC_ECNTRL_SLEEP);
- else
- val &= ~(EMAC_ECNTRL_MAGIC_ENA | EMAC_ECNTRL_SLEEP);
- writel(val, base + EMAC_ECNTRL_REG);
-}
-
/* Sets Gemac bus width to 64bit
* @param[in] base GEMAC base address
* @param[in] width gemac bus width to be set possible values are 32/64/128
@@ -488,20 +358,6 @@ gemac_set_config(void *base, struct gemac_cfg *cfg)
/**************************** GPI ***************************/
-/* Initializes a GPI block.
- * @param[in] base GPI base address
- * @param[in] cfg GPI configuration
- */
-void
-gpi_init(void *base, struct gpi_cfg *cfg)
-{
- gpi_reset(base);
-
- gpi_disable(base);
-
- gpi_set_config(base, cfg);
-}
-
/* Resets a GPI block.
* @param[in] base GPI base address
*/
diff --git a/drivers/net/pfe/pfe_hif_lib.c b/drivers/net/pfe/pfe_hif_lib.c
index 799050dce3..83edbd64fc 100644
--- a/drivers/net/pfe/pfe_hif_lib.c
+++ b/drivers/net/pfe/pfe_hif_lib.c
@@ -318,26 +318,6 @@ hif_lib_client_register(struct hif_client_s *client)
return err;
}
-int
-hif_lib_client_unregister(struct hif_client_s *client)
-{
- struct pfe *pfe = client->pfe;
- u32 client_id = client->id;
-
- PFE_PMD_INFO("client: %p, client_id: %d, txQ_depth: %d, rxQ_depth: %d",
- client, client->id, client->tx_qsize, client->rx_qsize);
-
- rte_spinlock_lock(&pfe->hif.lock);
- hif_lib_indicate_hif(&pfe->hif, REQUEST_CL_UNREGISTER, client->id, 0);
-
- hif_lib_client_release_tx_buffers(client);
- hif_lib_client_release_rx_buffers(client);
- pfe->hif_client[client_id] = NULL;
- rte_spinlock_unlock(&pfe->hif.lock);
-
- return 0;
-}
-
int
hif_lib_event_handler_start(struct hif_client_s *client, int event,
int qno)
diff --git a/drivers/net/pfe/pfe_hif_lib.h b/drivers/net/pfe/pfe_hif_lib.h
index d7c0606943..c89c8fed74 100644
--- a/drivers/net/pfe/pfe_hif_lib.h
+++ b/drivers/net/pfe/pfe_hif_lib.h
@@ -161,7 +161,6 @@ extern unsigned int emac_txq_cnt;
int pfe_hif_lib_init(struct pfe *pfe);
void pfe_hif_lib_exit(struct pfe *pfe);
int hif_lib_client_register(struct hif_client_s *client);
-int hif_lib_client_unregister(struct hif_client_s *client);
void hif_lib_xmit_pkt(struct hif_client_s *client, unsigned int qno,
void *data, void *data1, unsigned int len,
u32 client_ctrl, unsigned int flags, void *client_data);
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 6c8e6d4072..b86674fdff 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -1027,7 +1027,6 @@ void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
int ecore_configure_pf_max_bandwidth(struct ecore_dev *p_dev, u8 max_bw);
int ecore_configure_pf_min_bandwidth(struct ecore_dev *p_dev, u8 min_bw);
-void ecore_clean_wfq_db(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
int ecore_device_num_engines(struct ecore_dev *p_dev);
int ecore_device_num_ports(struct ecore_dev *p_dev);
void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
@@ -1055,7 +1054,6 @@ u16 ecore_get_qm_vport_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl);
const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
/* doorbell recovery mechanism */
-void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn);
void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn,
enum ecore_db_rec_exec);
@@ -1091,7 +1089,6 @@ enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
/* Utility functions for dumping the content of the NIG LLH filters */
enum _ecore_status_t ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid);
-enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev);
/**
* @brief ecore_set_platform_str - Set the debug dump platform string.
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index d3025724b6..2fe607d1fb 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -242,13 +242,6 @@ static struct ecore_tid_seg *ecore_cxt_tid_seg_info(struct ecore_hwfn *p_hwfn,
return OSAL_NULL;
}
-static void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
-{
- struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
-
- p_mgr->srq_count = num_srqs;
-}
-
u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn)
{
struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
@@ -283,31 +276,6 @@ u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
return p_hwfn->p_cxt_mngr->acquired[type].start_cid;
}
-u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn,
- enum protocol_type type)
-{
- u32 cnt = 0;
- int i;
-
- for (i = 0; i < TASK_SEGMENTS; i++)
- cnt += p_hwfn->p_cxt_mngr->conn_cfg[type].tid_seg[i].count;
-
- return cnt;
-}
-
-static OSAL_INLINE void
-ecore_cxt_set_proto_tid_count(struct ecore_hwfn *p_hwfn,
- enum protocol_type proto,
- u8 seg, u8 seg_type, u32 count, bool has_fl)
-{
- struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
- struct ecore_tid_seg *p_seg = &p_mngr->conn_cfg[proto].tid_seg[seg];
-
- p_seg->count = count;
- p_seg->has_fl_mem = has_fl;
- p_seg->type = seg_type;
-}
-
/* the *p_line parameter must be either 0 for the first invocation or the
* value returned in the previous invocation.
*/
@@ -1905,11 +1873,6 @@ void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid, u8 vfid)
cid, rel_cid, vfid, type);
}
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid)
-{
- _ecore_cxt_release_cid(p_hwfn, cid, ECORE_CXT_PF_CID);
-}
-
enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
struct ecore_cxt_info *p_info)
{
@@ -1987,198 +1950,6 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
return ECORE_SUCCESS;
}
-/* This function is very RoCE oriented, if another protocol in the future
- * will want this feature we'll need to modify the function to be more generic
- */
-enum _ecore_status_t
-ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
- enum ecore_cxt_elem_type elem_type,
- u32 iid)
-{
- u32 reg_offset, shadow_line, elem_size, hw_p_size, elems_per_p, line;
- struct ecore_ilt_client_cfg *p_cli;
- struct ecore_ilt_cli_blk *p_blk;
- struct ecore_ptt *p_ptt;
- dma_addr_t p_phys;
- u64 ilt_hw_entry;
- void *p_virt;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- switch (elem_type) {
- case ECORE_ELEM_CXT:
- p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
- elem_size = CONN_CXT_SIZE(p_hwfn);
- p_blk = &p_cli->pf_blks[CDUC_BLK];
- break;
- case ECORE_ELEM_SRQ:
- p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM];
- elem_size = SRQ_CXT_SIZE;
- p_blk = &p_cli->pf_blks[SRQ_BLK];
- break;
- case ECORE_ELEM_TASK:
- p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
- elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
- p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)];
- break;
- default:
- DP_NOTICE(p_hwfn, false,
- "ECORE_INVALID elem type = %d", elem_type);
- return ECORE_INVAL;
- }
-
- /* Calculate line in ilt */
- hw_p_size = p_cli->p_size.val;
- elems_per_p = ILT_PAGE_IN_BYTES(hw_p_size) / elem_size;
- line = p_blk->start_line + (iid / elems_per_p);
- shadow_line = line - p_hwfn->p_cxt_mngr->pf_start_line;
-
- /* If line is already allocated, do nothing, otherwise allocate it and
- * write it to the PSWRQ2 registers.
- * This section can be run in parallel from different contexts and thus
- * a mutex protection is needed.
- */
-
- OSAL_MUTEX_ACQUIRE(&p_hwfn->p_cxt_mngr->mutex);
-
- if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr)
- goto out0;
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt) {
- DP_NOTICE(p_hwfn, false,
- "ECORE_TIME_OUT on ptt acquire - dynamic allocation");
- rc = ECORE_TIMEOUT;
- goto out0;
- }
-
- p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
- &p_phys,
- p_blk->real_size_in_page);
- if (!p_virt) {
- rc = ECORE_NOMEM;
- goto out1;
- }
- OSAL_MEM_ZERO(p_virt, p_blk->real_size_in_page);
-
- p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr = p_virt;
- p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr = p_phys;
- p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].size =
- p_blk->real_size_in_page;
-
- /* compute absolute offset */
- reg_offset = PSWRQ2_REG_ILT_MEMORY +
- (line * ILT_REG_SIZE_IN_BYTES * ILT_ENTRY_IN_REGS);
-
- ilt_hw_entry = 0;
- SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL);
- SET_FIELD(ilt_hw_entry,
- ILT_ENTRY_PHY_ADDR,
- (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr >> 12));
-
-/* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */
-
- ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&ilt_hw_entry,
- reg_offset, sizeof(ilt_hw_entry) / sizeof(u32),
- OSAL_NULL /* default parameters */);
-
-out1:
- ecore_ptt_release(p_hwfn, p_ptt);
-out0:
- OSAL_MUTEX_RELEASE(&p_hwfn->p_cxt_mngr->mutex);
-
- return rc;
-}
-
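The only non-obvious step in the removed allocator is locating the ILT line for a given inner id: a page of ILT_PAGE_IN_BYTES(hw_p_size) bytes holds that many / elem_size elements, so element iid lives at start_line + iid / elems_per_page; the mutex plus the virt_addr check then make the page allocation idempotent. The arithmetic on its own, with illustrative numbers and ecore types omitted:

static unsigned int ilt_line(unsigned int start_line, unsigned int page_bytes,
                             unsigned int elem_size, unsigned int iid)
{
    unsigned int elems_per_page = page_bytes / elem_size;

    return start_line + iid / elems_per_page;
}

/* e.g. 32 KiB pages and 128-byte elements give 256 elements per page,
 * so iid 300 maps to start_line + 1 */
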
-/* This function is very RoCE oriented, if another protocol in the future
- * will want this feature we'll need to modify the function to be more generic
- */
-static enum _ecore_status_t
-ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
- enum ecore_cxt_elem_type elem_type,
- u32 start_iid, u32 count)
-{
- u32 start_line, end_line, shadow_start_line, shadow_end_line;
- u32 reg_offset, elem_size, hw_p_size, elems_per_p;
- struct ecore_ilt_client_cfg *p_cli;
- struct ecore_ilt_cli_blk *p_blk;
- u32 end_iid = start_iid + count;
- struct ecore_ptt *p_ptt;
- u64 ilt_hw_entry = 0;
- u32 i;
-
- switch (elem_type) {
- case ECORE_ELEM_CXT:
- p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
- elem_size = CONN_CXT_SIZE(p_hwfn);
- p_blk = &p_cli->pf_blks[CDUC_BLK];
- break;
- case ECORE_ELEM_SRQ:
- p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM];
- elem_size = SRQ_CXT_SIZE;
- p_blk = &p_cli->pf_blks[SRQ_BLK];
- break;
- case ECORE_ELEM_TASK:
- p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
- elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
- p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)];
- break;
- default:
- DP_NOTICE(p_hwfn, false,
- "ECORE_INVALID elem type = %d", elem_type);
- return ECORE_INVAL;
- }
-
- /* Calculate line in ilt */
- hw_p_size = p_cli->p_size.val;
- elems_per_p = ILT_PAGE_IN_BYTES(hw_p_size) / elem_size;
- start_line = p_blk->start_line + (start_iid / elems_per_p);
- end_line = p_blk->start_line + (end_iid / elems_per_p);
- if (((end_iid + 1) / elems_per_p) != (end_iid / elems_per_p))
- end_line--;
-
- shadow_start_line = start_line - p_hwfn->p_cxt_mngr->pf_start_line;
- shadow_end_line = end_line - p_hwfn->p_cxt_mngr->pf_start_line;
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt) {
- DP_NOTICE(p_hwfn, false,
- "ECORE_TIME_OUT on ptt acquire - dynamic allocation");
- return ECORE_TIMEOUT;
- }
-
- for (i = shadow_start_line; i < shadow_end_line; i++) {
- if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr)
- continue;
-
- OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
- p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr,
- p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr,
- p_hwfn->p_cxt_mngr->ilt_shadow[i].size);
-
- p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr = OSAL_NULL;
- p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr = 0;
- p_hwfn->p_cxt_mngr->ilt_shadow[i].size = 0;
-
- /* compute absolute offset */
- reg_offset = PSWRQ2_REG_ILT_MEMORY +
- ((start_line++) * ILT_REG_SIZE_IN_BYTES *
- ILT_ENTRY_IN_REGS);
-
- /* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a
- * wide-bus.
- */
- ecore_dmae_host2grc(p_hwfn, p_ptt,
- (u64)(osal_uintptr_t)&ilt_hw_entry,
- reg_offset,
- sizeof(ilt_hw_entry) / sizeof(u32),
- OSAL_NULL /* default parameters */);
- }
-
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return ECORE_SUCCESS;
-}
-
static u16 ecore_blk_calculate_pages(struct ecore_ilt_cli_blk *p_blk)
{
if (p_blk->real_size_in_page == 0)
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 1a539bbc71..dc5f49ef57 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -38,9 +38,6 @@ u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
enum protocol_type type,
u32 *vf_cid);
-u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn,
- enum protocol_type type);
-
u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
enum protocol_type type);
u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
@@ -135,14 +132,6 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
#define ECORE_CXT_PF_CID (0xff)
-/**
- * @brief ecore_cxt_release - Release a cid
- *
- * @param p_hwfn
- * @param cid
- */
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid);
-
/**
* @brief ecore_cxt_release - Release a cid belonging to a vf-queue
*
@@ -181,22 +170,6 @@ enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
enum protocol_type type,
u32 *p_cid, u8 vfid);
-/**
- * @brief ecore_cxt_get_tid_mem_info - function checks if the
- * page containing the iid in the ilt is already
- * allocated, if it is not it allocates the page.
- *
- * @param p_hwfn
- * @param elem_type
- * @param iid
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
- enum ecore_cxt_elem_type elem_type,
- u32 iid);
-
/**
* @brief ecore_cxt_free_proto_ilt - function frees ilt pages
* associated with the protocol passed.
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 31234f18cf..024aad3f2c 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -70,23 +70,6 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
return !!(ethtype && (proto_id == ECORE_ETH_TYPE_DEFAULT));
}
-static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
- u16 proto_id, bool ieee)
-{
- bool port;
-
- if (!p_hwfn->p_dcbx_info->iwarp_port)
- return false;
-
- if (ieee)
- port = ecore_dcbx_ieee_app_port(app_info_bitmap,
- DCBX_APP_SF_IEEE_TCP_PORT);
- else
- port = ecore_dcbx_app_port(app_info_bitmap);
-
- return !!(port && (proto_id == p_hwfn->p_dcbx_info->iwarp_port));
-}
-
static void
ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
struct ecore_dcbx_results *p_data)
@@ -1323,40 +1306,6 @@ enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-enum _ecore_status_t ecore_lldp_register_tlv(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_lldp_agent agent,
- u8 tlv_type)
-{
- u32 mb_param = 0, mcp_resp = 0, mcp_param = 0, val = 0;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- switch (agent) {
- case ECORE_LLDP_NEAREST_BRIDGE:
- val = LLDP_NEAREST_BRIDGE;
- break;
- case ECORE_LLDP_NEAREST_NON_TPMR_BRIDGE:
- val = LLDP_NEAREST_NON_TPMR_BRIDGE;
- break;
- case ECORE_LLDP_NEAREST_CUSTOMER_BRIDGE:
- val = LLDP_NEAREST_CUSTOMER_BRIDGE;
- break;
- default:
- DP_ERR(p_hwfn, "Invalid agent type %d\n", agent);
- return ECORE_INVAL;
- }
-
- SET_MFW_FIELD(mb_param, DRV_MB_PARAM_LLDP_AGENT, val);
- SET_MFW_FIELD(mb_param, DRV_MB_PARAM_LLDP_TLV_RX_TYPE, tlv_type);
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_REGISTER_LLDP_TLVS_RX,
- mb_param, &mcp_resp, &mcp_param);
- if (rc != ECORE_SUCCESS)
- DP_NOTICE(p_hwfn, false, "Failed to register TLV\n");
-
- return rc;
-}
-
enum _ecore_status_t
ecore_lldp_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
@@ -1390,218 +1339,3 @@ ecore_lldp_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
return rc;
}
-
-enum _ecore_status_t
-ecore_lldp_get_params(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_lldp_config_params *p_params)
-{
- struct lldp_config_params_s lldp_params;
- u32 addr, val;
- int i;
-
- switch (p_params->agent) {
- case ECORE_LLDP_NEAREST_BRIDGE:
- val = LLDP_NEAREST_BRIDGE;
- break;
- case ECORE_LLDP_NEAREST_NON_TPMR_BRIDGE:
- val = LLDP_NEAREST_NON_TPMR_BRIDGE;
- break;
- case ECORE_LLDP_NEAREST_CUSTOMER_BRIDGE:
- val = LLDP_NEAREST_CUSTOMER_BRIDGE;
- break;
- default:
- DP_ERR(p_hwfn, "Invalid agent type %d\n", p_params->agent);
- return ECORE_INVAL;
- }
-
- addr = p_hwfn->mcp_info->port_addr +
- offsetof(struct public_port, lldp_config_params[val]);
-
- ecore_memcpy_from(p_hwfn, p_ptt, &lldp_params, addr,
- sizeof(lldp_params));
-
- p_params->tx_interval = GET_MFW_FIELD(lldp_params.config,
- LLDP_CONFIG_TX_INTERVAL);
- p_params->tx_hold = GET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_HOLD);
- p_params->tx_credit = GET_MFW_FIELD(lldp_params.config,
- LLDP_CONFIG_MAX_CREDIT);
- p_params->rx_enable = GET_MFW_FIELD(lldp_params.config,
- LLDP_CONFIG_ENABLE_RX);
- p_params->tx_enable = GET_MFW_FIELD(lldp_params.config,
- LLDP_CONFIG_ENABLE_TX);
-
- OSAL_MEMCPY(p_params->chassis_id_tlv, lldp_params.local_chassis_id,
- sizeof(p_params->chassis_id_tlv));
- for (i = 0; i < ECORE_LLDP_CHASSIS_ID_STAT_LEN; i++)
- p_params->chassis_id_tlv[i] =
- OSAL_BE32_TO_CPU(p_params->chassis_id_tlv[i]);
-
- OSAL_MEMCPY(p_params->port_id_tlv, lldp_params.local_port_id,
- sizeof(p_params->port_id_tlv));
- for (i = 0; i < ECORE_LLDP_PORT_ID_STAT_LEN; i++)
- p_params->port_id_tlv[i] =
- OSAL_BE32_TO_CPU(p_params->port_id_tlv[i]);
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t
-ecore_lldp_set_params(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_lldp_config_params *p_params)
-{
- u32 mb_param = 0, mcp_resp = 0, mcp_param = 0;
- struct lldp_config_params_s lldp_params;
- enum _ecore_status_t rc = ECORE_SUCCESS;
- u32 addr, val;
- int i;
-
- switch (p_params->agent) {
- case ECORE_LLDP_NEAREST_BRIDGE:
- val = LLDP_NEAREST_BRIDGE;
- break;
- case ECORE_LLDP_NEAREST_NON_TPMR_BRIDGE:
- val = LLDP_NEAREST_NON_TPMR_BRIDGE;
- break;
- case ECORE_LLDP_NEAREST_CUSTOMER_BRIDGE:
- val = LLDP_NEAREST_CUSTOMER_BRIDGE;
- break;
- default:
- DP_ERR(p_hwfn, "Invalid agent type %d\n", p_params->agent);
- return ECORE_INVAL;
- }
-
- SET_MFW_FIELD(mb_param, DRV_MB_PARAM_LLDP_AGENT, val);
- addr = p_hwfn->mcp_info->port_addr +
- offsetof(struct public_port, lldp_config_params[val]);
-
- OSAL_MEMSET(&lldp_params, 0, sizeof(lldp_params));
- SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_TX_INTERVAL,
- p_params->tx_interval);
- SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_HOLD, p_params->tx_hold);
- SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_MAX_CREDIT,
- p_params->tx_credit);
- SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_ENABLE_RX,
- !!p_params->rx_enable);
- SET_MFW_FIELD(lldp_params.config, LLDP_CONFIG_ENABLE_TX,
- !!p_params->tx_enable);
-
- for (i = 0; i < ECORE_LLDP_CHASSIS_ID_STAT_LEN; i++)
- p_params->chassis_id_tlv[i] =
- OSAL_CPU_TO_BE32(p_params->chassis_id_tlv[i]);
- OSAL_MEMCPY(lldp_params.local_chassis_id, p_params->chassis_id_tlv,
- sizeof(lldp_params.local_chassis_id));
-
- for (i = 0; i < ECORE_LLDP_PORT_ID_STAT_LEN; i++)
- p_params->port_id_tlv[i] =
- OSAL_CPU_TO_BE32(p_params->port_id_tlv[i]);
- OSAL_MEMCPY(lldp_params.local_port_id, p_params->port_id_tlv,
- sizeof(lldp_params.local_port_id));
-
- ecore_memcpy_to(p_hwfn, p_ptt, addr, &lldp_params, sizeof(lldp_params));
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_LLDP,
- mb_param, &mcp_resp, &mcp_param);
- if (rc != ECORE_SUCCESS)
- DP_NOTICE(p_hwfn, false, "SET_LLDP failed, error = %d\n", rc);
-
- return rc;
-}
-
-enum _ecore_status_t
-ecore_lldp_set_system_tlvs(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_lldp_sys_tlvs *p_params)
-{
- u32 mb_param = 0, mcp_resp = 0, mcp_param = 0;
- enum _ecore_status_t rc = ECORE_SUCCESS;
- struct lldp_system_tlvs_buffer_s lld_tlv_buf;
- u32 addr, *p_val;
- u8 len;
- int i;
-
- p_val = (u32 *)p_params->buf;
- for (i = 0; i < ECORE_LLDP_SYS_TLV_SIZE / 4; i++)
- p_val[i] = OSAL_CPU_TO_BE32(p_val[i]);
-
- OSAL_MEMSET(&lld_tlv_buf, 0, sizeof(lld_tlv_buf));
- SET_MFW_FIELD(lld_tlv_buf.flags, LLDP_SYSTEM_TLV_VALID, 1);
- SET_MFW_FIELD(lld_tlv_buf.flags, LLDP_SYSTEM_TLV_MANDATORY,
- !!p_params->discard_mandatory_tlv);
- SET_MFW_FIELD(lld_tlv_buf.flags, LLDP_SYSTEM_TLV_LENGTH,
- p_params->buf_size);
- len = ECORE_LLDP_SYS_TLV_SIZE / 2;
- OSAL_MEMCPY(lld_tlv_buf.data, p_params->buf, len);
-
- addr = p_hwfn->mcp_info->port_addr +
- offsetof(struct public_port, system_lldp_tlvs_buf);
- ecore_memcpy_to(p_hwfn, p_ptt, addr, &lld_tlv_buf, sizeof(lld_tlv_buf));
-
- if (p_params->buf_size > len) {
- addr = p_hwfn->mcp_info->port_addr +
- offsetof(struct public_port, system_lldp_tlvs_buf2);
- ecore_memcpy_to(p_hwfn, p_ptt, addr, &p_params->buf[len],
- ECORE_LLDP_SYS_TLV_SIZE / 2);
- }
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_LLDP,
- mb_param, &mcp_resp, &mcp_param);
- if (rc != ECORE_SUCCESS)
- DP_NOTICE(p_hwfn, false, "SET_LLDP failed, error = %d\n", rc);
-
- return rc;
-}
-
-enum _ecore_status_t
-ecore_dcbx_get_dscp_priority(struct ecore_hwfn *p_hwfn,
- u8 dscp_index, u8 *p_dscp_pri)
-{
- struct ecore_dcbx_get *p_dcbx_info;
- enum _ecore_status_t rc;
-
- if (dscp_index >= ECORE_DCBX_DSCP_SIZE) {
- DP_ERR(p_hwfn, "Invalid dscp index %d\n", dscp_index);
- return ECORE_INVAL;
- }
-
- p_dcbx_info = OSAL_ALLOC(p_hwfn->p_dev, GFP_KERNEL,
- sizeof(*p_dcbx_info));
- if (!p_dcbx_info)
- return ECORE_NOMEM;
-
- OSAL_MEMSET(p_dcbx_info, 0, sizeof(*p_dcbx_info));
- rc = ecore_dcbx_query_params(p_hwfn, p_dcbx_info,
- ECORE_DCBX_OPERATIONAL_MIB);
- if (rc) {
- OSAL_FREE(p_hwfn->p_dev, p_dcbx_info);
- return rc;
- }
-
- *p_dscp_pri = p_dcbx_info->dscp.dscp_pri_map[dscp_index];
- OSAL_FREE(p_hwfn->p_dev, p_dcbx_info);
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t
-ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- u8 dscp_index, u8 pri_val)
-{
- struct ecore_dcbx_set dcbx_set;
- enum _ecore_status_t rc;
-
- if (dscp_index >= ECORE_DCBX_DSCP_SIZE ||
- pri_val >= ECORE_MAX_PFC_PRIORITIES) {
- DP_ERR(p_hwfn, "Invalid dscp params: index = %d pri = %d\n",
- dscp_index, pri_val);
- return ECORE_INVAL;
- }
-
- OSAL_MEMSET(&dcbx_set, 0, sizeof(dcbx_set));
- rc = ecore_dcbx_get_config_params(p_hwfn, &dcbx_set);
- if (rc)
- return rc;
-
- dcbx_set.override_flags = ECORE_DCBX_OVERRIDE_DSCP_CFG;
- dcbx_set.dscp.dscp_pri_map[dscp_index] = pri_val;
-
- return ecore_dcbx_config_params(p_hwfn, p_ptt, &dcbx_set, 1);
-}
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 6fad2ecc2e..5d7cd1b48b 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -211,33 +211,6 @@ enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *,
struct ecore_dcbx_set *,
bool);
-enum _ecore_status_t ecore_lldp_register_tlv(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_lldp_agent agent,
- u8 tlv_type);
-
-enum _ecore_status_t
-ecore_lldp_get_params(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_lldp_config_params *p_params);
-
-enum _ecore_status_t
-ecore_lldp_set_params(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_lldp_config_params *p_params);
-
-enum _ecore_status_t
-ecore_lldp_set_system_tlvs(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_lldp_sys_tlvs *p_params);
-
-/* Returns priority value for a given dscp index */
-enum _ecore_status_t
-ecore_dcbx_get_dscp_priority(struct ecore_hwfn *p_hwfn,
- u8 dscp_index, u8 *p_dscp_pri);
-
-/* Sets priority value for a given dscp index */
-enum _ecore_status_t
-ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- u8 dscp_index, u8 pri_val);
-
static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
{DCBX_PROTOCOL_ISCSI, "ISCSI", ECORE_PCI_ISCSI},
{DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e895dee405..96676055b7 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -263,27 +263,6 @@ void ecore_db_recovery_teardown(struct ecore_hwfn *p_hwfn)
p_hwfn->db_recovery_info.db_recovery_counter = 0;
}
-/* print the content of the doorbell recovery mechanism */
-void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn)
-{
- struct ecore_db_recovery_entry *db_entry = OSAL_NULL;
-
- DP_NOTICE(p_hwfn, false,
- "Dispalying doorbell recovery database. Counter was %d\n",
- p_hwfn->db_recovery_info.db_recovery_counter);
-
- /* protect the list */
- OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock);
- OSAL_LIST_FOR_EACH_ENTRY(db_entry,
- &p_hwfn->db_recovery_info.list,
- list_entry,
- struct ecore_db_recovery_entry) {
- ecore_db_recovery_dp_entry(p_hwfn, db_entry, "Printing");
- }
-
- OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock);
-}
-
/* ring the doorbell of a single doorbell recovery entry */
void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn,
struct ecore_db_recovery_entry *db_entry,
@@ -823,16 +802,6 @@ static enum _ecore_status_t ecore_llh_hw_init_pf(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-u8 ecore_llh_get_num_ppfid(struct ecore_dev *p_dev)
-{
- return p_dev->p_llh_info->num_ppfid;
-}
-
-enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev)
-{
- return p_dev->l2_affin_hint ? ECORE_ENG1 : ECORE_ENG0;
-}
-
/* TBD - should be removed when these definitions are available in reg_addr.h */
#define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_MASK 0x3
#define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_SHIFT 0
@@ -1204,76 +1173,6 @@ ecore_llh_protocol_filter_to_hilo(struct ecore_dev *p_dev,
return ECORE_SUCCESS;
}
-enum _ecore_status_t
-ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
- enum ecore_llh_prot_filter_type_t type,
- u16 source_port_or_eth_type, u16 dest_port)
-{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
- u8 filter_idx, abs_ppfid, type_bitmap;
- char str[32];
- union ecore_llh_filter filter;
- u32 high, low, ref_cnt;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- if (p_ptt == OSAL_NULL)
- return ECORE_AGAIN;
-
- if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
- goto out;
-
- rc = ecore_llh_protocol_filter_stringify(p_dev, type,
- source_port_or_eth_type,
- dest_port, str, sizeof(str));
- if (rc != ECORE_SUCCESS)
- goto err;
-
- OSAL_MEM_ZERO(&filter, sizeof(filter));
- filter.protocol.type = type;
- filter.protocol.source_port_or_eth_type = source_port_or_eth_type;
- filter.protocol.dest_port = dest_port;
- rc = ecore_llh_shadow_add_filter(p_dev, ppfid,
- ECORE_LLH_FILTER_TYPE_PROTOCOL,
- &filter, &filter_idx, &ref_cnt);
- if (rc != ECORE_SUCCESS)
- goto err;
-
- rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
- if (rc != ECORE_SUCCESS)
- goto err;
-
- /* Configure the LLH only in case of a new the filter */
- if (ref_cnt == 1) {
- rc = ecore_llh_protocol_filter_to_hilo(p_dev, type,
- source_port_or_eth_type,
- dest_port, &high, &low);
- if (rc != ECORE_SUCCESS)
- goto err;
-
- type_bitmap = 0x1 << type;
- rc = ecore_llh_add_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
- type_bitmap, high, low);
- if (rc != ECORE_SUCCESS)
- goto err;
- }
-
- DP_VERBOSE(p_dev, ECORE_MSG_SP,
- "LLH: Added protocol filter [%s] to ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
- str, ppfid, abs_ppfid, filter_idx, ref_cnt);
-
- goto out;
-
-err:
- DP_NOTICE(p_hwfn, false,
- "LLH: Failed to add protocol filter [%s] to ppfid %hhd\n",
- str, ppfid);
-out:
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return rc;
-}
-
void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
u8 mac_addr[ETH_ALEN])
{
@@ -1326,66 +1225,6 @@ void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
ecore_ptt_release(p_hwfn, p_ptt);
}
-void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
- enum ecore_llh_prot_filter_type_t type,
- u16 source_port_or_eth_type,
- u16 dest_port)
-{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
- u8 filter_idx, abs_ppfid;
- char str[32];
- union ecore_llh_filter filter;
- enum _ecore_status_t rc = ECORE_SUCCESS;
- u32 ref_cnt;
-
- if (p_ptt == OSAL_NULL)
- return;
-
- if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
- goto out;
-
- rc = ecore_llh_protocol_filter_stringify(p_dev, type,
- source_port_or_eth_type,
- dest_port, str, sizeof(str));
- if (rc != ECORE_SUCCESS)
- goto err;
-
- OSAL_MEM_ZERO(&filter, sizeof(filter));
- filter.protocol.type = type;
- filter.protocol.source_port_or_eth_type = source_port_or_eth_type;
- filter.protocol.dest_port = dest_port;
- rc = ecore_llh_shadow_remove_filter(p_dev, ppfid, &filter, &filter_idx,
- &ref_cnt);
- if (rc != ECORE_SUCCESS)
- goto err;
-
- rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
- if (rc != ECORE_SUCCESS)
- goto err;
-
- /* Remove from the LLH in case the filter is not in use */
- if (!ref_cnt) {
- rc = ecore_llh_remove_filter(p_hwfn, p_ptt, abs_ppfid,
- filter_idx);
- if (rc != ECORE_SUCCESS)
- goto err;
- }
-
- DP_VERBOSE(p_dev, ECORE_MSG_SP,
- "LLH: Removed protocol filter [%s] from ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
- str, ppfid, abs_ppfid, filter_idx, ref_cnt);
-
- goto out;
-
-err:
- DP_NOTICE(p_dev, false,
- "LLH: Failed to remove protocol filter [%s] from ppfid %hhd\n",
- str, ppfid);
-out:
- ecore_ptt_release(p_hwfn, p_ptt);
-}
-
void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid)
{
struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
@@ -1419,18 +1258,6 @@ void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid)
ecore_ptt_release(p_hwfn, p_ptt);
}
-void ecore_llh_clear_all_filters(struct ecore_dev *p_dev)
-{
- u8 ppfid;
-
- if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) &&
- !OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
- return;
-
- for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++)
- ecore_llh_clear_ppfid_filters(p_dev, ppfid);
-}
-
enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u32 addr,
u32 val)
@@ -1497,20 +1324,6 @@ ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
return rc;
}
-enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev)
-{
- u8 ppfid;
- enum _ecore_status_t rc;
-
- for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) {
- rc = ecore_llh_dump_ppfid(p_dev, ppfid);
- if (rc != ECORE_SUCCESS)
- return rc;
- }
-
- return ECORE_SUCCESS;
-}
-
/******************************* NIG LLH - End ********************************/
/* Configurable */
@@ -4000,18 +3813,6 @@ static void ecore_hw_timers_stop(struct ecore_dev *p_dev,
(u8)ecore_rd(p_hwfn, p_ptt, TM_REG_PF_SCAN_ACTIVE_TASK));
}
-void ecore_hw_timers_stop_all(struct ecore_dev *p_dev)
-{
- int j;
-
- for_each_hwfn(p_dev, j) {
- struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
- struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
-
- ecore_hw_timers_stop(p_dev, p_hwfn, p_ptt);
- }
-}
-
static enum _ecore_status_t ecore_verify_reg_val(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 addr, u32 expected_val)
@@ -5481,16 +5282,6 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
#define ECORE_MAX_DEVICE_NAME_LEN (8)
-void ecore_get_dev_name(struct ecore_dev *p_dev, u8 *name, u8 max_chars)
-{
- u8 n;
-
- n = OSAL_MIN_T(u8, max_chars, ECORE_MAX_DEVICE_NAME_LEN);
- OSAL_SNPRINTF((char *)name, n, "%s %c%d",
- ECORE_IS_BB(p_dev) ? "BB" : "AH",
- 'A' + p_dev->chip_rev, (int)p_dev->chip_metal);
-}
-
static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
@@ -5585,27 +5376,6 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-#ifndef LINUX_REMOVE
-void ecore_prepare_hibernate(struct ecore_dev *p_dev)
-{
- int j;
-
- if (IS_VF(p_dev))
- return;
-
- for_each_hwfn(p_dev, j) {
- struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
- "Mark hw/fw uninitialized\n");
-
- p_hwfn->hw_init_done = false;
-
- ecore_ptt_invalidate(p_hwfn);
- }
-}
-#endif
-
static enum _ecore_status_t
ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
void OSAL_IOMEM *p_doorbells, u64 db_phys_addr,
@@ -6219,23 +5989,6 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-enum _ecore_status_t
-ecore_llh_set_function_as_default(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- if (OSAL_GET_BIT(ECORE_MF_NEED_DEF_PF, &p_hwfn->p_dev->mf_bits)) {
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR,
- 1 << p_hwfn->abs_pf_id / 2);
- ecore_wr(p_hwfn, p_ptt, PRS_REG_MSG_INFO, 0);
- return ECORE_SUCCESS;
- }
-
- DP_NOTICE(p_hwfn, false,
- "This function can't be set as default\n");
- return ECORE_INVAL;
-}
-
static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 hw_addr, void *p_eth_qzone,
@@ -6259,46 +6012,6 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-enum _ecore_status_t ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn,
- u16 rx_coal, u16 tx_coal,
- void *p_handle)
-{
- struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)p_handle;
- enum _ecore_status_t rc = ECORE_SUCCESS;
- struct ecore_ptt *p_ptt;
-
- /* TODO - Configuring a single queue's coalescing while
- * claiming all queues abide by the same configuration,
- * for both PF and VF.
- */
-
- if (IS_VF(p_hwfn->p_dev))
- return ecore_vf_pf_set_coalesce(p_hwfn, rx_coal,
- tx_coal, p_cid);
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_AGAIN;
-
- if (rx_coal) {
- rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
- if (rc)
- goto out;
- p_hwfn->p_dev->rx_coalesce_usecs = rx_coal;
- }
-
- if (tx_coal) {
- rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal, p_cid);
- if (rc)
- goto out;
- p_hwfn->p_dev->tx_coalesce_usecs = tx_coal;
- }
-out:
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return rc;
-}
-
enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u16 coalesce,
@@ -6761,20 +6474,6 @@ int ecore_configure_pf_min_bandwidth(struct ecore_dev *p_dev, u8 min_bw)
return rc;
}
-void ecore_clean_wfq_db(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
-{
- struct ecore_mcp_link_state *p_link;
-
- p_link = &p_hwfn->mcp_info->link_output;
-
- if (p_link->min_pf_rate)
- ecore_disable_wfq_for_all_vports(p_hwfn, p_ptt);
-
- OSAL_MEMSET(p_hwfn->qm_info.wfq_data, 0,
- sizeof(*p_hwfn->qm_info.wfq_data) *
- p_hwfn->qm_info.num_vports);
-}
-
int ecore_device_num_engines(struct ecore_dev *p_dev)
{
return ECORE_IS_BB(p_dev) ? 2 : 1;
@@ -6810,8 +6509,3 @@ void ecore_set_platform_str(struct ecore_hwfn *p_hwfn,
len = OSAL_STRLEN(buf_str);
OSAL_SET_PLATFORM_STR(p_hwfn, &buf_str[len], buf_size - len);
}
-
-bool ecore_is_mf_fip_special(struct ecore_dev *p_dev)
-{
- return !!OSAL_GET_BIT(ECORE_MF_FIP_SPECIAL, &p_dev->mf_bits);
-}
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 9ddf502eb9..37a8d99712 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -132,15 +132,6 @@ struct ecore_hw_init_params {
enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
struct ecore_hw_init_params *p_params);
-/**
- * @brief ecore_hw_timers_stop_all -
- *
- * @param p_dev
- *
- * @return void
- */
-void ecore_hw_timers_stop_all(struct ecore_dev *p_dev);
-
/**
* @brief ecore_hw_stop -
*
@@ -162,15 +153,6 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev);
enum _ecore_status_t ecore_hw_stop_fastpath(struct ecore_dev *p_dev);
#ifndef LINUX_REMOVE
-/**
- * @brief ecore_prepare_hibernate -should be called when
- * the system is going into the hibernate state
- *
- * @param p_dev
- *
- */
-void ecore_prepare_hibernate(struct ecore_dev *p_dev);
-
enum ecore_db_rec_width {
DB_REC_WIDTH_32B,
DB_REC_WIDTH_64B,
@@ -488,31 +470,12 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
u8 src_id,
u8 *dst_id);
-/**
- * @brief ecore_llh_get_num_ppfid - Return the allocated number of LLH filter
- * banks that are allocated to the PF.
- *
- * @param p_dev
- *
- * @return u8 - Number of LLH filter banks
- */
-u8 ecore_llh_get_num_ppfid(struct ecore_dev *p_dev);
-
enum ecore_eng {
ECORE_ENG0,
ECORE_ENG1,
ECORE_BOTH_ENG,
};
-/**
- * @brief ecore_llh_get_l2_affinity_hint - Return the hint for the L2 affinity
- *
- * @param p_dev
- *
- * @return enum ecore_eng - L2 affinity hint
- */
-enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev);
-
/**
* @brief ecore_llh_set_ppfid_affinity - Set the engine affinity for the given
* LLH filter bank.
@@ -571,38 +534,6 @@ enum ecore_llh_prot_filter_type_t {
ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT
};
-/**
- * @brief ecore_llh_add_protocol_filter - Add a LLH protocol filter into the
- * given filter bank.
- *
- * @param p_dev
- * @param ppfid - relative within the allocated ppfids ('0' is the default one).
- * @param type - type of filters and comparing
- * @param source_port_or_eth_type - source port or ethertype to add
- * @param dest_port - destination port to add
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
- enum ecore_llh_prot_filter_type_t type,
- u16 source_port_or_eth_type, u16 dest_port);
-
-/**
- * @brief ecore_llh_remove_protocol_filter - Remove a LLH protocol filter from
- * the given filter bank.
- *
- * @param p_dev
- * @param ppfid - relative within the allocated ppfids ('0' is the default one).
- * @param type - type of filters and comparing
- * @param source_port_or_eth_type - source port or ethertype to remove
- * @param dest_port - destination port to remove
- */
-void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
- enum ecore_llh_prot_filter_type_t type,
- u16 source_port_or_eth_type,
- u16 dest_port);
-
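
For reference on the API being retired above, a minimal usage sketch of the
add/remove pair; the ppfid and port numbers are illustrative, and only the
filter type visible in this hunk is used:

	/* Hedged sketch: install and later tear down a UDP src+dest port
	 * filter on the default bank (ppfid 0); ports are hypothetical.
	 */
	if (ecore_llh_add_protocol_filter(p_dev, 0,
					  ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT,
					  4789, 4790) == ECORE_SUCCESS) {
		/* ... filter is live here ... */
		ecore_llh_remove_protocol_filter(p_dev, 0,
						 ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT,
						 4789, 4790);
	}
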
/**
* @brief ecore_llh_clear_ppfid_filters - Remove all LLH filters from the given
* filter bank.
@@ -612,23 +543,6 @@ void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
*/
void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid);
-/**
- * @brief ecore_llh_clear_all_filters - Remove all LLH filters
- *
- * @param p_dev
- */
-void ecore_llh_clear_all_filters(struct ecore_dev *p_dev);
-
-/**
- * @brief ecore_llh_set_function_as_default - set function as default per port
- *
- * @param p_hwfn
- * @param p_ptt
- */
-enum _ecore_status_t
-ecore_llh_set_function_as_default(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
/**
*@brief Cleanup of previous driver remains prior to load
*
@@ -644,39 +558,6 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn,
u16 id,
bool is_vf);
-/**
- * @brief ecore_get_queue_coalesce - Retrieve coalesce value for a given queue.
- *
- * @param p_hwfn
- * @param p_coal - store coalesce value read from the hardware.
- * @param p_handle
- *
- * @return enum _ecore_status_t
- **/
-enum _ecore_status_t
-ecore_get_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 *coal,
- void *handle);
-
-/**
- * @brief ecore_set_queue_coalesce - Configure coalesce parameters for Rx and
- * Tx queue. Coalescing can be configured up to 511 usec, but with varying
- * accuracy [the bigger the value the less accurate], up to an error of
- * 3 usec for the highest values.
- * While the API allows setting coalescing per-qid, all queues sharing a SB
- * should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
- * otherwise configuration would break.
- *
- * @param p_hwfn
- * @param rx_coal - Rx Coalesce value in micro seconds.
- * @param tx_coal - TX Coalesce value in micro seconds.
- * @param p_handle
- *
- * @return enum _ecore_status_t
- **/
-enum _ecore_status_t
-ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal,
- u16 tx_coal, void *p_handle);
-
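
A minimal usage sketch of the removed setter, assuming p_cid is a queue
handle obtained elsewhere; both values are kept inside the same 0-0x7f range
to honour the shared-SB constraint spelled out above:

	/* Hedged sketch: 96 usec Rx / 120 usec Tx, same 0-0x7f range. */
	rc = ecore_set_queue_coalesce(p_hwfn, 96, 120, p_cid);
	if (rc != ECORE_SUCCESS)
		DP_NOTICE(p_hwfn, false, "Failed to set queue coalescing\n");
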
/**
* @brief ecore_pglueb_set_pfid_enable - Enable or disable PCI BUS MASTER
*
@@ -690,12 +571,4 @@ enum _ecore_status_t ecore_pglueb_set_pfid_enable(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
bool b_enable);
-/**
- * @brief Whether FIP discovery fallback special mode is enabled or not.
- *
- * @param cdev
- *
- * @return true if device is in FIP special mode, false otherwise.
- */
-bool ecore_is_mf_fip_special(struct ecore_dev *p_dev);
#endif
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 1db39d6a36..881682df25 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -407,22 +407,6 @@ void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
*(u32 *)&p_ptt->pxp.pretend);
}
-void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
-{
- u16 control = 0;
-
- SET_FIELD(control, PXP_PRETEND_CMD_PORT, 0);
- SET_FIELD(control, PXP_PRETEND_CMD_USE_PORT, 0);
- SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_PORT, 1);
-
- p_ptt->pxp.pretend.control = OSAL_CPU_TO_LE16(control);
-
- REG_WR(p_hwfn,
- ecore_ptt_config_addr(p_ptt) +
- OFFSETOF(struct pxp_ptt_entry, pretend),
- *(u32 *)&p_ptt->pxp.pretend);
-}
-
void ecore_port_fid_pretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
u8 port_id, u16 fid)
{
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 238bdb9dbc..e1042eefec 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -191,16 +191,6 @@ void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u8 port_id);
-/**
- * @brief ecore_port_unpretend - cancel any previously set port
- * pretend
- *
- * @param p_hwfn
- * @param p_ptt
- */
-void ecore_port_unpretend(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
/**
* @brief ecore_port_fid_pretend - pretend to another port and another function
* when accessing the ptt window
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 6a52f32cc9..6a0c7935e6 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -936,33 +936,6 @@ int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
return 0;
}
-int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u8 vport_id,
- u32 vport_rl,
- u32 link_speed)
-{
- u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS;
-
- if (vport_id >= max_qm_global_rls) {
- DP_NOTICE(p_hwfn, true,
- "Invalid VPORT ID for rate limiter configuration\n");
- return -1;
- }
-
- inc_val = QM_RL_INC_VAL(vport_rl ? vport_rl : link_speed);
- if (inc_val > QM_VP_RL_MAX_INC_VAL(link_speed)) {
- DP_NOTICE(p_hwfn, true,
- "Invalid VPORT rate-limit configuration\n");
- return -1;
- }
-
- ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
- (u32)QM_RL_CRD_REG_SIGN_BIT);
- ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
-
- return 0;
-}
-
bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
bool is_release_cmd,
@@ -1032,385 +1005,11 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
/* NIG: packet priority configuration constants */
#define NIG_PRIORITY_MAP_TC_BITS 4
-
-void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_ets_req *req, bool is_lb)
-{
- u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff;
- u32 tc_bound_base_addr, tc_bound_addr_diff;
- u8 sp_tc_map = 0, wfq_tc_map = 0;
- u8 tc, num_tc, tc_client_offset;
-
- num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
- tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET :
- NIG_TX_ETS_CLIENT_OFFSET;
- min_weight = 0xffffffff;
- tc_weight_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
- NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
- tc_weight_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 -
- NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
- NIG_REG_TX_ARB_CREDIT_WEIGHT_1 -
- NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
- tc_bound_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
- NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
- tc_bound_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 -
- NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 :
- NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 -
- NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0;
-
- for (tc = 0; tc < num_tc; tc++) {
- struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-
- /* Update SP map */
- if (tc_req->use_sp)
- sp_tc_map |= (1 << tc);
-
- if (!tc_req->use_wfq)
- continue;
-
- /* Update WFQ map */
- wfq_tc_map |= (1 << tc);
-
- /* Find minimal weight */
- if (tc_req->weight < min_weight)
- min_weight = tc_req->weight;
- }
-
- /* Write SP map */
- ecore_wr(p_hwfn, p_ptt,
- is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT :
- NIG_REG_TX_ARB_CLIENT_IS_STRICT,
- (sp_tc_map << tc_client_offset));
-
- /* Write WFQ map */
- ecore_wr(p_hwfn, p_ptt,
- is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ :
- NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
- (wfq_tc_map << tc_client_offset));
- /* write WFQ weights */
- for (tc = 0; tc < num_tc; tc++, tc_client_offset++) {
- struct init_ets_tc_req *tc_req = &req->tc_req[tc];
- u32 byte_weight;
-
- if (!tc_req->use_wfq)
- continue;
-
- /* Translate weight to bytes */
- byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
- min_weight;
-
- /* Write WFQ weight */
- ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr +
- tc_weight_addr_diff * tc_client_offset, byte_weight);
-
- /* Write WFQ upper bound */
- ecore_wr(p_hwfn, p_ptt, tc_bound_base_addr +
- tc_bound_addr_diff * tc_client_offset,
- NIG_ETS_UP_BOUND(byte_weight, req->mtu));
- }
-}
-
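
To make the weight-to-bytes translation above concrete, a worked example,
assuming the NIG minimum WFQ byte constant is 1600 (the analogous PRS
constant below is 1600; the NIG define sits outside this hunk):

	/* TC weights {1, 2, 4} give min_weight = 1, so
	 *   byte_weight = (1600 * weight) / min_weight
	 *   TC0 -> 1600, TC1 -> 3200, TC2 -> 6400
	 * i.e. TC2 earns four times the WFQ credit of TC0 per round.
	 */
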
-void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_nig_lb_rl_req *req)
-{
- u32 ctrl, inc_val, reg_offset;
- u8 tc;
-
- /* Disable global MAC+LB RL */
- ctrl =
- NIG_RL_BASE_TYPE <<
- NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT;
- ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
-
- /* Configure and enable global MAC+LB RL */
- if (req->lb_mac_rate) {
- /* Configure */
- ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD,
- NIG_RL_PERIOD_CLK_25M);
- inc_val = NIG_RL_INC_VAL(req->lb_mac_rate);
- ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_VALUE,
- inc_val);
- ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE,
- NIG_RL_MAX_VAL(inc_val, req->mtu));
-
- /* Enable */
- ctrl |=
- 1 <<
- NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT;
- ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
- }
-
- /* Disable global LB-only RL */
- ctrl =
- NIG_RL_BASE_TYPE <<
- NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT;
- ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
-
- /* Configure and enable global LB-only RL */
- if (req->lb_rate) {
- /* Configure */
- ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD,
- NIG_RL_PERIOD_CLK_25M);
- inc_val = NIG_RL_INC_VAL(req->lb_rate);
- ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_VALUE,
- inc_val);
- ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE,
- NIG_RL_MAX_VAL(inc_val, req->mtu));
-
- /* Enable */
- ctrl |=
- 1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT;
- ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
- }
-
- /* Per-TC RLs */
- for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS;
- tc++, reg_offset += 4) {
- /* Disable TC RL */
- ctrl =
- NIG_RL_BASE_TYPE <<
- NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT;
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
-
- /* Configure and enable TC RL */
- if (!req->tc_rate[tc])
- continue;
-
- /* Configure */
- ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
- reg_offset, NIG_RL_PERIOD_CLK_25M);
- inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
- ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
- reg_offset, inc_val);
- ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
- reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
-
- /* Enable */
- ctrl |= 1 <<
- NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
- ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 +
- reg_offset, ctrl);
- }
-}
-
-void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_nig_pri_tc_map_req *req)
-{
- u8 tc_pri_mask[NUM_OF_PHYS_TCS] = { 0 };
- u32 pri_tc_mask = 0;
- u8 pri, tc;
-
- for (pri = 0; pri < NUM_OF_VLAN_PRIORITIES; pri++) {
- if (!req->pri[pri].valid)
- continue;
-
- pri_tc_mask |= (req->pri[pri].tc_id <<
- (pri * NIG_PRIORITY_MAP_TC_BITS));
- tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
- }
-
- /* Write priority -> TC mask */
- ecore_wr(p_hwfn, p_ptt, NIG_REG_PKT_PRIORITY_TO_TC, pri_tc_mask);
-
- /* Write TC -> priority mask */
- for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
- ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4,
- tc_pri_mask[tc]);
- ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_TC0_PRIORITY_MASK + tc * 4,
- tc_pri_mask[tc]);
- }
-}
-
-#endif /* UNUSED_HSI_FUNC */
-
-#ifndef UNUSED_HSI_FUNC
-
/* PRS: ETS configuration constants */
#define PRS_ETS_MIN_WFQ_BYTES 1600
#define PRS_ETS_UP_BOUND(weight, mtu) \
(2 * ((weight) > (mtu) ? (weight) : (mtu)))
-
-void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, struct init_ets_req *req)
-{
- u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff;
- u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
-
- tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 -
- PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
- tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
- PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
-
- for (tc = 0; tc < NUM_OF_TCS; tc++) {
- struct init_ets_tc_req *tc_req = &req->tc_req[tc];
-
- /* Update SP map */
- if (tc_req->use_sp)
- sp_tc_map |= (1 << tc);
-
- if (!tc_req->use_wfq)
- continue;
-
- /* Update WFQ map */
- wfq_tc_map |= (1 << tc);
-
- /* Find minimal weight */
- if (tc_req->weight < min_weight)
- min_weight = tc_req->weight;
- }
-
- /* write SP map */
- ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map);
-
- /* write WFQ map */
- ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ,
- wfq_tc_map);
-
- /* write WFQ weights */
- for (tc = 0; tc < NUM_OF_TCS; tc++) {
- struct init_ets_tc_req *tc_req = &req->tc_req[tc];
- u32 byte_weight;
-
- if (!tc_req->use_wfq)
- continue;
-
- /* Translate weight to bytes */
- byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
- min_weight;
-
- /* Write WFQ weight */
- ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc *
- tc_weight_addr_diff, byte_weight);
-
- /* Write WFQ upper bound */
- ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
- tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight,
- req->mtu));
- }
-}
-
-#endif /* UNUSED_HSI_FUNC */
-#ifndef UNUSED_HSI_FUNC
-
-/* BRB: RAM configuration constants */
-#define BRB_TOTAL_RAM_BLOCKS_BB 4800
-#define BRB_TOTAL_RAM_BLOCKS_K2 5632
-#define BRB_BLOCK_SIZE 128
-#define BRB_MIN_BLOCKS_PER_TC 9
-#define BRB_HYST_BYTES 10240
-#define BRB_HYST_BLOCKS (BRB_HYST_BYTES / BRB_BLOCK_SIZE)
-
-/* Temporary big RAM allocation - should be updated */
-void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, struct init_brb_ram_req *req)
-{
- u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks;
- u32 active_port_blocks, reg_offset = 0;
- u8 port, active_ports = 0;
-
- tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc,
- BRB_BLOCK_SIZE);
- min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size,
- BRB_BLOCK_SIZE);
- total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
- BRB_TOTAL_RAM_BLOCKS_BB;
-
- /* Find number of active ports */
- for (port = 0; port < MAX_NUM_PORTS; port++)
- if (req->num_active_tcs[port])
- active_ports++;
-
- active_port_blocks = (u32)(total_blocks / active_ports);
-
- for (port = 0; port < req->max_ports_per_engine; port++) {
- u32 port_blocks, port_shared_blocks, port_guaranteed_blocks;
- u32 full_xoff_th, full_xon_th, pause_xoff_th, pause_xon_th;
- u32 tc_guaranteed_blocks;
- u8 tc;
-
- /* Calculate per-port sizes */
- tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc,
- BRB_BLOCK_SIZE);
- port_blocks = req->num_active_tcs[port] ? active_port_blocks :
- 0;
- port_guaranteed_blocks = req->num_active_tcs[port] *
- tc_guaranteed_blocks;
- port_shared_blocks = port_blocks - port_guaranteed_blocks;
- full_xoff_th = req->num_active_tcs[port] *
- BRB_MIN_BLOCKS_PER_TC;
- full_xon_th = full_xoff_th + min_pkt_size_blocks;
- pause_xoff_th = tc_headroom_blocks;
- pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
-
- /* Init total size per port */
- ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4,
- port_blocks);
-
- /* Init shared size per port */
- ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4,
- port_shared_blocks);
-
- for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) {
- /* Clear init values for non-active TCs */
- if (tc == req->num_active_tcs[port]) {
- tc_guaranteed_blocks = 0;
- full_xoff_th = 0;
- full_xon_th = 0;
- pause_xoff_th = 0;
- pause_xon_th = 0;
- }
-
- /* Init guaranteed size per TC */
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_TC_GUARANTIED_0 + reg_offset,
- tc_guaranteed_blocks);
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset,
- BRB_HYST_BLOCKS);
-
- /* Init pause/full thresholds per physical TC - for
- * loopback traffic.
- */
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
- reg_offset, full_xoff_th);
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_LB_TC_FULL_XON_THRESHOLD_0 +
- reg_offset, full_xon_th);
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_LB_TC_PAUSE_XOFF_THRESHOLD_0 +
- reg_offset, pause_xoff_th);
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
- reg_offset, pause_xon_th);
-
- /* Init pause/full thresholds per physical TC - for
- * main traffic.
- */
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
- reg_offset, full_xoff_th);
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_MAIN_TC_FULL_XON_THRESHOLD_0 +
- reg_offset, full_xon_th);
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_MAIN_TC_PAUSE_XOFF_THRESHOLD_0 +
- reg_offset, pause_xoff_th);
- ecore_wr(p_hwfn, p_ptt,
- BRB_REG_MAIN_TC_PAUSE_XON_THRESHOLD_0 +
- reg_offset, pause_xon_th);
- }
- }
-}
-
-#endif /* UNUSED_HSI_FUNC */
-#ifndef UNUSED_HSI_FUNC
-
#define ARR_REG_WR(dev, ptt, addr, arr, arr_size) \
do { \
u32 i; \
@@ -1423,7 +1022,6 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
#define DWORDS_TO_BYTES(dwords) ((dwords) * REG_SIZE)
#endif
-
/**
* @brief ecore_dmae_to_grc - is an internal function - writes from host to
* wide-bus registers (split registers are not supported yet)
@@ -1467,13 +1065,6 @@ static int ecore_dmae_to_grc(struct ecore_hwfn *p_hwfn,
return len_in_dwords;
}
-/* In MF, should be called once per port to set EtherType of OuterTag */
-void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, u32 ethType)
-{
- /* Update DORQ register */
- STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
-}
-
#endif /* UNUSED_HSI_FUNC */
#define SET_TUNNEL_TYPE_ENABLE_BIT(var, offset, enable) \
@@ -1627,33 +1218,6 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
#define PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET 3
#define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT -925189872
-void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- bool enable)
-{
- u32 reg_val, cfg_mask;
-
- /* read PRS config register */
- reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_MSG_INFO);
-
- /* set VXLAN_NO_L2_ENABLE mask */
- cfg_mask = (1 << PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET);
-
- if (enable) {
- /* set VXLAN_NO_L2_ENABLE flag */
- reg_val |= cfg_mask;
-
- /* update PRS FIC Format register */
- ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
- (u32)PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT);
- } else {
- /* clear VXLAN_NO_L2_ENABLE flag */
- reg_val &= ~cfg_mask;
- }
-
- /* write PRS config register */
- ecore_wr(p_hwfn, p_ptt, PRS_REG_MSG_INFO, reg_val);
-}
-
#ifndef UNUSED_HSI_FUNC
#define T_ETH_PACKET_ACTION_GFT_EVENTID 23
@@ -1686,21 +1250,6 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
}
-
-void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- u32 rfs_cm_hdr_event_id;
-
- /* Set RFS event ID to be awakened in Tstorm by PRS */
- rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
- rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
- PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
- rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
- PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
- ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
-}
-
void ecore_gft_config(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u16 pf_id,
@@ -1825,76 +1374,6 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
#endif /* UNUSED_HSI_FUNC */
-/* Configure VF zone size mode */
-void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u16 mode,
- bool runtime_init)
-{
- u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
- u32 msdm_vf_offset_mask;
-
- if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
- msdm_vf_size_log += 1;
- else if (mode == VF_ZONE_SIZE_MODE_QUAD)
- msdm_vf_size_log += 2;
-
- msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;
-
- if (runtime_init) {
- STORE_RT_REG(p_hwfn,
- PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
- msdm_vf_size_log);
- STORE_RT_REG(p_hwfn,
- PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET,
- msdm_vf_offset_mask);
- } else {
- ecore_wr(p_hwfn, p_ptt,
- PGLUE_B_REG_MSDM_VF_SHIFT_B, msdm_vf_size_log);
- ecore_wr(p_hwfn, p_ptt,
- PGLUE_B_REG_MSDM_OFFSET_MASK_B, msdm_vf_offset_mask);
- }
-}
-
-/* Get mstorm statistics for offset by VF zone size mode */
-u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
- u16 stat_cnt_id,
- u16 vf_zone_size_mode)
-{
- u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id);
-
- if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) &&
- (stat_cnt_id > MAX_NUM_PFS)) {
- if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
- offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
- (stat_cnt_id - MAX_NUM_PFS);
- else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD)
- offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
- (stat_cnt_id - MAX_NUM_PFS);
- }
-
- return offset;
-}
-
-/* Get mstorm VF producer offset by VF zone size mode */
-u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
- u8 vf_id,
- u8 vf_queue_id,
- u16 vf_zone_size_mode)
-{
- u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id);
-
- if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
- if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
- offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
- vf_id;
- else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD)
- offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
- vf_id;
- }
-
- return offset;
-}
-
#ifndef LINUX_REMOVE
#define CRC8_INIT_VALUE 0xFF
#endif
@@ -1964,101 +1443,6 @@ static u8 ecore_calc_cdu_validation_byte(struct ecore_hwfn *p_hwfn,
return validation_byte;
}
-/* Calculate and set validation bytes for session context */
-void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
- void *p_ctx_mem, u16 ctx_size,
- u8 ctx_type, u32 cid)
-{
- u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
-
- p_ctx = (u8 *)p_ctx_mem;
-
- x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
- t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
- u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
-
- OSAL_MEMSET(p_ctx, 0, ctx_size);
-
- *x_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 3, cid);
- *t_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 4, cid);
- *u_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 5, cid);
-}
-
-/* Calculate and set validation bytes for task context */
-void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
- u16 ctx_size, u8 ctx_type, u32 tid)
-{
- u8 *p_ctx, *region1_val_ptr;
-
- p_ctx = (u8 *)p_ctx_mem;
- region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
-
- OSAL_MEMSET(p_ctx, 0, ctx_size);
-
- *region1_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 1,
- tid);
-}
-
-/* Memset session context to 0 while preserving validation bytes */
-void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
- u32 ctx_size, u8 ctx_type)
-{
- u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
- u8 x_val, t_val, u_val;
-
- p_ctx = (u8 *)p_ctx_mem;
-
- x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
- t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
- u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
-
- x_val = *x_val_ptr;
- t_val = *t_val_ptr;
- u_val = *u_val_ptr;
-
- OSAL_MEMSET(p_ctx, 0, ctx_size);
-
- *x_val_ptr = x_val;
- *t_val_ptr = t_val;
- *u_val_ptr = u_val;
-}
-
-/* Memset task context to 0 while preserving validation bytes */
-void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
- u32 ctx_size, u8 ctx_type)
-{
- u8 *p_ctx, *region1_val_ptr;
- u8 region1_val;
-
- p_ctx = (u8 *)p_ctx_mem;
- region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
-
- region1_val = *region1_val_ptr;
-
- OSAL_MEMSET(p_ctx, 0, ctx_size);
-
- *region1_val_ptr = region1_val;
-}
-
-/* Enable and configure context validation */
-void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- u32 ctx_validation;
-
- /* Enable validation for connection region 3: CCFC_CTX_VALID0[31:24] */
- ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 24;
- ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation);
-
- /* Enable validation for connection region 5: CCFC_CTX_VALID1[15:8] */
- ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 8;
- ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation);
-
- /* Enable validation for connection region 1: TCFC_CTX_VALID0[15:8] */
- ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 8;
- ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
-}
-
#define PHYS_ADDR_DWORDS DIV_ROUND_UP(sizeof(dma_addr_t), 4)
#define OVERLAY_HDR_SIZE_DWORDS (sizeof(struct fw_overlay_buf_hdr) / 4)
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index a393d088fe..54d169ed86 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -176,24 +176,6 @@ int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
u16 rl_id,
u32 rate_limit);
-/**
- * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
- * VPORT.
- *
- * @param p_hwfn - HW device data
- * @param p_ptt - ptt window used for writing the registers
- * @param vport_id - VPORT ID
- * @param vport_rl - rate limit in Mb/sec units
- * @param link_speed - link speed in Mbps.
- *
- * @return 0 on success, -1 on error.
- */
-int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u8 vport_id,
- u32 vport_rl,
- u32 link_speed);
-
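
A hedged sketch of the removed rate limiter in use; the vport id and rates
are illustrative. Passing vport_rl == 0 falls back to the link speed, i.e.
effectively no cap:

	if (ecore_init_vport_rl(p_hwfn, p_ptt, 5 /* vport_id */,
				1000 /* cap, Mb/s */, 25000 /* link, Mbps */))
		DP_NOTICE(p_hwfn, true, "VPORT rate limit config failed\n");
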
/**
* @brief ecore_send_qm_stop_cmd Sends a stop command to the QM
*
@@ -213,100 +195,6 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
bool is_tx_pq,
u16 start_pq,
u16 num_pqs);
-#ifndef UNUSED_HSI_FUNC
-
-/**
- * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
- *
- * Based on weight/priority requirements per-TC.
- *
- * @param p_ptt - ptt window used for writing the registers.
- * @param req - the NIG ETS initialization requirements.
- * @param is_lb - if set, the loopback port arbiter is initialized, otherwise
- * the physical port arbiter is initialized. The pure-LB TC
- * requirements are ignored when is_lb is cleared.
- */
-void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_ets_req *req,
- bool is_lb);
-
-/**
- * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
- *
- * Based on global and per-TC rate requirements
- *
- * @param p_ptt - ptt window used for writing the registers.
- * @param req - the NIG LB RLs initialization requirements.
- */
-void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_nig_lb_rl_req *req);
-#endif /* UNUSED_HSI_FUNC */
-
-/**
- * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
- *
- * Assumes valid arguments.
- *
- * @param p_ptt - ptt window used for writing the registers.
- * @param req - required mapping from priorities to TCs.
- */
-void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_nig_pri_tc_map_req *req);
-
-#ifndef UNUSED_HSI_FUNC
-/**
- * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
- *
- * Based on weight/priority requirements per-TC.
- *
- * @param p_ptt - ptt window used for writing the registers.
- * @param req - the PRS ETS initialization requirements.
- */
-void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_ets_req *req);
-#endif /* UNUSED_HSI_FUNC */
-
-#ifndef UNUSED_HSI_FUNC
-/**
- * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC
- *
- * Based on weight/priority requirements per-TC.
- *
- * @param p_ptt - ptt window used for writing the registers.
- * @param req - the BRB RAM initialization requirements.
- */
-void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_brb_ram_req *req);
-#endif /* UNUSED_HSI_FUNC */
-
-/**
- * @brief ecore_set_vxlan_no_l2_enable - enable or disable VXLAN no L2 parsing
- *
- * @param p_ptt - ptt window used for writing the registers.
- * @param enable - VXLAN no L2 enable flag.
- */
-void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- bool enable);
-
-#ifndef UNUSED_HSI_FUNC
-/**
- * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType regs to
- * the input ethType. Should be called
- * once per port.
- *
- * @param p_hwfn - HW device data
- * @param ethType - etherType to configure
- */
-void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
- u32 ethType);
-#endif /* UNUSED_HSI_FUNC */
-
/**
* @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
* port.
@@ -369,14 +257,6 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
bool ip_geneve_enable);
#ifndef UNUSED_HSI_FUNC
-/**
-* @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header
-*
-* @param p_ptt - ptt window used for writing the registers.
-*/
-void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
/**
* @brief ecore_gft_disable - Disable GFT
*
@@ -410,113 +290,6 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
enum gft_profile_type profile_type);
#endif /* UNUSED_HSI_FUNC */
-/**
- * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
- * used before the first ETH queue is started.
- *
- * @param p_hwfn - HW device data
- * @param p_ptt - ptt window used for writing the registers. Don't care
- * if runtime_init used.
- * @param mode - VF zone size mode. Use enum vf_zone_size_mode.
- * @param runtime_init - Set 1 to init runtime registers in engine phase.
- * Set 0 if VF zone size mode configured after engine
- * phase.
- */
-void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
- *p_ptt, u16 mode, bool runtime_init);
-
-/**
- * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
- * VF zone size mode.
- *
- * @param p_hwfn - HW device data
- * @param stat_cnt_id - statistic counter id
- * @param vf_zone_size_mode - VF zone size mode. Use enum vf_zone_size_mode.
- */
-u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
- u16 stat_cnt_id, u16 vf_zone_size_mode);
-
-/**
- * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
- * size mode.
- *
- * @param p_hwfn - HW device data
- * @param vf_id - vf id.
- * @param vf_queue_id - per VF rx queue id.
- * @param vf_zone_size_mode - vf zone size mode. Use enum vf_zone_size_mode.
- */
-u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
- vf_queue_id, u16 vf_zone_size_mode);
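
The offset arithmetic the two removed getters implemented, restated as a
standalone sketch (constants and enum values are those named in the removed
code):

	static u32 vf_zone_extra_offset(u16 mode, u32 idx)
	{
		u32 zone = 1U << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;

		if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
			return zone * idx;	/* one extra default zone each */
		if (mode == VF_ZONE_SIZE_MODE_QUAD)
			return 3U * zone * idx;	/* three extra zones each */
		return 0;			/* default mode: no shift */
	}
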
-/**
- * @brief ecore_enable_context_validation - Enable and configure context
- * validation.
- *
- * @param p_hwfn - HW device data
- * @param p_ptt - ptt window used for writing the registers.
- */
-void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-/**
- * @brief ecore_calc_session_ctx_validation - Calculate validation byte for
- * session context.
- *
- * @param p_hwfn - HW device data
- * @param p_ctx_mem - pointer to context memory.
- * @param ctx_size - context size.
- * @param ctx_type - context type.
- * @param cid - context cid.
- */
-void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
- void *p_ctx_mem,
- u16 ctx_size,
- u8 ctx_type,
- u32 cid);
-
-/**
- * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
- * context.
- *
- * @param p_hwfn - HW device data
- * @param p_ctx_mem - pointer to context memory.
- * @param ctx_size - context size.
- * @param ctx_type - context type.
- * @param tid - context tid.
- */
-void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn,
- void *p_ctx_mem,
- u16 ctx_size,
- u8 ctx_type,
- u32 tid);
-
-/**
- * @brief ecore_memset_session_ctx - Memset session context to 0 while
- * preserving validation bytes.
- *
- * @param p_hwfn - HW device data
- * @param p_ctx_mem - pointer to context memory.
- * @param ctx_size - size to initialize.
- * @param ctx_type - context type.
- */
-void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn,
- void *p_ctx_mem,
- u32 ctx_size,
- u8 ctx_type);
-
-/**
- * @brief ecore_memset_task_ctx - Memset task context to 0 while preserving
- * validation bytes.
- *
- * @param p_hwfn - HW device data
- * @param p_ctx_mem - pointer to context memory.
- * @param ctx_size - size to initialize.
- * @param ctx_type - context type.
- */
-void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn,
- void *p_ctx_mem,
- u32 ctx_size,
- u8 ctx_type);
-
-
/*******************************************************************************
* File name : rdma_init.h
* Author : Michael Shteinbok
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index 4207b1853e..13464d060a 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -1565,16 +1565,6 @@ static void _ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
}
}
-void ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_sb_info *p_sb, u32 pi_index,
- enum ecore_coalescing_fsm coalescing_fsm,
- u8 timeset)
-{
- _ecore_int_cau_conf_pi(p_hwfn, p_ptt, p_sb->igu_sb_id,
- pi_index, coalescing_fsm, timeset);
-}
-
void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
dma_addr_t sb_phys, u16 igu_sb_id,
@@ -1793,42 +1783,6 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn,
- struct ecore_sb_info *sb_info,
- u16 sb_id)
-{
- struct ecore_igu_info *p_info;
- struct ecore_igu_block *p_block;
-
- if (sb_info == OSAL_NULL)
- return ECORE_SUCCESS;
-
- /* zero status block and ack counter */
- sb_info->sb_ack = 0;
- OSAL_MEMSET(sb_info->sb_virt, 0, sb_info->sb_size);
-
- if (IS_VF(p_hwfn->p_dev)) {
- ecore_vf_set_sb_info(p_hwfn, sb_id, OSAL_NULL);
- return ECORE_SUCCESS;
- }
-
- p_info = p_hwfn->hw_info.p_igu_info;
- p_block = &p_info->entry[sb_info->igu_sb_id];
-
- /* Vector 0 is reserved to Default SB */
- if (p_block->vector_number == 0) {
- DP_ERR(p_hwfn, "Do Not free sp sb using this function");
- return ECORE_INVAL;
- }
-
- /* Lose reference to client's SB info, and fix counters */
- p_block->sb_info = OSAL_NULL;
- p_block->status |= ECORE_IGU_STATUS_FREE;
- p_info->usage.free_cnt++;
-
- return ECORE_SUCCESS;
-}
-
static void ecore_int_sp_sb_free(struct ecore_hwfn *p_hwfn)
{
struct ecore_sb_sp_info *p_sb = p_hwfn->p_sp_sb;
@@ -1905,18 +1859,6 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
return rc;
}
-enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, u8 pi)
-{
- struct ecore_sb_sp_info *p_sp_sb = p_hwfn->p_sp_sb;
-
- if (p_sp_sb->pi_info_arr[pi].comp_cb == OSAL_NULL)
- return ECORE_NOMEM;
-
- p_sp_sb->pi_info_arr[pi].comp_cb = OSAL_NULL;
- p_sp_sb->pi_info_arr[pi].cookie = OSAL_NULL;
- return ECORE_SUCCESS;
-}
-
u16 ecore_int_get_sp_sb_id(struct ecore_hwfn *p_hwfn)
{
return p_hwfn->p_sp_sb->sb_info.igu_sb_id;
@@ -2429,133 +2371,6 @@ enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-enum _ecore_status_t
-ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- u16 sb_id, bool b_to_vf)
-{
- struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
- struct ecore_igu_block *p_block = OSAL_NULL;
- u16 igu_sb_id = 0, vf_num = 0;
- u32 val = 0;
-
- if (IS_VF(p_hwfn->p_dev) || !IS_PF_SRIOV(p_hwfn))
- return ECORE_INVAL;
-
- if (sb_id == ECORE_SP_SB_ID)
- return ECORE_INVAL;
-
- if (!p_info->b_allow_pf_vf_change) {
- DP_INFO(p_hwfn, "Can't relocate SBs as MFW is too old.\n");
- return ECORE_INVAL;
- }
-
- /* If we're moving a SB from PF to VF, the client had to specify
- * which vector it wants to move.
- */
- if (b_to_vf) {
- igu_sb_id = ecore_get_pf_igu_sb_id(p_hwfn, sb_id + 1);
- if (igu_sb_id == ECORE_SB_INVALID_IDX)
- return ECORE_INVAL;
- }
-
- /* If we're moving a SB from VF to PF, need to validate there isn't
- * already a line configured for that vector.
- */
- if (!b_to_vf) {
- if (ecore_get_pf_igu_sb_id(p_hwfn, sb_id + 1) !=
- ECORE_SB_INVALID_IDX)
- return ECORE_INVAL;
- }
-
- /* We need to validate that the SB can actually be relocated.
- * This would also handle the previous case where we've explicitly
- * stated which IGU SB needs to move.
- */
- for (; igu_sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
- igu_sb_id++) {
- p_block = &p_info->entry[igu_sb_id];
-
- if (!(p_block->status & ECORE_IGU_STATUS_VALID) ||
- !(p_block->status & ECORE_IGU_STATUS_FREE) ||
- (!!(p_block->status & ECORE_IGU_STATUS_PF) != b_to_vf)) {
- if (b_to_vf)
- return ECORE_INVAL;
- else
- continue;
- }
-
- break;
- }
-
- if (igu_sb_id == ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev)) {
- DP_VERBOSE(p_hwfn, (ECORE_MSG_INTR | ECORE_MSG_IOV),
- "Failed to find a free SB to move\n");
- return ECORE_INVAL;
- }
-
- /* At this point, p_block points to the SB we want to relocate */
- if (b_to_vf) {
- p_block->status &= ~ECORE_IGU_STATUS_PF;
-
- /* It doesn't matter which VF number we choose, since we're
- * going to disable the line; But let's keep it in range.
- */
- vf_num = (u16)p_hwfn->p_dev->p_iov_info->first_vf_in_pf;
-
- p_block->function_id = (u8)vf_num;
- p_block->is_pf = 0;
- p_block->vector_number = 0;
-
- p_info->usage.cnt--;
- p_info->usage.free_cnt--;
- p_info->usage.iov_cnt++;
- p_info->usage.free_cnt_iov++;
-
- /* TODO - if SBs aren't really the limiting factor,
- * then it might not be accurate [in the sense that
- * we might not need to decrement the feature].
- */
- p_hwfn->hw_info.feat_num[ECORE_PF_L2_QUE]--;
- p_hwfn->hw_info.feat_num[ECORE_VF_L2_QUE]++;
- } else {
- p_block->status |= ECORE_IGU_STATUS_PF;
- p_block->function_id = p_hwfn->rel_pf_id;
- p_block->is_pf = 1;
- p_block->vector_number = sb_id + 1;
-
- p_info->usage.cnt++;
- p_info->usage.free_cnt++;
- p_info->usage.iov_cnt--;
- p_info->usage.free_cnt_iov--;
-
- p_hwfn->hw_info.feat_num[ECORE_PF_L2_QUE]++;
- p_hwfn->hw_info.feat_num[ECORE_VF_L2_QUE]--;
- }
-
- /* Update the IGU and CAU with the new configuration */
- SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER,
- p_block->function_id);
- SET_FIELD(val, IGU_MAPPING_LINE_PF_VALID, p_block->is_pf);
- SET_FIELD(val, IGU_MAPPING_LINE_VALID, p_block->is_pf);
- SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER,
- p_block->vector_number);
-
- ecore_wr(p_hwfn, p_ptt,
- IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_sb_id,
- val);
-
- ecore_int_cau_conf_sb(p_hwfn, p_ptt, 0,
- igu_sb_id, vf_num,
- p_block->is_pf ? 0 : 1);
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
- "Relocation: [SB 0x%04x] func_id = %d is_pf = %d vector_num = 0x%x\n",
- igu_sb_id, p_block->function_id,
- p_block->is_pf, p_block->vector_number);
-
- return ECORE_SUCCESS;
-}
-
/**
* @brief Initialize igu runtime registers
*
@@ -2661,14 +2476,6 @@ void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
sizeof(*p_sb_cnt_info));
}
-void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
-{
- int i;
-
- for_each_hwfn(p_dev, i)
- p_dev->hwfns[i].b_int_requested = false;
-}
-
void ecore_int_attn_clr_enable(struct ecore_dev *p_dev, bool clr_enable)
{
p_dev->attn_clr_en = clr_enable;
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 5042cd1d18..83ab4c9a97 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -136,19 +136,6 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
ecore_int_comp_cb_t comp_cb,
void *cookie,
u8 *sb_idx, __le16 **p_fw_cons);
-/**
- * @brief ecore_int_unregister_cb - Unregisters callback
- * function from sp sb.
- * Partner of ecore_int_register_cb -> should be called
- * when no longer required.
- *
- * @param p_hwfn
- * @param pi
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, u8 pi);
-
/**
* @brief ecore_int_get_sp_sb_id - Get the slowhwfn sb id.
*
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index d7b6b86cc1..3c9ad653bb 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -177,24 +177,6 @@ enum ecore_coalescing_fsm {
ECORE_COAL_TX_STATE_MACHINE
};
-/**
- * @brief ecore_int_cau_conf_pi - configure cau for a given
- * status block
- *
- * @param p_hwfn
- * @param p_ptt
- * @param p_sb
- * @param pi_index
- * @param state
- * @param timeset
- */
-void ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_sb_info *p_sb,
- u32 pi_index,
- enum ecore_coalescing_fsm coalescing_fsm,
- u8 timeset);
-
/**
*
* @brief ecore_int_igu_enable_int - enable device interrupts
@@ -261,23 +243,6 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, struct ecore_sb_info *sb_info);
-/**
- * @brief ecore_int_sb_release - releases the sb_info structure.
- *
- * once the structure is released, its memory can be freed
- *
- * @param p_hwfn
- * @param sb_info points to an allocated sb_info structure
- * @param sb_id the sb_id to be used (zero based in driver)
- * should never be equal to ECORE_SP_SB_ID
- * (SP Status block)
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn,
- struct ecore_sb_info *sb_info,
- u16 sb_id);
-
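
A hedged teardown sketch for the removed release path, assuming the usual
sb_virt/sb_phys/sb_size fields: release the SB first, then free its memory,
never passing ECORE_SP_SB_ID:

	if (ecore_int_sb_release(p_hwfn, p_sb_info, sb_id) == ECORE_SUCCESS)
		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, p_sb_info->sb_virt,
				       p_sb_info->sb_phys, p_sb_info->sb_size);
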
/**
* @brief ecore_int_sp_dpc - To be called when an interrupt is received on the
* default status block.
@@ -299,16 +264,6 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie);
void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
struct ecore_sb_cnt_info *p_sb_cnt_info);
-/**
- * @brief ecore_int_disable_post_isr_release - performs the cleanup post ISR
- * release. The API needs to be called after releasing all slowpath IRQs
- * of the device.
- *
- * @param p_dev
- *
- */
-void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev);
-
/**
* @brief ecore_int_attn_clr_enable - sets whether the general behavior is
* preventing attentions from being reasserted, or following the
@@ -335,21 +290,6 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
struct ecore_sb_info *p_sb,
struct ecore_sb_info_dbg *p_info);
-/**
- * @brief - Move a free Status block between PF and child VF
- *
- * @param p_hwfn
- * @param p_ptt
- * @param sb_id - The PF fastpath vector to be moved [re-assigned if claiming
- * from VF, given-up if moving to VF]
- * @param b_to_vf - PF->VF == true, VF->PF == false
- *
- * @return ECORE_SUCCESS if SB successfully moved.
- */
-enum _ecore_status_t
-ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- u16 sb_id, bool b_to_vf);
-
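
A hedged call sketch for the relocation helper being removed; the vector
number is illustrative:

	/* Give PF fastpath vector 4 up to a child VF (b_to_vf == true). */
	if (ecore_int_igu_relocate_sb(p_hwfn, p_ptt, 4, true) != ECORE_SUCCESS)
		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "SB relocation refused\n");
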
/**
* @brief - Doorbell Recovery handler.
* Run DB_REAL_DEAL doorbell recovery in case of PF overflow
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index bd7c5703f6..e0e39d309a 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -119,39 +119,6 @@ struct ecore_iov_vf_init_params {
u8 rss_eng_id;
};
-#ifdef CONFIG_ECORE_SW_CHANNEL
-/* This is SW channel related only... */
-enum mbx_state {
- VF_PF_UNKNOWN_STATE = 0,
- VF_PF_WAIT_FOR_START_REQUEST = 1,
- VF_PF_WAIT_FOR_NEXT_CHUNK_OF_REQUEST = 2,
- VF_PF_REQUEST_IN_PROCESSING = 3,
- VF_PF_RESPONSE_READY = 4,
-};
-
-struct ecore_iov_sw_mbx {
- enum mbx_state mbx_state;
-
- u32 request_size;
- u32 request_offset;
-
- u32 response_size;
- u32 response_offset;
-};
-
-/**
- * @brief Get the vf sw mailbox params
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return struct ecore_iov_sw_mbx*
- */
-struct ecore_iov_sw_mbx*
-ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-#endif
-
/* This struct is part of ecore_dev and contains data relevant to all hwfns;
* Initialized only if SR-IOV capability is exposed in PCIe config space.
*/
@@ -176,16 +143,6 @@ struct ecore_hw_sriov_info {
#ifdef CONFIG_ECORE_SRIOV
#ifndef LINUX_REMOVE
-/**
- * @brief mark/clear all VFs before/after an incoming PCIe sriov
- * disable.
- *
- * @param p_dev
- * @param to_disable
- */
-void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev,
- u8 to_disable);
-
/**
* @brief mark/clear chosen VF before/after an incoming PCIe
* sriov disable.
@@ -227,35 +184,6 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
int vfid);
-/**
- * @brief ecore_iov_release_hw_for_vf - called once upper layer
- * knows VF is done with - can release any resources
- * allocated for VF at this point. this must be done once
- * we know VF is no longer loaded in VM.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param rel_vf_id
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 rel_vf_id);
-
-/**
- * @brief ecore_iov_set_vf_ctx - set a context for a given VF
- *
- * @param p_hwfn
- * @param vf_id
- * @param ctx
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
- u16 vf_id,
- void *ctx);
-
/**
* @brief FLR cleanup for all VFs
*
@@ -267,20 +195,6 @@ enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
-/**
- * @brief FLR cleanup for single VF
- *
- * @param p_hwfn
- * @param p_ptt
- * @param rel_vf_id
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 rel_vf_id);
-
/**
* @brief Update the bulletin with link information. Notice this does NOT
* send a bulletin update, only updates the PF's bulletin.
@@ -297,32 +211,6 @@ void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
struct ecore_mcp_link_state *link,
struct ecore_mcp_link_capabilities *p_caps);
-/**
- * @brief Returns link information as perceived by VF.
- *
- * @param p_hwfn
- * @param p_vf
- * @param p_params - the link params visible to vf.
- * @param p_link - the link state visible to vf.
- * @param p_caps - the link default capabilities visible to vf.
- */
-void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
- u16 vfid,
- struct ecore_mcp_link_params *params,
- struct ecore_mcp_link_state *link,
- struct ecore_mcp_link_capabilities *p_caps);
-
-/**
- * @brief return if the VF is pending FLR
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return bool
- */
-bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
/**
* @brief Check if given VF ID @vfid is valid
* w.r.t. @b_enabled_only value
@@ -340,19 +228,6 @@ bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn,
int rel_vf_id,
bool b_enabled_only, bool b_non_malicious);
-/**
- * @brief Get VF's public info structure
- *
- * @param p_hwfn
- * @param vfid - Relative VF ID
- * @param b_enabled_only - false if want to access even if vf is disabled
- *
- * @return struct ecore_public_vf_info *
- */
-struct ecore_public_vf_info*
-ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn,
- u16 vfid, bool b_enabled_only);
-
/**
* @brief fills a bitmask of all VFs which have pending unhandled
* messages.
@@ -374,65 +249,6 @@ void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *ptt,
int vfid);
-/**
- * @brief Set forced MAC address in PFs copy of bulletin board
- * and configures FW/HW to support the configuration.
- *
- * @param p_hwfn
- * @param mac
- * @param vfid
- */
-void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn,
- u8 *mac, int vfid);
-
-/**
- * @brief Set MAC address in PF's copy of bulletin board without
- * configuring FW/HW.
- *
- * @param p_hwfn
- * @param mac
- * @param vfid
- */
-enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
- u8 *mac, int vfid);
-
-/**
- * @brief Set default behaviour of VF in case no vlans are configured for it:
- * whether to accept only untagged traffic or all.
- * Must be called prior to the VF vport-start.
- *
- * @param p_hwfn
- * @param b_untagged_only
- * @param vfid
- *
- * @return ECORE_SUCCESS if configuration would stick.
- */
-enum _ecore_status_t
-ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
- bool b_untagged_only,
- int vfid);
-
-/**
- * @brief Get VF's opaque fid.
- *
- * @param p_hwfn
- * @param vfid
- * @param opaque_fid
- */
-void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
- u16 *opaque_fid);
-
-/**
- * @brief Set forced VLAN [pvid] in PF's copy of bulletin board
- * and configures FW/HW to support the configuration.
- * Setting of pvid 0 would clear the feature.
- * @param p_hwfn
- * @param pvid
- * @param vfid
- */
-void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
- u16 pvid, int vfid);
-
/**
* @brief Check if VF has VPORT instance. This can be used
* to check if VPORT is active.
@@ -454,38 +270,6 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
int vfid,
struct ecore_ptt *p_ptt);
-/**
- * @brief Check if given VF (@vfid) is marked as stopped
- *
- * @param p_hwfn
- * @param vfid
- *
- * @return bool : true if stopped
- */
-bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid);
-
-/**
- * @brief Configure VF anti spoofing
- *
- * @param p_hwfn
- * @param vfid
- * @param val - spoofchk value - true/false
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
- int vfid, bool val);
-
-/**
- * @brief Get VF's configured spoof value.
- *
- * @param p_hwfn
- * @param vfid
- *
- * @return bool - spoofchk value - true/false
- */
-bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid);
-
/**
* @brief Check for SRIOV sanity by PF.
*
@@ -496,248 +280,8 @@ bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid);
*/
bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid);
-/**
- * @brief Get the num of VF chains.
- *
- * @param p_hwfn
- *
- * @return u8
- */
-u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn);
-
-/**
- * @brief Get vf request mailbox params
- *
- * @param p_hwfn
- * @param rel_vf_id
- * @param pp_req_virt_addr
- * @param p_req_virt_size
- */
-void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id,
- void **pp_req_virt_addr,
- u16 *p_req_virt_size);
-
-/**
- * @brief Get vf mailbox params
- *
- * @param p_hwfn
- * @param rel_vf_id
- * @param pp_reply_virt_addr
- * @param p_reply_virt_size
- */
-void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id,
- void **pp_reply_virt_addr,
- u16 *p_reply_virt_size);
-
-/**
- * @brief Validate if the given length is a valid vfpf message
- * length
- *
- * @param length
- *
- * @return bool
- */
-bool ecore_iov_is_valid_vfpf_msg_length(u32 length);
-
-/**
- * @brief Return the max pfvf message length
- *
- * @return u32
- */
-u32 ecore_iov_pfvf_msg_length(void);
-
-/**
- * @brief Returns MAC address if one is configured
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return OSAL_NULL if mac isn't set; otherwise, returns MAC.
- */
-u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief Returns forced MAC address if one is configured
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return OSAL_NULL if mac isn't forced; otherwise, returns MAC.
- */
-u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief Returns pvid if one is configured
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return 0 if no pvid is configured, otherwise the pvid.
- */
-u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-/**
- * @brief Configure VF's tx rate
- *
- * @param p_hwfn
- * @param p_ptt
- * @param vfid
- * @param val - tx rate value in Mb/sec.
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- int vfid, int val);
-
-/**
- * @brief - Retrieves the statistics associated with a VF
- *
- * @param p_hwfn
- * @param p_ptt
- * @param vfid
- * @param p_stats - this will be filled with the VF statistics
- *
- * @return ECORE_SUCCESS iff statistics were retrieved. Error otherwise.
- */
-enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- int vfid,
- struct ecore_eth_stats *p_stats);
-
-/**
- * @brief - Retrieves num of rxqs chains
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return num of rxqs chains.
- */
-u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief - Retrieves num of active rxqs chains
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief - Retrieves ctx pointer
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief - Retrieves VF's num sbs
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief - Return true if VF is waiting for acquire
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief - Return true if VF is acquired but not initialized
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-bool ecore_iov_is_vf_acquired_not_initialized(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief - Return true if VF is acquired and initialized
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief - Return true if VF has started in FW
- *
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return
- */
-bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
-
-/**
- * @brief - Get VF's vport min rate configured.
- * @param p_hwfn
- * @param rel_vf_id
- *
- * @return - rate in Mbps
- */
-int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid);
-
-/**
- * @brief - Configure min rate for VF's vport.
- * @param p_dev
- * @param vfid
- * @param - rate in Mbps
- *
- * @return
- */
-enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
- int vfid, u32 rate);
#endif
-/**
- * @brief ecore_pf_configure_vf_queue_coalesce - PF configures the coalescing
- * parameters of a VF's Rx and Tx queues.
- * While the API allows setting coalescing per-qid, all queues sharing an SB
- * should be in the same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff],
- * otherwise the configuration would break.
- *
- * @param p_hwfn
- * @param rx_coal - Rx Coalesce value in micro seconds.
- * @param tx_coal - TX Coalesce value in micro seconds.
- * @param vf_id
- * @param qid
- *
- * @return int
- **/
-enum _ecore_status_t
-ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
- u16 rx_coal, u16 tx_coal,
- u16 vf_id, u16 qid);
-
/**
* @brief - Given a VF index, return index of next [including that] active VF.
*
@@ -751,19 +295,6 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid,
u16 vxlan_port, u16 geneve_port);
-#ifdef CONFIG_ECORE_SW_CHANNEL
-/**
- * @brief Set whether PF should communicate with VF using SW/HW channel
- * Needs to be called for an enabled VF before acquire is over
- * [latest good point for doing that is OSAL_IOV_VF_ACQUIRE()]
- *
- * @param p_hwfn
- * @param vfid - relative vf index
- * @param b_is_hw - true iff PF is to use HW channel for communication
- */
-void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid,
- bool b_is_hw);
-#endif
#endif /* CONFIG_ECORE_SRIOV */
#define ecore_for_each_vf(_p_hwfn, _i) \
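
For reference while reading the hunk above: the removed
ecore_iov_pf_configure_vf_queue_coalesce() comment requires that all queues
sharing a status block take their coalescing values from the same range
(0-0x7f, 0x80-0xff or 0x100-0x1ff). Below is a minimal sketch of the range
check that contract implies, assuming only the boundaries quoted in the
comment; the helper names are hypothetical and not part of ecore.

#include <stdbool.h>
#include <stdint.h>

/* Map a coalescing value to the range it belongs to. */
static int coal_range_idx(uint16_t coal)
{
        if (coal <= 0x7f)
                return 0;
        if (coal <= 0xff)
                return 1;
        return 2; /* 0x100-0x1ff */
}

/* Per the removed comment, queues on one SB must pick values from the
 * same range, otherwise the configuration would break.
 */
static bool coal_ranges_match(uint16_t rx_coal, uint16_t tx_coal)
{
        return coal_range_idx(rx_coal) == coal_range_idx(tx_coal);
}
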
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index af234dec84..f6180bf450 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2281,108 +2281,5 @@ enum _ecore_status_t ecore_get_txq_coalesce(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-enum _ecore_status_t
-ecore_get_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 *p_coal,
- void *handle)
-{
- struct ecore_queue_cid *p_cid = (struct ecore_queue_cid *)handle;
- enum _ecore_status_t rc = ECORE_SUCCESS;
- struct ecore_ptt *p_ptt;
-
- if (IS_VF(p_hwfn->p_dev)) {
- rc = ecore_vf_pf_get_coalesce(p_hwfn, p_coal, p_cid);
- if (rc != ECORE_SUCCESS)
- DP_NOTICE(p_hwfn, false,
- "Unable to read queue calescing\n");
-
- return rc;
- }
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_AGAIN;
-
- if (p_cid->b_is_rx) {
- rc = ecore_get_rxq_coalesce(p_hwfn, p_ptt, p_cid, p_coal);
- if (rc != ECORE_SUCCESS)
- goto out;
- } else {
- rc = ecore_get_txq_coalesce(p_hwfn, p_ptt, p_cid, p_coal);
- if (rc != ECORE_SUCCESS)
- goto out;
- }
-
-out:
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return rc;
-}
-
-enum _ecore_status_t
-ecore_eth_tx_queue_maxrate(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_queue_cid *p_cid, u32 rate)
-{
- u16 rl_id;
- u8 vport;
-
- vport = (u8)ecore_get_qm_vport_idx_rl(p_hwfn, p_cid->rel.queue_id);
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
- "About to rate limit qm vport %d for queue %d with rate %d\n",
- vport, p_cid->rel.queue_id, rate);
-
- rl_id = vport; /* The "rl_id" is set as the "vport_id" */
- return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, rate);
-}
-
#define RSS_TSTORM_UPDATE_STATUS_MAX_POLL_COUNT 100
#define RSS_TSTORM_UPDATE_STATUS_POLL_PERIOD_US 1
-
-enum _ecore_status_t
-ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
- u8 vport_id,
- u8 ind_table_index,
- u16 ind_table_value)
-{
- struct eth_tstorm_rss_update_data update_data = { 0 };
- void OSAL_IOMEM *addr = OSAL_NULL;
- enum _ecore_status_t rc;
- u8 abs_vport_id;
- u32 cnt = 0;
-
- OSAL_BUILD_BUG_ON(sizeof(update_data) != sizeof(u64));
-
- rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- addr = (u8 *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
- TSTORM_ETH_RSS_UPDATE_OFFSET(p_hwfn->rel_pf_id);
-
- *(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr);
-
- for (cnt = 0; update_data.valid &&
- cnt < RSS_TSTORM_UPDATE_STATUS_MAX_POLL_COUNT; cnt++) {
- OSAL_UDELAY(RSS_TSTORM_UPDATE_STATUS_POLL_PERIOD_US);
- *(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr);
- }
-
- if (update_data.valid) {
- DP_NOTICE(p_hwfn, true,
- "rss update valid status is not clear! valid=0x%x vport id=%d ind_Table_idx=%d ind_table_value=%d.\n",
- update_data.valid, vport_id, ind_table_index,
- ind_table_value);
-
- return ECORE_AGAIN;
- }
-
- update_data.valid = 1;
- update_data.ind_table_index = ind_table_index;
- update_data.ind_table_value = ind_table_value;
- update_data.vport_id = abs_vport_id;
-
- DIRECT_REG_WR64(p_hwfn, addr, *(u64 *)(&update_data));
-
- return ECORE_SUCCESS;
-}
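
The ecore_update_eth_rss_ind_table_entry() body deleted above is the
clearest instance of the bounded-poll idiom used for firmware RAM
doorbells: re-read until a valid bit clears, give up after a fixed budget,
and return ECORE_AGAIN so the caller may retry. A stand-alone sketch of
that idiom, with usleep() standing in for OSAL_UDELAY() and hypothetical
names throughout:

#include <errno.h>
#include <stdint.h>
#include <unistd.h>

#define POLL_MAX_COUNT  100 /* cf. RSS_TSTORM_UPDATE_STATUS_MAX_POLL_COUNT */
#define POLL_PERIOD_US  1   /* cf. RSS_TSTORM_UPDATE_STATUS_POLL_PERIOD_US */

/* Re-read a busy flag until it clears or the poll budget is spent;
 * -EAGAIN tells the caller to retry later, much like ECORE_AGAIN did.
 */
static int wait_busy_clear(volatile const uint32_t *busy)
{
        uint32_t cnt;

        for (cnt = 0; *busy && cnt < POLL_MAX_COUNT; cnt++)
                usleep(POLL_PERIOD_US);

        return *busy ? -EAGAIN : 0;
}
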
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index bebf412edb..0f2baedc3e 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -490,28 +490,4 @@ enum _ecore_status_t
ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
struct ecore_spq_comp_cb *p_cb,
struct ecore_ntuple_filter_params *p_params);
-
-/**
- * @brief - ecore_update_eth_rss_ind_table_entry
- *
- * This function being used to update RSS indirection table entry to FW RAM
- * instead of using the SP vport update ramrod with rss params.
- *
- * Notice:
- * This function supports only one outstanding command per engine. Ecore
- * clients which use this function should call ecore_mcp_ind_table_lock() prior
- * to it and ecore_mcp_ind_table_unlock() after it.
- *
- * @params p_hwfn
- * @params vport_id
- * @params ind_table_index
- * @params ind_table_value
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
- u8 vport_id,
- u8 ind_table_index,
- u16 ind_table_value);
#endif
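
Worth noting before it disappears: the prototype removed above carried a
locking contract, namely one outstanding command per engine, with callers
expected to bracket the call in ecore_mcp_ind_table_lock() and
ecore_mcp_ind_table_unlock(). A caller-side sketch of that contract,
assuming the ecore headers are in scope; the lock helpers' argument lists
are guessed here and error handling is trimmed:

/* Hypothetical wrapper honouring the one-outstanding-command rule. */
static enum _ecore_status_t
update_rss_entry_locked(struct ecore_hwfn *p_hwfn, u8 vport_id,
                        u8 idx, u16 val)
{
        enum _ecore_status_t rc;

        ecore_mcp_ind_table_lock(p_hwfn);
        rc = ecore_update_eth_rss_ind_table_entry(p_hwfn, vport_id,
                                                  idx, val);
        ecore_mcp_ind_table_unlock(p_hwfn);

        return rc;
}
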
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index cab089d816..a4e4583ecd 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -342,58 +342,6 @@ static void ecore_mcp_reread_offsets(struct ecore_hwfn *p_hwfn,
}
}
-enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- u32 prev_generic_por_0, seq, delay = ECORE_MCP_RESP_ITER_US, cnt = 0;
- u32 retries = ECORE_MCP_RESET_RETRIES;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
-#ifndef ASIC_ONLY
- if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
- delay = ECORE_EMUL_MCP_RESP_ITER_US;
- retries = ECORE_EMUL_MCP_RESET_RETRIES;
- }
-#endif
- if (p_hwfn->mcp_info->b_block_cmd) {
- DP_NOTICE(p_hwfn, false,
- "The MFW is not responsive. Avoid sending MCP_RESET mailbox command.\n");
- return ECORE_ABORTED;
- }
-
- /* Ensure that only a single thread is accessing the mailbox */
- OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
-
- prev_generic_por_0 = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
-
- /* Set drv command along with the updated sequence */
- ecore_mcp_reread_offsets(p_hwfn, p_ptt);
- seq = ++p_hwfn->mcp_info->drv_mb_seq;
- DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (DRV_MSG_CODE_MCP_RESET | seq));
-
- /* Give the MFW up to 500 msec (50*1000*10usec) to resume */
- do {
- OSAL_UDELAY(delay);
-
- if (ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0) !=
- prev_generic_por_0)
- break;
- } while (cnt++ < retries);
-
- if (ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0) !=
- prev_generic_por_0) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
- "MCP was reset after %d usec\n", cnt * delay);
- } else {
- DP_ERR(p_hwfn, "Failed to reset MCP\n");
- rc = ECORE_AGAIN;
- }
-
- OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
-
- return rc;
-}
-
#ifndef ASIC_ONLY
static void ecore_emul_mcp_load_req(struct ecore_hwfn *p_hwfn,
struct ecore_mcp_mb_params *p_mb_params)
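
The ecore_mcp_reset() removal above also drops a second polling flavour:
rather than waiting for a bit to clear, it snapshots
MISCS_REG_GENERIC_POR_0 before issuing the command and then waits for the
value to change. A generic sketch of that snapshot-and-compare loop; the
reader callback and names are illustrative only:

#include <errno.h>
#include <stdint.h>
#include <unistd.h>

/* Poll until rd(ctx) differs from the pre-command snapshot, or the
 * retry budget runs out (mirrors the do/while in ecore_mcp_reset()).
 */
static int wait_value_changed(uint32_t (*rd)(void *ctx), void *ctx,
                              uint32_t snapshot, uint32_t retries,
                              uint32_t delay_us)
{
        uint32_t cnt = 0;

        do {
                usleep(delay_us);
                if (rd(ctx) != snapshot)
                        return 0;
        } while (cnt++ < retries);

        return -EAGAIN;
}
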
@@ -1844,17 +1792,6 @@ enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
}
-enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- struct ecore_mdump_cmd_params mdump_cmd_params;
-
- OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
- mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_TRIGGER;
-
- return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
-}
-
static enum _ecore_status_t
ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
struct mdump_config_stc *p_mdump_config)
@@ -1931,17 +1868,6 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
return ECORE_SUCCESS;
}
-enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- struct ecore_mdump_cmd_params mdump_cmd_params;
-
- OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
- mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLEAR_LOGS;
-
- return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
-}
-
enum _ecore_status_t
ecore_mcp_mdump_get_retain(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
struct ecore_mdump_retain_data *p_mdump_retain)
@@ -1974,17 +1900,6 @@ ecore_mcp_mdump_get_retain(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
return ECORE_SUCCESS;
}
-enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- struct ecore_mdump_cmd_params mdump_cmd_params;
-
- OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
- mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_CLR_RETAIN;
-
- return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
-}
-
static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
@@ -2282,37 +2197,6 @@ int ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
return 0;
}
-enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 *p_media_type)
-{
- *p_media_type = MEDIA_UNSPECIFIED;
-
- /* TODO - Add support for VFs */
- if (IS_VF(p_hwfn->p_dev))
- return ECORE_INVAL;
-
- if (!ecore_mcp_is_init(p_hwfn)) {
-#ifndef ASIC_ONLY
- if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
- DP_INFO(p_hwfn, "Emulation: Can't get media type\n");
- return ECORE_NOTIMPL;
- }
-#endif
- DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
- return ECORE_BUSY;
- }
-
- if (!p_ptt)
- return ECORE_INVAL;
-
- *p_media_type = ecore_rd(p_hwfn, p_ptt,
- p_hwfn->mcp_info->port_addr +
- OFFSETOF(struct public_port, media_type));
-
- return ECORE_SUCCESS;
-}
-
enum _ecore_status_t ecore_mcp_get_transceiver_data(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 *p_transceiver_state,
@@ -2361,156 +2245,6 @@ static int is_transceiver_ready(u32 transceiver_state, u32 transceiver_type)
return 0;
}
-enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 *p_speed_mask)
-{
- u32 transceiver_type = ETH_TRANSCEIVER_TYPE_NONE, transceiver_state;
-
- ecore_mcp_get_transceiver_data(p_hwfn, p_ptt, &transceiver_state,
- &transceiver_type);
-
-
- if (is_transceiver_ready(transceiver_state, transceiver_type) == 0)
- return ECORE_INVAL;
-
- switch (transceiver_type) {
- case ETH_TRANSCEIVER_TYPE_1G_LX:
- case ETH_TRANSCEIVER_TYPE_1G_SX:
- case ETH_TRANSCEIVER_TYPE_1G_PCC:
- case ETH_TRANSCEIVER_TYPE_1G_ACC:
- case ETH_TRANSCEIVER_TYPE_1000BASET:
- *p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_10G_SR:
- case ETH_TRANSCEIVER_TYPE_10G_LR:
- case ETH_TRANSCEIVER_TYPE_10G_LRM:
- case ETH_TRANSCEIVER_TYPE_10G_ER:
- case ETH_TRANSCEIVER_TYPE_10G_PCC:
- case ETH_TRANSCEIVER_TYPE_10G_ACC:
- case ETH_TRANSCEIVER_TYPE_4x10G:
- *p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_40G_LR4:
- case ETH_TRANSCEIVER_TYPE_40G_SR4:
- case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_SR:
- case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_LR:
- *p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_100G_AOC:
- case ETH_TRANSCEIVER_TYPE_100G_SR4:
- case ETH_TRANSCEIVER_TYPE_100G_LR4:
- case ETH_TRANSCEIVER_TYPE_100G_ER4:
- case ETH_TRANSCEIVER_TYPE_100G_ACC:
- *p_speed_mask =
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_25G_SR:
- case ETH_TRANSCEIVER_TYPE_25G_LR:
- case ETH_TRANSCEIVER_TYPE_25G_AOC:
- case ETH_TRANSCEIVER_TYPE_25G_ACC_S:
- case ETH_TRANSCEIVER_TYPE_25G_ACC_M:
- case ETH_TRANSCEIVER_TYPE_25G_ACC_L:
- *p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_25G_CA_N:
- case ETH_TRANSCEIVER_TYPE_25G_CA_S:
- case ETH_TRANSCEIVER_TYPE_25G_CA_L:
- case ETH_TRANSCEIVER_TYPE_4x25G_CR:
- *p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_40G_CR4:
- case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_CR:
- *p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_100G_CR4:
- case ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_CR:
- *p_speed_mask =
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_20G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_SR:
- case ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_LR:
- case ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_AOC:
- *p_speed_mask =
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_XLPPI:
- *p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G;
- break;
-
- case ETH_TRANSCEIVER_TYPE_10G_BASET:
- *p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
- NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
- break;
-
- default:
- DP_INFO(p_hwfn, "Unknown transcevier type 0x%x\n",
- transceiver_type);
- *p_speed_mask = 0xff;
- break;
- }
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_get_board_config(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 *p_board_config)
-{
- u32 nvm_cfg_addr, nvm_cfg1_offset, port_cfg_addr;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- /* TODO - Add support for VFs */
- if (IS_VF(p_hwfn->p_dev))
- return ECORE_INVAL;
-
- if (!ecore_mcp_is_init(p_hwfn)) {
- DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
- return ECORE_BUSY;
- }
- if (!p_ptt) {
- *p_board_config = NVM_CFG1_PORT_PORT_TYPE_UNDEFINED;
- rc = ECORE_INVAL;
- } else {
- nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt,
- MISC_REG_GEN_PURP_CR0);
- nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt,
- nvm_cfg_addr + 4);
- port_cfg_addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
- offsetof(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]);
- *p_board_config = ecore_rd(p_hwfn, p_ptt,
- port_cfg_addr +
- offsetof(struct nvm_cfg1_port,
- board_cfg));
- }
-
- return rc;
-}
-
/* @DPDK */
/* Old MFW has a global configuration for all PFs regarding RDMA support */
static void
@@ -2670,41 +2404,6 @@ enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
return rc;
}
-const struct ecore_mcp_function_info
-*ecore_mcp_get_function_info(struct ecore_hwfn *p_hwfn)
-{
- if (!p_hwfn || !p_hwfn->mcp_info)
- return OSAL_NULL;
- return &p_hwfn->mcp_info->func_info;
-}
-
-int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 personalities)
-{
- enum ecore_pci_personality protocol = ECORE_PCI_DEFAULT;
- struct public_func shmem_info;
- int i, count = 0, num_pfs;
-
- num_pfs = NUM_OF_ENG_PFS(p_hwfn->p_dev);
-
- for (i = 0; i < num_pfs; i++) {
- ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info,
- MCP_PF_ID_BY_REL(p_hwfn, i));
- if (shmem_info.config & FUNC_MF_CFG_FUNC_HIDE)
- continue;
-
- if (ecore_mcp_get_shmem_proto(p_hwfn, &shmem_info, p_ptt,
- &protocol) !=
- ECORE_SUCCESS)
- continue;
-
- if ((1 << ((u32)protocol)) & personalities)
- count++;
- }
-
- return count;
-}
-
enum _ecore_status_t ecore_mcp_get_flash_size(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 *p_flash_size)
@@ -2731,24 +2430,6 @@ enum _ecore_status_t ecore_mcp_get_flash_size(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- struct ecore_dev *p_dev = p_hwfn->p_dev;
-
- if (p_dev->recov_in_prog) {
- DP_NOTICE(p_hwfn, false,
- "Avoid triggering a recovery since such a process"
- " is already in progress\n");
- return ECORE_AGAIN;
- }
-
- DP_NOTICE(p_hwfn, false, "Triggering a recovery process\n");
- ecore_wr(p_hwfn, p_ptt, MISC_REG_AEU_GENERAL_ATTN_35, 0x1);
-
- return ECORE_SUCCESS;
-}
-
static enum _ecore_status_t
ecore_mcp_config_vf_msix_bb(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
@@ -2928,38 +2609,6 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-enum _ecore_status_t
-ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_ov_client client)
-{
- u32 resp = 0, param = 0;
- u32 drv_mb_param;
- enum _ecore_status_t rc;
-
- switch (client) {
- case ECORE_OV_CLIENT_DRV:
- drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OS;
- break;
- case ECORE_OV_CLIENT_USER:
- drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_OTHER;
- break;
- case ECORE_OV_CLIENT_VENDOR_SPEC:
- drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
- break;
- default:
- DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", client);
- return ECORE_INVAL;
- }
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_CURR_CFG,
- drv_mb_param, &resp, &param);
- if (rc != ECORE_SUCCESS)
- DP_ERR(p_hwfn, "MCP response failure, aborting\n");
-
- return rc;
-}
-
enum _ecore_status_t
ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
@@ -2992,13 +2641,6 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
return rc;
}
-enum _ecore_status_t
-ecore_mcp_ov_get_fc_npiv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_fc_npiv_tbl *p_table)
-{
- return 0;
-}
-
enum _ecore_status_t
ecore_mcp_ov_update_mtu(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
u16 mtu)
@@ -3015,28 +2657,6 @@ ecore_mcp_ov_update_mtu(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
return rc;
}
-enum _ecore_status_t
-ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- u8 *mac)
-{
- struct ecore_mcp_mb_params mb_params;
- union drv_union_data union_data;
- enum _ecore_status_t rc;
-
- OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
- mb_params.cmd = DRV_MSG_CODE_SET_VMAC;
- SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_VMAC_TYPE,
- DRV_MSG_CODE_VMAC_TYPE_MAC);
- mb_params.param |= MCP_PF_ID(p_hwfn);
- OSAL_MEMCPY(&union_data.raw_data, mac, ETH_ALEN);
- mb_params.p_data_src = &union_data;
- rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
- if (rc != ECORE_SUCCESS)
- DP_ERR(p_hwfn, "Failed to send mac address, rc = %d\n", rc);
-
- return rc;
-}
-
enum _ecore_status_t
ecore_mcp_ov_update_eswitch(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
enum ecore_ov_eswitch eswitch)
@@ -3068,36 +2688,6 @@ ecore_mcp_ov_update_eswitch(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
return rc;
}
-enum _ecore_status_t ecore_mcp_set_led(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_led_mode mode)
-{
- u32 resp = 0, param = 0, drv_mb_param;
- enum _ecore_status_t rc;
-
- switch (mode) {
- case ECORE_LED_MODE_ON:
- drv_mb_param = DRV_MB_PARAM_SET_LED_MODE_ON;
- break;
- case ECORE_LED_MODE_OFF:
- drv_mb_param = DRV_MB_PARAM_SET_LED_MODE_OFF;
- break;
- case ECORE_LED_MODE_RESTORE:
- drv_mb_param = DRV_MB_PARAM_SET_LED_MODE_OPER;
- break;
- default:
- DP_NOTICE(p_hwfn, true, "Invalid LED mode %d\n", mode);
- return ECORE_INVAL;
- }
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_LED_MODE,
- drv_mb_param, &resp, &param);
- if (rc != ECORE_SUCCESS)
- DP_ERR(p_hwfn, "MCP response failure, aborting\n");
-
- return rc;
-}
-
enum _ecore_status_t ecore_mcp_mask_parities(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 mask_parities)
@@ -3176,482 +2766,37 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
return rc;
}
-enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
- u32 addr, u8 *p_buf, u32 *p_len)
+enum _ecore_status_t
+ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt, u32 *num_images)
{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- struct ecore_ptt *p_ptt;
- u32 resp = 0, param;
- enum _ecore_status_t rc;
+ u32 drv_mb_param = 0, rsp;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_BUSY;
+ SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_BIST_TEST_INDEX,
+ DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES);
- rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
- (cmd == ECORE_PHY_CORE_READ) ?
- DRV_MSG_CODE_PHY_CORE_READ :
- DRV_MSG_CODE_PHY_RAW_READ,
- addr, &resp, &param, p_len, (u32 *)p_buf);
+ rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+ drv_mb_param, &rsp, num_images);
if (rc != ECORE_SUCCESS)
- DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
+ return rc;
- p_dev->mcp_nvm_resp = resp;
- ecore_ptt_release(p_hwfn, p_ptt);
+ if (rsp == FW_MSG_CODE_UNSUPPORTED)
+ rc = ECORE_NOTIMPL;
+ else if (rsp != FW_MSG_CODE_OK)
+ rc = ECORE_UNKNOWN_ERROR;
return rc;
}
-enum _ecore_status_t ecore_mcp_nvm_resp(struct ecore_dev *p_dev, u8 *p_buf)
+enum _ecore_status_t
+ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct bist_nvm_image_att *p_image_att,
+ u32 image_index)
{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- struct ecore_ptt *p_ptt;
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_BUSY;
-
- OSAL_MEMCPY(p_buf, &p_dev->mcp_nvm_resp, sizeof(p_dev->mcp_nvm_resp));
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev, u32 addr)
-{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- struct ecore_ptt *p_ptt;
- u32 resp = 0, param;
- enum _ecore_status_t rc;
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_BUSY;
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_DEL_FILE, addr,
- &resp, &param);
- p_dev->mcp_nvm_resp = resp;
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return rc;
-}
-
-enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
- u32 addr)
-{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- struct ecore_ptt *p_ptt;
- u32 resp = 0, param;
- enum _ecore_status_t rc;
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_BUSY;
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_PUT_FILE_BEGIN, addr,
- &resp, ¶m);
- p_dev->mcp_nvm_resp = resp;
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return rc;
-}
-
-/* rc defaults to ECORE_INVAL because the while loop
- * is never entered when len is 0
- */
-enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
- u32 addr, u8 *p_buf, u32 len)
-{
- u32 buf_idx, buf_size, nvm_cmd, nvm_offset;
- u32 resp = FW_MSG_CODE_ERROR, param;
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- enum _ecore_status_t rc = ECORE_INVAL;
- struct ecore_ptt *p_ptt;
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_BUSY;
-
- switch (cmd) {
- case ECORE_PUT_FILE_DATA:
- nvm_cmd = DRV_MSG_CODE_NVM_PUT_FILE_DATA;
- break;
- case ECORE_NVM_WRITE_NVRAM:
- nvm_cmd = DRV_MSG_CODE_NVM_WRITE_NVRAM;
- break;
- case ECORE_EXT_PHY_FW_UPGRADE:
- nvm_cmd = DRV_MSG_CODE_EXT_PHY_FW_UPGRADE;
- break;
- default:
- DP_NOTICE(p_hwfn, true, "Invalid nvm write command 0x%x\n",
- cmd);
- rc = ECORE_INVAL;
- goto out;
- }
-
- buf_idx = 0;
- while (buf_idx < len) {
- buf_size = OSAL_MIN_T(u32, (len - buf_idx),
- MCP_DRV_NVM_BUF_LEN);
- nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_OFFSET) |
- addr) +
- buf_idx;
- rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, nvm_offset,
- &resp, &param, buf_size,
- (u32 *)&p_buf[buf_idx]);
- if (rc != ECORE_SUCCESS) {
- DP_NOTICE(p_dev, false,
- "ecore_mcp_nvm_write() failed, rc = %d\n",
- rc);
- resp = FW_MSG_CODE_ERROR;
- break;
- }
-
- if (resp != FW_MSG_CODE_OK &&
- resp != FW_MSG_CODE_NVM_OK &&
- resp != FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK) {
- DP_NOTICE(p_dev, false,
- "nvm write failed, resp = 0x%08x\n", resp);
- rc = ECORE_UNKNOWN_ERROR;
- break;
- }
-
- /* This can be a lengthy process, and it's possible the scheduler
- * isn't preemptible. Sleep a bit to prevent CPU hogging.
- */
- if (buf_idx % 0x1000 >
- (buf_idx + buf_size) % 0x1000)
- OSAL_MSLEEP(1);
-
- buf_idx += buf_size;
- }
-
- p_dev->mcp_nvm_resp = resp;
-out:
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return rc;
-}
-
-enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
- u32 addr, u8 *p_buf, u32 len)
-{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- u32 resp = 0, param, nvm_cmd;
- struct ecore_ptt *p_ptt;
- enum _ecore_status_t rc;
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_BUSY;
-
- nvm_cmd = (cmd == ECORE_PHY_CORE_WRITE) ? DRV_MSG_CODE_PHY_CORE_WRITE :
- DRV_MSG_CODE_PHY_RAW_WRITE;
- rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, addr,
- &resp, &param, len, (u32 *)p_buf);
- if (rc != ECORE_SUCCESS)
- DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
- p_dev->mcp_nvm_resp = resp;
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return rc;
-}
-
-enum _ecore_status_t ecore_mcp_nvm_set_secure_mode(struct ecore_dev *p_dev,
- u32 addr)
-{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- struct ecore_ptt *p_ptt;
- u32 resp = 0, param;
- enum _ecore_status_t rc;
-
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_BUSY;
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_SECURE_MODE, addr,
- &resp, &param);
- p_dev->mcp_nvm_resp = resp;
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return rc;
-}
-
-enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 port, u32 addr, u32 offset,
- u32 len, u8 *p_buf)
-{
- u32 bytes_left, bytes_to_copy, buf_size, nvm_offset;
- u32 resp, param;
- enum _ecore_status_t rc;
-
- nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
- (addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
- addr = offset;
- offset = 0;
- bytes_left = len;
- while (bytes_left > 0) {
- bytes_to_copy = OSAL_MIN_T(u32, bytes_left,
- MAX_I2C_TRANSACTION_SIZE);
- nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
- DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
- nvm_offset |= ((addr + offset) <<
- DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
- nvm_offset |= (bytes_to_copy <<
- DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
- rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
- DRV_MSG_CODE_TRANSCEIVER_READ,
- nvm_offset, &resp, &param, &buf_size,
- (u32 *)(p_buf + offset));
- if (rc != ECORE_SUCCESS) {
- DP_NOTICE(p_hwfn, false,
- "Failed to send a transceiver read command to the MFW. rc = %d.\n",
- rc);
- return rc;
- }
-
- if (resp == FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT)
- return ECORE_NODEV;
- else if (resp != FW_MSG_CODE_TRANSCEIVER_DIAG_OK)
- return ECORE_UNKNOWN_ERROR;
-
- offset += buf_size;
- bytes_left -= buf_size;
- }
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 port, u32 addr, u32 offset,
- u32 len, u8 *p_buf)
-{
- u32 buf_idx, buf_size, nvm_offset, resp, param;
- enum _ecore_status_t rc;
-
- nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
- (addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
- buf_idx = 0;
- while (buf_idx < len) {
- buf_size = OSAL_MIN_T(u32, (len - buf_idx),
- MAX_I2C_TRANSACTION_SIZE);
- nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
- DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
- nvm_offset |= ((offset + buf_idx) <<
- DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
- nvm_offset |= (buf_size <<
- DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
- rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt,
- DRV_MSG_CODE_TRANSCEIVER_WRITE,
- nvm_offset, &resp, &param, buf_size,
- (u32 *)&p_buf[buf_idx]);
- if (rc != ECORE_SUCCESS) {
- DP_NOTICE(p_hwfn, false,
- "Failed to send a transceiver write command to the MFW. rc = %d.\n",
- rc);
- return rc;
- }
-
- if (resp == FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT)
- return ECORE_NODEV;
- else if (resp != FW_MSG_CODE_TRANSCEIVER_DIAG_OK)
- return ECORE_UNKNOWN_ERROR;
-
- buf_idx += buf_size;
- }
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 gpio, u32 *gpio_val)
-{
- enum _ecore_status_t rc = ECORE_SUCCESS;
- u32 drv_mb_param = 0, rsp = 0;
-
- drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET);
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_READ,
- drv_mb_param, &rsp, gpio_val);
-
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
- return ECORE_UNKNOWN_ERROR;
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 gpio, u16 gpio_val)
-{
- enum _ecore_status_t rc = ECORE_SUCCESS;
- u32 drv_mb_param = 0, param, rsp = 0;
-
- drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET) |
- (gpio_val << DRV_MB_PARAM_GPIO_VALUE_OFFSET);
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_WRITE,
- drv_mb_param, &rsp, &param);
-
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
- return ECORE_UNKNOWN_ERROR;
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 gpio, u32 *gpio_direction,
- u32 *gpio_ctrl)
-{
- u32 drv_mb_param = 0, rsp, val = 0;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- drv_mb_param = gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET;
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_INFO,
- drv_mb_param, &rsp, &val);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- *gpio_direction = (val & DRV_MB_PARAM_GPIO_DIRECTION_MASK) >>
- DRV_MB_PARAM_GPIO_DIRECTION_OFFSET;
- *gpio_ctrl = (val & DRV_MB_PARAM_GPIO_CTRL_MASK) >>
- DRV_MB_PARAM_GPIO_CTRL_OFFSET;
-
- if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
- return ECORE_UNKNOWN_ERROR;
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- u32 drv_mb_param = 0, rsp, param;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- drv_mb_param = (DRV_MB_PARAM_BIST_REGISTER_TEST <<
- DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
- drv_mb_param, &rsp, &param);
-
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
- (param != DRV_MB_PARAM_BIST_RC_PASSED))
- rc = ECORE_UNKNOWN_ERROR;
-
- return rc;
-}
-
-enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- u32 drv_mb_param, rsp, param;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- drv_mb_param = (DRV_MB_PARAM_BIST_CLOCK_TEST <<
- DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
- drv_mb_param, &rsp, &param);
-
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
- (param != DRV_MB_PARAM_BIST_RC_PASSED))
- rc = ECORE_UNKNOWN_ERROR;
-
- return rc;
-}
-
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
- struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 *num_images)
-{
- u32 drv_mb_param = 0, rsp = 0;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- drv_mb_param = (DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES <<
- DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
- drv_mb_param, &rsp, num_images);
-
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK))
- rc = ECORE_UNKNOWN_ERROR;
-
- return rc;
-}
-
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
- struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct bist_nvm_image_att *p_image_att, u32 image_index)
-{
- u32 buf_size, nvm_offset, resp, param;
- enum _ecore_status_t rc;
-
- nvm_offset = (DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX <<
- DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
- nvm_offset |= (image_index <<
- DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_OFFSET);
- rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
- nvm_offset, &resp, &param, &buf_size,
- (u32 *)p_image_att);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if (((resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
- (p_image_att->return_code != 1))
- rc = ECORE_UNKNOWN_ERROR;
-
- return rc;
-}
-
-enum _ecore_status_t
-ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 *num_images)
-{
- u32 drv_mb_param = 0, rsp;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_BIST_TEST_INDEX,
- DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES);
-
- rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
- drv_mb_param, &rsp, num_images);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if (rsp == FW_MSG_CODE_UNSUPPORTED)
- rc = ECORE_NOTIMPL;
- else if (rsp != FW_MSG_CODE_OK)
- rc = ECORE_UNKNOWN_ERROR;
-
- return rc;
-}
-
-enum _ecore_status_t
-ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct bist_nvm_image_att *p_image_att,
- u32 image_index)
-{
- u32 buf_size, nvm_offset = 0, resp, param;
- enum _ecore_status_t rc;
+ u32 buf_size, nvm_offset = 0, resp, param;
+ enum _ecore_status_t rc;
SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_BIST_TEST_INDEX,
DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX);
@@ -3800,111 +2945,6 @@ ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
- enum ecore_nvm_images image_id,
- u8 *p_buffer, u32 buffer_len)
-{
- struct ecore_nvm_image_att image_att;
- enum _ecore_status_t rc;
-
- OSAL_MEM_ZERO(p_buffer, buffer_len);
-
- rc = ecore_mcp_get_nvm_image_att(p_hwfn, image_id, &image_att);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- /* Validate sizes - both the image's and the supplied buffer's */
- if (image_att.length <= 4) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
- "Image [%d] is too small - only %d bytes\n",
- image_id, image_att.length);
- return ECORE_INVAL;
- }
-
- if (image_att.length > buffer_len) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
- "Image [%d] is too big - %08x bytes where only %08x are available\n",
- image_id, image_att.length, buffer_len);
- return ECORE_NOMEM;
- }
-
- return ecore_mcp_nvm_read(p_hwfn->p_dev, image_att.start_addr,
- (u8 *)p_buffer, image_att.length);
-}
-
-enum _ecore_status_t
-ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_temperature_info *p_temp_info)
-{
- struct ecore_temperature_sensor *p_temp_sensor;
- struct temperature_status_stc mfw_temp_info;
- struct ecore_mcp_mb_params mb_params;
- u32 val;
- enum _ecore_status_t rc;
- u8 i;
-
- OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
- mb_params.cmd = DRV_MSG_CODE_GET_TEMPERATURE;
- mb_params.p_data_dst = &mfw_temp_info;
- mb_params.data_dst_size = sizeof(mfw_temp_info);
- rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- OSAL_BUILD_BUG_ON(ECORE_MAX_NUM_OF_SENSORS != MAX_NUM_OF_SENSORS);
- p_temp_info->num_sensors = OSAL_MIN_T(u32, mfw_temp_info.num_of_sensors,
- ECORE_MAX_NUM_OF_SENSORS);
- for (i = 0; i < p_temp_info->num_sensors; i++) {
- val = mfw_temp_info.sensor[i];
- p_temp_sensor = &p_temp_info->sensors[i];
- p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
- SENSOR_LOCATION_OFFSET;
- p_temp_sensor->threshold_high = (val & THRESHOLD_HIGH_MASK) >>
- THRESHOLD_HIGH_OFFSET;
- p_temp_sensor->critical = (val & CRITICAL_TEMPERATURE_MASK) >>
- CRITICAL_TEMPERATURE_OFFSET;
- p_temp_sensor->current_temp = (val & CURRENT_TEMP_MASK) >>
- CURRENT_TEMP_OFFSET;
- }
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_mcp_get_mba_versions(
- struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_mba_vers *p_mba_vers)
-{
- u32 buf_size, resp, param;
- enum _ecore_status_t rc;
-
- rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_MBA_VERSION,
- 0, &resp, &param, &buf_size,
- &p_mba_vers->mba_vers[0]);
-
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if ((resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_NVM_OK)
- rc = ECORE_UNKNOWN_ERROR;
-
- if (buf_size != MCP_DRV_NVM_BUF_LEN)
- rc = ECORE_UNKNOWN_ERROR;
-
- return rc;
-}
-
-enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u64 *num_events)
-{
- u32 rsp;
-
- return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MEM_ECC_EVENTS,
- 0, &rsp, (u32 *)num_events);
-}
-
static enum resource_id_enum
ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
{
@@ -3984,25 +3024,6 @@ struct ecore_resc_alloc_out_params {
#define ECORE_RECOVERY_PROLOG_SLEEP_MS 100
-enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev)
-{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
- struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
- enum _ecore_status_t rc;
-
- /* Allow ongoing PCIe transactions to complete */
- OSAL_MSLEEP(ECORE_RECOVERY_PROLOG_SLEEP_MS);
-
- /* Clear the PF's internal FID_enable in the PXP */
- rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_ptt, false);
- if (rc != ECORE_SUCCESS)
- DP_NOTICE(p_hwfn, false,
- "ecore_pglueb_set_pfid_enable() failed. rc = %d.\n",
- rc);
-
- return rc;
-}
-
static enum _ecore_status_t
ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
@@ -4380,79 +3401,6 @@ enum _ecore_status_t ecore_mcp_set_capabilities(struct ecore_hwfn *p_hwfn,
features, &mcp_resp, &mcp_param);
}
-enum _ecore_status_t
-ecore_mcp_drv_attribute(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_mcp_drv_attr *p_drv_attr)
-{
- struct attribute_cmd_write_stc attr_cmd_write;
- enum _attribute_commands_e mfw_attr_cmd;
- struct ecore_mcp_mb_params mb_params;
- enum _ecore_status_t rc;
-
- switch (p_drv_attr->attr_cmd) {
- case ECORE_MCP_DRV_ATTR_CMD_READ:
- mfw_attr_cmd = ATTRIBUTE_CMD_READ;
- break;
- case ECORE_MCP_DRV_ATTR_CMD_WRITE:
- mfw_attr_cmd = ATTRIBUTE_CMD_WRITE;
- break;
- case ECORE_MCP_DRV_ATTR_CMD_READ_CLEAR:
- mfw_attr_cmd = ATTRIBUTE_CMD_READ_CLEAR;
- break;
- case ECORE_MCP_DRV_ATTR_CMD_CLEAR:
- mfw_attr_cmd = ATTRIBUTE_CMD_CLEAR;
- break;
- default:
- DP_NOTICE(p_hwfn, false, "Unknown attribute command %d\n",
- p_drv_attr->attr_cmd);
- return ECORE_INVAL;
- }
-
- OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
- mb_params.cmd = DRV_MSG_CODE_ATTRIBUTE;
- SET_MFW_FIELD(mb_params.param, DRV_MB_PARAM_ATTRIBUTE_KEY,
- p_drv_attr->attr_num);
- SET_MFW_FIELD(mb_params.param, DRV_MB_PARAM_ATTRIBUTE_CMD,
- mfw_attr_cmd);
- if (p_drv_attr->attr_cmd == ECORE_MCP_DRV_ATTR_CMD_WRITE) {
- OSAL_MEM_ZERO(&attr_cmd_write, sizeof(attr_cmd_write));
- attr_cmd_write.val = p_drv_attr->val;
- attr_cmd_write.mask = p_drv_attr->mask;
- attr_cmd_write.offset = p_drv_attr->offset;
-
- mb_params.p_data_src = &attr_cmd_write;
- mb_params.data_src_size = sizeof(attr_cmd_write);
- }
-
- rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
- DP_INFO(p_hwfn,
- "The attribute command is not supported by the MFW\n");
- return ECORE_NOTIMPL;
- } else if (mb_params.mcp_resp != FW_MSG_CODE_OK) {
- DP_INFO(p_hwfn,
- "Failed to send an attribute command [mcp_resp 0x%x, attr_cmd %d, attr_num %d]\n",
- mb_params.mcp_resp, p_drv_attr->attr_cmd,
- p_drv_attr->attr_num);
- return ECORE_INVAL;
- }
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
- "Attribute Command: cmd %d [mfw_cmd %d], num %d, in={val 0x%08x, mask 0x%08x, offset 0x%08x}, out={val 0x%08x}\n",
- p_drv_attr->attr_cmd, mfw_attr_cmd, p_drv_attr->attr_num,
- p_drv_attr->val, p_drv_attr->mask, p_drv_attr->offset,
- mb_params.mcp_param);
-
- if (p_drv_attr->attr_cmd == ECORE_MCP_DRV_ATTR_CMD_READ ||
- p_drv_attr->attr_cmd == ECORE_MCP_DRV_ATTR_CMD_READ_CLEAR)
- p_drv_attr->val = mb_params.mcp_param;
-
- return ECORE_SUCCESS;
-}
-
enum _ecore_status_t ecore_mcp_get_engine_config(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
@@ -4521,30 +3469,3 @@ enum _ecore_status_t ecore_mcp_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-
-void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- u32 offset, u32 val)
-{
- enum _ecore_status_t rc = ECORE_SUCCESS;
- u32 dword = val;
- struct ecore_mcp_mb_params mb_params;
-
- OSAL_MEMSET(&mb_params, 0, sizeof(struct ecore_mcp_mb_params));
- mb_params.cmd = DRV_MSG_CODE_WRITE_WOL_REG;
- mb_params.param = offset;
- mb_params.p_data_src = &dword;
- mb_params.data_src_size = sizeof(dword);
-
- rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
- if (rc != ECORE_SUCCESS) {
- DP_NOTICE(p_hwfn, false,
- "Failed to wol write request, rc = %d\n", rc);
- }
-
- if (mb_params.mcp_resp != FW_MSG_CODE_WOL_READ_WRITE_OK) {
- DP_NOTICE(p_hwfn, false,
- "Failed to write value 0x%x to offset 0x%x [mcp_resp 0x%x]\n",
- val, offset, mb_params.mcp_resp);
- rc = ECORE_UNKNOWN_ERROR;
- }
-}
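
Several of the NVM and SFP helpers deleted from ecore_mcp.c above share
one transfer pattern: cut the buffer into mailbox-sized chunks and sleep
whenever a 4 KB boundary is crossed, so a non-preemptible scheduler is not
starved. A simplified sketch of that loop, with a hypothetical chunk
writer standing in for ecore_mcp_nvm_wr_cmd():

#include <errno.h>
#include <stdint.h>
#include <unistd.h>

#define CHUNK_LEN 32 /* stand-in for MCP_DRV_NVM_BUF_LEN */

static int write_chunked(uint32_t addr, const uint8_t *buf, uint32_t len,
                         int (*wr_chunk)(uint32_t, const uint8_t *,
                                         uint32_t))
{
        uint32_t idx = 0, sz;
        int rc = -EINVAL; /* stays -EINVAL when len == 0, as in ecore */

        while (idx < len) {
                sz = (len - idx < CHUNK_LEN) ? len - idx : CHUNK_LEN;

                rc = wr_chunk(addr + idx, buf + idx, sz);
                if (rc)
                        break;

                /* Crossed a 4 KB boundary? Yield briefly, like the
                 * OSAL_MSLEEP(1) in the removed ecore_mcp_nvm_write().
                 */
                if (idx % 0x1000 > (idx + sz) % 0x1000)
                        usleep(1000);

                idx += sz;
        }

        return rc;
}
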
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 185cc23394..7dda431d99 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -253,17 +253,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
-/**
- * @brief - Reset the MCP using mailbox command.
- *
- * @param p_hwfn
- * @param p_ptt
- *
- * @return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
/**
* @brief indicates whether the MFW objects [under mcp_info] are accessible
*
@@ -331,18 +320,6 @@ enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 epoch);
-/**
- * @brief - Triggers an MFW crash dump procedure.
- *
- * @param p_hwfn
- * @param p_ptt
- *
- * @return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
struct ecore_mdump_retain_data {
u32 valid;
u32 epoch;
@@ -545,17 +522,6 @@ struct ecore_mcp_drv_attr {
u32 offset;
};
-/**
- * @brief Handle the drivers' attributes that are kept by the MFW.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param p_drv_attr
- */
-enum _ecore_status_t
-ecore_mcp_drv_attribute(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_mcp_drv_attr *p_drv_attr);
-
/**
* @brief Read ufp config from the shared memory.
*
@@ -565,9 +531,6 @@ ecore_mcp_drv_attribute(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
void
ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
-void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- u32 offset, u32 val);
-
/**
* @brief Get the engine affinity configuration.
*
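
One last pattern common to the mailbox helpers dropped in this series:
almost all of them pack several small values into a single 32-bit
drv_mb_param word with mask/offset pairs (GPIO number and value,
transceiver port/address/size, BIST test index); ecore itself does this
via SET_MFW_FIELD(), as in the replacement hunk earlier. A generic sketch
of that packing with made-up mask and offset names:

#include <stdint.h>

#define FIELD_OFFSET 16
#define FIELD_MASK   (0xffffu << FIELD_OFFSET)

/* Insert 'val' into its field of 'word'. */
static inline uint32_t set_field(uint32_t word, uint32_t val)
{
        word &= ~FIELD_MASK;
        word |= (val << FIELD_OFFSET) & FIELD_MASK;
        return word;
}

/* Extract the field again when parsing rsp/param words from the MFW. */
static inline uint32_t get_field(uint32_t word)
{
        return (word & FIELD_MASK) >> FIELD_OFFSET;
}
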
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index c3922ba43a..8bea0dc4a9 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -603,21 +603,6 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
int ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u32 *p_mbi_ver);
-/**
- * @brief Get media type value of the port.
- *
- * @param p_dev - ecore dev pointer
- * @param p_ptt
- * @param media_type - media type value
- *
- * @return enum _ecore_status_t -
- * ECORE_SUCCESS - Operation was successful.
- * ECORE_BUSY - Operation failed
- */
-enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 *media_type);
-
/**
* @brief Get transceiver data of the port.
*
@@ -635,37 +620,6 @@ enum _ecore_status_t ecore_mcp_get_transceiver_data(struct ecore_hwfn *p_hwfn,
u32 *p_transceiver_state,
u32 *p_tranceiver_type);
-/**
- * @brief Get transceiver supported speed mask.
- *
- * @param p_dev - ecore dev pointer
- * @param p_ptt
- * @param p_speed_mask - Bit mask of all supported speeds.
- *
- * @return enum _ecore_status_t -
- * ECORE_SUCCESS - Operation was successful.
- * ECORE_BUSY - Operation failed
- */
-
-enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 *p_speed_mask);
-
-/**
- * @brief Get board configuration.
- *
- * @param p_dev - ecore dev pointer
- * @param p_ptt
- * @param p_board_config - Board config.
- *
- * @return enum _ecore_status_t -
- * ECORE_SUCCESS - Operation was successful.
- * ECORE_BUSY - Operation failed
- */
-enum _ecore_status_t ecore_mcp_get_board_config(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 *p_board_config);
-
/**
* @brief - Sends a command to the MCP mailbox.
*
@@ -694,34 +648,6 @@ enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
-#ifndef LINUX_REMOVE
-/**
- * @brief - return the mcp function info of the hw function
- *
- * @param p_hwfn
- *
- * @returns pointer to mcp function info
- */
-const struct ecore_mcp_function_info
-*ecore_mcp_get_function_info(struct ecore_hwfn *p_hwfn);
-#endif
-
-#ifndef LINUX_REMOVE
-/**
- * @brief - count number of functions with a matching personality on the engine.
- *
- * @param p_hwfn
- * @param p_ptt
- * @param personalities - a bitmask of ecore_pci_personality values
- *
- * @returns the count of all devices on the engine whose personality matches
- * one of the bitmasks.
- */
-int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 personalities);
-#endif
-
/**
* @brief Get the flash size value
*
@@ -760,42 +686,6 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
u32 ecore_get_process_kill_counter(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
-/**
- * @brief Trigger a recovery process
- *
- * @param p_hwfn
- * @param p_ptt
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
-/**
- * @brief A recovery handler must call this function as its first step.
- * It is assumed that the handler is not run from an interrupt context.
- *
- * @param p_dev
- * @param p_ptt
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev);
-
-/**
- * @brief Notify MFW about the change in base device properties
- *
- * @param p_hwfn
- * @param p_ptt
- * @param client - ecore client type
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t
-ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_ov_client client);
-
/**
* @brief Notify MFW about the driver state
*
@@ -810,21 +700,6 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
enum ecore_ov_driver_state drv_state);
-/**
- * @brief Read NPIV settings form the MFW
- *
- * @param p_hwfn
- * @param p_ptt
- * @param p_table - Array to hold the FC NPIV data. The client needs to
- * allocate the required buffer. The field 'count' specifies the number
- * of NPIV entries. A value of 0 means the table was not populated.
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t
-ecore_mcp_ov_get_fc_npiv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- struct ecore_fc_npiv_tbl *p_table);
-
/**
* @brief Send MTU size to MFW
*
@@ -837,19 +712,6 @@ ecore_mcp_ov_get_fc_npiv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
enum _ecore_status_t ecore_mcp_ov_update_mtu(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u16 mtu);
-/**
- * @brief Send MAC address to MFW
- *
- * @param p_hwfn
- * @param p_ptt
- * @param mac - MAC address
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t
-ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- u8 *mac);
-
/**
* @brief Send eswitch mode to MFW
*
@@ -863,104 +725,6 @@ enum _ecore_status_t
ecore_mcp_ov_update_eswitch(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
enum ecore_ov_eswitch eswitch);
-/**
- * @brief Set LED status
- *
- * @param p_hwfn
- * @param p_ptt
- * @param mode - LED mode
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_set_led(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_led_mode mode);
-
-/**
- * @brief Set secure mode
- *
- * @param p_dev
- * @param addr - nvm offset
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_set_secure_mode(struct ecore_dev *p_dev,
- u32 addr);
-
-/**
- * @brief Write to phy
- *
- * @param p_dev
- * @param addr - nvm offset
- * @param cmd - nvm command
- * @param p_buf - nvm write buffer
- * @param len - buffer len
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
- u32 addr, u8 *p_buf, u32 len);
-
-/**
- * @brief Write to nvm
- *
- * @param p_dev
- * @param addr - nvm offset
- * @param cmd - nvm command
- * @param p_buf - nvm write buffer
- * @param len - buffer len
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
- u32 addr, u8 *p_buf, u32 len);
-
-/**
- * @brief Put file begin
- *
- * @param p_dev
- * @param addr - nvm offset
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
- u32 addr);
-
-/**
- * @brief Delete file
- *
- * @param p_dev
- * @param addr - nvm offset
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev,
- u32 addr);
-
-/**
- * @brief Check latest response
- *
- * @param p_dev
- * @param p_buf - nvm write buffer
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_resp(struct ecore_dev *p_dev, u8 *p_buf);
-
-/**
- * @brief Read from phy
- *
- * @param p_dev
- * @param addr - nvm offset
- * @param cmd - nvm command
- * @param p_buf - nvm read buffer
- * @param len - buffer len
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
- u32 addr, u8 *p_buf, u32 *p_len);
-
/**
* @brief Read from nvm
*
@@ -993,20 +757,6 @@ ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn,
enum ecore_nvm_images image_id,
struct ecore_nvm_image_att *p_image_att);
-/**
- * @brief Allows reading a whole nvram image
- *
- * @param p_hwfn
- * @param image_id - image requested for reading
- * @param p_buffer - allocated buffer into which to fill data
- * @param buffer_len - length of the allocated buffer.
- *
- * @return ECORE_SUCCESS if p_buffer now contains the nvram image.
- */
-enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
- enum ecore_nvm_images image_id,
- u8 *p_buffer, u32 buffer_len);
-
/**
* @brief - Sends an NVM write command request to the MFW with
* payload.
@@ -1057,183 +807,6 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
u32 *o_txn_size,
u32 *o_buf);
-/**
- * @brief Read from sfp
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param port - transceiver port
- * @param addr - I2C address
- * @param offset - offset in sfp
- * @param len - buffer length
- * @param p_buf - buffer to read into
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 port, u32 addr, u32 offset,
- u32 len, u8 *p_buf);
-
-/**
- * @brief Write to sfp
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param port - transceiver port
- * @param addr - I2C address
- * @param offset - offset in sfp
- * @param len - buffer length
- * @param p_buf - buffer to write from
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 port, u32 addr, u32 offset,
- u32 len, u8 *p_buf);
-
-/**
- * @brief Gpio read
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param gpio - gpio number
- * @param gpio_val - value read from gpio
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 gpio, u32 *gpio_val);
-
-/**
- * @brief Gpio write
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param gpio - gpio number
- * @param gpio_val - value to write to gpio
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 gpio, u16 gpio_val);
-
-/**
- * @brief Gpio get information
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param gpio - gpio number
- * @param gpio_direction - gpio is output (0) or input (1)
- * @param gpio_ctrl - gpio control is uninitialized (0),
- * path 0 (1), path 1 (2) or shared(3)
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 gpio, u32 *gpio_direction,
- u32 *gpio_ctrl);
-
-/**
- * @brief Bist register test
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
-/**
- * @brief Bist clock test
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
-/**
- * @brief Bist nvm test - get number of images
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param num_images - number of images if operation was
- * successful. 0 if not.
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
- struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 *num_images);
-
-/**
- * @brief Bist nvm test - get image attributes by index
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param p_image_att - Attributes of image
- * @param image_index - Index of image to get information for
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
- struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct bist_nvm_image_att *p_image_att,
- u32 image_index);
-
-/**
- * @brief ecore_mcp_get_temperature_info - get the status of the temperature
- * sensors
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param p_temp_status - A pointer to an ecore_temperature_info structure to
- * be filled with the temperature data
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t
-ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_temperature_info *p_temp_info);
-
-/**
- * @brief Get MBA versions - get MBA sub images versions
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param p_mba_vers - MBA versions array to fill
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_get_mba_versions(
- struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_mba_vers *p_mba_vers);
-
-/**
- * @brief Count memory ecc events
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param num_events - number of memory ecc events
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u64 *num_events);
-
struct ecore_mdump_info {
u32 reason;
u32 version;
@@ -1256,28 +829,6 @@ enum _ecore_status_t
ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
struct ecore_mdump_info *p_mdump_info);
-/**
- * @brief - Clears the MFW crash dump logs.
- *
- * @param p_hwfn
- * @param p_ptt
- *
- * @param return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
-/**
- * @brief - Clear the mdump retained data.
- *
- * @param p_hwfn
- * @param p_ptt
- *
- * @param return ECORE_SUCCESS upon success.
- */
-enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
-
/**
* @brief - Processes the TLV request from MFW i.e., get the required TLV info
* from the ecore client and send it to the MFW.
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 44ced135d6..86fceb36ba 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -486,70 +486,6 @@ u16 ecore_sp_rl_gd_denom(u32 gd)
return gd ? (u16)OSAL_MIN_T(u32, (u16)(~0U), FW_GD_RESOLUTION(gd)) : 0;
}
-enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
- struct ecore_rl_update_params *params)
-{
- struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
- struct rl_update_ramrod_data *rl_update;
- struct ecore_sp_init_data init_data;
-
- /* Get SPQ entry */
- OSAL_MEMSET(&init_data, 0, sizeof(init_data));
- init_data.cid = ecore_spq_get_cid(p_hwfn);
- init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
- init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
-
- rc = ecore_sp_init_request(p_hwfn, &p_ent,
- COMMON_RAMROD_RL_UPDATE, PROTOCOLID_COMMON,
- &init_data);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- rl_update = &p_ent->ramrod.rl_update;
-
- rl_update->qcn_update_param_flg = params->qcn_update_param_flg;
- rl_update->dcqcn_update_param_flg = params->dcqcn_update_param_flg;
- rl_update->rl_init_flg = params->rl_init_flg;
- rl_update->rl_start_flg = params->rl_start_flg;
- rl_update->rl_stop_flg = params->rl_stop_flg;
- rl_update->rl_id_first = params->rl_id_first;
- rl_update->rl_id_last = params->rl_id_last;
- rl_update->rl_dc_qcn_flg = params->rl_dc_qcn_flg;
- rl_update->dcqcn_reset_alpha_on_idle =
- params->dcqcn_reset_alpha_on_idle;
- rl_update->rl_bc_stage_th = params->rl_bc_stage_th;
- rl_update->rl_timer_stage_th = params->rl_timer_stage_th;
- rl_update->rl_bc_rate = OSAL_CPU_TO_LE32(params->rl_bc_rate);
- rl_update->rl_max_rate =
- OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_max_rate));
- rl_update->rl_r_ai =
- OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_ai));
- rl_update->rl_r_hai =
- OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_hai));
- rl_update->dcqcn_g =
- OSAL_CPU_TO_LE16(ecore_sp_rl_gd_denom(params->dcqcn_gd));
- rl_update->dcqcn_k_us = OSAL_CPU_TO_LE32(params->dcqcn_k_us);
- rl_update->dcqcn_timeuot_us =
- OSAL_CPU_TO_LE32(params->dcqcn_timeuot_us);
- rl_update->qcn_timeuot_us = OSAL_CPU_TO_LE32(params->qcn_timeuot_us);
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "rl_params: qcn_update_param_flg %x, dcqcn_update_param_flg %x, rl_init_flg %x, rl_start_flg %x, rl_stop_flg %x, rl_id_first %x, rl_id_last %x, rl_dc_qcn_flg %x,dcqcn_reset_alpha_on_idle %x, rl_bc_stage_th %x, rl_timer_stage_th %x, rl_bc_rate %x, rl_max_rate %x, rl_r_ai %x, rl_r_hai %x, dcqcn_g %x, dcqcn_k_us %x, dcqcn_timeuot_us %x, qcn_timeuot_us %x\n",
- rl_update->qcn_update_param_flg,
- rl_update->dcqcn_update_param_flg,
- rl_update->rl_init_flg, rl_update->rl_start_flg,
- rl_update->rl_stop_flg, rl_update->rl_id_first,
- rl_update->rl_id_last, rl_update->rl_dc_qcn_flg,
- rl_update->dcqcn_reset_alpha_on_idle,
- rl_update->rl_bc_stage_th, rl_update->rl_timer_stage_th,
- rl_update->rl_bc_rate, rl_update->rl_max_rate,
- rl_update->rl_r_ai, rl_update->rl_r_hai,
- rl_update->dcqcn_g, rl_update->dcqcn_k_us,
- rl_update->dcqcn_timeuot_us, rl_update->qcn_timeuot_us);
-
- return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-}
-
/* Set pf update ramrod command params */
enum _ecore_status_t
ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
@@ -620,31 +556,6 @@ enum _ecore_status_t ecore_sp_pf_stop(struct ecore_hwfn *p_hwfn)
return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
}
-enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn)
-{
- struct ecore_spq_entry *p_ent = OSAL_NULL;
- struct ecore_sp_init_data init_data;
- enum _ecore_status_t rc;
-
- /* Get SPQ entry */
- OSAL_MEMSET(&init_data, 0, sizeof(init_data));
- init_data.cid = ecore_spq_get_cid(p_hwfn);
- init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
- init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
-
- rc = ecore_sp_init_request(p_hwfn, &p_ent,
- COMMON_RAMROD_EMPTY, PROTOCOLID_COMMON,
- &init_data);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits))
- p_ent->ramrod.pf_update.mf_vlan |=
- OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13));
-
- return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-}
-
enum _ecore_status_t ecore_sp_pf_update_stag(struct ecore_hwfn *p_hwfn)
{
struct ecore_spq_entry *p_ent = OSAL_NULL;
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 524fe57a14..7d9ec82c7c 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -101,16 +101,6 @@ enum _ecore_status_t ecore_sp_pf_update_dcbx(struct ecore_hwfn *p_hwfn);
enum _ecore_status_t ecore_sp_pf_stop(struct ecore_hwfn *p_hwfn);
-/**
- * @brief ecore_sp_heartbeat_ramrod - Send empty Ramrod
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn);
-
struct ecore_rl_update_params {
u8 qcn_update_param_flg;
u8 dcqcn_update_param_flg;
@@ -133,17 +123,6 @@ struct ecore_rl_update_params {
u32 qcn_timeuot_us;
};
-/**
- * @brief ecore_sp_rl_update - Update rate limiters
- *
- * @param p_hwfn
- * @param params
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
- struct ecore_rl_update_params *params);
-
/**
* @brief ecore_sp_pf_update_stag - PF STAG value update Ramrod
*
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index ed8cc695fe..a7a0a40a74 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -772,39 +772,6 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
}
}
-void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev,
- u8 to_disable)
-{
- u16 i;
-
- if (!IS_ECORE_SRIOV(p_dev))
- return;
-
- for (i = 0; i < p_dev->p_iov_info->total_vfs; i++)
- ecore_iov_set_vf_to_disable(p_dev, i, to_disable);
-}
-
-#ifndef LINUX_REMOVE
-/* @@@TBD Consider taking outside of ecore... */
-enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
- u16 vf_id,
- void *ctx)
-{
- enum _ecore_status_t rc = ECORE_SUCCESS;
- struct ecore_vf_info *vf = ecore_iov_get_vf_info(p_hwfn, vf_id, true);
-
- if (vf != OSAL_NULL) {
- vf->ctx = ctx;
-#ifdef CONFIG_ECORE_SW_CHANNEL
- vf->vf_mbx.sw_mbx.mbx_state = VF_PF_WAIT_FOR_START_REQUEST;
-#endif
- } else {
- rc = ECORE_UNKNOWN_ERROR;
- }
- return rc;
-}
-#endif
-
static void ecore_iov_vf_pglue_clear_err(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u8 abs_vfid)
@@ -1269,70 +1236,6 @@ static void ecore_emul_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
}
#endif
-enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 rel_vf_id)
-{
- struct ecore_mcp_link_capabilities caps;
- struct ecore_mcp_link_params params;
- struct ecore_mcp_link_state link;
- struct ecore_vf_info *vf = OSAL_NULL;
-
- vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!vf) {
- DP_ERR(p_hwfn, "ecore_iov_release_hw_for_vf : vf is NULL\n");
- return ECORE_UNKNOWN_ERROR;
- }
-
- if (vf->bulletin.p_virt)
- OSAL_MEMSET(vf->bulletin.p_virt, 0,
- sizeof(*vf->bulletin.p_virt));
-
- OSAL_MEMSET(&vf->p_vf_info, 0, sizeof(vf->p_vf_info));
-
- /* Get the link configuration back in bulletin so
- * that when VFs are re-enabled they get the actual
- * link configuration.
- */
- OSAL_MEMCPY(&params, ecore_mcp_get_link_params(p_hwfn), sizeof(params));
- OSAL_MEMCPY(&link, ecore_mcp_get_link_state(p_hwfn), sizeof(link));
- OSAL_MEMCPY(&caps, ecore_mcp_get_link_capabilities(p_hwfn),
- sizeof(caps));
- ecore_iov_set_link(p_hwfn, rel_vf_id, &params, &link, &caps);
-
- /* Forget the VF's acquisition message */
- OSAL_MEMSET(&vf->acquire, 0, sizeof(vf->acquire));
-
- /* disablng interrupts and resetting permission table was done during
- * vf-close, however, we could get here without going through vf_close
- */
- /* Disable Interrupts for VF */
- ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0);
-
- /* Reset Permission table */
- ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, 0);
-
- vf->num_rxqs = 0;
- vf->num_txqs = 0;
- ecore_iov_free_vf_igu_sbs(p_hwfn, p_ptt, vf);
-
- if (vf->b_init) {
- vf->b_init = false;
- p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] &=
- ~(1ULL << (vf->relative_vf_id / 64));
-
- if (IS_LEAD_HWFN(p_hwfn))
- p_hwfn->p_dev->p_iov_info->num_vfs--;
- }
-
-#ifndef ASIC_ONLY
- if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
- ecore_emul_iov_release_hw_for_vf(p_hwfn, p_ptt);
-#endif
-
- return ECORE_SUCCESS;
-}
-
static bool ecore_iov_tlv_supported(u16 tlvtype)
{
return tlvtype > CHANNEL_TLV_NONE && tlvtype < CHANNEL_TLV_MAX;
@@ -1573,20 +1476,6 @@ static void ecore_iov_prepare_resp(struct ecore_hwfn *p_hwfn,
ecore_iov_send_response(p_hwfn, p_ptt, vf_info, length, status);
}
-struct ecore_public_vf_info
-*ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn,
- u16 relative_vf_id,
- bool b_enabled_only)
-{
- struct ecore_vf_info *vf = OSAL_NULL;
-
- vf = ecore_iov_get_vf_info(p_hwfn, relative_vf_id, b_enabled_only);
- if (!vf)
- return OSAL_NULL;
-
- return &vf->p_vf_info;
-}
-
static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *p_vf)
{
@@ -3820,93 +3709,6 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
sizeof(struct pfvf_def_resp_tlv), status);
}
-enum _ecore_status_t
-ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
- u16 rx_coal, u16 tx_coal,
- u16 vf_id, u16 qid)
-{
- struct ecore_queue_cid *p_cid;
- struct ecore_vf_info *vf;
- struct ecore_ptt *p_ptt;
- int rc = 0;
- u32 i;
-
- if (!ecore_iov_is_valid_vfid(p_hwfn, vf_id, true, true)) {
- DP_NOTICE(p_hwfn, true,
- "VF[%d] - Can not set coalescing: VF is not active\n",
- vf_id);
- return ECORE_INVAL;
- }
-
- vf = &p_hwfn->pf_iov_info->vfs_array[vf_id];
- p_ptt = ecore_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return ECORE_AGAIN;
-
- if (!ecore_iov_validate_rxq(p_hwfn, vf, qid,
- ECORE_IOV_VALIDATE_Q_ENABLE) &&
- rx_coal) {
- DP_ERR(p_hwfn, "VF[%d]: Invalid Rx queue_id = %d\n",
- vf->abs_vf_id, qid);
- goto out;
- }
-
- if (!ecore_iov_validate_txq(p_hwfn, vf, qid,
- ECORE_IOV_VALIDATE_Q_ENABLE) &&
- tx_coal) {
- DP_ERR(p_hwfn, "VF[%d]: Invalid Tx queue_id = %d\n",
- vf->abs_vf_id, qid);
- goto out;
- }
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF[%d]: Setting coalesce for VF rx_coal = %d, tx_coal = %d at queue = %d\n",
- vf->abs_vf_id, rx_coal, tx_coal, qid);
-
- if (rx_coal) {
- p_cid = ecore_iov_get_vf_rx_queue_cid(&vf->vf_queues[qid]);
-
- rc = ecore_set_rxq_coalesce(p_hwfn, p_ptt, rx_coal, p_cid);
- if (rc != ECORE_SUCCESS) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF[%d]: Unable to set rx queue = %d coalesce\n",
- vf->abs_vf_id, vf->vf_queues[qid].fw_rx_qid);
- goto out;
- }
- vf->rx_coal = rx_coal;
- }
-
- /* TODO - in future, it might be possible to pass this in a per-cid
- * granularity. For now, do this for all Tx queues.
- */
- if (tx_coal) {
- struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
-
- for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
- if (p_queue->cids[i].p_cid == OSAL_NULL)
- continue;
-
- if (!p_queue->cids[i].b_is_tx)
- continue;
-
- rc = ecore_set_txq_coalesce(p_hwfn, p_ptt, tx_coal,
- p_queue->cids[i].p_cid);
- if (rc != ECORE_SUCCESS) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF[%d]: Unable to set tx queue coalesce\n",
- vf->abs_vf_id);
- goto out;
- }
- }
- vf->tx_coal = tx_coal;
- }
-
-out:
- ecore_ptt_release(p_hwfn, p_ptt);
-
- return rc;
-}
-
static enum _ecore_status_t
ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
@@ -4116,24 +3918,6 @@ enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
return rc;
}
-enum _ecore_status_t
-ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u16 rel_vf_id)
-{
- u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- OSAL_MEM_ZERO(ack_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES);
-
- /* Wait instead of polling the BRB <-> PRS interface */
- OSAL_MSLEEP(100);
-
- ecore_iov_execute_vf_flr_cleanup(p_hwfn, p_ptt, rel_vf_id, ack_vfs);
-
- rc = ecore_mcp_ack_vf_flr(p_hwfn, p_ptt, ack_vfs);
- return rc;
-}
-
bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
{
bool found = false;
@@ -4184,28 +3968,6 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
return found;
}
-void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
- u16 vfid,
- struct ecore_mcp_link_params *p_params,
- struct ecore_mcp_link_state *p_link,
- struct ecore_mcp_link_capabilities *p_caps)
-{
- struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
- struct ecore_bulletin_content *p_bulletin;
-
- if (!p_vf)
- return;
-
- p_bulletin = p_vf->bulletin.p_virt;
-
- if (p_params)
- __ecore_vf_get_link_params(p_params, p_bulletin);
- if (p_link)
- __ecore_vf_get_link_state(p_link, p_bulletin);
- if (p_caps)
- __ecore_vf_get_link_caps(p_caps, p_bulletin);
-}
-
void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, int vfid)
{
@@ -4466,12 +4228,6 @@ static enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
}
}
-bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
- return !!(p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &
- (1ULL << (rel_vf_id % 64)));
-}
-
u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
{
struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
@@ -4516,172 +4272,6 @@ enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn,
- u8 *mac, int vfid)
-{
- struct ecore_vf_info *vf_info;
- u64 feature;
-
- vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf_info) {
- DP_NOTICE(p_hwfn->p_dev, true,
- "Can not set forced MAC, invalid vfid [%d]\n", vfid);
- return;
- }
- if (vf_info->b_malicious) {
- DP_NOTICE(p_hwfn->p_dev, false,
- "Can't set forced MAC to malicious VF [%d]\n",
- vfid);
- return;
- }
-
- if (p_hwfn->pf_params.eth_pf_params.allow_vf_mac_change ||
- vf_info->p_vf_info.is_trusted_configured) {
- feature = 1 << VFPF_BULLETIN_MAC_ADDR;
- /* Trust mode will disable Forced MAC */
- vf_info->bulletin.p_virt->valid_bitmap &=
- ~(1 << MAC_ADDR_FORCED);
- } else {
- feature = 1 << MAC_ADDR_FORCED;
- /* Forced MAC will disable MAC_ADDR */
- vf_info->bulletin.p_virt->valid_bitmap &=
- ~(1 << VFPF_BULLETIN_MAC_ADDR);
- }
-
- OSAL_MEMCPY(vf_info->bulletin.p_virt->mac,
- mac, ETH_ALEN);
-
- vf_info->bulletin.p_virt->valid_bitmap |= feature;
-
- ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
-}
-
-enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
- u8 *mac, int vfid)
-{
- struct ecore_vf_info *vf_info;
- u64 feature;
-
- vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf_info) {
- DP_NOTICE(p_hwfn->p_dev, true,
- "Can not set MAC, invalid vfid [%d]\n", vfid);
- return ECORE_INVAL;
- }
- if (vf_info->b_malicious) {
- DP_NOTICE(p_hwfn->p_dev, false,
- "Can't set MAC to malicious VF [%d]\n",
- vfid);
- return ECORE_INVAL;
- }
-
- if (vf_info->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED)) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "Can not set MAC, Forced MAC is configured\n");
- return ECORE_INVAL;
- }
-
- feature = 1 << VFPF_BULLETIN_MAC_ADDR;
- OSAL_MEMCPY(vf_info->bulletin.p_virt->mac, mac, ETH_ALEN);
-
- vf_info->bulletin.p_virt->valid_bitmap |= feature;
-
- if (p_hwfn->pf_params.eth_pf_params.allow_vf_mac_change ||
- vf_info->p_vf_info.is_trusted_configured)
- ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
-
- return ECORE_SUCCESS;
-}
-
-#ifndef LINUX_REMOVE
-enum _ecore_status_t
-ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
- bool b_untagged_only, int vfid)
-{
- struct ecore_vf_info *vf_info;
- u64 feature;
-
- vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf_info) {
- DP_NOTICE(p_hwfn->p_dev, true,
- "Can not set untagged default, invalid vfid [%d]\n",
- vfid);
- return ECORE_INVAL;
- }
- if (vf_info->b_malicious) {
- DP_NOTICE(p_hwfn->p_dev, false,
- "Can't set untagged default to malicious VF [%d]\n",
- vfid);
- return ECORE_INVAL;
- }
-
- /* Since this is configurable only during vport-start, don't take it
- * if we're past that point.
- */
- if (vf_info->state == VF_ENABLED) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "Can't support untagged change for vfid[%d] -"
- " VF is already active\n",
- vfid);
- return ECORE_INVAL;
- }
-
- /* Set configuration; This will later be taken into account during the
- * VF initialization.
- */
- feature = (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT) |
- (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED);
- vf_info->bulletin.p_virt->valid_bitmap |= feature;
-
- vf_info->bulletin.p_virt->default_only_untagged = b_untagged_only ? 1
- : 0;
-
- return ECORE_SUCCESS;
-}
-
-void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
- u16 *opaque_fid)
-{
- struct ecore_vf_info *vf_info;
-
- vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf_info)
- return;
-
- *opaque_fid = vf_info->opaque_fid;
-}
-#endif
-
-void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
- u16 pvid, int vfid)
-{
- struct ecore_vf_info *vf_info;
- u64 feature;
-
- vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf_info) {
- DP_NOTICE(p_hwfn->p_dev, true,
- "Can not set forced MAC, invalid vfid [%d]\n",
- vfid);
- return;
- }
- if (vf_info->b_malicious) {
- DP_NOTICE(p_hwfn->p_dev, false,
- "Can't set forced vlan to malicious VF [%d]\n",
- vfid);
- return;
- }
-
- feature = 1 << VLAN_ADDR_FORCED;
- vf_info->bulletin.p_virt->pvid = pvid;
- if (pvid)
- vf_info->bulletin.p_virt->valid_bitmap |= feature;
- else
- vf_info->bulletin.p_virt->valid_bitmap &= ~feature;
-
- ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
-}
-
void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn,
int vfid, u16 vxlan_port, u16 geneve_port)
{
@@ -4715,360 +4305,3 @@ bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
return !!p_vf_info->vport_instance;
}
-
-bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid)
-{
- struct ecore_vf_info *p_vf_info;
-
- p_vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!p_vf_info)
- return true;
-
- return p_vf_info->state == VF_STOPPED;
-}
-
-bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid)
-{
- struct ecore_vf_info *vf_info;
-
- vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf_info)
- return false;
-
- return vf_info->spoof_chk;
-}
-
-enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
- int vfid, bool val)
-{
- struct ecore_vf_info *vf;
- enum _ecore_status_t rc = ECORE_INVAL;
-
- if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
- DP_NOTICE(p_hwfn, true,
- "SR-IOV sanity check failed, can't set spoofchk\n");
- goto out;
- }
-
- vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf)
- goto out;
-
- if (!ecore_iov_vf_has_vport_instance(p_hwfn, vfid)) {
- /* After VF VPORT start PF will configure spoof check */
- vf->req_spoofchk_val = val;
- rc = ECORE_SUCCESS;
- goto out;
- }
-
- rc = __ecore_iov_spoofchk_set(p_hwfn, vf, val);
-
-out:
- return rc;
-}
-
-u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn)
-{
- u8 max_chains_per_vf = p_hwfn->hw_info.max_chains_per_vf;
-
- max_chains_per_vf = (max_chains_per_vf) ? max_chains_per_vf
- : ECORE_MAX_VF_CHAINS_PER_PF;
-
- return max_chains_per_vf;
-}
-
-void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id,
- void **pp_req_virt_addr,
- u16 *p_req_virt_size)
-{
- struct ecore_vf_info *vf_info =
- ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-
- if (!vf_info)
- return;
-
- if (pp_req_virt_addr)
- *pp_req_virt_addr = vf_info->vf_mbx.req_virt;
-
- if (p_req_virt_size)
- *p_req_virt_size = sizeof(*vf_info->vf_mbx.req_virt);
-}
-
-void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id,
- void **pp_reply_virt_addr,
- u16 *p_reply_virt_size)
-{
- struct ecore_vf_info *vf_info =
- ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-
- if (!vf_info)
- return;
-
- if (pp_reply_virt_addr)
- *pp_reply_virt_addr = vf_info->vf_mbx.reply_virt;
-
- if (p_reply_virt_size)
- *p_reply_virt_size = sizeof(*vf_info->vf_mbx.reply_virt);
-}
-
-#ifdef CONFIG_ECORE_SW_CHANNEL
-struct ecore_iov_sw_mbx *ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- struct ecore_vf_info *vf_info =
- ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
-
- if (!vf_info)
- return OSAL_NULL;
-
- return &vf_info->vf_mbx.sw_mbx;
-}
-#endif
-
-bool ecore_iov_is_valid_vfpf_msg_length(u32 length)
-{
- return (length >= sizeof(struct vfpf_first_tlv) &&
- (length <= sizeof(union vfpf_tlvs)));
-}
-
-u32 ecore_iov_pfvf_msg_length(void)
-{
- return sizeof(union pfvf_tlvs);
-}
-
-u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf || !p_vf->bulletin.p_virt)
- return OSAL_NULL;
-
- if (!(p_vf->bulletin.p_virt->valid_bitmap &
- (1 << VFPF_BULLETIN_MAC_ADDR)))
- return OSAL_NULL;
-
- return p_vf->bulletin.p_virt->mac;
-}
-
-u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf || !p_vf->bulletin.p_virt)
- return OSAL_NULL;
-
- if (!(p_vf->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED)))
- return OSAL_NULL;
-
- return p_vf->bulletin.p_virt->mac;
-}
-
-u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf || !p_vf->bulletin.p_virt)
- return 0;
-
- if (!(p_vf->bulletin.p_virt->valid_bitmap & (1 << VLAN_ADDR_FORCED)))
- return 0;
-
- return p_vf->bulletin.p_virt->pvid;
-}
-
-enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- int vfid, int val)
-{
- struct ecore_vf_info *vf;
- u8 abs_vp_id = 0;
- u16 rl_id;
- enum _ecore_status_t rc;
-
- vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-
- if (!vf)
- return ECORE_INVAL;
-
- rc = ecore_fw_vport(p_hwfn, vf->vport_id, &abs_vp_id);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- rl_id = abs_vp_id; /* The "rl_id" is set as the "vport_id" */
- return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, (u32)val);
-}
-
-enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
- int vfid, u32 rate)
-{
- struct ecore_vf_info *vf;
- int i;
-
- for_each_hwfn(p_dev, i) {
- struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
-
- if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
- DP_NOTICE(p_hwfn, true,
- "SR-IOV sanity check failed, can't set min rate\n");
- return ECORE_INVAL;
- }
- }
-
- vf = ecore_iov_get_vf_info(ECORE_LEADING_HWFN(p_dev), (u16)vfid, true);
- if (!vf) {
- DP_NOTICE(p_dev, true,
- "Getting vf info failed, can't set min rate\n");
- return ECORE_INVAL;
- }
-
- return ecore_configure_vport_wfq(p_dev, vf->vport_id, rate);
-}
-
-enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- int vfid,
- struct ecore_eth_stats *p_stats)
-{
- struct ecore_vf_info *vf;
-
- vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf)
- return ECORE_INVAL;
-
- if (vf->state != VF_ENABLED)
- return ECORE_INVAL;
-
- __ecore_get_vport_stats(p_hwfn, p_ptt, p_stats,
- vf->abs_vf_id + 0x10, false);
-
- return ECORE_SUCCESS;
-}
-
-u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf)
- return 0;
-
- return p_vf->num_rxqs;
-}
-
-u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf)
- return 0;
-
- return p_vf->num_active_rxqs;
-}
-
-void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf)
- return OSAL_NULL;
-
- return p_vf->ctx;
-}
-
-u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf)
- return 0;
-
- return p_vf->num_sbs;
-}
-
-bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf)
- return false;
-
- return (p_vf->state == VF_FREE);
-}
-
-bool ecore_iov_is_vf_acquired_not_initialized(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf)
- return false;
-
- return (p_vf->state == VF_ACQUIRED);
-}
-
-bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf)
- return false;
-
- return (p_vf->state == VF_ENABLED);
-}
-
-bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- struct ecore_vf_info *p_vf;
-
- p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
- if (!p_vf)
- return false;
-
- return (p_vf->state != VF_FREE && p_vf->state != VF_STOPPED);
-}
-
-int
-ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid)
-{
- struct ecore_wfq_data *vf_vp_wfq;
- struct ecore_vf_info *vf_info;
-
- vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf_info)
- return 0;
-
- vf_vp_wfq = &p_hwfn->qm_info.wfq_data[vf_info->vport_id];
-
- if (vf_vp_wfq->configured)
- return vf_vp_wfq->min_speed;
- else
- return 0;
-}
-
-#ifdef CONFIG_ECORE_SW_CHANNEL
-void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid,
- bool b_is_hw)
-{
- struct ecore_vf_info *vf_info;
-
- vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf_info)
- return;
-
- vf_info->b_hw_channel = b_is_hw;
-}
-#endif
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index db03bc494f..68a22283d1 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1926,55 +1926,7 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
return true;
}
-void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
- u16 *p_vxlan_port,
- u16 *p_geneve_port)
-{
- struct ecore_bulletin_content *p_bulletin;
-
- p_bulletin = &p_hwfn->vf_iov_info->bulletin_shadow;
-
- *p_vxlan_port = p_bulletin->vxlan_udp_port;
- *p_geneve_port = p_bulletin->geneve_udp_port;
-}
-
-bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid)
-{
- struct ecore_bulletin_content *bulletin;
-
- bulletin = &hwfn->vf_iov_info->bulletin_shadow;
-
- if (!(bulletin->valid_bitmap & (1 << VLAN_ADDR_FORCED)))
- return false;
-
- if (dst_pvid)
- *dst_pvid = bulletin->pvid;
-
- return true;
-}
-
bool ecore_vf_get_pre_fp_hsi(struct ecore_hwfn *p_hwfn)
{
return p_hwfn->vf_iov_info->b_pre_fp_hsi;
}
-
-void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
- u16 *fw_major, u16 *fw_minor, u16 *fw_rev,
- u16 *fw_eng)
-{
- struct pf_vf_pfdev_info *info;
-
- info = &p_hwfn->vf_iov_info->acquire_resp.pfdev_info;
-
- *fw_major = info->fw_major;
- *fw_minor = info->fw_minor;
- *fw_rev = info->fw_rev;
- *fw_eng = info->fw_eng;
-}
-
-#ifdef CONFIG_ECORE_SW_CHANNEL
-void ecore_vf_set_hw_channel(struct ecore_hwfn *p_hwfn, bool b_is_hw)
-{
- p_hwfn->vf_iov_info->b_hw_channel = b_is_hw;
-}
-#endif
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index 43951a9a34..68286355bf 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -125,16 +125,6 @@ bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac);
bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
u8 *p_is_forced);
-/**
- * @brief Check if force vlan is set and copy the forced vlan
- * from bulletin board
- *
- * @param hwfn
- * @param dst_pvid
- * @return bool
- */
-bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid);
-
/**
* @brief Check if VF is based on PF whose driver is pre-fp-hsi version;
* This affects the fastpath implementation of the driver.
@@ -147,35 +137,5 @@ bool ecore_vf_get_pre_fp_hsi(struct ecore_hwfn *p_hwfn);
#endif
-/**
- * @brief Set firmware version information in dev_info from VFs acquire
- * response tlv
- *
- * @param p_hwfn
- * @param fw_major
- * @param fw_minor
- * @param fw_rev
- * @param fw_eng
- */
-void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
- u16 *fw_major,
- u16 *fw_minor,
- u16 *fw_rev,
- u16 *fw_eng);
-void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
- u16 *p_vxlan_port, u16 *p_geneve_port);
-
-#ifdef CONFIG_ECORE_SW_CHANNEL
-/**
- * @brief set the VF to use a SW/HW channel when communicating with PF.
- * NOTICE: today the likely first place to call this from VF
- * would be OSAL_VF_FILL_ACQUIRE_RESC_REQ(); Might want to consider
- * something a bit more appropriate.
- *
- * @param p_hwfn
- * @param b_is_hw - true iff VF is to use a HW-channel
- */
-void ecore_vf_set_hw_channel(struct ecore_hwfn *p_hwfn, bool b_is_hw);
-#endif
#endif
#endif
diff --git a/drivers/net/qede/qede_debug.c b/drivers/net/qede/qede_debug.c
index 2297d245c4..ae4ebd186a 100644
--- a/drivers/net/qede/qede_debug.c
+++ b/drivers/net/qede/qede_debug.c
@@ -828,15 +828,6 @@ static u32 qed_read_unaligned_dword(u8 *buf)
return dword;
}
-/* Sets the value of the specified GRC param */
-static void qed_grc_set_param(struct ecore_hwfn *p_hwfn,
- enum dbg_grc_params grc_param, u32 val)
-{
- struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
-
- dev_data->grc.param_val[grc_param] = val;
-}
-
/* Returns the value of the specified GRC param */
static u32 qed_grc_get_param(struct ecore_hwfn *p_hwfn,
enum dbg_grc_params grc_param)
@@ -4893,69 +4884,6 @@ bool qed_read_fw_info(struct ecore_hwfn *p_hwfn,
return false;
}
-enum dbg_status qed_dbg_grc_config(struct ecore_hwfn *p_hwfn,
- enum dbg_grc_params grc_param, u32 val)
-{
- struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
- enum dbg_status status;
- int i;
-
- DP_VERBOSE(p_hwfn->p_dev,
- ECORE_MSG_DEBUG,
- "dbg_grc_config: paramId = %d, val = %d\n", grc_param, val);
-
- status = qed_dbg_dev_init(p_hwfn);
- if (status != DBG_STATUS_OK)
- return status;
-
- /* Initializes the GRC parameters (if not initialized). Needed in order
- * to set the default parameter values for the first time.
- */
- qed_dbg_grc_init_params(p_hwfn);
-
- if (grc_param >= MAX_DBG_GRC_PARAMS)
- return DBG_STATUS_INVALID_ARGS;
- if (val < s_grc_param_defs[grc_param].min ||
- val > s_grc_param_defs[grc_param].max)
- return DBG_STATUS_INVALID_ARGS;
-
- if (s_grc_param_defs[grc_param].is_preset) {
- /* Preset param */
-
- /* Disabling a preset is not allowed. Call
- * dbg_grc_set_params_default instead.
- */
- if (!val)
- return DBG_STATUS_INVALID_ARGS;
-
- /* Update all params with the preset values */
- for (i = 0; i < MAX_DBG_GRC_PARAMS; i++) {
- struct grc_param_defs *defs = &s_grc_param_defs[i];
- u32 preset_val;
- /* Skip persistent params */
- if (defs->is_persistent)
- continue;
-
- /* Find preset value */
- if (grc_param == DBG_GRC_PARAM_EXCLUDE_ALL)
- preset_val =
- defs->exclude_all_preset_val;
- else if (grc_param == DBG_GRC_PARAM_CRASH)
- preset_val =
- defs->crash_preset_val[dev_data->chip_id];
- else
- return DBG_STATUS_INVALID_ARGS;
-
- qed_grc_set_param(p_hwfn, i, preset_val);
- }
- } else {
- /* Regular param - set its value */
- qed_grc_set_param(p_hwfn, grc_param, val);
- }
-
- return DBG_STATUS_OK;
-}
-
/* Assign default GRC param values */
void qed_dbg_grc_set_params_default(struct ecore_hwfn *p_hwfn)
{
@@ -5362,79 +5290,6 @@ static enum dbg_status qed_dbg_ilt_dump(struct ecore_hwfn *p_hwfn,
return DBG_STATUS_OK;
}
-enum dbg_status qed_dbg_read_attn(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum block_id block_id,
- enum dbg_attn_type attn_type,
- bool clear_status,
- struct dbg_attn_block_result *results)
-{
- enum dbg_status status = qed_dbg_dev_init(p_hwfn);
- u8 reg_idx, num_attn_regs, num_result_regs = 0;
- const struct dbg_attn_reg *attn_reg_arr;
-
- if (status != DBG_STATUS_OK)
- return status;
-
- if (!p_hwfn->dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr ||
- !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_BLOCKS].ptr ||
- !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_REGS].ptr)
- return DBG_STATUS_DBG_ARRAY_NOT_SET;
-
- attn_reg_arr = qed_get_block_attn_regs(p_hwfn,
- block_id,
- attn_type, &num_attn_regs);
-
- for (reg_idx = 0; reg_idx < num_attn_regs; reg_idx++) {
- const struct dbg_attn_reg *reg_data = &attn_reg_arr[reg_idx];
- struct dbg_attn_reg_result *reg_result;
- u32 sts_addr, sts_val;
- u16 modes_buf_offset;
- bool eval_mode;
-
- /* Check mode */
- eval_mode = GET_FIELD(reg_data->mode.data,
- DBG_MODE_HDR_EVAL_MODE) > 0;
- modes_buf_offset = GET_FIELD(reg_data->mode.data,
- DBG_MODE_HDR_MODES_BUF_OFFSET);
- if (eval_mode && !qed_is_mode_match(p_hwfn, &modes_buf_offset))
- continue;
-
- /* Mode match - read attention status register */
- sts_addr = DWORDS_TO_BYTES(clear_status ?
- reg_data->sts_clr_address :
- GET_FIELD(reg_data->data,
- DBG_ATTN_REG_STS_ADDRESS));
- sts_val = ecore_rd(p_hwfn, p_ptt, sts_addr);
- if (!sts_val)
- continue;
-
- /* Non-zero attention status - add to results */
- reg_result = &results->reg_results[num_result_regs];
- SET_FIELD(reg_result->data,
- DBG_ATTN_REG_RESULT_STS_ADDRESS, sts_addr);
- SET_FIELD(reg_result->data,
- DBG_ATTN_REG_RESULT_NUM_REG_ATTN,
- GET_FIELD(reg_data->data, DBG_ATTN_REG_NUM_REG_ATTN));
- reg_result->block_attn_offset = reg_data->block_attn_offset;
- reg_result->sts_val = sts_val;
- reg_result->mask_val = ecore_rd(p_hwfn,
- p_ptt,
- DWORDS_TO_BYTES
- (reg_data->mask_address));
- num_result_regs++;
- }
-
- results->block_id = (u8)block_id;
- results->names_offset =
- qed_get_block_attn_data(p_hwfn, block_id, attn_type)->names_offset;
- SET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_ATTN_TYPE, attn_type);
- SET_FIELD(results->data,
- DBG_ATTN_BLOCK_RESULT_NUM_REGS, num_result_regs);
-
- return DBG_STATUS_OK;
-}
-
/******************************* Data Types **********************************/
/* REG fifo element */
@@ -6067,19 +5922,6 @@ static u32 qed_print_section_params(u32 *dump_buf,
return dump_offset;
}
-/* Returns the block name that matches the specified block ID,
- * or NULL if not found.
- */
-static const char *qed_dbg_get_block_name(struct ecore_hwfn *p_hwfn,
- enum block_id block_id)
-{
- const struct dbg_block_user *block =
- (const struct dbg_block_user *)
- p_hwfn->dbg_arrays[BIN_BUF_DBG_BLOCKS_USER_DATA].ptr + block_id;
-
- return (const char *)block->name;
-}
-
static struct dbg_tools_user_data *qed_dbg_get_user_data(struct ecore_hwfn
*p_hwfn)
{
@@ -7180,15 +7022,6 @@ enum dbg_status qed_print_idle_chk_results(struct ecore_hwfn *p_hwfn,
num_errors, num_warnings);
}
-void qed_dbg_mcp_trace_set_meta_data(struct ecore_hwfn *p_hwfn,
- const u32 *meta_buf)
-{
- struct dbg_tools_user_data *dev_user_data =
- qed_dbg_get_user_data(p_hwfn);
-
- dev_user_data->mcp_trace_user_meta_buf = meta_buf;
-}
-
enum dbg_status
qed_get_mcp_trace_results_buf_size(struct ecore_hwfn *p_hwfn,
u32 *dump_buf,
@@ -7211,31 +7044,6 @@ enum dbg_status qed_print_mcp_trace_results(struct ecore_hwfn *p_hwfn,
results_buf, &parsed_buf_size, true);
}
-enum dbg_status qed_print_mcp_trace_results_cont(struct ecore_hwfn *p_hwfn,
- u32 *dump_buf,
- char *results_buf)
-{
- u32 parsed_buf_size;
-
- return qed_parse_mcp_trace_dump(p_hwfn, dump_buf, results_buf,
- &parsed_buf_size, false);
-}
-
-enum dbg_status qed_print_mcp_trace_line(struct ecore_hwfn *p_hwfn,
- u8 *dump_buf,
- u32 num_dumped_bytes,
- char *results_buf)
-{
- u32 parsed_results_bytes;
-
- return qed_parse_mcp_trace_buf(p_hwfn,
- dump_buf,
- num_dumped_bytes,
- 0,
- num_dumped_bytes,
- results_buf, &parsed_results_bytes);
-}
-
/* Frees the specified MCP Trace meta data */
void qed_mcp_trace_free_meta_data(struct ecore_hwfn *p_hwfn)
{
@@ -7350,90 +7158,6 @@ qed_print_fw_asserts_results(__rte_unused struct ecore_hwfn *p_hwfn,
results_buf, &parsed_buf_size);
}
-enum dbg_status qed_dbg_parse_attn(struct ecore_hwfn *p_hwfn,
- struct dbg_attn_block_result *results)
-{
- const u32 *block_attn_name_offsets;
- const char *attn_name_base;
- const char *block_name;
- enum dbg_attn_type attn_type;
- u8 num_regs, i, j;
-
- num_regs = GET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_NUM_REGS);
- attn_type = GET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_ATTN_TYPE);
- block_name = qed_dbg_get_block_name(p_hwfn, results->block_id);
- if (!block_name)
- return DBG_STATUS_INVALID_ARGS;
-
- if (!p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_INDEXES].ptr ||
- !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_NAME_OFFSETS].ptr ||
- !p_hwfn->dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr)
- return DBG_STATUS_DBG_ARRAY_NOT_SET;
-
- block_attn_name_offsets =
- (u32 *)p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_NAME_OFFSETS].ptr +
- results->names_offset;
-
- attn_name_base = p_hwfn->dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr;
-
- /* Go over registers with a non-zero attention status */
- for (i = 0; i < num_regs; i++) {
- struct dbg_attn_bit_mapping *bit_mapping;
- struct dbg_attn_reg_result *reg_result;
- u8 num_reg_attn, bit_idx = 0;
-
- reg_result = &results->reg_results[i];
- num_reg_attn = GET_FIELD(reg_result->data,
- DBG_ATTN_REG_RESULT_NUM_REG_ATTN);
- bit_mapping = (struct dbg_attn_bit_mapping *)
- p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_INDEXES].ptr +
- reg_result->block_attn_offset;
-
- /* Go over attention status bits */
- for (j = 0; j < num_reg_attn; j++, bit_idx++) {
- u16 attn_idx_val = GET_FIELD(bit_mapping[j].data,
- DBG_ATTN_BIT_MAPPING_VAL);
- const char *attn_name, *attn_type_str, *masked_str;
- u32 attn_name_offset;
- u32 sts_addr;
-
- /* Check if bit mask should be advanced (due to unused
- * bits).
- */
- if (GET_FIELD(bit_mapping[j].data,
- DBG_ATTN_BIT_MAPPING_IS_UNUSED_BIT_CNT)) {
- bit_idx += (u8)attn_idx_val;
- continue;
- }
-
- /* Check current bit index */
- if (!(reg_result->sts_val & OSAL_BIT(bit_idx)))
- continue;
-
- /* An attention bit with value=1 was found
- * Find attention name
- */
- attn_name_offset =
- block_attn_name_offsets[attn_idx_val];
- attn_name = attn_name_base + attn_name_offset;
- attn_type_str =
- (attn_type ==
- ATTN_TYPE_INTERRUPT ? "Interrupt" :
- "Parity");
- masked_str = reg_result->mask_val & OSAL_BIT(bit_idx) ?
- " [masked]" : "";
- sts_addr = GET_FIELD(reg_result->data,
- DBG_ATTN_REG_RESULT_STS_ADDRESS);
- DP_NOTICE(p_hwfn, false,
- "%s (%s) : %s [address 0x%08x, bit %d]%s\n",
- block_name, attn_type_str, attn_name,
- sts_addr * 4, bit_idx, masked_str);
- }
- }
-
- return DBG_STATUS_OK;
-}
-
/* Wrapper for unifying the idle_chk and mcp_trace api */
static enum dbg_status
qed_print_idle_chk_results_wrapper(struct ecore_hwfn *p_hwfn,
@@ -7683,22 +7407,6 @@ int qed_dbg_igu_fifo_size(struct ecore_dev *edev)
return qed_dbg_feature_size(edev, DBG_FEATURE_IGU_FIFO);
}
-static int qed_dbg_nvm_image_length(struct ecore_hwfn *p_hwfn,
- enum ecore_nvm_images image_id, u32 *length)
-{
- struct ecore_nvm_image_att image_att;
- int rc;
-
- *length = 0;
- rc = ecore_mcp_get_nvm_image_att(p_hwfn, image_id, &image_att);
- if (rc)
- return rc;
-
- *length = image_att.length;
-
- return rc;
-}
-
int qed_dbg_protection_override(struct ecore_dev *edev, void *buffer,
u32 *num_dumped_bytes)
{
@@ -7777,225 +7485,6 @@ enum debug_print_features {
ILT_DUMP = 13,
};
-static u32 qed_calc_regdump_header(struct ecore_dev *edev,
- enum debug_print_features feature,
- int engine, u32 feature_size, u8 omit_engine)
-{
- u32 res = 0;
-
- SET_FIELD(res, REGDUMP_HEADER_SIZE, feature_size);
- if (res != feature_size)
- DP_NOTICE(edev, false,
- "Feature %d is too large (size 0x%x) and will corrupt the dump\n",
- feature, feature_size);
-
- SET_FIELD(res, REGDUMP_HEADER_FEATURE, feature);
- SET_FIELD(res, REGDUMP_HEADER_OMIT_ENGINE, omit_engine);
- SET_FIELD(res, REGDUMP_HEADER_ENGINE, engine);
-
- return res;
-}
-
-int qed_dbg_all_data(struct ecore_dev *edev, void *buffer)
-{
- u8 cur_engine, omit_engine = 0, org_engine;
- struct ecore_hwfn *p_hwfn =
- &edev->hwfns[edev->dbg_params.engine_for_debug];
- struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
- int grc_params[MAX_DBG_GRC_PARAMS], i;
- u32 offset = 0, feature_size;
- int rc;
-
- for (i = 0; i < MAX_DBG_GRC_PARAMS; i++)
- grc_params[i] = dev_data->grc.param_val[i];
-
- if (!ECORE_IS_CMT(edev))
- omit_engine = 1;
-
- OSAL_MUTEX_ACQUIRE(&edev->dbg_lock);
-
- org_engine = qed_get_debug_engine(edev);
- for (cur_engine = 0; cur_engine < edev->num_hwfns; cur_engine++) {
- /* Collect idle_chks and grcDump for each hw function */
- DP_VERBOSE(edev, ECORE_MSG_DEBUG,
- "obtaining idle_chk and grcdump for current engine\n");
- qed_set_debug_engine(edev, cur_engine);
-
- /* First idle_chk */
- rc = qed_dbg_idle_chk(edev, (u8 *)buffer + offset +
- REGDUMP_HEADER_SIZE, &feature_size);
- if (!rc) {
- *(u32 *)((u8 *)buffer + offset) =
- qed_calc_regdump_header(edev, IDLE_CHK, cur_engine,
- feature_size, omit_engine);
- offset += (feature_size + REGDUMP_HEADER_SIZE);
- } else {
- DP_ERR(edev, "qed_dbg_idle_chk failed. rc = %d\n", rc);
- }
-
- /* Second idle_chk */
- rc = qed_dbg_idle_chk(edev, (u8 *)buffer + offset +
- REGDUMP_HEADER_SIZE, &feature_size);
- if (!rc) {
- *(u32 *)((u8 *)buffer + offset) =
- qed_calc_regdump_header(edev, IDLE_CHK, cur_engine,
- feature_size, omit_engine);
- offset += (feature_size + REGDUMP_HEADER_SIZE);
- } else {
- DP_ERR(edev, "qed_dbg_idle_chk failed. rc = %d\n", rc);
- }
-
- /* reg_fifo dump */
- rc = qed_dbg_reg_fifo(edev, (u8 *)buffer + offset +
- REGDUMP_HEADER_SIZE, &feature_size);
- if (!rc) {
- *(u32 *)((u8 *)buffer + offset) =
- qed_calc_regdump_header(edev, REG_FIFO, cur_engine,
- feature_size, omit_engine);
- offset += (feature_size + REGDUMP_HEADER_SIZE);
- } else {
- DP_ERR(edev, "qed_dbg_reg_fifo failed. rc = %d\n", rc);
- }
-
- /* igu_fifo dump */
- rc = qed_dbg_igu_fifo(edev, (u8 *)buffer + offset +
- REGDUMP_HEADER_SIZE, &feature_size);
- if (!rc) {
- *(u32 *)((u8 *)buffer + offset) =
- qed_calc_regdump_header(edev, IGU_FIFO, cur_engine,
- feature_size, omit_engine);
- offset += (feature_size + REGDUMP_HEADER_SIZE);
- } else {
- DP_ERR(edev, "qed_dbg_igu_fifo failed. rc = %d", rc);
- }
-
- /* protection_override dump */
- rc = qed_dbg_protection_override(edev, (u8 *)buffer + offset +
- REGDUMP_HEADER_SIZE,
- &feature_size);
- if (!rc) {
- *(u32 *)((u8 *)buffer + offset) =
- qed_calc_regdump_header(edev, PROTECTION_OVERRIDE,
- cur_engine,
- feature_size, omit_engine);
- offset += (feature_size + REGDUMP_HEADER_SIZE);
- } else {
- DP_ERR(edev,
- "qed_dbg_protection_override failed. rc = %d\n",
- rc);
- }
-
- /* fw_asserts dump */
- rc = qed_dbg_fw_asserts(edev, (u8 *)buffer + offset +
- REGDUMP_HEADER_SIZE, &feature_size);
- if (!rc) {
- *(u32 *)((u8 *)buffer + offset) =
- qed_calc_regdump_header(edev, FW_ASSERTS,
- cur_engine, feature_size,
- omit_engine);
- offset += (feature_size + REGDUMP_HEADER_SIZE);
- } else {
- DP_ERR(edev, "qed_dbg_fw_asserts failed. rc = %d\n",
- rc);
- }
-
- /* GRC dump - must be last because when mcp stuck it will
- * clutter idle_chk, reg_fifo, ...
- */
- for (i = 0; i < MAX_DBG_GRC_PARAMS; i++)
- dev_data->grc.param_val[i] = grc_params[i];
-
- rc = qed_dbg_grc(edev, (u8 *)buffer + offset +
- REGDUMP_HEADER_SIZE, &feature_size);
- if (!rc) {
- *(u32 *)((u8 *)buffer + offset) =
- qed_calc_regdump_header(edev, GRC_DUMP,
- cur_engine,
- feature_size, omit_engine);
- offset += (feature_size + REGDUMP_HEADER_SIZE);
- } else {
- DP_ERR(edev, "qed_dbg_grc failed. rc = %d", rc);
- }
- }
-
- qed_set_debug_engine(edev, org_engine);
-
- /* mcp_trace */
- rc = qed_dbg_mcp_trace(edev, (u8 *)buffer + offset +
- REGDUMP_HEADER_SIZE, &feature_size);
- if (!rc) {
- *(u32 *)((u8 *)buffer + offset) =
- qed_calc_regdump_header(edev, MCP_TRACE, cur_engine,
- feature_size, omit_engine);
- offset += (feature_size + REGDUMP_HEADER_SIZE);
- } else {
- DP_ERR(edev, "qed_dbg_mcp_trace failed. rc = %d\n", rc);
- }
-
- OSAL_MUTEX_RELEASE(&edev->dbg_lock);
-
- return 0;
-}
-
-int qed_dbg_all_data_size(struct ecore_dev *edev)
-{
- struct ecore_hwfn *p_hwfn =
- &edev->hwfns[edev->dbg_params.engine_for_debug];
- u32 regs_len = 0, image_len = 0, ilt_len = 0, total_ilt_len = 0;
- u8 cur_engine, org_engine;
-
- edev->disable_ilt_dump = false;
- org_engine = qed_get_debug_engine(edev);
- for (cur_engine = 0; cur_engine < edev->num_hwfns; cur_engine++) {
- /* Engine specific */
- DP_VERBOSE(edev, ECORE_MSG_DEBUG,
- "calculating idle_chk and grcdump register length for current engine\n");
- qed_set_debug_engine(edev, cur_engine);
- regs_len += REGDUMP_HEADER_SIZE + qed_dbg_idle_chk_size(edev) +
- REGDUMP_HEADER_SIZE + qed_dbg_idle_chk_size(edev) +
- REGDUMP_HEADER_SIZE + qed_dbg_grc_size(edev) +
- REGDUMP_HEADER_SIZE + qed_dbg_reg_fifo_size(edev) +
- REGDUMP_HEADER_SIZE + qed_dbg_igu_fifo_size(edev) +
- REGDUMP_HEADER_SIZE +
- qed_dbg_protection_override_size(edev) +
- REGDUMP_HEADER_SIZE + qed_dbg_fw_asserts_size(edev);
-
- ilt_len = REGDUMP_HEADER_SIZE + qed_dbg_ilt_size(edev);
- if (ilt_len < ILT_DUMP_MAX_SIZE) {
- total_ilt_len += ilt_len;
- regs_len += ilt_len;
- }
- }
-
- qed_set_debug_engine(edev, org_engine);
-
- /* Engine common */
- regs_len += REGDUMP_HEADER_SIZE + qed_dbg_mcp_trace_size(edev);
- qed_dbg_nvm_image_length(p_hwfn, ECORE_NVM_IMAGE_NVM_CFG1, &image_len);
- if (image_len)
- regs_len += REGDUMP_HEADER_SIZE + image_len;
- qed_dbg_nvm_image_length(p_hwfn, ECORE_NVM_IMAGE_DEFAULT_CFG,
- &image_len);
- if (image_len)
- regs_len += REGDUMP_HEADER_SIZE + image_len;
- qed_dbg_nvm_image_length(p_hwfn, ECORE_NVM_IMAGE_NVM_META, &image_len);
- if (image_len)
- regs_len += REGDUMP_HEADER_SIZE + image_len;
- qed_dbg_nvm_image_length(p_hwfn, ECORE_NVM_IMAGE_MDUMP, &image_len);
- if (image_len)
- regs_len += REGDUMP_HEADER_SIZE + image_len;
-
- if (regs_len > REGDUMP_MAX_SIZE) {
- DP_VERBOSE(edev, ECORE_MSG_DEBUG,
- "Dump exceeds max size 0x%x, disable ILT dump\n",
- REGDUMP_MAX_SIZE);
- edev->disable_ilt_dump = true;
- regs_len -= total_ilt_len;
- }
-
- return regs_len;
-}
-
int qed_dbg_feature(struct ecore_dev *edev, void *buffer,
enum ecore_dbg_features feature, u32 *num_dumped_bytes)
{
@@ -8098,24 +7587,3 @@ void qed_dbg_pf_init(struct ecore_dev *edev)
/* Set the hwfn to be 0 as default */
edev->dbg_params.engine_for_debug = 0;
}
-
-void qed_dbg_pf_exit(struct ecore_dev *edev)
-{
- struct ecore_dbg_feature *feature = NULL;
- enum ecore_dbg_features feature_idx;
-
- PMD_INIT_FUNC_TRACE(edev);
-
- /* debug features' buffers may be allocated if debug feature was used
- * but dump wasn't called
- */
- for (feature_idx = 0; feature_idx < DBG_FEATURE_NUM; feature_idx++) {
- feature = &edev->dbg_features[feature_idx];
- if (feature->dump_buf) {
- OSAL_VFREE(edev, feature->dump_buf);
- feature->dump_buf = NULL;
- }
- }
-
- OSAL_MUTEX_DEALLOC(&edev->dbg_lock);
-}
diff --git a/drivers/net/qede/qede_debug.h b/drivers/net/qede/qede_debug.h
index 93e1bd7109..90b55f1289 100644
--- a/drivers/net/qede/qede_debug.h
+++ b/drivers/net/qede/qede_debug.h
@@ -33,8 +33,6 @@ int qed_dbg_ilt_size(struct ecore_dev *edev);
int qed_dbg_mcp_trace(struct ecore_dev *edev, void *buffer,
u32 *num_dumped_bytes);
int qed_dbg_mcp_trace_size(struct ecore_dev *edev);
-int qed_dbg_all_data(struct ecore_dev *edev, void *buffer);
-int qed_dbg_all_data_size(struct ecore_dev *edev);
u8 qed_get_debug_engine(struct ecore_dev *edev);
void qed_set_debug_engine(struct ecore_dev *edev, int engine_number);
int qed_dbg_feature(struct ecore_dev *edev, void *buffer,
@@ -43,7 +41,6 @@ int
qed_dbg_feature_size(struct ecore_dev *edev, enum ecore_dbg_features feature);
void qed_dbg_pf_init(struct ecore_dev *edev);
-void qed_dbg_pf_exit(struct ecore_dev *edev);
/***************************** Public Functions *******************************/
@@ -98,21 +95,6 @@ void qed_read_regs(struct ecore_hwfn *p_hwfn,
*/
bool qed_read_fw_info(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, struct fw_info *fw_info);
-/**
- * @brief qed_dbg_grc_config - Sets the value of a GRC parameter.
- *
- * @param p_hwfn - HW device data
- * @param grc_param - GRC parameter
- * @param val - Value to set.
- *
- * @return error if one of the following holds:
- * - the version wasn't set
- * - grc_param is invalid
- * - val is outside the allowed boundaries
- */
-enum dbg_status qed_dbg_grc_config(struct ecore_hwfn *p_hwfn,
- enum dbg_grc_params grc_param, u32 val);
-
/**
* @brief qed_dbg_grc_set_params_default - Reverts all GRC parameters to their
* default value.
@@ -389,28 +371,6 @@ enum dbg_status qed_dbg_fw_asserts_dump(struct ecore_hwfn *p_hwfn,
u32 buf_size_in_dwords,
u32 *num_dumped_dwords);
-/**
- * @brief qed_dbg_read_attn - Reads the attention registers of the specified
- * block and type, and writes the results into the specified buffer.
- *
- * @param p_hwfn - HW device data
- * @param p_ptt - Ptt window used for writing the registers.
- * @param block - Block ID.
- * @param attn_type - Attention type.
- * @param clear_status - Indicates if the attention status should be cleared.
- * @param results - OUT: Pointer to write the read results into
- *
- * @return error if one of the following holds:
- * - the version wasn't set
- * Otherwise, returns ok.
- */
-enum dbg_status qed_dbg_read_attn(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum block_id block,
- enum dbg_attn_type attn_type,
- bool clear_status,
- struct dbg_attn_block_result *results);
-
/**
* @brief qed_dbg_print_attn - Prints attention registers values in the
* specified results struct.
@@ -529,18 +489,6 @@ enum dbg_status qed_print_idle_chk_results(struct ecore_hwfn *p_hwfn,
u32 *num_errors,
u32 *num_warnings);
-/**
- * @brief qed_dbg_mcp_trace_set_meta_data - Sets the MCP Trace meta data.
- *
- * Needed in case the MCP Trace dump doesn't contain the meta data (e.g. due to
- * no NVRAM access).
- *
- * @param data - pointer to MCP Trace meta data
- * @param size - size of MCP Trace meta data in dwords
- */
-void qed_dbg_mcp_trace_set_meta_data(struct ecore_hwfn *p_hwfn,
- const u32 *meta_buf);
-
/**
* @brief qed_get_mcp_trace_results_buf_size - Returns the required buffer size
* for MCP Trace results (in bytes).
@@ -573,37 +521,6 @@ enum dbg_status qed_print_mcp_trace_results(struct ecore_hwfn *p_hwfn,
u32 num_dumped_dwords,
char *results_buf);
-/**
- * @brief qed_print_mcp_trace_results_cont - Prints MCP Trace results, and
- * keeps the MCP trace meta data allocated, to support continuous MCP Trace
- * parsing. After the continuous parsing ends, mcp_trace_free_meta_data should
- * be called to free the meta data.
- *
- * @param p_hwfn - HW device data
- * @param dump_buf - mcp trace dump buffer, starting from the header.
- * @param results_buf - buffer for printing the mcp trace results.
- *
- * @return error if the parsing fails, ok otherwise.
- */
-enum dbg_status qed_print_mcp_trace_results_cont(struct ecore_hwfn *p_hwfn,
- u32 *dump_buf,
- char *results_buf);
-
-/**
- * @brief print_mcp_trace_line - Prints MCP Trace results for a single line
- *
- * @param p_hwfn - HW device data
- * @param dump_buf - mcp trace dump buffer, starting from the header.
- * @param num_dumped_bytes - number of bytes that were dumped.
- * @param results_buf - buffer for printing the mcp trace results.
- *
- * @return error if the parsing fails, ok otherwise.
- */
-enum dbg_status qed_print_mcp_trace_line(struct ecore_hwfn *p_hwfn,
- u8 *dump_buf,
- u32 num_dumped_bytes,
- char *results_buf);
-
/**
* @brief mcp_trace_free_meta_data - Frees the MCP Trace meta data.
* Should be called after continuous MCP Trace parsing.
@@ -742,18 +659,4 @@ enum dbg_status qed_print_fw_asserts_results(struct ecore_hwfn *p_hwfn,
u32 num_dumped_dwords,
char *results_buf);
-/**
- * @brief qed_dbg_parse_attn - Parses and prints attention registers values in
- * the specified results struct.
- *
- * @param p_hwfn - HW device data
- * @param results - Pointer to the attention read results
- *
- * @return error if one of the following holds:
- * - the version wasn't set
- * Otherwise, returns ok.
- */
-enum dbg_status qed_dbg_parse_attn(struct ecore_hwfn *p_hwfn,
- struct dbg_attn_block_result *results);
-
#endif
diff --git a/drivers/net/sfc/sfc_kvargs.c b/drivers/net/sfc/sfc_kvargs.c
index 13e9665bb4..6513c6db81 100644
--- a/drivers/net/sfc/sfc_kvargs.c
+++ b/drivers/net/sfc/sfc_kvargs.c
@@ -47,19 +47,6 @@ sfc_kvargs_cleanup(struct sfc_adapter *sa)
rte_kvargs_free(sa->kvargs);
}
-static int
-sfc_kvarg_match_value(const char *value, const char * const *values,
- unsigned int n_values)
-{
- unsigned int i;
-
- for (i = 0; i < n_values; ++i)
- if (strcasecmp(value, values[i]) == 0)
- return 1;
-
- return 0;
-}
-
int
sfc_kvargs_process(struct sfc_adapter *sa, const char *key_match,
arg_handler_t handler, void *opaque_arg)
@@ -70,30 +57,6 @@ sfc_kvargs_process(struct sfc_adapter *sa, const char *key_match,
return -rte_kvargs_process(sa->kvargs, key_match, handler, opaque_arg);
}
-int
-sfc_kvarg_bool_handler(__rte_unused const char *key,
- const char *value_str, void *opaque)
-{
- const char * const true_strs[] = {
- "1", "y", "yes", "on", "true"
- };
- const char * const false_strs[] = {
- "0", "n", "no", "off", "false"
- };
- bool *value = opaque;
-
- if (sfc_kvarg_match_value(value_str, true_strs,
- RTE_DIM(true_strs)))
- *value = true;
- else if (sfc_kvarg_match_value(value_str, false_strs,
- RTE_DIM(false_strs)))
- *value = false;
- else
- return -EINVAL;
-
- return 0;
-}
-
int
sfc_kvarg_long_handler(__rte_unused const char *key,
const char *value_str, void *opaque)
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index 0c3660890c..e39f1191a9 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -74,8 +74,6 @@ void sfc_kvargs_cleanup(struct sfc_adapter *sa);
int sfc_kvargs_process(struct sfc_adapter *sa, const char *key_match,
arg_handler_t handler, void *opaque_arg);
-int sfc_kvarg_bool_handler(const char *key, const char *value_str,
- void *opaque);
int sfc_kvarg_long_handler(const char *key, const char *value_str,
void *opaque);
int sfc_kvarg_string_handler(const char *key, const char *value_str,
diff --git a/drivers/net/softnic/parser.c b/drivers/net/softnic/parser.c
index ebcb10268a..3d94b3bfa9 100644
--- a/drivers/net/softnic/parser.c
+++ b/drivers/net/softnic/parser.c
@@ -38,44 +38,6 @@ get_hex_val(char c)
}
}
-int
-softnic_parser_read_arg_bool(const char *p)
-{
- p = skip_white_spaces(p);
- int result = -EINVAL;
-
- if (((p[0] == 'y') && (p[1] == 'e') && (p[2] == 's')) ||
- ((p[0] == 'Y') && (p[1] == 'E') && (p[2] == 'S'))) {
- p += 3;
- result = 1;
- }
-
- if (((p[0] == 'o') && (p[1] == 'n')) ||
- ((p[0] == 'O') && (p[1] == 'N'))) {
- p += 2;
- result = 1;
- }
-
- if (((p[0] == 'n') && (p[1] == 'o')) ||
- ((p[0] == 'N') && (p[1] == 'O'))) {
- p += 2;
- result = 0;
- }
-
- if (((p[0] == 'o') && (p[1] == 'f') && (p[2] == 'f')) ||
- ((p[0] == 'O') && (p[1] == 'F') && (p[2] == 'F'))) {
- p += 3;
- result = 0;
- }
-
- p = skip_white_spaces(p);
-
- if (p[0] != '\0')
- return -EINVAL;
-
- return result;
-}
-
int
softnic_parser_read_int32(int32_t *value, const char *p)
{
@@ -170,22 +132,6 @@ softnic_parser_read_uint32(uint32_t *value, const char *p)
return 0;
}
-int
-softnic_parser_read_uint32_hex(uint32_t *value, const char *p)
-{
- uint64_t val = 0;
- int ret = softnic_parser_read_uint64_hex(&val, p);
-
- if (ret < 0)
- return ret;
-
- if (val > UINT32_MAX)
- return -ERANGE;
-
- *value = val;
- return 0;
-}
-
int
softnic_parser_read_uint16(uint16_t *value, const char *p)
{
@@ -202,22 +148,6 @@ softnic_parser_read_uint16(uint16_t *value, const char *p)
return 0;
}
-int
-softnic_parser_read_uint16_hex(uint16_t *value, const char *p)
-{
- uint64_t val = 0;
- int ret = softnic_parser_read_uint64_hex(&val, p);
-
- if (ret < 0)
- return ret;
-
- if (val > UINT16_MAX)
- return -ERANGE;
-
- *value = val;
- return 0;
-}
-
int
softnic_parser_read_uint8(uint8_t *value, const char *p)
{
@@ -234,22 +164,6 @@ softnic_parser_read_uint8(uint8_t *value, const char *p)
return 0;
}
-int
-softnic_parser_read_uint8_hex(uint8_t *value, const char *p)
-{
- uint64_t val = 0;
- int ret = softnic_parser_read_uint64_hex(&val, p);
-
- if (ret < 0)
- return ret;
-
- if (val > UINT8_MAX)
- return -ERANGE;
-
- *value = val;
- return 0;
-}
-
int
softnic_parse_tokenize_string(char *string, char *tokens[], uint32_t *n_tokens)
{
@@ -310,44 +224,6 @@ softnic_parse_hex_string(char *src, uint8_t *dst, uint32_t *size)
return 0;
}
-int
-softnic_parse_mpls_labels(char *string, uint32_t *labels, uint32_t *n_labels)
-{
- uint32_t n_max_labels = *n_labels, count = 0;
-
- /* Check for void list of labels */
- if (strcmp(string, "<void>") == 0) {
- *n_labels = 0;
- return 0;
- }
-
- /* At least one label should be present */
- for ( ; (*string != '\0'); ) {
- char *next;
- int value;
-
- if (count >= n_max_labels)
- return -1;
-
- if (count > 0) {
- if (string[0] != ':')
- return -1;
-
- string++;
- }
-
- value = strtol(string, &next, 10);
- if (next == string)
- return -1;
- string = next;
-
- labels[count++] = (uint32_t)value;
- }
-
- *n_labels = count;
- return 0;
-}
-
static struct rte_ether_addr *
my_ether_aton(const char *a)
{
@@ -427,97 +303,3 @@ softnic_parse_mac_addr(const char *token, struct rte_ether_addr *addr)
memcpy(addr, tmp, sizeof(struct rte_ether_addr));
return 0;
}
-
-int
-softnic_parse_cpu_core(const char *entry,
- struct softnic_cpu_core_params *p)
-{
- size_t num_len;
- char num[8];
-
- uint32_t s = 0, c = 0, h = 0, val;
- uint8_t s_parsed = 0, c_parsed = 0, h_parsed = 0;
- const char *next = skip_white_spaces(entry);
- char type;
-
- if (p == NULL)
- return -EINVAL;
-
- /* Expect <CORE> or [sX][cY][h]. At least one parameter is required. */
- while (*next != '\0') {
- /* If everything parsed nothing should left */
- if (s_parsed && c_parsed && h_parsed)
- return -EINVAL;
-
- type = *next;
- switch (type) {
- case 's':
- case 'S':
- if (s_parsed || c_parsed || h_parsed)
- return -EINVAL;
- s_parsed = 1;
- next++;
- break;
- case 'c':
- case 'C':
- if (c_parsed || h_parsed)
- return -EINVAL;
- c_parsed = 1;
- next++;
- break;
- case 'h':
- case 'H':
- if (h_parsed)
- return -EINVAL;
- h_parsed = 1;
- next++;
- break;
- default:
- /* If it start from digit it must be only core id. */
- if (!isdigit(*next) || s_parsed || c_parsed || h_parsed)
- return -EINVAL;
-
- type = 'C';
- }
-
- for (num_len = 0; *next != '\0'; next++, num_len++) {
- if (num_len == RTE_DIM(num))
- return -EINVAL;
-
- if (!isdigit(*next))
- break;
-
- num[num_len] = *next;
- }
-
- if (num_len == 0 && type != 'h' && type != 'H')
- return -EINVAL;
-
- if (num_len != 0 && (type == 'h' || type == 'H'))
- return -EINVAL;
-
- num[num_len] = '\0';
- val = strtol(num, NULL, 10);
-
- h = 0;
- switch (type) {
- case 's':
- case 'S':
- s = val;
- break;
- case 'c':
- case 'C':
- c = val;
- break;
- case 'h':
- case 'H':
- h = 1;
- break;
- }
- }
-
- p->socket_id = s;
- p->core_id = c;
- p->thread_id = h;
- return 0;
-}
diff --git a/drivers/net/softnic/parser.h b/drivers/net/softnic/parser.h
index 6f408b2485..2c14af32dd 100644
--- a/drivers/net/softnic/parser.h
+++ b/drivers/net/softnic/parser.h
@@ -31,8 +31,6 @@ skip_digits(const char *src)
return i;
}
-int softnic_parser_read_arg_bool(const char *p);
-
int softnic_parser_read_int32(int32_t *value, const char *p);
int softnic_parser_read_uint64(uint64_t *value, const char *p);
@@ -41,17 +39,12 @@ int softnic_parser_read_uint16(uint16_t *value, const char *p);
int softnic_parser_read_uint8(uint8_t *value, const char *p);
int softnic_parser_read_uint64_hex(uint64_t *value, const char *p);
-int softnic_parser_read_uint32_hex(uint32_t *value, const char *p);
-int softnic_parser_read_uint16_hex(uint16_t *value, const char *p);
-int softnic_parser_read_uint8_hex(uint8_t *value, const char *p);
int softnic_parse_hex_string(char *src, uint8_t *dst, uint32_t *size);
int softnic_parse_ipv4_addr(const char *token, struct in_addr *ipv4);
int softnic_parse_ipv6_addr(const char *token, struct in6_addr *ipv6);
int softnic_parse_mac_addr(const char *token, struct rte_ether_addr *addr);
-int softnic_parse_mpls_labels(char *string,
- uint32_t *labels, uint32_t *n_labels);
struct softnic_cpu_core_params {
uint32_t socket_id;
@@ -59,9 +52,6 @@ struct softnic_cpu_core_params {
uint32_t thread_id;
};
-int softnic_parse_cpu_core(const char *entry,
- struct softnic_cpu_core_params *p);
-
int softnic_parse_tokenize_string(char *string,
char *tokens[], uint32_t *n_tokens);
diff --git a/drivers/net/softnic/rte_eth_softnic_cryptodev.c b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
index a1a4ca5650..0198e1e35d 100644
--- a/drivers/net/softnic/rte_eth_softnic_cryptodev.c
+++ b/drivers/net/softnic/rte_eth_softnic_cryptodev.c
@@ -21,21 +21,6 @@ softnic_cryptodev_init(struct pmd_internals *p)
return 0;
}
-void
-softnic_cryptodev_free(struct pmd_internals *p)
-{
- for ( ; ; ) {
- struct softnic_cryptodev *cryptodev;
-
- cryptodev = TAILQ_FIRST(&p->cryptodev_list);
- if (cryptodev == NULL)
- break;
-
- TAILQ_REMOVE(&p->cryptodev_list, cryptodev, node);
- free(cryptodev);
- }
-}
-
struct softnic_cryptodev *
softnic_cryptodev_find(struct pmd_internals *p,
const char *name)
diff --git a/drivers/net/softnic/rte_eth_softnic_internals.h b/drivers/net/softnic/rte_eth_softnic_internals.h
index 9c8737c9e2..414b79e068 100644
--- a/drivers/net/softnic/rte_eth_softnic_internals.h
+++ b/drivers/net/softnic/rte_eth_softnic_internals.h
@@ -793,9 +793,6 @@ softnic_tap_create(struct pmd_internals *p,
int
softnic_cryptodev_init(struct pmd_internals *p);
-void
-softnic_cryptodev_free(struct pmd_internals *p);
-
struct softnic_cryptodev *
softnic_cryptodev_find(struct pmd_internals *p,
const char *name);
@@ -1052,14 +1049,6 @@ softnic_pipeline_table_rule_delete_default(struct pmd_internals *p,
const char *pipeline_name,
uint32_t table_id);
-int
-softnic_pipeline_table_rule_stats_read(struct pmd_internals *p,
- const char *pipeline_name,
- uint32_t table_id,
- void *data,
- struct rte_table_action_stats_counters *stats,
- int clear);
-
int
softnic_pipeline_table_mtr_profile_add(struct pmd_internals *p,
const char *pipeline_name,
@@ -1073,15 +1062,6 @@ softnic_pipeline_table_mtr_profile_delete(struct pmd_internals *p,
uint32_t table_id,
uint32_t meter_profile_id);
-int
-softnic_pipeline_table_rule_mtr_read(struct pmd_internals *p,
- const char *pipeline_name,
- uint32_t table_id,
- void *data,
- uint32_t tc_mask,
- struct rte_table_action_mtr_counters *stats,
- int clear);
-
int
softnic_pipeline_table_dscp_table_update(struct pmd_internals *p,
const char *pipeline_name,
@@ -1089,14 +1069,6 @@ softnic_pipeline_table_dscp_table_update(struct pmd_internals *p,
uint64_t dscp_mask,
struct rte_table_action_dscp_table *dscp_table);
-int
-softnic_pipeline_table_rule_ttl_read(struct pmd_internals *p,
- const char *pipeline_name,
- uint32_t table_id,
- void *data,
- struct rte_table_action_ttl_counters *stats,
- int clear);
-
/**
* Thread
*/
diff --git a/drivers/net/softnic/rte_eth_softnic_thread.c b/drivers/net/softnic/rte_eth_softnic_thread.c
index a8c26a5b23..cfddb44cb2 100644
--- a/drivers/net/softnic/rte_eth_softnic_thread.c
+++ b/drivers/net/softnic/rte_eth_softnic_thread.c
@@ -1672,66 +1672,6 @@ softnic_pipeline_table_rule_delete_default(struct pmd_internals *softnic,
return status;
}
-int
-softnic_pipeline_table_rule_stats_read(struct pmd_internals *softnic,
- const char *pipeline_name,
- uint32_t table_id,
- void *data,
- struct rte_table_action_stats_counters *stats,
- int clear)
-{
- struct pipeline *p;
- struct pipeline_msg_req *req;
- struct pipeline_msg_rsp *rsp;
- int status;
-
- /* Check input params */
- if (pipeline_name == NULL ||
- data == NULL ||
- stats == NULL)
- return -1;
-
- p = softnic_pipeline_find(softnic, pipeline_name);
- if (p == NULL ||
- table_id >= p->n_tables)
- return -1;
-
- if (!pipeline_is_running(p)) {
- struct rte_table_action *a = p->table[table_id].a;
-
- status = rte_table_action_stats_read(a,
- data,
- stats,
- clear);
-
- return status;
- }
-
- /* Allocate request */
- req = pipeline_msg_alloc();
- if (req == NULL)
- return -1;
-
- /* Write request */
- req->type = PIPELINE_REQ_TABLE_RULE_STATS_READ;
- req->id = table_id;
- req->table_rule_stats_read.data = data;
- req->table_rule_stats_read.clear = clear;
-
- /* Send request and wait for response */
- rsp = pipeline_msg_send_recv(p, req);
-
- /* Read response */
- status = rsp->status;
- if (status)
- memcpy(stats, &rsp->table_rule_stats_read.stats, sizeof(*stats));
-
- /* Free response */
- pipeline_msg_free(rsp);
-
- return status;
-}
-
int
softnic_pipeline_table_mtr_profile_add(struct pmd_internals *softnic,
const char *pipeline_name,
@@ -1864,69 +1804,6 @@ softnic_pipeline_table_mtr_profile_delete(struct pmd_internals *softnic,
return status;
}
-int
-softnic_pipeline_table_rule_mtr_read(struct pmd_internals *softnic,
- const char *pipeline_name,
- uint32_t table_id,
- void *data,
- uint32_t tc_mask,
- struct rte_table_action_mtr_counters *stats,
- int clear)
-{
- struct pipeline *p;
- struct pipeline_msg_req *req;
- struct pipeline_msg_rsp *rsp;
- int status;
-
- /* Check input params */
- if (pipeline_name == NULL ||
- data == NULL ||
- stats == NULL)
- return -1;
-
- p = softnic_pipeline_find(softnic, pipeline_name);
- if (p == NULL ||
- table_id >= p->n_tables)
- return -1;
-
- if (!pipeline_is_running(p)) {
- struct rte_table_action *a = p->table[table_id].a;
-
- status = rte_table_action_meter_read(a,
- data,
- tc_mask,
- stats,
- clear);
-
- return status;
- }
-
- /* Allocate request */
- req = pipeline_msg_alloc();
- if (req == NULL)
- return -1;
-
- /* Write request */
- req->type = PIPELINE_REQ_TABLE_RULE_MTR_READ;
- req->id = table_id;
- req->table_rule_mtr_read.data = data;
- req->table_rule_mtr_read.tc_mask = tc_mask;
- req->table_rule_mtr_read.clear = clear;
-
- /* Send request and wait for response */
- rsp = pipeline_msg_send_recv(p, req);
-
- /* Read response */
- status = rsp->status;
- if (status)
- memcpy(stats, &rsp->table_rule_mtr_read.stats, sizeof(*stats));
-
- /* Free response */
- pipeline_msg_free(rsp);
-
- return status;
-}
-
int
softnic_pipeline_table_dscp_table_update(struct pmd_internals *softnic,
const char *pipeline_name,
@@ -1993,66 +1870,6 @@ softnic_pipeline_table_dscp_table_update(struct pmd_internals *softnic,
return status;
}
-int
-softnic_pipeline_table_rule_ttl_read(struct pmd_internals *softnic,
- const char *pipeline_name,
- uint32_t table_id,
- void *data,
- struct rte_table_action_ttl_counters *stats,
- int clear)
-{
- struct pipeline *p;
- struct pipeline_msg_req *req;
- struct pipeline_msg_rsp *rsp;
- int status;
-
- /* Check input params */
- if (pipeline_name == NULL ||
- data == NULL ||
- stats == NULL)
- return -1;
-
- p = softnic_pipeline_find(softnic, pipeline_name);
- if (p == NULL ||
- table_id >= p->n_tables)
- return -1;
-
- if (!pipeline_is_running(p)) {
- struct rte_table_action *a = p->table[table_id].a;
-
- status = rte_table_action_ttl_read(a,
- data,
- stats,
- clear);
-
- return status;
- }
-
- /* Allocate request */
- req = pipeline_msg_alloc();
- if (req == NULL)
- return -1;
-
- /* Write request */
- req->type = PIPELINE_REQ_TABLE_RULE_TTL_READ;
- req->id = table_id;
- req->table_rule_ttl_read.data = data;
- req->table_rule_ttl_read.clear = clear;
-
- /* Send request and wait for response */
- rsp = pipeline_msg_send_recv(p, req);
-
- /* Read response */
- status = rsp->status;
- if (status)
- memcpy(stats, &rsp->table_rule_ttl_read.stats, sizeof(*stats));
-
- /* Free response */
- pipeline_msg_free(rsp);
-
- return status;
-}
-
/**
* Data plane threads: message handling
*/
diff --git a/drivers/net/txgbe/base/txgbe_eeprom.c b/drivers/net/txgbe/base/txgbe_eeprom.c
index 72cd3ff307..fedaecf26d 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.c
+++ b/drivers/net/txgbe/base/txgbe_eeprom.c
@@ -274,42 +274,6 @@ s32 txgbe_ee_read32(struct txgbe_hw *hw, u32 addr, u32 *data)
return err;
}
-/**
- * txgbe_ee_read_buffer - Read EEPROM byte(s) using hostif
- * @hw: pointer to hardware structure
- * @addr: offset of bytes in the EEPROM to read
- * @len: number of bytes
- * @data: byte(s) read from the EEPROM
- *
- * Reads a 8 bit byte(s) from the EEPROM using the hostif.
- **/
-s32 txgbe_ee_read_buffer(struct txgbe_hw *hw,
- u32 addr, u32 len, void *data)
-{
- const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
- u8 *buf = (u8 *)data;
- int err;
-
- err = hw->mac.acquire_swfw_sync(hw, mask);
- if (err)
- return err;
-
- while (len) {
- u32 seg = (len <= TXGBE_PMMBX_DATA_SIZE
- ? len : TXGBE_PMMBX_DATA_SIZE);
-
- err = txgbe_hic_sr_read(hw, addr, buf, seg);
- if (err)
- break;
-
- len -= seg;
- buf += seg;
- }
-
- hw->mac.release_swfw_sync(hw, mask);
- return err;
-}
-
/**
* txgbe_ee_write - Write EEPROM word using hostif
* @hw: pointer to hardware structure
@@ -420,42 +384,6 @@ s32 txgbe_ee_write32(struct txgbe_hw *hw, u32 addr, u32 data)
return err;
}
-/**
- * txgbe_ee_write_buffer - Write EEPROM byte(s) using hostif
- * @hw: pointer to hardware structure
- * @addr: offset of bytes in the EEPROM to write
- * @len: number of bytes
- * @data: word(s) write to the EEPROM
- *
- * Write a 8 bit byte(s) to the EEPROM using the hostif.
- **/
-s32 txgbe_ee_write_buffer(struct txgbe_hw *hw,
- u32 addr, u32 len, void *data)
-{
- const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
- u8 *buf = (u8 *)data;
- int err;
-
- err = hw->mac.acquire_swfw_sync(hw, mask);
- if (err)
- return err;
-
- while (len) {
- u32 seg = (len <= TXGBE_PMMBX_DATA_SIZE
- ? len : TXGBE_PMMBX_DATA_SIZE);
-
- err = txgbe_hic_sr_write(hw, addr, buf, seg);
- if (err)
- break;
-
- len -= seg;
- buf += seg;
- }
-
- hw->mac.release_swfw_sync(hw, mask);
- return err;
-}
-
/**
* txgbe_calc_eeprom_checksum - Calculates and returns the checksum
* @hw: pointer to hardware structure
diff --git a/drivers/net/txgbe/base/txgbe_eeprom.h b/drivers/net/txgbe/base/txgbe_eeprom.h
index d0e142dba5..78b8af978b 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.h
+++ b/drivers/net/txgbe/base/txgbe_eeprom.h
@@ -51,14 +51,12 @@ s32 txgbe_ee_readw_sw(struct txgbe_hw *hw, u32 offset, u16 *data);
s32 txgbe_ee_readw_buffer(struct txgbe_hw *hw, u32 offset, u32 words,
void *data);
s32 txgbe_ee_read32(struct txgbe_hw *hw, u32 addr, u32 *data);
-s32 txgbe_ee_read_buffer(struct txgbe_hw *hw, u32 addr, u32 len, void *data);
s32 txgbe_ee_write16(struct txgbe_hw *hw, u32 offset, u16 data);
s32 txgbe_ee_writew_sw(struct txgbe_hw *hw, u32 offset, u16 data);
s32 txgbe_ee_writew_buffer(struct txgbe_hw *hw, u32 offset, u32 words,
void *data);
s32 txgbe_ee_write32(struct txgbe_hw *hw, u32 addr, u32 data);
-s32 txgbe_ee_write_buffer(struct txgbe_hw *hw, u32 addr, u32 len, void *data);
#endif /* _TXGBE_EEPROM_H_ */
diff --git a/drivers/raw/ifpga/base/opae_eth_group.c b/drivers/raw/ifpga/base/opae_eth_group.c
index be28954e05..97c20a8068 100644
--- a/drivers/raw/ifpga/base/opae_eth_group.c
+++ b/drivers/raw/ifpga/base/opae_eth_group.c
@@ -152,16 +152,6 @@ static int eth_group_reset_mac(struct eth_group_device *dev, u8 index,
return ret;
}
-static void eth_group_mac_uinit(struct eth_group_device *dev)
-{
- u8 i;
-
- for (i = 0; i < dev->mac_num; i++) {
- if (eth_group_reset_mac(dev, i, true))
- dev_err(dev, "fail to disable mac %d\n", i);
- }
-}
-
static int eth_group_mac_init(struct eth_group_device *dev)
{
int ret;
@@ -272,12 +262,6 @@ static int eth_group_hw_init(struct eth_group_device *dev)
return ret;
}
-static void eth_group_hw_uinit(struct eth_group_device *dev)
-{
- eth_group_mac_uinit(dev);
- eth_group_phy_uinit(dev);
-}
-
struct eth_group_device *eth_group_probe(void *base)
{
struct eth_group_device *dev;
@@ -305,12 +289,3 @@ struct eth_group_device *eth_group_probe(void *base)
return dev;
}
-
-void eth_group_release(struct eth_group_device *dev)
-{
- if (dev) {
- eth_group_hw_uinit(dev);
- dev->status = ETH_GROUP_DEV_NOUSED;
- opae_free(dev);
- }
-}
diff --git a/drivers/raw/ifpga/base/opae_eth_group.h b/drivers/raw/ifpga/base/opae_eth_group.h
index 4868bd0e11..8dc23663b8 100644
--- a/drivers/raw/ifpga/base/opae_eth_group.h
+++ b/drivers/raw/ifpga/base/opae_eth_group.h
@@ -94,7 +94,6 @@ struct eth_group_device {
};
struct eth_group_device *eth_group_probe(void *base);
-void eth_group_release(struct eth_group_device *dev);
int eth_group_read_reg(struct eth_group_device *dev,
u8 type, u8 index, u16 addr, u32 *data);
int eth_group_write_reg(struct eth_group_device *dev,
diff --git a/drivers/raw/ifpga/base/opae_hw_api.c b/drivers/raw/ifpga/base/opae_hw_api.c
index d5cd5fe608..e2fdece4b4 100644
--- a/drivers/raw/ifpga/base/opae_hw_api.c
+++ b/drivers/raw/ifpga/base/opae_hw_api.c
@@ -84,50 +84,6 @@ opae_accelerator_alloc(const char *name, struct opae_accelerator_ops *ops,
return acc;
}
-/**
- * opae_acc_reg_read - read accelerator's register from its reg region.
- * @acc: accelerator to read.
- * @region_idx: reg region index.
- * @offset: reg offset.
- * @byte: read operation width, e.g 4 byte = 32bit read.
- * @data: data to store the value read from the register.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_reg_read(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data)
-{
- if (!acc || !data)
- return -EINVAL;
-
- if (acc->ops && acc->ops->read)
- return acc->ops->read(acc, region_idx, offset, byte, data);
-
- return -ENOENT;
-}
-
-/**
- * opae_acc_reg_write - write to accelerator's register from its reg region.
- * @acc: accelerator to write.
- * @region_idx: reg region index.
- * @offset: reg offset.
- * @byte: write operation width, e.g 4 byte = 32bit write.
- * @data: data stored the value to write to the register.
- *
- * Return: 0 on success, otherwise error code.
- */
-int opae_acc_reg_write(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data)
-{
- if (!acc || !data)
- return -EINVAL;
-
- if (acc->ops && acc->ops->write)
- return acc->ops->write(acc, region_idx, offset, byte, data);
-
- return -ENOENT;
-}
-
/**
* opae_acc_get_info - get information of an accelerator.
* @acc: targeted accelerator
@@ -635,50 +591,6 @@ opae_adapter_get_acc(struct opae_adapter *adapter, int acc_id)
return NULL;
}
-/**
- * opae_manager_read_mac_rom - read the content of the MAC ROM
- * @mgr: opae_manager for MAC ROM
- * @port: the port number of retimer
- * @addr: buffer of the MAC address
- *
- * Return: return the bytes of read successfully
- */
-int opae_manager_read_mac_rom(struct opae_manager *mgr, int port,
- struct opae_ether_addr *addr)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
- if (mgr->network_ops->read_mac_rom)
- return mgr->network_ops->read_mac_rom(mgr,
- port * sizeof(struct opae_ether_addr),
- addr, sizeof(struct opae_ether_addr));
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_write_mac_rom - write data into MAC ROM
- * @mgr: opae_manager for MAC ROM
- * @port: the port number of the retimer
- * @addr: data of the MAC address
- *
- * Return: return written bytes
- */
-int opae_manager_write_mac_rom(struct opae_manager *mgr, int port,
- struct opae_ether_addr *addr)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
- if (mgr->network_ops && mgr->network_ops->write_mac_rom)
- return mgr->network_ops->write_mac_rom(mgr,
- port * sizeof(struct opae_ether_addr),
- addr, sizeof(struct opae_ether_addr));
-
- return -ENOENT;
-}
-
/**
* opae_manager_get_eth_group_nums - get eth group numbers
* @mgr: opae_manager for eth group
@@ -741,54 +653,6 @@ int opae_manager_get_eth_group_region_info(struct opae_manager *mgr,
return -ENOENT;
}
-/**
- * opae_manager_eth_group_read_reg - read ETH group register
- * @mgr: opae_manager for ETH Group
- * @group_id: ETH group id
- * @type: eth type
- * @index: port index in eth group device
- * @addr: register address of ETH Group
- * @data: read buffer
- *
- * Return: 0 on success, otherwise error code
- */
-int opae_manager_eth_group_read_reg(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 *data)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
- if (mgr->network_ops->eth_group_reg_read)
- return mgr->network_ops->eth_group_reg_read(mgr, group_id,
- type, index, addr, data);
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_eth_group_write_reg - write ETH group register
- * @mgr: opae_manager for ETH Group
- * @group_id: ETH group id
- * @type: eth type
- * @index: port index in eth group device
- * @addr: register address of ETH Group
- * @data: data will write to register
- *
- * Return: 0 on success, otherwise error code
- */
-int opae_manager_eth_group_write_reg(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 data)
-{
- if (!mgr || !mgr->network_ops)
- return -EINVAL;
-
- if (mgr->network_ops->eth_group_reg_write)
- return mgr->network_ops->eth_group_reg_write(mgr, group_id,
- type, index, addr, data);
-
- return -ENOENT;
-}
-
/**
* opae_manager_get_retimer_info - get retimer info like PKVL chip
* @mgr: opae_manager for retimer
@@ -866,62 +730,6 @@ opae_mgr_get_sensor_by_name(struct opae_manager *mgr,
return NULL;
}
-/**
- * opae_manager_get_sensor_value_by_name - find the sensor by name and read out
- * the value
- * @mgr: opae_manager for sensor.
- * @name: the name of the sensor
- * @value: the readout sensor value
- *
- * Return: 0 on success, otherwise error code
- */
-int
-opae_mgr_get_sensor_value_by_name(struct opae_manager *mgr,
- const char *name, unsigned int *value)
-{
- struct opae_sensor_info *sensor;
-
- if (!mgr)
- return -EINVAL;
-
- sensor = opae_mgr_get_sensor_by_name(mgr, name);
- if (!sensor)
- return -ENODEV;
-
- if (mgr->ops && mgr->ops->get_sensor_value)
- return mgr->ops->get_sensor_value(mgr, sensor, value);
-
- return -ENOENT;
-}
-
-/**
- * opae_manager_get_sensor_value_by_id - find the sensor by id and readout the
- * value
- * @mgr: opae_manager for sensor
- * @id: the id of the sensor
- * @value: the readout sensor value
- *
- * Return: 0 on success, otherwise error code
- */
-int
-opae_mgr_get_sensor_value_by_id(struct opae_manager *mgr,
- unsigned int id, unsigned int *value)
-{
- struct opae_sensor_info *sensor;
-
- if (!mgr)
- return -EINVAL;
-
- sensor = opae_mgr_get_sensor_by_id(mgr, id);
- if (!sensor)
- return -ENODEV;
-
- if (mgr->ops && mgr->ops->get_sensor_value)
- return mgr->ops->get_sensor_value(mgr, sensor, value);
-
- return -ENOENT;
-}
-
/**
* opae_manager_get_sensor_value - get the current
* sensor value
@@ -944,23 +752,3 @@ opae_mgr_get_sensor_value(struct opae_manager *mgr,
return -ENOENT;
}
-
-/**
- * opae_manager_get_board_info - get board info
- * sensor value
- * @info: opae_board_info for the card
- *
- * Return: 0 on success, otherwise error code
- */
-int
-opae_mgr_get_board_info(struct opae_manager *mgr,
- struct opae_board_info **info)
-{
- if (!mgr || !info)
- return -EINVAL;
-
- if (mgr->ops && mgr->ops->get_board_info)
- return mgr->ops->get_board_info(mgr, info);
-
- return -ENOENT;
-}
diff --git a/drivers/raw/ifpga/base/opae_hw_api.h b/drivers/raw/ifpga/base/opae_hw_api.h
index e99ee4564c..32b603fc8a 100644
--- a/drivers/raw/ifpga/base/opae_hw_api.h
+++ b/drivers/raw/ifpga/base/opae_hw_api.h
@@ -92,10 +92,6 @@ struct opae_sensor_info *opae_mgr_get_sensor_by_name(struct opae_manager *mgr,
const char *name);
struct opae_sensor_info *opae_mgr_get_sensor_by_id(struct opae_manager *mgr,
unsigned int id);
-int opae_mgr_get_sensor_value_by_name(struct opae_manager *mgr,
- const char *name, unsigned int *value);
-int opae_mgr_get_sensor_value_by_id(struct opae_manager *mgr,
- unsigned int id, unsigned int *value);
int opae_mgr_get_sensor_value(struct opae_manager *mgr,
struct opae_sensor_info *sensor,
unsigned int *value);
@@ -200,28 +196,6 @@ opae_acc_get_mgr(struct opae_accelerator *acc)
return acc ? acc->mgr : NULL;
}
-int opae_acc_reg_read(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data);
-int opae_acc_reg_write(struct opae_accelerator *acc, unsigned int region_idx,
- u64 offset, unsigned int byte, void *data);
-
-#define opae_acc_reg_read64(acc, region, offset, data) \
- opae_acc_reg_read(acc, region, offset, 8, data)
-#define opae_acc_reg_write64(acc, region, offset, data) \
- opae_acc_reg_write(acc, region, offset, 8, data)
-#define opae_acc_reg_read32(acc, region, offset, data) \
- opae_acc_reg_read(acc, region, offset, 4, data)
-#define opae_acc_reg_write32(acc, region, offset, data) \
- opae_acc_reg_write(acc, region, offset, 4, data)
-#define opae_acc_reg_read16(acc, region, offset, data) \
- opae_acc_reg_read(acc, region, offset, 2, data)
-#define opae_acc_reg_write16(acc, region, offset, data) \
- opae_acc_reg_write(acc, region, offset, 2, data)
-#define opae_acc_reg_read8(acc, region, offset, data) \
- opae_acc_reg_read(acc, region, offset, 1, data)
-#define opae_acc_reg_write8(acc, region, offset, data) \
- opae_acc_reg_write(acc, region, offset, 1, data)
-
/*for data stream read/write*/
int opae_acc_data_read(struct opae_accelerator *acc, unsigned int flags,
u64 offset, unsigned int byte, void *data);
@@ -337,10 +311,6 @@ struct opae_ether_addr {
} __rte_packed;
/* OPAE vBNG network API*/
-int opae_manager_read_mac_rom(struct opae_manager *mgr, int port,
- struct opae_ether_addr *addr);
-int opae_manager_write_mac_rom(struct opae_manager *mgr, int port,
- struct opae_ether_addr *addr);
int opae_manager_get_retimer_info(struct opae_manager *mgr,
struct opae_retimer_info *info);
int opae_manager_get_retimer_status(struct opae_manager *mgr,
@@ -348,10 +318,4 @@ int opae_manager_get_retimer_status(struct opae_manager *mgr,
int opae_manager_get_eth_group_nums(struct opae_manager *mgr);
int opae_manager_get_eth_group_info(struct opae_manager *mgr,
u8 group_id, struct opae_eth_group_info *info);
-int opae_manager_eth_group_write_reg(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 data);
-int opae_manager_eth_group_read_reg(struct opae_manager *mgr, u8 group_id,
- u8 type, u8 index, u16 addr, u32 *data);
-int opae_mgr_get_board_info(struct opae_manager *mgr,
- struct opae_board_info **info);
#endif /* _OPAE_HW_API_H_*/
diff --git a/drivers/raw/ifpga/base/opae_i2c.c b/drivers/raw/ifpga/base/opae_i2c.c
index 598eab5742..5ea7ca3672 100644
--- a/drivers/raw/ifpga/base/opae_i2c.c
+++ b/drivers/raw/ifpga/base/opae_i2c.c
@@ -104,12 +104,6 @@ int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
return ret;
}
-int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count)
-{
- return i2c_read(dev, 0, slave_addr, offset, buf, count);
-}
-
int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
u8 *buf, u32 count)
{
@@ -117,12 +111,6 @@ int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
buf, count);
}
-int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count)
-{
- return i2c_write(dev, 0, slave_addr, offset, buf, count);
-}
-
int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
u8 *buf, u32 count)
{
diff --git a/drivers/raw/ifpga/base/opae_i2c.h b/drivers/raw/ifpga/base/opae_i2c.h
index 4f6b0b28bb..a21277b7cc 100644
--- a/drivers/raw/ifpga/base/opae_i2c.h
+++ b/drivers/raw/ifpga/base/opae_i2c.h
@@ -121,12 +121,8 @@ int i2c_read(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
u32 offset, u8 *buf, u32 count);
int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr,
u32 offset, u8 *buffer, int len);
-int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count);
int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
u8 *buf, u32 count);
-int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
- u8 *buf, u32 count);
int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset,
u8 *buf, u32 count);
#endif
diff --git a/drivers/raw/ifpga/base/opae_ifpga_hw_api.c b/drivers/raw/ifpga/base/opae_ifpga_hw_api.c
index 89c7b49203..ad5a9f2b6c 100644
--- a/drivers/raw/ifpga/base/opae_ifpga_hw_api.c
+++ b/drivers/raw/ifpga/base/opae_ifpga_hw_api.c
@@ -31,23 +31,6 @@ int opae_manager_ifpga_set_prop(struct opae_manager *mgr,
return ifpga_set_prop(fme->parent, FEATURE_FIU_ID_FME, 0, prop);
}
-int opae_manager_ifpga_get_info(struct opae_manager *mgr,
- struct fpga_fme_info *fme_info)
-{
- struct ifpga_fme_hw *fme;
-
- if (!mgr || !mgr->data || !fme_info)
- return -EINVAL;
-
- fme = mgr->data;
-
- spinlock_lock(&fme->lock);
- fme_info->capability = fme->capability;
- spinlock_unlock(&fme->lock);
-
- return 0;
-}
-
int opae_manager_ifpga_set_err_irq(struct opae_manager *mgr,
struct fpga_fme_err_irq_set *err_irq_set)
{
@@ -61,85 +44,3 @@ int opae_manager_ifpga_set_err_irq(struct opae_manager *mgr,
return ifpga_set_irq(fme->parent, FEATURE_FIU_ID_FME, 0,
IFPGA_FME_FEATURE_ID_GLOBAL_ERR, err_irq_set);
}
-
-int opae_bridge_ifpga_get_prop(struct opae_bridge *br,
- struct feature_prop *prop)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data)
- return -EINVAL;
-
- port = br->data;
-
- return ifpga_get_prop(port->parent, FEATURE_FIU_ID_PORT,
- port->port_id, prop);
-}
-
-int opae_bridge_ifpga_set_prop(struct opae_bridge *br,
- struct feature_prop *prop)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data)
- return -EINVAL;
-
- port = br->data;
-
- return ifpga_set_prop(port->parent, FEATURE_FIU_ID_PORT,
- port->port_id, prop);
-}
-
-int opae_bridge_ifpga_get_info(struct opae_bridge *br,
- struct fpga_port_info *port_info)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data || !port_info)
- return -EINVAL;
-
- port = br->data;
-
- spinlock_lock(&port->lock);
- port_info->capability = port->capability;
- port_info->num_uafu_irqs = port->num_uafu_irqs;
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-int opae_bridge_ifpga_get_region_info(struct opae_bridge *br,
- struct fpga_port_region_info *info)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data || !info)
- return -EINVAL;
-
- /* Only support STP region now */
- if (info->index != PORT_REGION_INDEX_STP)
- return -EINVAL;
-
- port = br->data;
-
- spinlock_lock(&port->lock);
- info->addr = port->stp_addr;
- info->size = port->stp_size;
- spinlock_unlock(&port->lock);
-
- return 0;
-}
-
-int opae_bridge_ifpga_set_err_irq(struct opae_bridge *br,
- struct fpga_port_err_irq_set *err_irq_set)
-{
- struct ifpga_port_hw *port;
-
- if (!br || !br->data)
- return -EINVAL;
-
- port = br->data;
-
- return ifpga_set_irq(port->parent, FEATURE_FIU_ID_PORT, port->port_id,
- IFPGA_PORT_FEATURE_ID_ERROR, err_irq_set);
-}
diff --git a/drivers/raw/ifpga/base/opae_ifpga_hw_api.h b/drivers/raw/ifpga/base/opae_ifpga_hw_api.h
index bab33862ee..104ab97edc 100644
--- a/drivers/raw/ifpga/base/opae_ifpga_hw_api.h
+++ b/drivers/raw/ifpga/base/opae_ifpga_hw_api.h
@@ -217,10 +217,6 @@ int opae_manager_ifpga_get_prop(struct opae_manager *mgr,
struct feature_prop *prop);
int opae_manager_ifpga_set_prop(struct opae_manager *mgr,
struct feature_prop *prop);
-int opae_bridge_ifpga_get_prop(struct opae_bridge *br,
- struct feature_prop *prop);
-int opae_bridge_ifpga_set_prop(struct opae_bridge *br,
- struct feature_prop *prop);
/*
* Retrieve information about the fpga fme.
@@ -231,9 +227,6 @@ struct fpga_fme_info {
#define FPGA_FME_CAP_ERR_IRQ (1 << 0) /* Support fme error interrupt */
};
-int opae_manager_ifpga_get_info(struct opae_manager *mgr,
- struct fpga_fme_info *fme_info);
-
/* Set eventfd information for ifpga FME error interrupt */
struct fpga_fme_err_irq_set {
s32 evtfd; /* Eventfd handler */
@@ -254,8 +247,6 @@ struct fpga_port_info {
u32 num_uafu_irqs; /* The number of uafu interrupts */
};
-int opae_bridge_ifpga_get_info(struct opae_bridge *br,
- struct fpga_port_info *port_info);
/*
* Retrieve region information about the fpga port.
* Driver needs to fill the index of struct fpga_port_region_info.
@@ -267,15 +258,9 @@ struct fpga_port_region_info {
u8 *addr; /* Base address of the region */
};
-int opae_bridge_ifpga_get_region_info(struct opae_bridge *br,
- struct fpga_port_region_info *info);
-
/* Set eventfd information for ifpga port error interrupt */
struct fpga_port_err_irq_set {
s32 evtfd; /* Eventfd handler */
};
-int opae_bridge_ifpga_set_err_irq(struct opae_bridge *br,
- struct fpga_port_err_irq_set *err_irq_set);
-
#endif /* _OPAE_IFPGA_HW_API_H_ */
diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 2c4877c37d..1ef5cfbda0 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -111,8 +111,6 @@ int mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
/* mlx5_regex_fastpath.c */
int mlx5_regexdev_setup_fastpath(struct mlx5_regex_priv *priv, uint32_t qp_id);
-void mlx5_regexdev_teardown_fastpath(struct mlx5_regex_priv *priv,
- uint32_t qp_id);
uint16_t mlx5_regexdev_enqueue(struct rte_regexdev *dev, uint16_t qp_id,
struct rte_regex_ops **ops, uint16_t nb_ops);
uint16_t mlx5_regexdev_dequeue(struct rte_regexdev *dev, uint16_t qp_id,
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 254954776f..f38a3772cb 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -393,28 +393,3 @@ mlx5_regexdev_setup_fastpath(struct mlx5_regex_priv *priv, uint32_t qp_id)
setup_sqs(qp);
return 0;
}
-
-static void
-free_buffers(struct mlx5_regex_qp *qp)
-{
- if (qp->metadata) {
- mlx5_glue->dereg_mr(qp->metadata);
- rte_free(qp->metadata->addr);
- }
- if (qp->outputs) {
- mlx5_glue->dereg_mr(qp->outputs);
- rte_free(qp->outputs->addr);
- }
-}
-
-void
-mlx5_regexdev_teardown_fastpath(struct mlx5_regex_priv *priv, uint32_t qp_id)
-{
- struct mlx5_regex_qp *qp = &priv->qps[qp_id];
-
- if (qp) {
- free_buffers(qp);
- if (qp->jobs)
- rte_free(qp->jobs);
- }
-}
diff --git a/drivers/regex/mlx5/mlx5_rxp.c b/drivers/regex/mlx5/mlx5_rxp.c
index 7936a5235b..21e4847744 100644
--- a/drivers/regex/mlx5/mlx5_rxp.c
+++ b/drivers/regex/mlx5/mlx5_rxp.c
@@ -50,8 +50,6 @@ write_shared_rules(struct mlx5_regex_priv *priv,
uint8_t db_to_program);
static int
rxp_db_setup(struct mlx5_regex_priv *priv);
-static void
-rxp_dump_csrs(struct ibv_context *ctx, uint8_t id);
static int
rxp_write_rules_via_cp(struct ibv_context *ctx,
struct mlx5_rxp_rof_entry *rules,
@@ -64,49 +62,6 @@ rxp_start_engine(struct ibv_context *ctx, uint8_t id);
static int
rxp_stop_engine(struct ibv_context *ctx, uint8_t id);
-static void __rte_unused
-rxp_dump_csrs(struct ibv_context *ctx __rte_unused, uint8_t id __rte_unused)
-{
- uint32_t reg, i;
-
- /* Main CSRs*/
- for (i = 0; i < MLX5_RXP_CSR_NUM_ENTRIES; i++) {
- if (mlx5_devx_regex_register_read(ctx, id,
- (MLX5_RXP_CSR_WIDTH * i) +
- MLX5_RXP_CSR_BASE_ADDRESS,
- &reg)) {
- DRV_LOG(ERR, "Failed to read Main CSRs Engine %d!", id);
- return;
- }
- DRV_LOG(DEBUG, "RXP Main CSRs (Eng%d) register (%d): %08x",
- id, i, reg);
- }
- /* RTRU CSRs*/
- for (i = 0; i < MLX5_RXP_CSR_NUM_ENTRIES; i++) {
- if (mlx5_devx_regex_register_read(ctx, id,
- (MLX5_RXP_CSR_WIDTH * i) +
- MLX5_RXP_RTRU_CSR_BASE_ADDRESS,
- &reg)) {
- DRV_LOG(ERR, "Failed to read RTRU CSRs Engine %d!", id);
- return;
- }
- DRV_LOG(DEBUG, "RXP RTRU CSRs (Eng%d) register (%d): %08x",
- id, i, reg);
- }
- /* STAT CSRs */
- for (i = 0; i < MLX5_RXP_CSR_NUM_ENTRIES; i++) {
- if (mlx5_devx_regex_register_read(ctx, id,
- (MLX5_RXP_CSR_WIDTH * i) +
- MLX5_RXP_STATS_CSR_BASE_ADDRESS,
- &reg)) {
- DRV_LOG(ERR, "Failed to read STAT CSRs Engine %d!", id);
- return;
- }
- DRV_LOG(DEBUG, "RXP STAT CSRs (Eng%d) register (%d): %08x",
- id, i, reg);
- }
-}
-
int
mlx5_regex_info_get(struct rte_regexdev *dev __rte_unused,
struct rte_regexdev_info *info)
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..7515dc44b3 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -56,64 +56,6 @@ otx2_ree_err_intr_unregister(const struct rte_regexdev *dev)
vf->err_intr_registered = 0;
}
-static int
-ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
- int ret;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
-
- /* Register error interrupt handler */
- ret = otx2_register_irq(handle, ree_lf_err_intr_handler, (void *)base,
- msix_off);
- if (ret)
- return ret;
-
- /* Enable error interrupts */
- otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1S);
-
- return 0;
-}
-
-int
-otx2_ree_err_intr_register(const struct rte_regexdev *dev)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- uint32_t i, j, ret;
- uintptr_t base;
-
- for (i = 0; i < vf->nb_queues; i++) {
- if (vf->lf_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid REE LF MSI-X offset: 0x%x",
- vf->lf_msixoff[i]);
- return -EINVAL;
- }
- }
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_REE_LF_BAR2(vf, i);
- ret = ree_lf_err_intr_register(dev, vf->lf_msixoff[i], base);
- if (ret)
- goto intr_unregister;
- }
-
- vf->err_intr_registered = 1;
- return 0;
-
-intr_unregister:
- /* Unregister the ones already registered */
- for (j = 0; j < i; j++) {
- base = OTX2_REE_LF_BAR2(vf, j);
- ree_lf_err_intr_unregister(dev, vf->lf_msixoff[j], base);
- }
- return ret;
-}
-
int
otx2_ree_iq_enable(const struct rte_regexdev *dev, const struct otx2_ree_qp *qp,
uint8_t pri, uint32_t size_div2)
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.h b/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
index dedf5f3282..4733febc0e 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
@@ -188,8 +188,6 @@ union otx2_ree_match {
void otx2_ree_err_intr_unregister(const struct rte_regexdev *dev);
-int otx2_ree_err_intr_register(const struct rte_regexdev *dev);
-
int otx2_ree_iq_enable(const struct rte_regexdev *dev,
const struct otx2_ree_qp *qp,
uint8_t pri, uint32_t size_div128);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_mbox.c b/drivers/regex/octeontx2/otx2_regexdev_mbox.c
index 6d58d367d4..726994e195 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_mbox.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_mbox.c
@@ -189,34 +189,6 @@ otx2_ree_af_reg_read(const struct rte_regexdev *dev, uint64_t reg,
return 0;
}
-int
-otx2_ree_af_reg_write(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t val)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct ree_rd_wr_reg_msg *msg;
- struct otx2_mbox *mbox;
-
- mbox = vf->otx2_dev.mbox;
- msg = (struct ree_rd_wr_reg_msg *)otx2_mbox_alloc_msg_rsp(mbox, 0,
- sizeof(*msg), sizeof(*msg));
- if (msg == NULL) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_REE_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 1;
- msg->reg_offset = reg;
- msg->val = val;
- msg->blkaddr = vf->block_address;
-
- return ree_send_mbox_msg(vf);
-}
-
int
otx2_ree_rule_db_get(const struct rte_regexdev *dev, char *rule_db,
uint32_t rule_db_len, char *rule_dbi, uint32_t rule_dbi_len)
diff --git a/drivers/regex/octeontx2/otx2_regexdev_mbox.h b/drivers/regex/octeontx2/otx2_regexdev_mbox.h
index 953efa6724..c36e6a5b7a 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_mbox.h
+++ b/drivers/regex/octeontx2/otx2_regexdev_mbox.h
@@ -22,9 +22,6 @@ int otx2_ree_config_lf(const struct rte_regexdev *dev, uint8_t lf, uint8_t pri,
int otx2_ree_af_reg_read(const struct rte_regexdev *dev, uint64_t reg,
uint64_t *val);
-int otx2_ree_af_reg_write(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t val);
-
int otx2_ree_rule_db_get(const struct rte_regexdev *dev, char *rule_db,
uint32_t rule_db_len, char *rule_dbi, uint32_t rule_dbi_len);
diff --git a/examples/ip_pipeline/cryptodev.c b/examples/ip_pipeline/cryptodev.c
index b0d9f3d217..4c986e421d 100644
--- a/examples/ip_pipeline/cryptodev.c
+++ b/examples/ip_pipeline/cryptodev.c
@@ -38,14 +38,6 @@ cryptodev_find(const char *name)
return NULL;
}
-struct cryptodev *
-cryptodev_next(struct cryptodev *cryptodev)
-{
- return (cryptodev == NULL) ?
- TAILQ_FIRST(&cryptodev_list) :
- TAILQ_NEXT(cryptodev, node);
-}
-
struct cryptodev *
cryptodev_create(const char *name, struct cryptodev_params *params)
{
diff --git a/examples/ip_pipeline/cryptodev.h b/examples/ip_pipeline/cryptodev.h
index d00434379e..c91b8a69f7 100644
--- a/examples/ip_pipeline/cryptodev.h
+++ b/examples/ip_pipeline/cryptodev.h
@@ -29,9 +29,6 @@ cryptodev_init(void);
struct cryptodev *
cryptodev_find(const char *name);
-struct cryptodev *
-cryptodev_next(struct cryptodev *cryptodev);
-
struct cryptodev_params {
const char *dev_name;
uint32_t dev_id; /**< Valid only when *dev_name* is NULL. */
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356..d09609b9e9 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -248,24 +248,3 @@ link_create(const char *name, struct link_params *params)
return link;
}
-
-int
-link_is_up(const char *name)
-{
- struct rte_eth_link link_params;
- struct link *link;
-
- /* Check input params */
- if (name == NULL)
- return 0;
-
- link = link_find(name);
- if (link == NULL)
- return 0;
-
- /* Resource */
- if (rte_eth_link_get(link->port_id, &link_params) < 0)
- return 0;
-
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
-}
diff --git a/examples/ip_pipeline/link.h b/examples/ip_pipeline/link.h
index 34ff1149e0..a4f6aa0e73 100644
--- a/examples/ip_pipeline/link.h
+++ b/examples/ip_pipeline/link.h
@@ -60,7 +60,4 @@ struct link_params {
struct link *
link_create(const char *name, struct link_params *params);
-int
-link_is_up(const char *name);
-
#endif /* _INCLUDE_LINK_H_ */
diff --git a/examples/ip_pipeline/parser.c b/examples/ip_pipeline/parser.c
index dfd71a71d3..9f4f91d213 100644
--- a/examples/ip_pipeline/parser.c
+++ b/examples/ip_pipeline/parser.c
@@ -39,44 +39,6 @@ get_hex_val(char c)
}
}
-int
-parser_read_arg_bool(const char *p)
-{
- p = skip_white_spaces(p);
- int result = -EINVAL;
-
- if (((p[0] == 'y') && (p[1] == 'e') && (p[2] == 's')) ||
- ((p[0] == 'Y') && (p[1] == 'E') && (p[2] == 'S'))) {
- p += 3;
- result = 1;
- }
-
- if (((p[0] == 'o') && (p[1] == 'n')) ||
- ((p[0] == 'O') && (p[1] == 'N'))) {
- p += 2;
- result = 1;
- }
-
- if (((p[0] == 'n') && (p[1] == 'o')) ||
- ((p[0] == 'N') && (p[1] == 'O'))) {
- p += 2;
- result = 0;
- }
-
- if (((p[0] == 'o') && (p[1] == 'f') && (p[2] == 'f')) ||
- ((p[0] == 'O') && (p[1] == 'F') && (p[2] == 'F'))) {
- p += 3;
- result = 0;
- }
-
- p = skip_white_spaces(p);
-
- if (p[0] != '\0')
- return -EINVAL;
-
- return result;
-}
-
int
parser_read_uint64(uint64_t *value, const char *p)
{
@@ -153,22 +115,6 @@ parser_read_uint32(uint32_t *value, const char *p)
return 0;
}
-int
-parser_read_uint32_hex(uint32_t *value, const char *p)
-{
- uint64_t val = 0;
- int ret = parser_read_uint64_hex(&val, p);
-
- if (ret < 0)
- return ret;
-
- if (val > UINT32_MAX)
- return -ERANGE;
-
- *value = val;
- return 0;
-}
-
int
parser_read_uint16(uint16_t *value, const char *p)
{
@@ -185,22 +131,6 @@ parser_read_uint16(uint16_t *value, const char *p)
return 0;
}
-int
-parser_read_uint16_hex(uint16_t *value, const char *p)
-{
- uint64_t val = 0;
- int ret = parser_read_uint64_hex(&val, p);
-
- if (ret < 0)
- return ret;
-
- if (val > UINT16_MAX)
- return -ERANGE;
-
- *value = val;
- return 0;
-}
-
int
parser_read_uint8(uint8_t *value, const char *p)
{
@@ -293,44 +223,6 @@ parse_hex_string(char *src, uint8_t *dst, uint32_t *size)
return 0;
}
-int
-parse_mpls_labels(char *string, uint32_t *labels, uint32_t *n_labels)
-{
- uint32_t n_max_labels = *n_labels, count = 0;
-
- /* Check for void list of labels */
- if (strcmp(string, "<void>") == 0) {
- *n_labels = 0;
- return 0;
- }
-
- /* At least one label should be present */
- for ( ; (*string != '\0'); ) {
- char *next;
- int value;
-
- if (count >= n_max_labels)
- return -1;
-
- if (count > 0) {
- if (string[0] != ':')
- return -1;
-
- string++;
- }
-
- value = strtol(string, &next, 10);
- if (next == string)
- return -1;
- string = next;
-
- labels[count++] = (uint32_t) value;
- }
-
- *n_labels = count;
- return 0;
-}
-
static struct rte_ether_addr *
my_ether_aton(const char *a)
{
@@ -410,97 +302,3 @@ parse_mac_addr(const char *token, struct rte_ether_addr *addr)
memcpy(addr, tmp, sizeof(struct rte_ether_addr));
return 0;
}
-
-int
-parse_cpu_core(const char *entry,
- struct cpu_core_params *p)
-{
- size_t num_len;
- char num[8];
-
- uint32_t s = 0, c = 0, h = 0, val;
- uint8_t s_parsed = 0, c_parsed = 0, h_parsed = 0;
- const char *next = skip_white_spaces(entry);
- char type;
-
- if (p == NULL)
- return -EINVAL;
-
- /* Expect <CORE> or [sX][cY][h]. At least one parameter is required. */
- while (*next != '\0') {
- /* If everything parsed nothing should left */
- if (s_parsed && c_parsed && h_parsed)
- return -EINVAL;
-
- type = *next;
- switch (type) {
- case 's':
- case 'S':
- if (s_parsed || c_parsed || h_parsed)
- return -EINVAL;
- s_parsed = 1;
- next++;
- break;
- case 'c':
- case 'C':
- if (c_parsed || h_parsed)
- return -EINVAL;
- c_parsed = 1;
- next++;
- break;
- case 'h':
- case 'H':
- if (h_parsed)
- return -EINVAL;
- h_parsed = 1;
- next++;
- break;
- default:
- /* If it start from digit it must be only core id. */
- if (!isdigit(*next) || s_parsed || c_parsed || h_parsed)
- return -EINVAL;
-
- type = 'C';
- }
-
- for (num_len = 0; *next != '\0'; next++, num_len++) {
- if (num_len == RTE_DIM(num))
- return -EINVAL;
-
- if (!isdigit(*next))
- break;
-
- num[num_len] = *next;
- }
-
- if (num_len == 0 && type != 'h' && type != 'H')
- return -EINVAL;
-
- if (num_len != 0 && (type == 'h' || type == 'H'))
- return -EINVAL;
-
- num[num_len] = '\0';
- val = strtol(num, NULL, 10);
-
- h = 0;
- switch (type) {
- case 's':
- case 'S':
- s = val;
- break;
- case 'c':
- case 'C':
- c = val;
- break;
- case 'h':
- case 'H':
- h = 1;
- break;
- }
- }
-
- p->socket_id = s;
- p->core_id = c;
- p->thread_id = h;
- return 0;
-}
diff --git a/examples/ip_pipeline/parser.h b/examples/ip_pipeline/parser.h
index 4538f675d4..826ed8d136 100644
--- a/examples/ip_pipeline/parser.h
+++ b/examples/ip_pipeline/parser.h
@@ -31,16 +31,12 @@ skip_digits(const char *src)
return i;
}
-int parser_read_arg_bool(const char *p);
-
int parser_read_uint64(uint64_t *value, const char *p);
int parser_read_uint32(uint32_t *value, const char *p);
int parser_read_uint16(uint16_t *value, const char *p);
int parser_read_uint8(uint8_t *value, const char *p);
int parser_read_uint64_hex(uint64_t *value, const char *p);
-int parser_read_uint32_hex(uint32_t *value, const char *p);
-int parser_read_uint16_hex(uint16_t *value, const char *p);
int parser_read_uint8_hex(uint8_t *value, const char *p);
int parse_hex_string(char *src, uint8_t *dst, uint32_t *size);
@@ -48,7 +44,6 @@ int parse_hex_string(char *src, uint8_t *dst, uint32_t *size);
int parse_ipv4_addr(const char *token, struct in_addr *ipv4);
int parse_ipv6_addr(const char *token, struct in6_addr *ipv6);
int parse_mac_addr(const char *token, struct rte_ether_addr *addr);
-int parse_mpls_labels(char *string, uint32_t *labels, uint32_t *n_labels);
struct cpu_core_params {
uint32_t socket_id;
@@ -56,8 +51,6 @@ struct cpu_core_params {
uint32_t thread_id;
};
-int parse_cpu_core(const char *entry, struct cpu_core_params *p);
-
int parse_tokenize_string(char *string, char *tokens[], uint32_t *n_tokens);
#endif
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 84bbcf2b2d..424f281213 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -315,27 +315,6 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
return link;
}
-int
-link_is_up(struct obj *obj, const char *name)
-{
- struct rte_eth_link link_params;
- struct link *link;
-
- /* Check input params */
- if (!obj || !name)
- return 0;
-
- link = link_find(obj, name);
- if (link == NULL)
- return 0;
-
- /* Resource */
- if (rte_eth_link_get(link->port_id, &link_params) < 0)
- return 0;
-
- return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
-}
-
struct link *
link_find(struct obj *obj, const char *name)
{
diff --git a/examples/pipeline/obj.h b/examples/pipeline/obj.h
index e6351fd279..3d729e43b0 100644
--- a/examples/pipeline/obj.h
+++ b/examples/pipeline/obj.h
@@ -95,9 +95,6 @@ link_create(struct obj *obj,
const char *name,
struct link_params *params);
-int
-link_is_up(struct obj *obj, const char *name);
-
struct link *
link_find(struct obj *obj, const char *name);
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 03a4f2dd2d..4480e5249a 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -238,14 +238,6 @@ static int huge_wrap_sigsetjmp(void)
return sigsetjmp(huge_jmpenv, 1);
}
-#ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES
-/* Callback for numa library. */
-void numa_error(char *where)
-{
- RTE_LOG(ERR, EAL, "%s failed: %s\n", where, strerror(errno));
-}
-#endif
-
/*
* Mmap all hugepages of hugepage table: it first open a file in
* hugetlbfs, then mmap() hugepage_sz data in it. If orig is set, the
diff --git a/lib/librte_vhost/fd_man.c b/lib/librte_vhost/fd_man.c
index 55d4856f9e..942c5f145b 100644
--- a/lib/librte_vhost/fd_man.c
+++ b/lib/librte_vhost/fd_man.c
@@ -100,21 +100,6 @@ fdset_add_fd(struct fdset *pfdset, int idx, int fd,
pfd->revents = 0;
}
-void
-fdset_init(struct fdset *pfdset)
-{
- int i;
-
- if (pfdset == NULL)
- return;
-
- for (i = 0; i < MAX_FDS; i++) {
- pfdset->fd[i].fd = -1;
- pfdset->fd[i].dat = NULL;
- }
- pfdset->num = 0;
-}
-
/**
* Register the fd in the fdset with read/write handler and context.
*/
diff --git a/lib/librte_vhost/fd_man.h b/lib/librte_vhost/fd_man.h
index 3ab5cfdd60..f0157eeeed 100644
--- a/lib/librte_vhost/fd_man.h
+++ b/lib/librte_vhost/fd_man.h
@@ -39,8 +39,6 @@ struct fdset {
};
-void fdset_init(struct fdset *pfdset);
-
int fdset_add(struct fdset *pfdset, int fd,
fd_cb rcb, fd_cb wcb, void *dat);
--
2.26.2
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH] app/testpmd: fix MTU after device configure
@ 2020-11-16 18:50 3% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-11-16 18:50 UTC (permalink / raw)
To: Wenzhuo Lu, Beilei Xing, Bernard Iremonger
Cc: dev, Qi Zhang, Steve Yang, Thomas Monjalon, Andrew Rybchenko,
Konstantin Ananyev, Olivier Matz, Lance Richardson,
David Marchand
On 11/13/2020 11:44 AM, Ferruh Yigit wrote:
> In 'rte_eth_dev_configure()', if 'DEV_RX_OFFLOAD_JUMBO_FRAME' is not set
> the max frame size is limited to 'RTE_ETHER_MAX_LEN' (1518).
> This is a mistake because for the PMDs whose frame overhead is bigger than
> "RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN" (18 bytes), the MTU becomes
> less than 1500, causing a valid frame with a 1500 byte payload to be
> dropped.
>
> Since 'rte_eth_dev_set_mtu()' works as expected, it is called after
> 'rte_eth_dev_configure()' to fix the MTU.
> It may look redundant to set the MTU after 'rte_eth_dev_configure()', both
> with default values, but it is not: the resulting MTU configuration in the
> device can differ based on the frame overhead of the PMD.
>
> And instead of setting the MTU to the default value, it is first fetched via
> 'rte_eth_dev_get_mtu()' and set again; this is to cover cases where the MTU
> was changed from the testpmd command line.
>
> For 'rte_eth_dev_set_mtu()', the '-ENOTSUP' error is ignored to prevent
> irrelevant warning messages for the virtual PMDs.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> Cc: Steve Yang <stevex.yang@intel.com>
> Cc: Thomas Monjalon <thomas@monjalon.net>
> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Cc: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Cc: Olivier Matz <olivier.matz@6wind.com>
> Cc: Lance Richardson <lance.richardson@broadcom.com>
> ---
> app/test-pmd/testpmd.c | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
>
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 33fc0fddf5..48e9647fc7 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2537,6 +2537,8 @@ start_port(portid_t pid)
> }
>
> if (port->need_reconfig > 0) {
> + uint16_t mtu = RTE_ETHER_MTU;
> +
> port->need_reconfig = 0;
>
> if (flow_isolate_all) {
> @@ -2570,6 +2572,23 @@ start_port(portid_t pid)
> port->need_reconfig = 1;
> return -1;
> }
> +
> + /*
> + * Workaround for rte_eth_dev_configure(), max_rx_pkt_len
> + * set MTU wrong for the PMDs that have frame overhead
> + * bigger than RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN.
> + * For a PMD that has 26 bytes overhead, rte_eth_dev_configure()
> + * can set MTU to max 1492, not to expected 1500 bytes.
> + * Using rte_eth_dev_set_mtu() to be able to set MTU correctly,
> + * default MTU value is 1500.
> + */
> + diag = rte_eth_dev_get_mtu(pi, &mtu);
> + if (diag)
> + printf("Failed to get MTU for port %d\n", pi);
> + diag = rte_eth_dev_set_mtu(pi, mtu);
> + if (diag != 0 && diag != -ENOTSUP)
> + printf("Failed to set MTU to %u for port %d\n",
> + mtu, pi);
> }
> if (port->need_reconfig_queues > 0) {
> port->need_reconfig_queues = 0;
>
@David highlighted that the 'scatter' tests are failing in the lab with this
commit:
https://lab.dpdk.org/results/dashboard/patchsets/14492/
With the above commit only 'mtu' is taken into account, so in testpmd both the
"--max-pkt-len=N" parameter and the "port config all max-pkt-len #" command no
longer work as expected. This seems to be the reason for the failure.
Technically it is possible to fix the DTS test case by adding the following
commands:
port stop all
port config mtu 0 9000
port start all
But there may be other side effects from "max-pkt-len" not working in
testpmd as expected, so reverting this one too may be the safest option.
For now we need to live with the issue this patch is fixing; hopefully we can
fix it in the next release by fixing testpmd, ethdev and the drivers together.
There is a question about whether the ethdev change will be an ABI break or
not; we will see.
The longer term target is to deprecate 'max_rx_pkt_len' and 'mtu' and
unify them:
https://patches.dpdk.org/patch/81591/
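To illustrate the arithmetic behind this, here is a minimal, self-contained
sketch (not from the patch; the 8 extra bytes of overhead are a hypothetical
PMD value used only for the example):

#include <stdint.h>
#include <stdio.h>

#define RTE_ETHER_HDR_LEN  14
#define RTE_ETHER_CRC_LEN   4
#define RTE_ETHER_MAX_LEN 1518

int main(void)
{
	/* Hypothetical PMD with 8 extra bytes of frame overhead,
	 * e.g. two VLAN tags: 14 + 4 + 8 = 26 bytes in total. */
	uint16_t overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 8;

	/* Without DEV_RX_OFFLOAD_JUMBO_FRAME, rte_eth_dev_configure()
	 * caps the frame size at RTE_ETHER_MAX_LEN (1518), so: */
	uint16_t mtu = RTE_ETHER_MAX_LEN - overhead;

	printf("MTU = %u\n", mtu); /* prints 1492, not the expected 1500 */
	return 0;
}

This is why the workaround calls rte_eth_dev_set_mtu() after the configure
step: setting the MTU directly lets the driver add its own overhead on top.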
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 4/5] net/iavf: fix protocol size for virtchnl copy
@ 2020-11-16 16:23 3% ` Ferruh Yigit
2020-11-22 13:28 0% ` Jack Min
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-11-16 16:23 UTC (permalink / raw)
To: Xiaoyu Min, Jingjing Wu, Beilei Xing
Cc: dev, Xiaoyu Min, Thomas Monjalon, Andrew Rybchenko, Ori Kam, Dekel Peled
On 11/16/2020 7:55 AM, Xiaoyu Min wrote:
> From: Xiaoyu Min <jackmin@nvidia.com>
>
> The rte_flow_item_vlan items are refined.
> The structs no longer exactly represent the packet bits captured on the
> wire, so only the real header should be copied instead of the whole struct.
>
> Replace the rte_flow_item_* with the existing corresponding rte_*_hdr.
>
> Fixes: 09315fc83861 ("ethdev: add VLAN attributes to ethernet and VLAN items")
>
> Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
> ---
> drivers/net/iavf/iavf_fdir.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
> index d683a468c1..7054bde0b9 100644
> --- a/drivers/net/iavf/iavf_fdir.c
> +++ b/drivers/net/iavf/iavf_fdir.c
> @@ -541,7 +541,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
> VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ETH, ETHERTYPE);
>
> rte_memcpy(hdr->buffer,
> - eth_spec, sizeof(*eth_spec));
> + eth_spec, sizeof(struct rte_ether_hdr));
This requires that 'struct rte_flow_item_eth' have 'struct rte_ether_hdr' as
its first element, and I suspect this usage exists in a few more locations, but
I wonder if this assumption is real and documented somewhere?
I am not just talking about 'struct rte_flow_item_eth', but about all of the
'rte_flow_item_*' structs...
By the way, while checking 'struct rte_flow_item_eth', pahole shows it uses
20 bytes, and I suspect this is not the intention behind the reserved field:
struct rte_flow_item_eth {
struct rte_ether_addr dst; /* 0 6 */
struct rte_ether_addr src; /* 6 6 */
uint16_t type; /* 12 2 */
/* Bitfield combined with previous fields */
uint32_t has_vlan:1; /* 12:15 4 */
/* XXX 31 bits hole, try to pack */
uint32_t reserved:31; /* 16: 1 4 */
/* size: 20, cachelines: 1, members: 5 */
/* bit holes: 1, sum bit holes: 31 bits */
/* bit_padding: 1 bits */
/* last cacheline: 20 bytes */
};
'has_vlan' is combined with the previous fields to fill 32 bits together, so
the 'reserved' field occupies a new 32 bits all by itself.
What about changing the struct as follows, while we can still change the ABI:
struct rte_flow_item_eth {
struct rte_ether_addr dst; /* 0 6 */
struct rte_ether_addr src; /* 6 6 */
uint16_t type; /* 12 2 */
uint16_t has_vlan:1; /* 14:15 2 */
uint16_t reserved:15; /* 14: 0 2 */
/* size: 16, cachelines: 1, members: 5 */
/* last cacheline: 16 bytes */
};
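To make the layout assumption concrete, below is a minimal, compilable
sketch; the struct definitions are simplified stand-ins for the real DPDK
ones, kept here only to show the two memcpy() sizes:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the DPDK definitions (illustration only) */
struct rte_ether_addr {
	uint8_t addr_bytes[6];
};

struct rte_ether_hdr {
	struct rte_ether_addr d_addr;
	struct rte_ether_addr s_addr;
	uint16_t ether_type;
};

struct rte_flow_item_eth {
	struct rte_ether_addr dst;
	struct rte_ether_addr src;
	uint16_t type;
	uint32_t has_vlan:1;
	uint32_t reserved:31;
};

int main(void)
{
	struct rte_flow_item_eth eth_spec = { .type = 0x0008 };
	uint8_t hdr_buffer[64] = { 0 };

	/* Copying the whole item drags has_vlan/reserved (and padding)
	 * into a buffer meant to hold only protocol header bytes: */
	memcpy(hdr_buffer, &eth_spec, sizeof(eth_spec)); /* 20 bytes with gcc */

	/* Copying only the wire header is correct, but silently relies on
	 * the item starting with the complete protocol header: */
	memcpy(hdr_buffer, &eth_spec, sizeof(struct rte_ether_hdr)); /* 14 bytes */

	printf("item: %zu bytes, header: %zu bytes\n",
	       sizeof(eth_spec), sizeof(struct rte_ether_hdr));
	return 0;
}

If the proposed 'hdr' member lands, the copy can become sizeof(eth_spec.hdr),
making the assumption explicit in the type.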
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH] devtools: fix x86-default env when installing
@ 2020-11-12 13:38 4% David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-11-12 13:38 UTC (permalink / raw)
To: dev; +Cc: thomas, stable
While testing Thomas' patch on this script's verbosity, I noticed that we
load the x86-default environment after installing this target.
I did not see any problem with it, yet we should load the corresponding
environment before installing a target.
Fixes: bd253daa7717 ("devtools: fix test of ninja install")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
devtools/test-meson-builds.sh | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 469251b6ef..7b0d05ac3f 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -253,17 +253,15 @@ done
# Test installation of the x86-default target, to be used for checking
# the sample apps build using the pkg-config file for cflags and libs
+load_env cc
build_path=$(readlink -f $builds_dir/build-x86-default)
export DESTDIR=$build_path/install
# No need to reinstall if ABI checks are enabled
if [ -z "$DPDK_ABI_REF_VERSION" ]; then
install_target $build_path $DESTDIR
fi
-
-load_env cc
pc_file=$(find $DESTDIR -name libdpdk.pc)
export PKG_CONFIG_PATH=$(dirname $pc_file):$PKG_CONFIG_PATH
-
# if pkg-config defines the necessary flags, test building some examples
if pkg-config --define-prefix libdpdk >/dev/null 2>&1; then
export PKGCONF="pkg-config --define-prefix"
--
2.23.0
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v10 4/7] doc: update documentation to reflect new options
@ 2020-11-10 22:55 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-11-10 22:55 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Replace old option syntax -w with -a and update any wording
around blacklisting.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/cryptodevs/dpaa2_sec.rst | 6 ++--
doc/guides/cryptodevs/dpaa_sec.rst | 6 ++--
doc/guides/cryptodevs/qat.rst | 12 ++++----
doc/guides/eventdevs/octeontx2.rst | 20 ++++++-------
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/eal_args.include.rst | 14 +++++-----
doc/guides/linux_gsg/linux_drivers.rst | 4 +--
doc/guides/mempool/octeontx2.rst | 4 +--
doc/guides/nics/bnxt.rst | 18 ++++++------
doc/guides/nics/cxgbe.rst | 12 ++++----
doc/guides/nics/dpaa.rst | 6 ++--
doc/guides/nics/dpaa2.rst | 6 ++--
doc/guides/nics/enic.rst | 6 ++--
doc/guides/nics/fail_safe.rst | 20 ++++++-------
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/i40e.rst | 16 +++++------
doc/guides/nics/ice.rst | 28 +++++++++++++------
doc/guides/nics/ixgbe.rst | 4 +--
doc/guides/nics/mlx4.rst | 18 ++++++------
doc/guides/nics/mlx5.rst | 14 +++++-----
doc/guides/nics/nfb.rst | 2 +-
doc/guides/nics/octeontx2.rst | 22 +++++++--------
doc/guides/nics/sfc_efx.rst | 2 +-
doc/guides/nics/tap.rst | 2 +-
doc/guides/nics/thunderx.rst | 4 +--
.../prog_guide/env_abstraction_layer.rst | 8 +++---
doc/guides/prog_guide/multi_proc_support.rst | 4 +--
doc/guides/prog_guide/poll_mode_drv.rst | 6 ++--
.../prog_guide/switch_representation.rst | 6 ++--
doc/guides/rel_notes/release_20_11.rst | 5 ++++
doc/guides/sample_app_ug/bbdev_app.rst | 14 +++++-----
.../sample_app_ug/eventdev_pipeline.rst | 4 +--
doc/guides/sample_app_ug/ipsec_secgw.rst | 12 ++++----
doc/guides/sample_app_ug/l3_forward.rst | 8 ++++--
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
.../sample_app_ug/l3_forward_power_man.rst | 3 +-
doc/guides/sample_app_ug/vdpa.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 6 ++--
doc/guides/tools/flow-perf.rst | 2 +-
doc/guides/tools/testregex.rst | 2 +-
41 files changed, 178 insertions(+), 158 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
index 080768a2e766..83565d71752d 100644
--- a/doc/guides/cryptodevs/dpaa2_sec.rst
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -134,10 +134,10 @@ Supported DPAA2 SoCs
* LS2088A/LS2048A
* LS1088A/LS1048A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 SEC device, following commands can be used.
+The DPAA2 SEC device can be blocked with the following:
.. code-block:: console
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index da14a68d9cff..bac82421bca2 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -82,10 +82,10 @@ Supported DPAA SoCs
* LS1046A/LS1026A
* LS1043A/LS1023A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, following commands can be used.
.. code-block:: console
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index 566423948f79..cf16f0350303 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -127,7 +127,7 @@ Limitations
optimisations in the GEN3 device. And if a GCM session is initialised on a
GEN3 device, then attached to an op sent to a GEN1/GEN2 device, it will not be
enqueued to the device and will be marked as failed. The simplest way to
- mitigate this is to use the bdf whitelist to avoid mixing devices of different
+ mitigate this is to use the PCI allowlist to avoid mixing devices of different
generations in the same process if planning to use for GCM.
* The mixed algo feature on GEN2 is not supported by all kernel drivers. Check
the notes under the Available Kernel Drivers table below for specific details.
@@ -237,7 +237,7 @@ adjusted to the number of VFs which the QAT common code will need to handle.
QAT VF may expose two crypto devices, sym and asym, it may happen that the
number of devices will be bigger than MAX_DEVS and the process will show an error
during PMD initialisation. To avoid this problem RTE_CRYPTO_MAX_DEVS may be
- increased or -w, pci-whitelist domain:bus:devid:func option may be used.
+ increased or -a, allow domain:bus:devid:func option may be used.
QAT compression PMD needs intermediate buffers to support Deflate compression
@@ -275,7 +275,7 @@ return 0 (thereby avoiding an MMIO) if the device is congested and number of pac
possible to enqueue is smaller.
To use this feature the user must set the parameter on process start as a device additional parameter::
- -w 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
+ -a 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
All parameters can be used with the same device regardless of order. Parameters are separated
by comma. When the same parameter is used more than once first occurrence of the parameter
@@ -638,19 +638,19 @@ Testing
QAT SYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_autotest
QAT ASYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_asym_autotest
QAT compression PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>compressdev_autotest
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 242d283965f9..485a375c4f2c 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -55,7 +55,7 @@ Runtime Config Options
upper limit for in-flight events.
For example::
- -w 0002:0e:00.0,xae_cnt=16384
+ -a 0002:0e:00.0,xae_cnt=16384
- ``Force legacy mode``
@@ -63,7 +63,7 @@ Runtime Config Options
single workslot mode in SSO and disable the default dual workslot mode.
For example::
- -w 0002:0e:00.0,single_ws=1
+ -a 0002:0e:00.0,single_ws=1
- ``Event Group QoS support``
@@ -78,7 +78,7 @@ Runtime Config Options
default.
For example::
- -w 0002:0e:00.0,qos=[1-50-50-50]
+ -a 0002:0e:00.0,qos=[1-50-50-50]
- ``Selftest``
@@ -87,7 +87,7 @@ Runtime Config Options
The tests are run once the vdev creation is successfully complete.
For example::
- -w 0002:0e:00.0,selftest=1
+ -a 0002:0e:00.0,selftest=1
- ``TIM disable NPA``
@@ -96,7 +96,7 @@ Runtime Config Options
parameter disables NPA and uses software mempool to manage chunks
For example::
- -w 0002:0e:00.0,tim_disable_npa=1
+ -a 0002:0e:00.0,tim_disable_npa=1
- ``TIM modify chunk slots``
@@ -107,7 +107,7 @@ Runtime Config Options
to SSO. The default value is 255 and the max value is 4095.
For example::
- -w 0002:0e:00.0,tim_chnk_slots=1023
+ -a 0002:0e:00.0,tim_chnk_slots=1023
- ``TIM enable arm/cancel statistics``
@@ -115,7 +115,7 @@ Runtime Config Options
event timer adapter.
For example::
- -w 0002:0e:00.0,tim_stats_ena=1
+ -a 0002:0e:00.0,tim_stats_ena=1
- ``TIM limit max rings reserved``
@@ -125,7 +125,7 @@ Runtime Config Options
rings.
For example::
- -w 0002:0e:00.0,tim_rings_lmt=5
+ -a 0002:0e:00.0,tim_rings_lmt=5
- ``TIM ring control internal parameters``
@@ -135,7 +135,7 @@ Runtime Config Options
default values.
For Example::
- -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
- ``Lock NPA contexts in NDC``
@@ -145,7 +145,7 @@ Runtime Config Options
For example::
- -w 0002:0e:00.0,npa_lock_mask=0xf
+ -a 0002:0e:00.0,npa_lock_mask=0xf
Debugging Options
-----------------
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 2a68f5fc3820..4fba671e4f5b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -67,7 +67,7 @@ DPDK application. Some of the EAL options for FreeBSD are as follows:
is a list of cores to use instead of a core mask.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 542246df686a..043a1dcee109 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -53,7 +53,7 @@ The EAL options are as follows:
Number of memory channels per processor socket.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 01afa1b42f94..dbd48ab4fafa 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -44,20 +44,20 @@ Lcore-related options
Device-related options
~~~~~~~~~~~~~~~~~~~~~~
-* ``-b, --pci-blacklist <[domain:]bus:devid.func>``
+* ``-b, --block <[domain:]bus:devid.func>``
- Blacklist a PCI device to prevent EAL from using it. Multiple -b options are
- allowed.
+ Skip probing a PCI device to prevent EAL from using it.
+ Multiple -b options are allowed.
.. Note::
- PCI blacklist cannot be used with ``-w`` option.
+ PCI skip probe cannot be used with the allow ``-a`` option.
-* ``-w, --pci-whitelist <[domain:]bus:devid.func>``
+* ``-a, --allow <[domain:]bus:devid.func>``
- Add a PCI device in white list.
+ Add a PCI device to the list of probed devices.
.. Note::
- PCI whitelist cannot be used with ``-b`` option.
+ The PCI allow list cannot be used with the skip probe ``-b`` option.
* ``--vdev <device arguments>``
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 080b44955a11..ef8798569a80 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -93,11 +93,11 @@ parameter ``--vfio-vf-token``.
3. echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
4. Start the PF:
- <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -w 86:00.0 \
+ <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -a 86:00.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=pf -- -i
5. Start the VF:
- <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -w 86:02.0 \
+ <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -a 86:02.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=vf0 -- -i
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 53f09a52dbb5..1272c1e72b7b 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -42,7 +42,7 @@ Runtime Config Options
for the application.
For example::
- -w 0002:02:00.0,max_pools=512
+ -a 0002:02:00.0,max_pools=512
With the above configuration, the driver will set up only 512 mempools for
the given application to save HW resources.
@@ -61,7 +61,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index ab093c3f4df6..d9a7d8793092 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -258,8 +258,8 @@ The BNXT PMD supports hardware-based packet filtering:
Unicast MAC Filter
^^^^^^^^^^^^^^^^^^
-The application adds (or removes) MAC addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) MAC addresses to enable (or disable)
+filtering on MAC address used to accept packets.
.. code-block:: console
@@ -269,8 +269,8 @@ whitelist filtering to accept packets.
Multicast MAC Filter
^^^^^^^^^^^^^^^^^^^^
-Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) Multicast addresses that enable (or disable)
+filtering on multicast MAC address used to accept packets.
.. code-block:: console
@@ -278,7 +278,7 @@ whitelist filtering to accept packets.
testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+allowlist filtering to accept packets.
Note that the BNXT PMD supports up to 16 MC MAC filters. if the user adds more
than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
@@ -683,7 +683,7 @@ The feature uses a newly implemented control-plane firmware interface which
optimizes flow insertions and deletions.
This is a tech preview feature, and is disabled by default. It can be enabled
-using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
+using bnxt devargs. For ex: "-a 0000:0d:00.0,host-based-truflow=1”.
Notes
-----
@@ -745,7 +745,7 @@ when the PMD is initialized on a PF or trusted-VF. The user can specify the list
of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Note that currently hot-plugging of representor ports is not supported so all
the required representors must be specified on the creation of the PF or the
@@ -770,12 +770,12 @@ same host domain, additional dev args have been added to the PMD.
The sample command line with the new ``devargs`` looks like this::
- -w 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
+ -a 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
.. code-block:: console
- testpmd -l1-4 -n2 -w 0008:01:00.0,host-based-truflow=1,\
+ testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
representor=[0], rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 3fa77d7458c0..f01cd65603f6 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -40,8 +40,8 @@ expose a single PCI bus address, thus, librte_net_cxgbe registers
itself as a PCI driver that allocates one Ethernet device per detected
port.
-For this reason, one cannot whitelist/blacklist a single port without
-whitelisting/blacklisting the other ports on the same device.
+For this reason, one cannot allow/block a single port without
+allowing/blocking the other ports on the same device.
.. _t5-nics:
@@ -96,7 +96,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- dpdk-testpmd -w 02:00.4,keep_ovlan=1 -- -i
+ dpdk-testpmd -a 02:00.4,keep_ovlan=1 -- -i
Common Runtime Options
~~~~~~~~~~~~~~~~~~~~~~
@@ -301,7 +301,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88 -- -i
- ``filtermask`` (default **0**)
@@ -328,7 +328,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
.. _driver-compilation:
@@ -760,7 +760,7 @@ devices managed by librte_net_cxgbe in FreeBSD operating system.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0000:02:00.4 -- -i
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:02:00.4 -- -i
Example output:
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index ae1642b15ec3..917482dbe2a5 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -163,10 +163,10 @@ Manager.
this pool.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index c9deb53349ab..f98c31e4695e 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -503,10 +503,10 @@ which are lower than logging ``level``.
Using ``pmd.net.dpaa2`` as log matching criteria, all PMD logs can be enabled
which are lower than logging ``level``.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 device, following commands can be used.
+For blocking a DPAA2 device, following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index c62448768376..163ae3f47b11 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -305,7 +305,7 @@ enables overlay offload, it prints the following message on the console.
By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::
- -w 12:00.0,disable-overlay=1
+ -a 12:00.0,disable-overlay=1
By default, the NIC uses 4789 as the VXLAN port. The user may change
it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
@@ -371,7 +371,7 @@ vectorized handler, take the following steps.
PMD consider the vectorized handler when selecting the receive handler.
For example::
- -w 12:00.0,enable-avx2-rx=1
+ -a 12:00.0,enable-avx2-rx=1
As the current implementation is intended for field trials, by default, the
vectorized handler is not considered (``enable-avx2-rx=0``).
@@ -420,7 +420,7 @@ DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
- -w 12:00.0,ig-vlan-rewrite=untag
+ -a 12:00.0,ig-vlan-rewrite=untag
- **SR-IOV**
diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst
index 27ff306b1a9b..ae9f08ec8d1d 100644
--- a/doc/guides/nics/fail_safe.rst
+++ b/doc/guides/nics/fail_safe.rst
@@ -48,7 +48,7 @@ Fail-safe command line parameters
This parameter allows the user to define a sub-device. The ``<iface>`` part of
this parameter must be a valid device definition. It follows the same format
- provided to any ``-w`` or ``--vdev`` options.
+ provided to any ``-a`` or ``--vdev`` options.
Enclosing the device definition within parentheses here allows using
additional sub-device parameters if need be. They will be passed on to the
@@ -56,11 +56,11 @@ Fail-safe command line parameters
.. note::
- In case where the sub-device is also used as a whitelist device, using ``-w``
+ In case where the sub-device is also used as an allowed device, using ``-a``
on the EAL command line, the fail-safe PMD will use the device with the
options provided to the EAL instead of its own parameters.
- When trying to use a PCI device automatically probed by the blacklist mode,
+ When trying to use a PCI device automatically probed by the command line,
the name for the fail-safe sub-device must be the full PCI id:
Domain:Bus:Device.Function, *i.e.* ``00:00:00.0`` instead of ``00:00.0``,
as the second form is historically accepted by the DPDK.
@@ -111,8 +111,8 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
#. To build a PMD and configure DPDK, refer to the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.
-#. Start testpmd. The sub-device ``84:00.0`` should be blacklisted from normal EAL
- operations to avoid probing it twice, as the PCI bus is in blacklist mode.
+#. Start testpmd. The sub-device ``84:00.0`` should be blocked from normal EAL
+ operations to avoid probing it twice, as the PCI bus is in blocklist mode.
.. code-block:: console
@@ -120,25 +120,25 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
--vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
-b 84:00.0 -b 00:04.0 -- -i
- If the sub-device ``84:00.0`` is not blacklisted, it will be probed by the
+ If the sub-device ``84:00.0`` is not blocked, it will be probed by the
EAL first. When the fail-safe then tries to initialize it the probe operation
fails.
- Note that PCI blacklist mode is the default PCI operating mode.
+ Note that PCI blocklist mode is the default PCI operating mode.
-#. Alternatively, it can be used alongside any other device in whitelist mode.
+#. Alternatively, it can be used alongside any other device in allow mode.
.. code-block:: console
./<build_dir>/app/dpdk-testpmd -c 0xff -n 4 \
--vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
- -w 81:00.0 -- -i
+ -a 81:00.0 -- -i
#. Start testpmd using a flexible device definition
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -c 0xff -n 4 -w ff:ff.f \
+ ./<build_dir>/app/dpdk-testpmd -c 0xff -n 4 -a ff:ff.f \
--vdev='net_failsafe0,exec(echo 84:00.0)' -- -i
#. Start testpmd, automatically probing the device 84:00.0 and using it with
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index a4b288abcf5f..43f74e02abf3 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -261,7 +261,7 @@ Supports enabling/disabling receiving multicast frames.
Unicast MAC filter
------------------
-Supports adding MAC addresses to enable whitelist filtering to accept packets.
+Supports adding MAC addresses to enable filtering of incoming packets.
* **[implements] eth_dev_ops**: ``mac_addr_set``, ``mac_addr_add``, ``mac_addr_remove``.
* **[implements] rte_eth_dev_data**: ``mac_addrs``.
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index 828a25988e34..ab0a6ee36e51 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -172,7 +172,7 @@ Runtime Config Options
The number of reserved queue per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
- VF can be configured with EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
+ VF can be configured with EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If VF request more than
reserved queues per VF, PF will able to allocate max to 16 queues after a VF
@@ -185,7 +185,7 @@ Runtime Config Options
Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
parameter ``support-multi-driver`` is introduced, for example::
- -w 84:00.0,support-multi-driver=1
+ -a 84:00.0,support-multi-driver=1
With the above configuration, DPDK PMD will not change global registers, and
will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
@@ -200,7 +200,7 @@ Runtime Config Options
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
@@ -212,7 +212,7 @@ Runtime Config Options
since it can get better perf in some real work loading cases. So ``devargs`` param
``use-latest-supported-vec`` is introduced, for example::
- -w 84:00.0,use-latest-supported-vec=1
+ -a 84:00.0,use-latest-supported-vec=1
- ``Enable validation for VF message`` (default ``not enabled``)
@@ -222,7 +222,7 @@ Runtime Config Options
Format -- "maximal-message@period-seconds:ignore-seconds"
For example::
- -w 84:00.0,vf_msg_cfg=80@120:180
+ -a 84:00.0,vf_msg_cfg=80@120:180
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -452,7 +452,7 @@ no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::
- -w 84:00.0,enable_floating_veb=1
+ -a 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.
@@ -460,7 +460,7 @@ VFs created by this PF device.
Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::
- -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
+ -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
@@ -796,7 +796,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
7. The command line of running l3fwd would be something like the following::
- ./dpdk-l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+ ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 11c7420ed502..f03103704014 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -30,7 +30,7 @@ Runtime Config Options
But if user intend to use the device without OS package, user can take ``devargs``
parameter ``safe-mode-support``, for example::
- -w 80:00.0,safe-mode-support=1
+ -a 80:00.0,safe-mode-support=1
Then the driver will be initialized successfully and the device will enter Safe Mode.
NOTE: In Safe mode, only very limited features are available, features like RSS,
@@ -41,7 +41,7 @@ Runtime Config Options
In pipeline mode, a flow can be set at one specific stage by setting parameter
``priority``. Currently, we support two stages: priority = 0 or !0. Flows with
priority 0 located at the first pipeline stage which typically be used as a firewall
- to drop the packet on a blacklist(we called it permission stage). At this stage,
+ to drop the packet on a blocklist (we call it the permission stage). At this stage,
flow rules are created for the device's exact match engine: switch. Flows with priority
!0 located at the second stage, typically packets are classified here and be steered to
specific queue or queue group (we called it distribution stage), At this stage, flow
@@ -53,7 +53,19 @@ Runtime Config Options
use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``,
for example::
- -w 80:00.0,pipeline-mode-support=1
+ -a 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+ This is a hint to the driver to select the data path that supports flow mark extraction
+ by default.
+ NOTE: This is an experimental devarg; it will be removed when any of the below
+ conditions is met.
+ 1) all data paths support flow mark (currently vPMD does not)
+ 2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.
+ Example::
+
+ -a 80:00.0,flow-mark-support=1
- ``Protocol extraction for per queue``
@@ -62,8 +74,8 @@ Runtime Config Options
The argument format is::
- -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
- -w 18:00.0,proto_xtr=<protocol>
+ -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+ -a 18:00.0,proto_xtr=<protocol>
Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
is used as a range separator and ``,`` is used as a single number separator.
@@ -74,14 +86,14 @@ Runtime Config Options
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+ dpdk-testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
VLAN extraction, other queues run with no protocol extraction.
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+ dpdk-testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
IPv6 extraction, other queues use the default VLAN extraction.
@@ -233,7 +245,7 @@ responses for the same from PF.
#. Bind the VF0, and run testpmd with 'cap=dcf' devarg::
- dpdk-testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i
+ dpdk-testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i
#. Monitor the VF2 interface network traffic::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 1f424b38ac3d..c801dbae8146 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,7 +89,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w af:10.0,pflink_fullchk=1 -- -i
+ testpmd -a af:10.0,pflink_fullchk=1 -- -i
- ``pflink_fullchk`` (default **0**)
@@ -277,7 +277,7 @@ option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index c408ab71385b..10660ce853b4 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -24,8 +24,8 @@ Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_net_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.
-For this reason, one cannot white/blacklist a single port without also
-white/blacklisting the others on the same device.
+For this reason, one cannot block (or allow) a single port without also
+blocking (or allowing) the others on the same device.
Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_net_mlx4 relies heavily on system calls for control
@@ -381,7 +381,7 @@ devices managed by librte_net_mlx4.
eth4
eth5
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses to be used with the allow argument::
{
for intf in eth2 eth3 eth4 eth5;
@@ -389,14 +389,14 @@ devices managed by librte_net_mlx4.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:83:00.0
- -w 0000:83:00.0
- -w 0000:84:00.0
- -w 0000:84:00.0
+ -a 0000:83:00.0
+ -a 0000:83:00.0
+ -a 0000:84:00.0
+ -a 0000:84:00.0
.. note::
@@ -409,7 +409,7 @@ devices managed by librte_net_mlx4.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 0000:83:00.0 -a 0000:84:00.0 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 59b2bf4036b9..e96aca21eb9a 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1524,7 +1524,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.
eth32
eth33
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses to be used with the allow list::
{
for intf in eth2 eth3 eth4 eth5;
@@ -1532,14 +1532,14 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:05:00.1
- -w 0000:06:00.0
- -w 0000:06:00.1
- -w 0000:05:00.0
+ -a 0000:05:00.1
+ -a 0000:06:00.0
+ -a 0000:06:00.1
+ -a 0000:05:00.0
#. Request huge pages::
@@ -1547,7 +1547,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/nfb.rst b/doc/guides/nics/nfb.rst
index ecea3ecff074..e987f331048c 100644
--- a/doc/guides/nics/nfb.rst
+++ b/doc/guides/nics/nfb.rst
@@ -63,7 +63,7 @@ products) and the device argument `timestamp=1` must be used.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -w b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
+ ./<build_dir>/app/dpdk-testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the MBUFs
containing received frames and timestamp is inserted into the `rte_mbuf` struct.
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 18566a2c6665..a4f224424ef5 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -63,7 +63,7 @@ for details.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
@@ -116,7 +116,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,reta_size=256
+ -a 0002:02:00.0,reta_size=256
With the above configuration, reta table of size 256 is populated.
@@ -127,7 +127,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_max_priority=10
+ -a 0002:02:00.0,flow_max_priority=10
With the above configuration, priority level was set to 10 (0-9). Max
priority level supported is 32.
@@ -139,7 +139,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_prealloc_size=4
+ -a 0002:02:00.0,flow_prealloc_size=4
With the above configuration, pre alloc size was set to 4. Max pre alloc
size supported is 32.
@@ -151,7 +151,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,max_sqb_count=64
+ -a 0002:02:00.0,max_sqb_count=64
With the above configuration, each send queue's decscriptor buffer count is
limited to a maximum of 64 buffers.
@@ -163,7 +163,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,switch_header="higig2"
+ -a 0002:02:00.0,switch_header="higig2"
With the above configuration, higig2 will be enabled on that port and the
traffic on this port should be higig2 traffic only. Supported switch header
@@ -185,7 +185,7 @@ Runtime Config Options
For example to select the legacy mode(RSS tag adder as XOR)::
- -w 0002:02:00.0,tag_as_xor=1
+ -a 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -194,7 +194,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,ipsec_in_max_spi=128
+ -a 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
@@ -205,7 +205,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_rx_ctx=1
+ -a 0002:02:00.0,lock_rx_ctx=1
- ``Lock Tx contexts in NDC cache``
@@ -213,7 +213,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_tx_ctx=1
+ -a 0002:02:00.0,lock_tx_ctx=1
.. note::
@@ -229,7 +229,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
.. _otx2_tmapi:
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index cc5b9f120c97..962e54389fbc 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -350,7 +350,7 @@ Per-Device Parameters
~~~~~~~~~~~~~~~~~~~~~
The following per-device parameters can be passed via EAL PCI device
-whitelist option like "-w 02:00.0,arg1=value1,...".
+allow option like "-a 02:00.0,arg1=value1,...".
Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
boolean parameters value.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 7e44f846206c..3ce696b605d1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -191,7 +191,7 @@ following::
.. Note:
- Change the ``-b`` options to blacklist all of your physical ports. The
+ Change the ``-b`` options to exclude all of your physical ports. The
following command line is all one line.
Also, ``-f themes/black-yellow.theme`` is optional if the default colors
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 6f9900883495..12d43ce93e28 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -157,7 +157,7 @@ This section provides instructions to configure SR-IOV with Linux OS.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0002:01:00.2 \
-- -i --no-flush-rx \
--port-topology=loop
@@ -377,7 +377,7 @@ This scheme is useful when application would like to insert vlan header without
Example:
.. code-block:: console
- -w 0002:01:00.2,skip_data_bytes=8
+ -a 0002:01:00.2,skip_data_bytes=8
Limitations
-----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index a470fd7f29bb..1f30e13b8bf3 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -407,12 +407,12 @@ device having emitted a Device Removal Event. In such case, calling
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such closing operation.
-Blacklisting
-~~~~~~~~~~~~
+Block list
+~~~~~~~~~~
-The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
+The EAL PCI device block list functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
-The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
+The ports to be blocked are identified using the PCIe* description (Domain:Bus:Device.Function).
Misc Functions
~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a84083b96c8a..57fd7425a15d 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -30,7 +30,7 @@ after a primary process has already configured the hugepage shared memory for th
Secondary processes should run alongside primary process with same DPDK version.
Secondary processes which requires access to physical devices in Primary process, must
- be passed with the same whitelist and blacklist options.
+ be passed with the same allow and block options.
To support these two process types, and other multi-process setups described later,
two additional command-line parameters are available to the EAL:
@@ -131,7 +131,7 @@ can use).
.. note::
Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
- Any network ports being used by one process should be blacklisted in every other process.
+ Any network ports being used by one process should be blocked by every other process.
Running Multiple Independent Groups of DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 86e0a141e6c7..239ec820eaf5 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
this argument allows user to specify which switch ports to enable port
representors for.::
- -w DBDF,representor=0
- -w DBDF,representor=[0,4,6,9]
- -w DBDF,representor=[0-31]
+ -a DBDF,representor=0
+ -a DBDF,representor=[0,4,6,9]
+ -a DBDF,representor=[0-31]
Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see support devargs.
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index cc1d0d7569cb..07ba12bea67e 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -59,9 +59,9 @@ which can be thought as a software "patch panel" front-end for applications.
::
- -w pci:dbdf,representor=0
- -w pci:dbdf,representor=[0-3]
- -w pci:dbdf,representor=[0,5-11]
+ -a pci:dbdf,representor=0
+ -a pci:dbdf,representor=[0-3]
+ -a pci:dbdf,representor=[0,5-11]
- As virtual devices, they may be more limited than their physical
counterparts, for instance by exposing only a subset of device
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6bbd6ee93922..5da3a9cd05c5 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -644,6 +644,11 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The definitions related to including and excluding devices
+ has been changed from blacklist/whitelist to block/allow list.
+ There are compatibility macros and command line mapping to accept
+ the old values but applications and scripts are strongly encouraged
+ to migrate to the new names.
ABI Changes
-----------
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 7c5a45b72afb..b2af9a0755d6 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -61,19 +61,19 @@ This means that HW baseband device/s must be bound to a DPDK driver or
a SW baseband device/s (virtual BBdev) must be created (using --vdev).
To run the application in linux environment with the turbo_sw baseband device
-using the whitelisted port running on 1 encoding lcore and 1 decoding lcore
+using the allow option for the PCI device, running on 1 encoding lcore and 1 decoding lcore,
issue the command:
.. code-block:: console
- $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -w <NIC0PCIADDR> \
+ $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -a <NIC0PCIADDR> \
-c 0x38 --socket-mem=2,2 --file-prefix=bbdev -- -e 0x10 -d 0x20
where, NIC0PCIADDR is the PCI address of the Rx port
This command creates one virtual bbdev devices ``baseband_turbo_sw`` where the
-device gets linked to a corresponding ethernet port as whitelisted by
-the parameter -w.
+device gets linked to a corresponding ethernet port as allowed by
+the parameter -a.
3 cores are allocated to the application, and assigned as:
- core 3 is the main and used to print the stats live on screen,
@@ -93,20 +93,20 @@ Using Packet Generator with baseband device sample application
To allow the bbdev sample app to do the loopback, an influx of traffic is required.
This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
it will print the transmitted along with the looped-back traffic on Rx ports.
-Executing the command below will generate traffic on the two whitelisted ethernet
+Executing the command below will generate traffic on the two allowed ethernet
ports.
.. code-block:: console
$ ./pktgen-3.4.0/app/x86_64-native-linux-gcc/pktgen -c 0x3 \
- --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+ --socket-mem=1,1 --file-prefix=pg -a <NIC1PCIADDR> -- -m 1.0 -P
where:
* ``-c COREMASK``: A hexadecimal bitmask of cores to run on
* ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
* ``--file-prefix``: Prefix for hugepage filenames
-* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-a <NIC1PCIADDR>``: Add a PCI device to the allow list. The argument format is <[domain:]bus:devid.func>.
* ``-m <string>``: Matrix for mapping ports to logical cores.
* ``-P``: PROMISCUOUS mode
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index b4fc587a09e2..41ee8b7ee3f4 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -46,8 +46,8 @@ these settings is shown below:
.. code-block:: console
- ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 /
- -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+ ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 \
+ -e4 -a FF00 -s4 -n0 -c32 -W1000 -D
The application has some sanity checking built-in, so if there is a function
(e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 1f37dccf8bb7..faf00c75d135 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -323,15 +323,15 @@ This means that if the application is using a single core and both hardware
and software crypto devices are detected, hardware devices will be used.
A way to achieve the case where you want to force the use of virtual crypto
-devices is to whitelist the Ethernet devices needed and therefore implicitly
-blacklisting all hardware crypto devices.
+devices is to only use the Ethernet devices needed (via the allow flag)
+and thereby implicitly block all hardware crypto devices.
For example, something like the following command line:
.. code-block:: console
./<build_dir>/examples/dpdk-ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
- -w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
+ -a 81:00.0 -a 81:00.1 -a 81:00.2 -a 81:00.3 \
--vdev "crypto_aesni_mb" --vdev "crypto_null" \
-- \
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
@@ -929,13 +929,13 @@ The user must setup the following environment variables:
* ``REMOTE_IFACE``: interface name for the test-port on the DUT.
-* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-w <pci-id>')
+* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-a <pci-id>')
Also the user can optionally setup:
* ``SGW_LCORE``: lcore to run ipsec-secgw on (default value is 0)
-* ``CRYPTO_DEV``: crypto device to be used ('-w <pci-id>'). If none specified
+* ``CRYPTO_DEV``: crypto device to be used ('-a <pci-id>'). If none specified
appropriate vdevs will be created by the script
Scripts can be used for multiple test scenarios. To check all available
@@ -1023,4 +1023,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
\ No newline at end of file
+list of available modes please refer to run_test.sh.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 7acbd7404e3b..e7875f8dcd7e 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -138,17 +138,19 @@ Following is the sample command:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x3 --eventq-sched=ordered
or
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> \
+ -- -p 0x03 --mode=eventdev --eventq-sched=ordered
In this command:
-* -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform.
+* -a option allows the event device supported by the platform.
+ The syntax used to indicate this device may vary based on the platform.
* The --mode option defines PMD to be used for packet I/O.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 4a96800ec648..eee5d8185061 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -18,7 +18,7 @@ The application loads two types of rules at initialization:
* Route information rules, which are used for L3 forwarding
-* Access Control List (ACL) rules that blacklist (or block) packets with a specific characteristic
+* Access Control List (ACL) rules that block packets with a specific characteristic
When packets are received from a port,
the application extracts the necessary information from the TCP/IP header of the received packet and
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index d7e1dc581328..831f2bf58f99 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -378,7 +378,8 @@ See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Program
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+ ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -a 0000:xx:00.0 -a 0000:xx:00.1 \
+ -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
Where,
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index a8bedbab5321..cb9c4f216986 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -52,7 +52,7 @@ Take IFCVF driver for example:
.. code-block:: console
./dpdk-vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
- -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+ -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
-- --interactive
.. note::
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 29340d94e801..73cabf0098d3 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -394,7 +394,7 @@ Call application for performance throughput test of single Aesni MB PMD
for cipher encryption aes-cbc and auth generation sha1-hmac,
one million operations, burst size 32, packet size 64::
- dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -a 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 12
@@ -404,7 +404,7 @@ Call application for performance latency test of two Aesni MB PMD executed
on two cores for cipher encryption aes-cbc, ten operations in silent mode::
dpdk-test-crypto-perf -l 4-7 --vdev crypto_aesni_mb1
- --vdev crypto_aesni_mb2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
+ --vdev crypto_aesni_mb2 -a 0000:00:00.0 -- --devtype crypto_aesni_mb
--cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
--cipher-op encrypt --optype cipher-only --silent
--ptest latency --total-ops 10
@@ -414,7 +414,7 @@ for cipher encryption aes-gcm and auth generation aes-gcm,ten operations
in silent mode, test vector provide in file "test_aes_gcm.data"
with packet verification::
- dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -a 0000:00:00.0 --
--devtype crypto_openssl --aead-algo aes-gcm --aead-key-sz 16
--aead-iv-sz 16 --aead-op encrypt --aead-aad-sz 16 --digest-sz 16
--optype aead --silent --ptest verify --total-ops 10
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 018358ac1719..634009cceea9 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -65,7 +65,7 @@ with a ``--`` separator:
.. code-block:: console
- sudo ./dpdk-test-flow_perf -n 4 -w 08:00.0 -- --ingress --ether --ipv4 --queue --rules-count=1000000
+ sudo ./dpdk-test-flow_perf -n 4 -a 08:00.0 -- --ingress --ether --ipv4 --queue --rules-count=1000000
The command line options are:
diff --git a/doc/guides/tools/testregex.rst b/doc/guides/tools/testregex.rst
index 4317aab533e2..112b2bb773e7 100644
--- a/doc/guides/tools/testregex.rst
+++ b/doc/guides/tools/testregex.rst
@@ -70,4 +70,4 @@ The data file, will be used as a source data for the RegEx to work on.
The tool has a number of command line options. Here is the sample command line::
- ./dpdk-test-regex -w 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
+ ./dpdk-test-regex -a 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
--
2.27.0
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
2020-11-08 14:19 0% ` Ananyev, Konstantin
@ 2020-11-10 16:26 0% ` Olivier Matz
0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2020-11-10 16:26 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: Andrew Rybchenko, Morten Brørup, dev
On Sun, Nov 08, 2020 at 02:19:55PM +0000, Ananyev, Konstantin wrote:
>
>
> > >>
> > >>>>>>>>>>>>>>>>>> Hi Olivier,
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> m->nb_seg must be reset on mbuf free
> > >>>> whatever
> > >>>>>> the
> > >>>>>>>> value
> > >>>>>>>>>> of m->next,
> > >>>>>>>>>>>>>>>>>>> because it can happen that m->nb_seg is
> > >> !=
> > >>>> 1.
> > >>>>>> For
> > >>>>>>>>>> instance in this
> > >>>>>>>>>>>>>>>>>>> case:
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> m1 = rte_pktmbuf_alloc(mp);
> > >>>>>>>>>>>>>>>>>>> rte_pktmbuf_append(m1, 500);
> > >>>>>>>>>>>>>>>>>>> m2 = rte_pktmbuf_alloc(mp);
> > >>>>>>>>>>>>>>>>>>> rte_pktmbuf_append(m2, 500);
> > >>>>>>>>>>>>>>>>>>> rte_pktmbuf_chain(m1, m2);
> > >>>>>>>>>>>>>>>>>>> m0 = rte_pktmbuf_alloc(mp);
> > >>>>>>>>>>>>>>>>>>> rte_pktmbuf_append(m0, 500);
> > >>>>>>>>>>>>>>>>>>> rte_pktmbuf_chain(m0, m1);
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> As rte_pktmbuf_chain() does not reset
> > >>>> nb_seg in
> > >>>>>> the
> > >>>>>>>>>> initial m1
> > >>>>>>>>>>>>>>>>>>> segment (this is not required), after
> > >> this
> > >>>> code
> > >>>>>> the
> > >>>>>>>>>> mbuf chain
> > >>>>>>>>>>>>>>>>>>> have 3 segments:
> > >>>>>>>>>>>>>>>>>>> - m0: next=m1, nb_seg=3
> > >>>>>>>>>>>>>>>>>>> - m1: next=m2, nb_seg=2
> > >>>>>>>>>>>>>>>>>>> - m2: next=NULL, nb_seg=1
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> Freeing this mbuf chain will not
> > >> restore
> > >>>>>> nb_seg=1
> > >>>>>>>> in
> > >>>>>>>>>> the second
> > >>>>>>>>>>>>>>>>>>> segment.
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> Hmm, not sure why is that?
> > >>>>>>>>>>>>>>>>>> You are talking about freeing m1, right?
> > >>>>>>>>>>>>>>>>>> rte_pktmbuf_prefree_seg(struct rte_mbuf
> > >> *m)
> > >>>>>>>>>>>>>>>>>> {
> > >>>>>>>>>>>>>>>>>> ...
> > >>>>>>>>>>>>>>>>>> if (m->next != NULL) {
> > >>>>>>>>>>>>>>>>>> m->next = NULL;
> > >>>>>>>>>>>>>>>>>> m->nb_segs = 1;
> > >>>>>>>>>>>>>>>>>> }
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> m1->next != NULL, so it will enter the
> > >> if()
> > >>>>>> block,
> > >>>>>>>>>>>>>>>>>> and will reset both next and nb_segs.
> > >>>>>>>>>>>>>>>>>> What I am missing here?
> > >>>>>>>>>>>>>>>>>> Thinking in more generic way, that
> > >> change:
> > >>>>>>>>>>>>>>>>>> - if (m->next != NULL) {
> > >>>>>>>>>>>>>>>>>> - m->next = NULL;
> > >>>>>>>>>>>>>>>>>> - m->nb_segs = 1;
> > >>>>>>>>>>>>>>>>>> - }
> > >>>>>>>>>>>>>>>>>> + m->next = NULL;
> > >>>>>>>>>>>>>>>>>> + m->nb_segs = 1;
> > >>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>> Ah, sorry. I oversimplified the example
> > >> and
> > >>>> now
> > >>>>>> it
> > >>>>>>>> does
> > >>>>>>>>>> not
> > >>>>>>>>>>>>>>>>> show the issue...
> > >>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>> The full example also adds a split() to
> > >> break
> > >>>> the
> > >>>>>>>> mbuf
> > >>>>>>>>>> chain
> > >>>>>>>>>>>>>>>>> between m1 and m2. The kind of thing that
> > >>>> would
> > >>>>>> be
> > >>>>>>>> done
> > >>>>>>>>>> for
> > >>>>>>>>>>>>>>>>> software TCP segmentation.
> > >>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> If so, may be the right solution is to care
> > >>>> about
> > >>>>>>>> nb_segs
> > >>>>>>>>>>>>>>>> when next is set to NULL on split? Any
> > >> place
> > >>>> when
> > >>>>>> next
> > >>>>>>>> is
> > >>>>>>>>>> set
> > >>>>>>>>>>>>>>>> to NULL. Just to keep the optimization in a
> > >>>> more
> > >>>>>>>> generic
> > >>>>>>>>>> place.
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> The problem with that approach is that there
> > >> are
> > >>>>>> already
> > >>>>>>>>>> several
> > >>>>>>>>>>>>>>> existing split() or trim() implementations in
> > >>>>>> different
> > >>>>>>>> DPDK-
> > >>>>>>>>>> based
> > >>>>>>>>>>>>>>> applications. For instance, we have some in
> > >>>>>> 6WINDGate. If
> > >>>>>>>> we
> > >>>>>>>>>> force
> > >>>>>>>>>>>>>>> applications to set nb_seg to 1 when
> > >> resetting
> > >>>> next,
> > >>>>>> it
> > >>>>>>>> has
> > >>>>>>>>>> to be
> > >>>>>>>>>>>>>>> documented because it is not straightforward.
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> I think it is better to go that way.
> > >>>>>>>>>>>>>> From my perspective it seems natural to reset
> > >>>> nb_seg at
> > >>>>>>>> same
> > >>>>>>>>>> time
> > >>>>>>>>>>>>>> we reset next, otherwise inconsistency will
> > >> occur.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> While it is not explicitly stated for nb_segs, to
> > >> me
> > >>>> it
> > >>>>>> was
> > >>>>>>>> clear
> > >>>>>>>>>> that
> > >>>>>>>>>>>>> nb_segs is only valid in the first segment, like
> > >> for
> > >>>> many
> > >>>>>>>> fields
> > >>>>>>>>>> (port,
> > >>>>>>>>>>>>> ol_flags, vlan, rss, ...).
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> If we say that nb_segs has to be valid in any
> > >>>> segments,
> > >>>>>> it
> > >>>>>>>> means
> > >>>>>>>>>> that
> > >>>>>>>>>>>>> chain() or split() will have to update it in all
> > >>>>>> segments,
> > >>>>>>>> which
> > >>>>>>>>>> is not
> > >>>>>>>>>>>>> efficient.
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> Why in all?
> > >>>>>>>>>>>> We can state that nb_segs on non-first segment
> > >> should
> > >>>>>> always
> > >>>>>>>> equal
> > >>>>>>>>>> 1.
> > >>>>>>>>>>>> As I understand in that case, both split() and
> > >> chain()
> > >>>> have
> > >>>>>> to
> > >>>>>>>>>> update nb_segs
> > >>>>>>>>>>>> only for head mbufs, rest ones will remain
> > >> untouched.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Well, anyway, I think it's strange to have a
> > >> constraint
> > >>>> on m-
> > >>>>>>>>> nb_segs
> > >>>>>>>>>> for
> > >>>>>>>>>>> non-first segment. We don't have that kind of
> > >> constraints
> > >>>> for
> > >>>>>>>> other
> > >>>>>>>>>> fields.
> > >>>>>>>>>>
> > >>>>>>>>>> True, we don't. But this is one of the fields we
> > >> consider
> > >>>>>> critical
> > >>>>>>>>>> for proper work of mbuf alloc/free mechanism.
> > >>>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> I am not sure that requiring m->nb_segs == 1 on non-first
> > >>>>>> segments
> > >>>>>>>> will provide any benefits.
> > >>>>>>>>
> > >>>>>>>> It would make this patch unneeded.
> > >>>>>>>> So, for direct, non-segmented mbufs pktmbuf_free() will
> > >> remain
> > >>>>>> write-
> > >>>>>>>> free.
> > >>>>>>>
> > >>>>>>> I see. Then I agree with Konstantin that alternative
> > >> solutions
> > >>>> should
> > >>>>>> be considered.
> > >>>>>>>
> > >>>>>>> The benefit regarding free()'ing non-segmented mbufs - which
> > >> is a
> > >>>>>> very common operation - certainly outweighs the cost of
> > >> requiring
> > >>>>>> split()/chain() operations to set the new head mbuf's nb_segs =
> > >> 1.
> > >>>>>>>
> > >>>>>>> Nonetheless, the bug needs to be fixed somehow.
> > >>>>>>>
> > >>>>>>> If we can't come up with a better solution that doesn't break
> > >> the
> > >>>>>> ABI, we are forced to accept the patch.
> > >>>>>>>
> > >>>>>>> Unless the techboard accepts to break the ABI in order to
> > >> avoid
> > >>>> the
> > >>>>>> performance cost of this patch.
> > >>>>>>
> > >>>>>> Did someone notice a performance drop with this patch?
> > >>>>>> On my side, I don't see any regression on a L3 use case.
> > >>>>>
> > >>>>> I am afraid that the DPDK performance regression tests are based
> > >> on
> > >>>> TX immediately following RX, so cache misses in TX may go by
> > >> unnoticed
> > >>>> because RX warmed up the cache for TX already. And similarly for RX
> > >>>> reusing mbufs that have been warmed up by the preceding free() at
> > >> TX.
> > >>>>>
> > >>>>> Please consider testing the performance difference with the mbuf
> > >>>> being completely cold at TX, and going completely cold again before
> > >>>> being reused for RX.
> > >>>>>
> > >>>>>>
> >>>>>> Let's summarize: splitting a mbuf chain and freeing it causes
> > >>>> subsequent
> > >>>>>> mbuf
> > >>>>>> allocation to return a mbuf which is not correctly initialized.
> > >>>> There
> > >>>>>> are 2
> > >>>>>> options to fix it:
> > >>>>>>
> > >>>>>> 1/ change the mbuf free function (this patch)
> > >>>>>>
> > >>>>>> - m->nb_segs would behave like many other field: valid in
> > >> the
> > >>>> first
> > >>>>>> segment, ignored in other segments
> > >>>>>> - may impact performance (suspected)
> > >>>>>>
> > >>>>>> 2/ change all places where a mbuf chain is split, or trimmed
> > >>>>>>
> > >>>>>> - m->nb_segs would have a specific behavior: count the
> > >> number of
> > >>>>>> segments in the first mbuf, should be 1 in the last
> > >> segment,
> > >>>>>> ignored in other ones.
> > >>>>>> - no code change in mbuf library, so no performance impact
> > >>>>>> - need to patch all places where we do a mbuf split or trim.
> > >>>> From
> > >>>>>> afar,
> > >>>>>> I see at least mbuf_cut_seg_ofs() in DPDK. Some external
> > >>>>>> applications
> > >>>>>> may have to be patched (for instance, I already found 3
> > >> places
> > >>>> in
> > >>>>>> 6WIND code base without a deep search).
> > >>>>>>
> >>>>>> In my opinion, 1/ is better, unless we notice a significant
> >>>>>> performance drop,
> > >>>>>> because the (implicit) behavior is unchanged.
> > >>>>>>
> > >>>>>> Whatever the solution, some documentation has to be added.
> > >>>>>>
> > >>>>>> Olivier
> > >>>>>>
> > >>>>>
> > >>>>> Unfortunately, I don't think that anything but the first option
> > >> will
> > >>>> go into 20.11 and stable releases of older versions, so I stand by
> > >> my
> > >>>> acknowledgment of the patch.
> > >>>>
> >>>> If we are afraid about 20.11 performance (it is legitimate, a few
> > >> days
> > >>>> before the release), we can target 21.02. After all, everybody
> > >> lives
> > >>>> with this bug since 2017, so there is no urgency. If accepted and
> > >> well
> > >>>> tested, it can be backported in stable branches.
> > >>>
> > >>> +1
> > >>>
> > >>> Good thinking, Olivier!
> > >>
> > >> Looking at the changes once again, it probably can be reworked a bit:
> > >>
> > >> - if (m->next != NULL) {
> > >> - m->next = NULL;
> > >> - m->nb_segs = 1;
> > >> - }
> > >>
> > >> + if (m->next != NULL)
> > >> + m->next = NULL;
> > >> + if (m->nb_segs != 1)
> > >> + m->nb_segs = 1;
> > >>
> > >> That way we add one more condition checking, but I suppose it
> > >> shouldn't be that perf critical.
> > >> That way for direct,non-segmented mbuf it still should be write-free.
> > >> Except cases as you described above: chain(), then split().
> > >>
> >> Of course we still need to do perf testing for that approach too.
> >> So if your preference is to postpone it till 21.02 - that's ok for me.
> > >> Konstantin
> > >
> > > With this suggestion, I cannot imagine any performance drop for direct, non-segmented mbufs: It now reads m->nb_segs, residing in the
> > mbuf's first cache line, but the function already reads m->refcnt in the first cache line; so no cache misses are introduced.
> >
> > +1
>
> I don't expect perf drop with that approach either.
> But some perf testing still needs to be done, just in case 😊
I also agree with your suggestion, Konstantin.
Let's postpone it until right after 20.11 so we have more time to test.
I'll send a v2.
Olivier
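To make the failure mode above concrete, here is a minimal C sketch of the
chain-then-split-then-free scenario, assuming a pre-created mempool. The
manual split in the middle and the function name show_nb_segs_leak() are
hypothetical, standing in for application split()/trim() helpers; only the
rte_pktmbuf_*() calls are actual DPDK API.

#include <rte_mbuf.h>

/* Minimal sketch: build a 3-segment chain, split it by hand the way
 * application code (e.g. software TCP segmentation) would, then free. */
static void
show_nb_segs_leak(struct rte_mempool *mp)
{
	struct rte_mbuf *m0, *m1, *m2;

	m1 = rte_pktmbuf_alloc(mp);
	rte_pktmbuf_append(m1, 500);
	m2 = rte_pktmbuf_alloc(mp);
	rte_pktmbuf_append(m2, 500);
	rte_pktmbuf_chain(m1, m2);      /* m1: next=m2, nb_segs=2 */

	m0 = rte_pktmbuf_alloc(mp);
	rte_pktmbuf_append(m0, 500);
	rte_pktmbuf_chain(m0, m1);      /* m0: next=m1, nb_segs=3; m1 keeps nb_segs=2 */

	/* Hypothetical application split between m1 and m2: next is reset,
	 * but nb_segs of the now-last segment m1 is left at 2. */
	m1->next = NULL;
	m0->nb_segs = 2;
	m0->pkt_len -= m2->pkt_len;

	/* With the pre-patch prefree logic, m1 returns to the pool with
	 * nb_segs == 2 (its next is already NULL, so nothing is reset),
	 * and a later rte_pktmbuf_alloc() can hand out an mbuf that
	 * claims to be a 2-segment chain. */
	rte_pktmbuf_free(m0);
	rte_pktmbuf_free(m2);
}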
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 0/4] devtools: abi breakage checks
2020-11-10 13:54 4% ` Kinsella, Ray
@ 2020-11-10 13:57 4% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-11-10 13:57 UTC (permalink / raw)
To: Kinsella, Ray
Cc: Walsh, Conor, dpdk-dev, Luca Boccassi, Dodji Seketeli, Mcnamara, John
On Tue, Nov 10, 2020 at 2:54 PM Kinsella, Ray <mdr@ashroe.eu> wrote:
> > The Travis script flushes the ABI cache on a libabigail version change.
>
> Why would the libabigail version change in Travis - due to an OS update or the like?
Because in Travis we compiled our own version of libabigail, as the one
in Ubuntu 18.04 is buggy (we opened some bugs, never got any feedback).
I had left the automatic flush in place to test different versions.
--
David Marchand
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v7 0/4] devtools: abi breakage checks
2020-11-10 12:53 8% ` David Marchand
@ 2020-11-10 13:54 4% ` Kinsella, Ray
2020-11-10 13:57 4% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-11-10 13:54 UTC (permalink / raw)
To: David Marchand
Cc: Walsh, Conor, dpdk-dev, Luca Boccassi, Dodji Seketeli, Mcnamara, John
On 10/11/2020 12:53, David Marchand wrote:
> On Tue, Nov 3, 2020 at 11:07 AM Kinsella, Ray <mdr@ashroe.eu> wrote:
>> Came across an issue with this.
>>
>> Essentially what is happening is that an ABI dump file generated with a newer version of libabigail
>> is not guaranteed to be 100% compatible with older versions.
>>
>> That then adds a wrinkle: we may need to look at maintaining ABI dump archives per distro release,
>> or per libabigail version, depending on how you look at it.
>
> This is something I had encountered.
>
> The Travis script flushes the ABI cache on a libabigail version change.
Why would the libabigail version change in Travis - due to an OS update or the like?
> When using the test-meson-builds.sh integration, the gen-abi.sh
> devtools script can be called to regenerate the dump files from the
> existing local binaries.
>
>
>>
>> An alternative approach suggested by Dodji would be to just archive the binaries somewhere instead,
>> and regenerate the dumps at build time. That _may_ be feasible,
>> but you lose some of the benefit (build-time saving) compared to archiving the ABI dumps.
>>
>> The most sensible approach to archiving the binaries
>> is to use DPDK release OS packaging for this, installed to a filesystem sandbox.
>>
>> So the next steps are figuring out which is the better option:
>> maintaining multiple ABI dump archives, one per supported OS distro,
>> or looking at what needs to happen with DPDK OS packaging.
>>
>> So some work still to do here.
>
> I am still unconvinced about the approach, but I'll wait for your next proposal.
>
>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v7 0/4] devtools: abi breakage checks
2020-11-03 10:07 9% ` [dpdk-dev] [PATCH v7 0/4] devtools: abi breakage checks Kinsella, Ray
@ 2020-11-10 12:53 8% ` David Marchand
2020-11-10 13:54 4% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-11-10 12:53 UTC (permalink / raw)
To: Kinsella, Ray
Cc: Walsh, Conor, dpdk-dev, Luca Boccassi, Dodji Seketeli, Mcnamara, John
On Tue, Nov 3, 2020 at 11:07 AM Kinsella, Ray <mdr@ashroe.eu> wrote:
> Came across an issue with this.
>
> Essentially what is happening is that an ABI dump file generated with a newer version of libabigail
> is not guaranteed to be 100% compatible with older versions.
>
> That then adds a wrinkle: we may need to look at maintaining ABI dump archives per distro release,
> or per libabigail version, depending on how you look at it.
This is something I had encountered.
The Travis script flushes the ABI cache on a libabigail version change.
When using the test-meson-builds.sh integration, the gen-abi.sh
devtools script can be called to regenerate the dump files from the
existing local binaries.
>
> An alternative approach suggested by Dodji would be to just archive the binaries somewhere instead,
> and regenerate the dumps at build time. That _may_ be feasible,
> but you lose some of the benefit (build-time saving) compared to archiving the ABI dumps.
>
> The most sensible approach to archiving the binaries
> is to use DPDK release OS packaging for this, installed to a filesystem sandbox.
>
> So the next steps are figuring out which is the better option:
> maintaining multiple ABI dump archives, one per supported OS distro,
> or looking at what needs to happen with DPDK OS packaging.
>
> So some work still to do here.
I am still unconvinced about the approach, but I'll wait for your next proposal.
--
David Marchand
^ permalink raw reply [relevance 8%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
2020-11-08 14:16 0% ` Andrew Rybchenko
@ 2020-11-08 14:19 0% ` Ananyev, Konstantin
2020-11-10 16:26 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2020-11-08 14:19 UTC (permalink / raw)
To: Andrew Rybchenko, Morten Brørup, Olivier Matz; +Cc: dev
> >>
> >>>>>>>>>>>>>>>>>> Hi Olivier,
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> m->nb_seg must be reset on mbuf free
> >>>> whatever
> >>>>>> the
> >>>>>>>> value
> >>>>>>>>>> of m->next,
> >>>>>>>>>>>>>>>>>>> because it can happen that m->nb_seg is
> >> !=
> >>>> 1.
> >>>>>> For
> >>>>>>>>>> instance in this
> >>>>>>>>>>>>>>>>>>> case:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> m1 = rte_pktmbuf_alloc(mp);
> >>>>>>>>>>>>>>>>>>> rte_pktmbuf_append(m1, 500);
> >>>>>>>>>>>>>>>>>>> m2 = rte_pktmbuf_alloc(mp);
> >>>>>>>>>>>>>>>>>>> rte_pktmbuf_append(m2, 500);
> >>>>>>>>>>>>>>>>>>> rte_pktmbuf_chain(m1, m2);
> >>>>>>>>>>>>>>>>>>> m0 = rte_pktmbuf_alloc(mp);
> >>>>>>>>>>>>>>>>>>> rte_pktmbuf_append(m0, 500);
> >>>>>>>>>>>>>>>>>>> rte_pktmbuf_chain(m0, m1);
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> As rte_pktmbuf_chain() does not reset
> >>>> nb_seg in
> >>>>>> the
> >>>>>>>>>> initial m1
> >>>>>>>>>>>>>>>>>>> segment (this is not required), after
> >> this
> >>>> code
> >>>>>> the
> >>>>>>>>>> mbuf chain
> >>>>>>>>>>>>>>>>>>> have 3 segments:
> >>>>>>>>>>>>>>>>>>> - m0: next=m1, nb_seg=3
> >>>>>>>>>>>>>>>>>>> - m1: next=m2, nb_seg=2
> >>>>>>>>>>>>>>>>>>> - m2: next=NULL, nb_seg=1
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Freeing this mbuf chain will not
> >> restore
> >>>>>> nb_seg=1
> >>>>>>>> in
> >>>>>>>>>> the second
> >>>>>>>>>>>>>>>>>>> segment.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Hmm, not sure why is that?
> >>>>>>>>>>>>>>>>>> You are talking about freeing m1, right?
> >>>>>>>>>>>>>>>>>> rte_pktmbuf_prefree_seg(struct rte_mbuf
> >> *m)
> >>>>>>>>>>>>>>>>>> {
> >>>>>>>>>>>>>>>>>> ...
> >>>>>>>>>>>>>>>>>> if (m->next != NULL) {
> >>>>>>>>>>>>>>>>>> m->next = NULL;
> >>>>>>>>>>>>>>>>>> m->nb_segs = 1;
> >>>>>>>>>>>>>>>>>> }
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> m1->next != NULL, so it will enter the
> >> if()
> >>>>>> block,
> >>>>>>>>>>>>>>>>>> and will reset both next and nb_segs.
> >>>>>>>>>>>>>>>>>> What I am missing here?
> >>>>>>>>>>>>>>>>>> Thinking in more generic way, that
> >> change:
> >>>>>>>>>>>>>>>>>> - if (m->next != NULL) {
> >>>>>>>>>>>>>>>>>> - m->next = NULL;
> >>>>>>>>>>>>>>>>>> - m->nb_segs = 1;
> >>>>>>>>>>>>>>>>>> - }
> >>>>>>>>>>>>>>>>>> + m->next = NULL;
> >>>>>>>>>>>>>>>>>> + m->nb_segs = 1;
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Ah, sorry. I oversimplified the example
> >> and
> >>>> now
> >>>>>> it
> >>>>>>>> does
> >>>>>>>>>> not
> >>>>>>>>>>>>>>>>> show the issue...
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> The full example also adds a split() to
> >> break
> >>>> the
> >>>>>>>> mbuf
> >>>>>>>>>> chain
> >>>>>>>>>>>>>>>>> between m1 and m2. The kind of thing that
> >>>> would
> >>>>>> be
> >>>>>>>> done
> >>>>>>>>>> for
> >>>>>>>>>>>>>>>>> software TCP segmentation.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> If so, may be the right solution is to care
> >>>> about
> >>>>>>>> nb_segs
> >>>>>>>>>>>>>>>> when next is set to NULL on split? Any
> >> place
> >>>> when
> >>>>>> next
> >>>>>>>> is
> >>>>>>>>>> set
> >>>>>>>>>>>>>>>> to NULL. Just to keep the optimization in a
> >>>> more
> >>>>>>>> generic
> >>>>>>>>>> place.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> The problem with that approach is that there
> >> are
> >>>>>> already
> >>>>>>>>>> several
> >>>>>>>>>>>>>>> existing split() or trim() implementations in
> >>>>>> different
> >>>>>>>> DPDK-
> >>>>>>>>>> based
> >>>>>>>>>>>>>>> applications. For instance, we have some in
> >>>>>> 6WINDGate. If
> >>>>>>>> we
> >>>>>>>>>> force
> >>>>>>>>>>>>>>> applications to set nb_seg to 1 when
> >> resetting
> >>>> next,
> >>>>>> it
> >>>>>>>> has
> >>>>>>>>>> to be
> >>>>>>>>>>>>>>> documented because it is not straightforward.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> I think it is better to go that way.
> >>>>>>>>>>>>>> From my perspective it seems natural to reset
> >>>> nb_seg at
> >>>>>>>> same
> >>>>>>>>>> time
> >>>>>>>>>>>>>> we reset next, otherwise inconsistency will
> >> occur.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> While it is not explicitly stated for nb_segs, to
> >> me
> >>>> it
> >>>>>> was
> >>>>>>>> clear
> >>>>>>>>>> that
> >>>>>>>>>>>>> nb_segs is only valid in the first segment, like
> >> for
> >>>> many
> >>>>>>>> fields
> >>>>>>>>>> (port,
> >>>>>>>>>>>>> ol_flags, vlan, rss, ...).
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> If we say that nb_segs has to be valid in any
> >>>> segments,
> >>>>>> it
> >>>>>>>> means
> >>>>>>>>>> that
> >>>>>>>>>>>>> chain() or split() will have to update it in all
> >>>>>> segments,
> >>>>>>>> which
> >>>>>>>>>> is not
> >>>>>>>>>>>>> efficient.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Why in all?
> >>>>>>>>>>>> We can state that nb_segs on non-first segment
> >> should
> >>>>>> always
> >>>>>>>> equal
> >>>>>>>>>> 1.
> >>>>>>>>>>>> As I understand in that case, both split() and
> >> chain()
> >>>> have
> >>>>>> to
> >>>>>>>>>> update nb_segs
> >>>>>>>>>>>> only for head mbufs, rest ones will remain
> >> untouched.
> >>>>>>>>>>>
> >>>>>>>>>>> Well, anyway, I think it's strange to have a
> >> constraint
> >>>> on m-
> >>>>>>>>> nb_segs
> >>>>>>>>>> for
> >>>>>>>>>>> non-first segment. We don't have that kind of
> >> constraints
> >>>> for
> >>>>>>>> other
> >>>>>>>>>> fields.
> >>>>>>>>>>
> >>>>>>>>>> True, we don't. But this is one of the fields we
> >> consider
> >>>>>> critical
> >>>>>>>>>> for proper work of mbuf alloc/free mechanism.
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> I am not sure that requiring m->nb_segs == 1 on non-first
> >>>>>> segments
> >>>>>>>> will provide any benefits.
> >>>>>>>>
> >>>>>>>> It would make this patch unneeded.
> >>>>>>>> So, for direct, non-segmented mbufs pktmbuf_free() will
> >> remain
> >>>>>> write-
> >>>>>>>> free.
> >>>>>>>
> >>>>>>> I see. Then I agree with Konstantin that alternative
> >> solutions
> >>>> should
> >>>>>> be considered.
> >>>>>>>
> >>>>>>> The benefit regarding free()'ing non-segmented mbufs - which
> >> is a
> >>>>>> very common operation - certainly outweighs the cost of
> >> requiring
> >>>>>> split()/chain() operations to set the new head mbuf's nb_segs =
> >> 1.
> >>>>>>>
> >>>>>>> Nonetheless, the bug needs to be fixed somehow.
> >>>>>>>
> >>>>>>> If we can't come up with a better solution that doesn't break
> >> the
> >>>>>> ABI, we are forced to accept the patch.
> >>>>>>>
> >>>>>>> Unless the techboard accepts to break the ABI in order to
> >> avoid
> >>>> the
> >>>>>> performance cost of this patch.
> >>>>>>
> >>>>>> Did someone notice a performance drop with this patch?
> >>>>>> On my side, I don't see any regression on a L3 use case.
> >>>>>
> >>>>> I am afraid that the DPDK performance regression tests are based
> >> on
> >>>> TX immediately following RX, so cache misses in TX may go by
> >> unnoticed
> >>>> because RX warmed up the cache for TX already. And similarly for RX
> >>>> reusing mbufs that have been warmed up by the preceding free() at
> >> TX.
> >>>>>
> >>>>> Please consider testing the performance difference with the mbuf
> >>>> being completely cold at TX, and going completely cold again before
> >>>> being reused for RX.
> >>>>>
> >>>>>>
> >>>>>> Let's summarize: splitting a mbuf chain and freeing it causes
> >>>> subsequent
> >>>>>> mbuf
> >>>>>> allocation to return a mbuf which is not correctly initialized.
> >>>> There
> >>>>>> are 2
> >>>>>> options to fix it:
> >>>>>>
> >>>>>> 1/ change the mbuf free function (this patch)
> >>>>>>
> >>>>>> - m->nb_segs would behave like many other field: valid in
> >> the
> >>>> first
> >>>>>> segment, ignored in other segments
> >>>>>> - may impact performance (suspected)
> >>>>>>
> >>>>>> 2/ change all places where a mbuf chain is split, or trimmed
> >>>>>>
> >>>>>> - m->nb_segs would have a specific behavior: count the
> >> number of
> >>>>>> segments in the first mbuf, should be 1 in the last
> >> segment,
> >>>>>> ignored in other ones.
> >>>>>> - no code change in mbuf library, so no performance impact
> >>>>>> - need to patch all places where we do a mbuf split or trim.
> >>>> From
> >>>>>> afar,
> >>>>>> I see at least mbuf_cut_seg_ofs() in DPDK. Some external
> >>>>>> applications
> >>>>>> may have to be patched (for instance, I already found 3
> >> places
> >>>> in
> >>>>>> 6WIND code base without a deep search).
> >>>>>>
> >>>>>> In my opinion, 1/ is better, unless we notice a significant
> >>>>>> performance drop,
> >>>>>> because the (implicit) behavior is unchanged.
> >>>>>>
> >>>>>> Whatever the solution, some documentation has to be added.
> >>>>>>
> >>>>>> Olivier
> >>>>>>
> >>>>>
> >>>>> Unfortunately, I don't think that anything but the first option
> >> will
> >>>> go into 20.11 and stable releases of older versions, so I stand by
> >> my
> >>>> acknowledgment of the patch.
> >>>>
> >>>> If we are afraid about 20.11 performance (it is legitimate, a few
> >> days
> >>>> before the release), we can target 21.02. After all, everybody
> >> lives
> >>>> with this bug since 2017, so there is no urgency. If accepted and
> >> well
> >>>> tested, it can be backported in stable branches.
> >>>
> >>> +1
> >>>
> >>> Good thinking, Olivier!
> >>
> >> Looking at the changes once again, it probably can be reworked a bit:
> >>
> >> - if (m->next != NULL) {
> >> - m->next = NULL;
> >> - m->nb_segs = 1;
> >> - }
> >>
> >> + if (m->next != NULL)
> >> + m->next = NULL;
> >> + if (m->nb_segs != 1)
> >> + m->nb_segs = 1;
> >>
> >> That way we add one more condition checking, but I suppose it
> >> shouldn't be that perf critical.
> >> That way for direct,non-segmented mbuf it still should be write-free.
> >> Except cases as you described above: chain(), then split().
> >>
> >> Of course we still need to do perf testing for that approach too.
> >> So if your preference is to postpone it till 21.02 - that's ok for me.
> >> Konstantin
> >
> > With this suggestion, I cannot imagine any performance drop for direct, non-segmented mbufs: It now reads m->nb_segs, residing in the
> mbuf's first cache line, but the function already reads m->refcnt in the first cache line; so no cache misses are introduced.
>
> +1
I don't expect a perf drop with that approach either.
But some perf testing still needs to be done, just in case 😊
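As a sketch only (not the merged code), the reworked reset suggested above
would read as below at the tail of rte_pktmbuf_prefree_seg(); the helper
name reset_chain_fields() is hypothetical. Each field is tested before
being written, so a direct, non-segmented mbuf whose metadata is already
consistent sees no store at all, while an mbuf left inconsistent by a
chain()-then-split() sequence still gets repaired:

#include <rte_mbuf.h>

/* Illustrative rework, per the suggestion above; not the code actually
 * merged. The common case (next == NULL, nb_segs == 1) stays write-free. */
static __rte_always_inline void
reset_chain_fields(struct rte_mbuf *m)
{
	if (m->next != NULL)
		m->next = NULL;
	if (m->nb_segs != 1)
		m->nb_segs = 1;
}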
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
2020-11-06 12:23 0% ` Morten Brørup
@ 2020-11-08 14:16 0% ` Andrew Rybchenko
2020-11-08 14:19 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-11-08 14:16 UTC (permalink / raw)
To: Morten Brørup, Ananyev, Konstantin, Olivier Matz; +Cc: dev
On 11/6/20 3:23 PM, Morten Brørup wrote:
>> From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
>> Sent: Friday, November 6, 2020 12:54 PM
>>
>>>>>>>>>>>>>>>>>> Hi Olivier,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> m->nb_seg must be reset on mbuf free
>>>> whatever
>>>>>> the
>>>>>>>> value
>>>>>>>>>> of m->next,
>>>>>>>>>>>>>>>>>>> because it can happen that m->nb_seg is
>> !=
>>>> 1.
>>>>>> For
>>>>>>>>>> instance in this
>>>>>>>>>>>>>>>>>>> case:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> m1 = rte_pktmbuf_alloc(mp);
>>>>>>>>>>>>>>>>>>> rte_pktmbuf_append(m1, 500);
>>>>>>>>>>>>>>>>>>> m2 = rte_pktmbuf_alloc(mp);
>>>>>>>>>>>>>>>>>>> rte_pktmbuf_append(m2, 500);
>>>>>>>>>>>>>>>>>>> rte_pktmbuf_chain(m1, m2);
>>>>>>>>>>>>>>>>>>> m0 = rte_pktmbuf_alloc(mp);
>>>>>>>>>>>>>>>>>>> rte_pktmbuf_append(m0, 500);
>>>>>>>>>>>>>>>>>>> rte_pktmbuf_chain(m0, m1);
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> As rte_pktmbuf_chain() does not reset
>>>> nb_seg in
>>>>>> the
>>>>>>>>>> initial m1
>>>>>>>>>>>>>>>>>>> segment (this is not required), after
>> this
>>>> code
>>>>>> the
>>>>>>>>>> mbuf chain
>>>>>>>>>>>>>>>>>>> have 3 segments:
>>>>>>>>>>>>>>>>>>> - m0: next=m1, nb_seg=3
>>>>>>>>>>>>>>>>>>> - m1: next=m2, nb_seg=2
>>>>>>>>>>>>>>>>>>> - m2: next=NULL, nb_seg=1
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Freeing this mbuf chain will not
>> restore
>>>>>> nb_seg=1
>>>>>>>> in
>>>>>>>>>> the second
>>>>>>>>>>>>>>>>>>> segment.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hmm, not sure why is that?
>>>>>>>>>>>>>>>>>> You are talking about freeing m1, right?
>>>>>>>>>>>>>>>>>> rte_pktmbuf_prefree_seg(struct rte_mbuf
>> *m)
>>>>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>>>> ...
>>>>>>>>>>>>>>>>>> if (m->next != NULL) {
>>>>>>>>>>>>>>>>>> m->next = NULL;
>>>>>>>>>>>>>>>>>> m->nb_segs = 1;
>>>>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> m1->next != NULL, so it will enter the
>> if()
>>>>>> block,
>>>>>>>>>>>>>>>>>> and will reset both next and nb_segs.
>>>>>>>>>>>>>>>>>> What I am missing here?
>>>>>>>>>>>>>>>>>> Thinking in more generic way, that
>> change:
>>>>>>>>>>>>>>>>>> - if (m->next != NULL) {
>>>>>>>>>>>>>>>>>> - m->next = NULL;
>>>>>>>>>>>>>>>>>> - m->nb_segs = 1;
>>>>>>>>>>>>>>>>>> - }
>>>>>>>>>>>>>>>>>> + m->next = NULL;
>>>>>>>>>>>>>>>>>> + m->nb_segs = 1;
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Ah, sorry. I oversimplified the example
>> and
>>>> now
>>>>>> it
>>>>>>>> does
>>>>>>>>>> not
>>>>>>>>>>>>>>>>> show the issue...
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The full example also adds a split() to
>> break
>>>> the
>>>>>>>> mbuf
>>>>>>>>>> chain
>>>>>>>>>>>>>>>>> between m1 and m2. The kind of thing that
>>>> would
>>>>>> be
>>>>>>>> done
>>>>>>>>>> for
>>>>>>>>>>>>>>>>> software TCP segmentation.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> If so, may be the right solution is to care
>>>> about
>>>>>>>> nb_segs
>>>>>>>>>>>>>>>> when next is set to NULL on split? Any
>> place
>>>> when
>>>>>> next
>>>>>>>> is
>>>>>>>>>> set
>>>>>>>>>>>>>>>> to NULL. Just to keep the optimization in a
>>>> more
>>>>>>>> generic
>>>>>>>>>> place.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The problem with that approach is that there
>> are
>>>>>> already
>>>>>>>>>> several
>>>>>>>>>>>>>>> existing split() or trim() implementations in
>>>>>> different
>>>>>>>> DPDK-
>>>>>>>>>> based
>>>>>>>>>>>>>>> applications. For instance, we have some in
>>>>>> 6WINDGate. If
>>>>>>>> we
>>>>>>>>>> force
>>>>>>>>>>>>>>> applications to set nb_seg to 1 when
>> resetting
>>>> next,
>>>>>> it
>>>>>>>> has
>>>>>>>>>> to be
>>>>>>>>>>>>>>> documented because it is not straightforward.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I think it is better to go that way.
>>>>>>>>>>>>>> From my perspective it seems natural to reset
>>>> nb_seg at
>>>>>>>> same
>>>>>>>>>> time
>>>>>>>>>>>>>> we reset next, otherwise inconsistency will
>> occur.
>>>>>>>>>>>>>
>>>>>>>>>>>>> While it is not explicitly stated for nb_segs, to
>> me
>>>> it
>>>>>> was
>>>>>>>> clear
>>>>>>>>>> that
>>>>>>>>>>>>> nb_segs is only valid in the first segment, like
>> for
>>>> many
>>>>>>>> fields
>>>>>>>>>> (port,
>>>>>>>>>>>>> ol_flags, vlan, rss, ...).
>>>>>>>>>>>>>
>>>>>>>>>>>>> If we say that nb_segs has to be valid in any
>>>> segments,
>>>>>> it
>>>>>>>> means
>>>>>>>>>> that
>>>>>>>>>>>>> chain() or split() will have to update it in all
>>>>>> segments,
>>>>>>>> which
>>>>>>>>>> is not
>>>>>>>>>>>>> efficient.
>>>>>>>>>>>>
>>>>>>>>>>>> Why in all?
>>>>>>>>>>>> We can state that nb_segs on non-first segment
>> should
>>>>>> always
>>>>>>>> equal
>>>>>>>>>> 1.
>>>>>>>>>>>> As I understand in that case, both split() and
>> chain()
>>>> have
>>>>>> to
>>>>>>>>>> update nb_segs
>>>>>>>>>>>> only for head mbufs, rest ones will remain
>> untouched.
>>>>>>>>>>>
>>>>>>>>>>> Well, anyway, I think it's strange to have a
>> constraint
>>>> on m-
>>>>>>>>> nb_segs
>>>>>>>>>> for
>>>>>>>>>>> non-first segment. We don't have that kind of
>> constraints
>>>> for
>>>>>>>> other
>>>>>>>>>> fields.
>>>>>>>>>>
>>>>>>>>>> True, we don't. But this is one of the fields we
>> consider
>>>>>> critical
>>>>>>>>>> for proper work of mbuf alloc/free mechanism.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I am not sure that requiring m->nb_segs == 1 on non-first
>>>>>> segments
>>>>>>>> will provide any benefits.
>>>>>>>>
>>>>>>>> It would make this patch unneeded.
>>>>>>>> So, for direct, non-segmented mbufs pktmbuf_free() will
>> remain
>>>>>> write-
>>>>>>>> free.
>>>>>>>
>>>>>>> I see. Then I agree with Konstantin that alternative
>> solutions
>>>> should
>>>>>> be considered.
>>>>>>>
>>>>>>> The benefit regarding free()'ing non-segmented mbufs - which
>> is a
>>>>>> very common operation - certainly outweighs the cost of
>> requiring
>>>>>> split()/chain() operations to set the new head mbuf's nb_segs =
>> 1.
>>>>>>>
>>>>>>> Nonetheless, the bug needs to be fixed somehow.
>>>>>>>
>>>>>>> If we can't come up with a better solution that doesn't break
>> the
>>>>>> ABI, we are forced to accept the patch.
>>>>>>>
>>>>>>> Unless the techboard accepts to break the ABI in order to
>> avoid
>>>> the
>>>>>> performance cost of this patch.
>>>>>>
>>>>>> Did someone notice a performance drop with this patch?
>>>>>> On my side, I don't see any regression on a L3 use case.
>>>>>
>>>>> I am afraid that the DPDK performance regression tests are based
>> on
>>>> TX immediately following RX, so cache misses in TX may go by
>> unnoticed
>>>> because RX warmed up the cache for TX already. And similarly for RX
>>>> reusing mbufs that have been warmed up by the preceding free() at
>> TX.
>>>>>
>>>>> Please consider testing the performance difference with the mbuf
>>>> being completely cold at TX, and going completely cold again before
>>>> being reused for RX.
>>>>>
>>>>>>
>>>>>> Let's summarize: splitting a mbuf chain and freeing it causes
>>>> subsequent
>>>>>> mbuf
>>>>>> allocation to return a mbuf which is not correctly initialized.
>>>> There
>>>>>> are 2
>>>>>> options to fix it:
>>>>>>
>>>>>> 1/ change the mbuf free function (this patch)
>>>>>>
>>>>>> - m->nb_segs would behave like many other field: valid in
>> the
>>>> first
>>>>>> segment, ignored in other segments
>>>>>> - may impact performance (suspected)
>>>>>>
>>>>>> 2/ change all places where a mbuf chain is split, or trimmed
>>>>>>
>>>>>> - m->nb_segs would have a specific behavior: count the
>> number of
>>>>>> segments in the first mbuf, should be 1 in the last
>> segment,
>>>>>> ignored in other ones.
>>>>>> - no code change in mbuf library, so no performance impact
>>>>>> - need to patch all places where we do a mbuf split or trim.
>>>> From
>>>>>> afar,
>>>>>> I see at least mbuf_cut_seg_ofs() in DPDK. Some external
>>>>>> applications
>>>>>> may have to be patched (for instance, I already found 3
>> places
>>>> in
>>>>>> 6WIND code base without a deep search).
>>>>>>
>>>>>> In my opinion, 1/ is better, unless we notice a significant
>>>>>> performance drop,
>>>>>> because the (implicit) behavior is unchanged.
>>>>>>
>>>>>> Whatever the solution, some documentation has to be added.
>>>>>>
>>>>>> Olivier
>>>>>>
>>>>>
>>>>> Unfortunately, I don't think that anything but the first option
>> will
>>>> go into 20.11 and stable releases of older versions, so I stand by
>> my
>>>> acknowledgment of the patch.
>>>>
>>>> If we are afraid about 20.11 performance (it is legitimate, a few
>> days
>>>> before the release), we can target 21.02. After all, everybody
>> lives
>>>> with this bug since 2017, so there is no urgency. If accepted and
>> well
>>>> tested, it can be backported in stable branches.
>>>
>>> +1
>>>
>>> Good thinking, Olivier!
>>
>> Looking at the changes once again, it probably can be reworked a bit:
>>
>> - if (m->next != NULL) {
>> - m->next = NULL;
>> - m->nb_segs = 1;
>> - }
>>
>> + if (m->next != NULL)
>> + m->next = NULL;
>> + if (m->nb_segs != 1)
>> + m->nb_segs = 1;
>>
>> That way we add one more condition checking, but I suppose it
>> shouldn't be that perf critical.
>> That way for direct,non-segmented mbuf it still should be write-free.
>> Except cases as you described above: chain(), then split().
>>
>> Of course we still need to do perf testing for that approach too.
>> So if your preference is to postpone it till 21.02 - that's ok for me.
>> Konstantin
>
> With this suggestion, I cannot imagine any performance drop for direct, non-segmented mbufs: It now reads m->nb_segs, residing in the mbuf's first cache line, but the function already reads m->refcnt in the first cache line; so no cache misses are introduced.
+1
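For completeness, option 2/ from the summary quoted above would instead put
the burden on split sites. A hypothetical application-side helper (not a
DPDK API; the name app_split_after() is invented for illustration) that
detaches everything after 'last' would have to reset next and nb_segs
together, along these lines:

#include <rte_mbuf.h>

/* Hypothetical split helper for option 2/: detach the tail after 'last'
 * and keep nb_segs consistent on both resulting chains. */
static struct rte_mbuf *
app_split_after(struct rte_mbuf *head, struct rte_mbuf *last)
{
	struct rte_mbuf *tail = last->next;
	struct rte_mbuf *m;
	uint16_t nb = 0;
	uint32_t tail_len = 0;

	if (tail == NULL)
		return NULL;

	for (m = tail; m != NULL; m = m->next) {
		nb++;
		tail_len += m->data_len;
	}

	last->next = NULL;
	last->nb_segs = 1;      /* reset together with next, as argued above */
	head->nb_segs -= nb;
	head->pkt_len -= tail_len;

	tail->nb_segs = nb;     /* the detached part becomes a new head */
	tail->pkt_len = tail_len;
	return tail;
}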
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
2020-11-06 11:53 0% ` Ananyev, Konstantin
@ 2020-11-06 12:23 0% ` Morten Brørup
2020-11-08 14:16 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-06 12:23 UTC (permalink / raw)
To: Ananyev, Konstantin, Olivier Matz; +Cc: Andrew Rybchenko, dev
> From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
> Sent: Friday, November 6, 2020 12:54 PM
>
> > > > > > > > > > > > > > > >> Hi Olivier,
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > >>> m->nb_seg must be reset on mbuf free
> > > whatever
> > > > > the
> > > > > > > value
> > > > > > > > > of m->next,
> > > > > > > > > > > > > > > >>> because it can happen that m->nb_seg is
> !=
> > > 1.
> > > > > For
> > > > > > > > > instance in this
> > > > > > > > > > > > > > > >>> case:
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> m1 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > > > > >>> rte_pktmbuf_append(m1, 500);
> > > > > > > > > > > > > > > >>> m2 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > > > > >>> rte_pktmbuf_append(m2, 500);
> > > > > > > > > > > > > > > >>> rte_pktmbuf_chain(m1, m2);
> > > > > > > > > > > > > > > >>> m0 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > > > > >>> rte_pktmbuf_append(m0, 500);
> > > > > > > > > > > > > > > >>> rte_pktmbuf_chain(m0, m1);
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> As rte_pktmbuf_chain() does not reset
> > > nb_seg in
> > > > > the
> > > > > > > > > initial m1
> > > > > > > > > > > > > > > >>> segment (this is not required), after
> this
> > > code
> > > > > the
> > > > > > > > > mbuf chain
> > > > > > > > > > > > > > > >>> have 3 segments:
> > > > > > > > > > > > > > > >>> - m0: next=m1, nb_seg=3
> > > > > > > > > > > > > > > >>> - m1: next=m2, nb_seg=2
> > > > > > > > > > > > > > > >>> - m2: next=NULL, nb_seg=1
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> Freeing this mbuf chain will not
> restore
> > > > > nb_seg=1
> > > > > > > in
> > > > > > > > > the second
> > > > > > > > > > > > > > > >>> segment.
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > >> Hmm, not sure why is that?
> > > > > > > > > > > > > > > >> You are talking about freeing m1, right?
> > > > > > > > > > > > > > > >> rte_pktmbuf_prefree_seg(struct rte_mbuf
> *m)
> > > > > > > > > > > > > > > >> {
> > > > > > > > > > > > > > > >> ...
> > > > > > > > > > > > > > > >> if (m->next != NULL) {
> > > > > > > > > > > > > > > >> m->next = NULL;
> > > > > > > > > > > > > > > >> m->nb_segs = 1;
> > > > > > > > > > > > > > > >> }
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > >> m1->next != NULL, so it will enter the
> if()
> > > > > block,
> > > > > > > > > > > > > > > >> and will reset both next and nb_segs.
> > > > > > > > > > > > > > > >> What I am missing here?
> > > > > > > > > > > > > > > >> Thinking in more generic way, that
> change:
> > > > > > > > > > > > > > > >> - if (m->next != NULL) {
> > > > > > > > > > > > > > > >> - m->next = NULL;
> > > > > > > > > > > > > > > >> - m->nb_segs = 1;
> > > > > > > > > > > > > > > >> - }
> > > > > > > > > > > > > > > >> + m->next = NULL;
> > > > > > > > > > > > > > > >> + m->nb_segs = 1;
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Ah, sorry. I oversimplified the example
> and
> > > now
> > > > > it
> > > > > > > does
> > > > > > > > > not
> > > > > > > > > > > > > > > > show the issue...
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > The full example also adds a split() to
> break
> > > the
> > > > > > > mbuf
> > > > > > > > > chain
> > > > > > > > > > > > > > > > between m1 and m2. The kind of thing that
> > > would
> > > > > be
> > > > > > > done
> > > > > > > > > for
> > > > > > > > > > > > > > > > software TCP segmentation.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > If so, may be the right solution is to care
> > > about
> > > > > > > nb_segs
> > > > > > > > > > > > > > > when next is set to NULL on split? Any
> place
> > > when
> > > > > next
> > > > > > > is
> > > > > > > > > set
> > > > > > > > > > > > > > > to NULL. Just to keep the optimization in a
> > > more
> > > > > > > generic
> > > > > > > > > place.
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > > The problem with that approach is that there
> are
> > > > > already
> > > > > > > > > several
> > > > > > > > > > > > > > existing split() or trim() implementations in
> > > > > different
> > > > > > > DPDK-
> > > > > > > > > based
> > > > > > > > > > > > > > applications. For instance, we have some in
> > > > > 6WINDGate. If
> > > > > > > we
> > > > > > > > > force
> > > > > > > > > > > > > > applications to set nb_seg to 1 when
> resetting
> > > next,
> > > > > it
> > > > > > > has
> > > > > > > > > to be
> > > > > > > > > > > > > > documented because it is not straightforward.
> > > > > > > > > > > > >
> > > > > > > > > > > > > I think it is better to go that way.
> > > > > > > > > > > > > From my perspective it seems natural to reset
> > > nb_seg at
> > > > > > > same
> > > > > > > > > time
> > > > > > > > > > > > > we reset next, otherwise inconsistency will
> occur.
> > > > > > > > > > > >
> > > > > > > > > > > > While it is not explicitly stated for nb_segs, to
> me
> > > it
> > > > > was
> > > > > > > clear
> > > > > > > > > that
> > > > > > > > > > > > nb_segs is only valid in the first segment, like
> for
> > > many
> > > > > > > fields
> > > > > > > > > (port,
> > > > > > > > > > > > ol_flags, vlan, rss, ...).
> > > > > > > > > > > >
> > > > > > > > > > > > If we say that nb_segs has to be valid in any
> > > segments,
> > > > > it
> > > > > > > means
> > > > > > > > > that
> > > > > > > > > > > > chain() or split() will have to update it in all
> > > > > segments,
> > > > > > > which
> > > > > > > > > is not
> > > > > > > > > > > > efficient.
> > > > > > > > > > >
> > > > > > > > > > > Why in all?
> > > > > > > > > > > We can state that nb_segs on non-first segment
> should
> > > > > always
> > > > > > > equal
> > > > > > > > > 1.
> > > > > > > > > > > As I understand in that case, both split() and
> chain()
> > > have
> > > > > to
> > > > > > > > > update nb_segs
> > > > > > > > > > > only for head mbufs, rest ones will remain
> untouched.
> > > > > > > > > >
> > > > > > > > > > Well, anyway, I think it's strange to have a
> constraint
> > > on m-
> > > > > > > >nb_segs
> > > > > > > > > for
> > > > > > > > > > non-first segment. We don't have that kind of
> constraints
> > > for
> > > > > > > other
> > > > > > > > > fields.
> > > > > > > > >
> > > > > > > > > True, we don't. But this is one of the fields we
> consider
> > > > > critical
> > > > > > > > > for proper work of mbuf alloc/free mechanism.
> > > > > > > > >
> > > > > > > >
> > > > > > > > I am not sure that requiring m->nb_segs == 1 on non-first
> > > > > segments
> > > > > > > will provide any benefits.
> > > > > > >
> > > > > > > It would make this patch unneeded.
> > > > > > > So, for direct, non-segmented mbufs pktmbuf_free() will
> remain
> > > > > write-
> > > > > > > free.
> > > > > >
> > > > > > I see. Then I agree with Konstantin that alternative
> solutions
> > > should
> > > > > be considered.
> > > > > >
> > > > > > The benefit regarding free()'ing non-segmented mbufs - which
> is a
> > > > > very common operation - certainly outweighs the cost of
> requiring
> > > > > split()/chain() operations to set the new head mbuf's nb_segs =
> 1.
> > > > > >
> > > > > > Nonetheless, the bug needs to be fixed somehow.
> > > > > >
> > > > > > If we can't come up with a better solution that doesn't break
> the
> > > > > ABI, we are forced to accept the patch.
> > > > > >
> > > > > > Unless the techboard accepts to break the ABI in order to
> avoid
> > > the
> > > > > performance cost of this patch.
> > > > >
> > > > > Did someone notice a performance drop with this patch?
> > > > > On my side, I don't see any regression on a L3 use case.
> > > >
> > > > I am afraid that the DPDK performance regression tests are based
> on
> > > TX immediately following RX, so cache misses in TX may go by
> unnoticed
> > > because RX warmed up the cache for TX already. And similarly for RX
> > > reusing mbufs that have been warmed up by the preceding free() at
> TX.
> > > >
> > > > Please consider testing the performance difference with the mbuf
> > > being completely cold at TX, and going completely cold again before
> > > being reused for RX.
> > > >
> > > > >
> > > > > Let's summarize: splitting a mbuf chain and freeing it causes
> > > subsequent
> > > > > mbuf
> > > > > allocation to return a mbuf which is not correctly initialized.
> > > There
> > > > > are 2
> > > > > options to fix it:
> > > > >
> > > > > 1/ change the mbuf free function (this patch)
> > > > >
> > > > > - m->nb_segs would behave like many other field: valid in
> the
> > > first
> > > > > segment, ignored in other segments
> > > > > - may impact performance (suspected)
> > > > >
> > > > > 2/ change all places where a mbuf chain is split, or trimmed
> > > > >
> > > > > - m->nb_segs would have a specific behavior: count the
> number of
> > > > > segments in the first mbuf, should be 1 in the last
> segment,
> > > > > ignored in other ones.
> > > > > - no code change in mbuf library, so no performance impact
> > > > > - need to patch all places where we do a mbuf split or trim.
> > > From
> > > > > afar,
> > > > > I see at least mbuf_cut_seg_ofs() in DPDK. Some external
> > > > > applications
> > > > > may have to be patched (for instance, I already found 3
> places
> > > in
> > > > > 6WIND code base without a deep search).
> > > > >
> > > > > In my opinion, 1/ is better, unless we notice a significant
> > > > > performance drop,
> > > > > because the (implicit) behavior is unchanged.
> > > > >
> > > > > Whatever the solution, some documentation has to be added.
> > > > >
> > > > > Olivier
> > > > >
> > > >
> > > > Unfortunately, I don't think that anything but the first option
> will
> > > go into 20.11 and stable releases of older versions, so I stand by
> my
> > > acknowledgment of the patch.
> > >
> > > If we are afraid about 20.11 performance (it is legitimate, a few
> days
> > > before the release), we can target 21.02. After all, everybody
> lives
> > > with this bug since 2017, so there is no urgency. If accepted and
> well
> > > tested, it can be backported in stable branches.
> >
> > +1
> >
> > Good thinking, Olivier!
>
> Looking at the changes once again, it probably can be reworked a bit:
>
> - if (m->next != NULL) {
> - m->next = NULL;
> - m->nb_segs = 1;
> - }
>
> + if (m->next != NULL)
> + m->next = NULL;
> + if (m->nb_segs != 1)
> + m->nb_segs = 1;
>
> That way we add one more condition checking, but I suppose it
> shouldn't be that perf critical.
> That way for direct,non-segmented mbuf it still should be write-free.
> Except cases as you described above: chain(), then split().
>
> Of course we still need to do perf testing for that approach too.
> So if your preference is to postpone it till 21.02 - that's ok for me.
> Konstantin
With this suggestion, I cannot imagine any performance drop for direct, non-segmented mbufs: It now reads m->nb_segs, residing in the mbuf's first cache line, but the function already reads m->refcnt in the first cache line; so no cache misses are introduced.
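The cache-line claim above can be checked at compile time; a minimal
sketch, assuming the 20.11 layout of struct rte_mbuf:

#include <stddef.h>
#include <rte_common.h>
#include <rte_mbuf.h>

/* Hypothetical compile-time check: refcnt and nb_segs both live in the
 * mbuf's first cache line, so testing nb_segs adds no extra cache miss
 * once refcnt has been read. */
_Static_assert(offsetof(struct rte_mbuf, refcnt) < RTE_CACHE_LINE_MIN_SIZE,
	"refcnt expected in the first cache line");
_Static_assert(offsetof(struct rte_mbuf, nb_segs) < RTE_CACHE_LINE_MIN_SIZE,
	"nb_segs expected in the first cache line");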
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
2020-11-06 10:07 0% ` Morten Brørup
@ 2020-11-06 11:53 0% ` Ananyev, Konstantin
2020-11-06 12:23 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2020-11-06 11:53 UTC (permalink / raw)
To: Morten Brørup, Olivier Matz; +Cc: Andrew Rybchenko, dev
> > > > > > > > > > > > > > >> Hi Olivier,
> > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > >>> m->nb_seg must be reset on mbuf free
> > whatever
> > > > the
> > > > > > value
> > > > > > > > of m->next,
> > > > > > > > > > > > > > >>> because it can happen that m->nb_seg is !=
> > 1.
> > > > For
> > > > > > > > instance in this
> > > > > > > > > > > > > > >>> case:
> > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > >>> m1 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > > > >>> rte_pktmbuf_append(m1, 500);
> > > > > > > > > > > > > > >>> m2 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > > > >>> rte_pktmbuf_append(m2, 500);
> > > > > > > > > > > > > > >>> rte_pktmbuf_chain(m1, m2);
> > > > > > > > > > > > > > >>> m0 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > > > >>> rte_pktmbuf_append(m0, 500);
> > > > > > > > > > > > > > >>> rte_pktmbuf_chain(m0, m1);
> > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > >>> As rte_pktmbuf_chain() does not reset
> > nb_seg in
> > > > the
> > > > > > > > initial m1
> > > > > > > > > > > > > > >>> segment (this is not required), after this
> > code
> > > > the
> > > > > > > > mbuf chain
> > > > > > > > > > > > > > >>> have 3 segments:
> > > > > > > > > > > > > > >>> - m0: next=m1, nb_seg=3
> > > > > > > > > > > > > > >>> - m1: next=m2, nb_seg=2
> > > > > > > > > > > > > > >>> - m2: next=NULL, nb_seg=1
> > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > >>> Freeing this mbuf chain will not restore
> > > > nb_seg=1
> > > > > > in
> > > > > > > > the second
> > > > > > > > > > > > > > >>> segment.
> > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > >> Hmm, not sure why is that?
> > > > > > > > > > > > > > >> You are talking about freeing m1, right?
> > > > > > > > > > > > > > >> rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > > > > > > > > > > >> {
> > > > > > > > > > > > > > >> ...
> > > > > > > > > > > > > > >> if (m->next != NULL) {
> > > > > > > > > > > > > > >> m->next = NULL;
> > > > > > > > > > > > > > >> m->nb_segs = 1;
> > > > > > > > > > > > > > >> }
> > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > >> m1->next != NULL, so it will enter the if()
> > > > block,
> > > > > > > > > > > > > > >> and will reset both next and nb_segs.
> > > > > > > > > > > > > > >> What I am missing here?
> > > > > > > > > > > > > > >> Thinking in more generic way, that change:
> > > > > > > > > > > > > > >> - if (m->next != NULL) {
> > > > > > > > > > > > > > >> - m->next = NULL;
> > > > > > > > > > > > > > >> - m->nb_segs = 1;
> > > > > > > > > > > > > > >> - }
> > > > > > > > > > > > > > >> + m->next = NULL;
> > > > > > > > > > > > > > >> + m->nb_segs = 1;
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Ah, sorry. I oversimplified the example and
> > now
> > > > it
> > > > > > does
> > > > > > > > not
> > > > > > > > > > > > > > > show the issue...
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > The full example also adds a split() to break
> > the
> > > > > > mbuf
> > > > > > > > chain
> > > > > > > > > > > > > > > between m1 and m2. The kind of thing that
> > would
> > > > be
> > > > > > done
> > > > > > > > for
> > > > > > > > > > > > > > > software TCP segmentation.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > If so, may be the right solution is to care
> > about
> > > > > > nb_segs
> > > > > > > > > > > > > > when next is set to NULL on split? Any place
> > when
> > > > next
> > > > > > is
> > > > > > > > set
> > > > > > > > > > > > > > to NULL. Just to keep the optimization in a
> > more
> > > > > > generic
> > > > > > > > place.
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > > The problem with that approach is that there are
> > > > already
> > > > > > > > several
> > > > > > > > > > > > > existing split() or trim() implementations in
> > > > different
> > > > > > DPDK-
> > > > > > > > based
> > > > > > > > > > > > > applications. For instance, we have some in
> > > > 6WINDGate. If
> > > > > > we
> > > > > > > > force
> > > > > > > > > > > > > applications to set nb_seg to 1 when resetting
> > next,
> > > > it
> > > > > > has
> > > > > > > > to be
> > > > > > > > > > > > > documented because it is not straightforward.
> > > > > > > > > > > >
> > > > > > > > > > > > I think it is better to go that way.
> > > > > > > > > > > > From my perspective it seems natural to reset
> > nb_seg at
> > > > > > same
> > > > > > > > time
> > > > > > > > > > > > we reset next, otherwise inconsistency will occur.
> > > > > > > > > > >
> > > > > > > > > > > While it is not explicitly stated for nb_segs, to me
> > it
> > > > was
> > > > > > clear
> > > > > > > > that
> > > > > > > > > > > nb_segs is only valid in the first segment, like for
> > many
> > > > > > fields
> > > > > > > > (port,
> > > > > > > > > > > ol_flags, vlan, rss, ...).
> > > > > > > > > > >
> > > > > > > > > > > If we say that nb_segs has to be valid in any
> > segments,
> > > > it
> > > > > > means
> > > > > > > > that
> > > > > > > > > > > chain() or split() will have to update it in all
> > > > segments,
> > > > > > which
> > > > > > > > is not
> > > > > > > > > > > efficient.
> > > > > > > > > >
> > > > > > > > > > Why in all?
> > > > > > > > > > We can state that nb_segs on non-first segment should
> > > > always
> > > > > > equal
> > > > > > > > 1.
> > > > > > > > > > As I understand in that case, both split() and chain()
> > have
> > > > to
> > > > > > > > update nb_segs
> > > > > > > > > > only for head mbufs, rest ones will remain untouched.
> > > > > > > > >
> > > > > > > > > Well, anyway, I think it's strange to have a constraint
> > on m-
> > > > > > >nb_segs
> > > > > > > > for
> > > > > > > > > non-first segment. We don't have that kind of constraints
> > for
> > > > > > other
> > > > > > > > fields.
> > > > > > > >
> > > > > > > > True, we don't. But this is one of the fields we consider
> > > > critical
> > > > > > > > for proper work of mbuf alloc/free mechanism.
> > > > > > > >
> > > > > > >
> > > > > > > I am not sure that requiring m->nb_segs == 1 on non-first
> > > > segments
> > > > > > will provide any benefits.
> > > > > >
> > > > > > It would make this patch unneeded.
> > > > > > So, for direct, non-segmented mbufs pktmbuf_free() will remain
> > > > write-
> > > > > > free.
> > > > >
> > > > > I see. Then I agree with Konstantin that alternative solutions
> > should
> > > > be considered.
> > > > >
> > > > > The benefit regarding free()'ing non-segmented mbufs - which is a
> > > > very common operation - certainly outweighs the cost of requiring
> > > > split()/chain() operations to set the new head mbuf's nb_segs = 1.
> > > > >
> > > > > Nonetheless, the bug needs to be fixed somehow.
> > > > >
> > > > > If we can't come up with a better solution that doesn't break the
> > > > ABI, we are forced to accept the patch.
> > > > >
> > > > > Unless the techboard accepts to break the ABI in order to avoid
> > the
> > > > performance cost of this patch.
> > > >
> > > > Did someone notice a performance drop with this patch?
> > > > On my side, I don't see any regression on an L3 use case.
> > >
> > > I am afraid that the DPDK performance regression tests are based on
> > TX immediately following RX, so cache misses in TX may go by unnoticed
> > because RX warmed up the cache for TX already. And similarly for RX
> > reusing mbufs that have been warmed up by the preceding free() at TX.
> > >
> > > Please consider testing the performance difference with the mbuf
> > being completely cold at TX, and going completely cold again before
> > being reused for RX.
> > >
> > > >
> > > > Let's summarize: splitting a mbuf chain and freeing it causes subsequent
> > > > mbuf allocation to return a mbuf which is not correctly initialized. There
> > > > are 2 options to fix it:
> > > >
> > > > 1/ change the mbuf free function (this patch)
> > > >
> > > > - m->nb_segs would behave like many other fields: valid in the
> > first
> > > > segment, ignored in other segments
> > > > - may impact performance (suspected)
> > > >
> > > > 2/ change all places where a mbuf chain is split, or trimmed
> > > >
> > > > - m->nb_segs would have a specific behavior: count the number of
> > > > segments in the first mbuf, should be 1 in the last segment,
> > > > ignored in other ones.
> > > > - no code change in mbuf library, so no performance impact
> > > > - need to patch all places where we do a mbuf split or trim.
> > From
> > > > afar,
> > > > I see at least mbuf_cut_seg_ofs() in DPDK. Some external
> > > > applications
> > > > may have to be patched (for instance, I already found 3 places
> > in
> > > > 6WIND code base without a deep search).
> > > >
> > > > In my opinion, 1/ is better, unless we notice a significant
> > > > performance drop,
> > > > because the (implicit) behavior is unchanged.
> > > >
> > > > Whatever the solution, some documentation has to be added.
> > > >
> > > > Olivier
> > > >
> > >
> > > Unfortunately, I don't think that anything but the first option will
> > go into 20.11 and stable releases of older versions, so I stand by my
> > acknowledgment of the patch.
> >
> > If we are afraid about 20.11 performance (it is legitimate, a few days
> > before the release), we can target 21.02. After all, everybody has lived
> > with this bug since 2017, so there is no urgency. If accepted and well
> > tested, it can be backported to the stable branches.
>
> +1
>
> Good thinking, Olivier!
Looking at the changes once again, it probably can be reworked a bit:
- if (m->next != NULL) {
- m->next = NULL;
- m->nb_segs = 1;
- }
+ if (m->next != NULL)
+ m->next = NULL;
+ if (m->nb_segs != 1)
+ m->nb_segs = 1;
That way we add one more conditional check, but I suppose it
shouldn't be that perf critical.
That way, for direct, non-segmented mbufs it should still be write-free.
Except in cases like the one you described above: chain(), then split().
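For illustration, the hot path would then look roughly like this (a
simplified sketch of the idea only - the indirect-mbuf detach details and
the refcnt > 1 handling of the real rte_pktmbuf_prefree_seg() are elided,
and the function name is made up):

#include <rte_branch_prediction.h>
#include <rte_mbuf.h>

static inline struct rte_mbuf *
prefree_seg_sketch(struct rte_mbuf *m)
{
        if (likely(rte_mbuf_refcnt_read(m) == 1)) {
                if (!RTE_MBUF_DIRECT(m))
                        rte_pktmbuf_detach(m);
                /* Write only when the stored value differs from the
                 * default, so freeing a direct, non-segmented mbuf
                 * dirties no cache line here. */
                if (m->next != NULL)
                        m->next = NULL;
                if (m->nb_segs != 1)
                        m->nb_segs = 1;
                return m;
        }
        return NULL; /* refcnt > 1 handling elided for brevity */
}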
Of course we still need to do perf testing for that approach too.
So if your preference is to postpone it until 21.02 - that's ok for me.
Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
2020-11-06 10:04 0% ` Olivier Matz
@ 2020-11-06 10:07 0% ` Morten Brørup
2020-11-06 11:53 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-06 10:07 UTC (permalink / raw)
To: Olivier Matz; +Cc: Ananyev, Konstantin, Andrew Rybchenko, dev
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Friday, November 6, 2020 11:05 AM
>
> On Fri, Nov 06, 2020 at 09:50:45AM +0100, Morten Brørup wrote:
> > > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > > Sent: Friday, November 6, 2020 9:21 AM
> > >
> > > On Fri, Nov 06, 2020 at 08:52:58AM +0100, Morten Brørup wrote:
> > > > > From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
> > > > > Sent: Friday, November 6, 2020 12:55 AM
> > > > >
> > > > > > > > > > > > > >> Hi Olivier,
> > > > > > > > > > > > > >>
> > > > > > > > > > > > > >>> m->nb_seg must be reset on mbuf free
> whatever
> > > the
> > > > > value
> > > > > > > of m->next,
> > > > > > > > > > > > > >>> because it can happen that m->nb_seg is !=
> 1.
> > > For
> > > > > > > instance in this
> > > > > > > > > > > > > >>> case:
> > > > > > > > > > > > > >>>
> > > > > > > > > > > > > >>> m1 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > > >>> rte_pktmbuf_append(m1, 500);
> > > > > > > > > > > > > >>> m2 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > > >>> rte_pktmbuf_append(m2, 500);
> > > > > > > > > > > > > >>> rte_pktmbuf_chain(m1, m2);
> > > > > > > > > > > > > >>> m0 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > > >>> rte_pktmbuf_append(m0, 500);
> > > > > > > > > > > > > >>> rte_pktmbuf_chain(m0, m1);
> > > > > > > > > > > > > >>>
> > > > > > > > > > > > > >>> As rte_pktmbuf_chain() does not reset
> nb_seg in
> > > the
> > > > > > > initial m1
> > > > > > > > > > > > > >>> segment (this is not required), after this
> code
> > > the
> > > > > > > mbuf chain
> > > > > > > > > > > > > >>> have 3 segments:
> > > > > > > > > > > > > >>> - m0: next=m1, nb_seg=3
> > > > > > > > > > > > > >>> - m1: next=m2, nb_seg=2
> > > > > > > > > > > > > >>> - m2: next=NULL, nb_seg=1
> > > > > > > > > > > > > >>>
> > > > > > > > > > > > > >>> Freeing this mbuf chain will not restore
> > > nb_seg=1
> > > > > in
> > > > > > > the second
> > > > > > > > > > > > > >>> segment.
> > > > > > > > > > > > > >>
> > > > > > > > > > > > > >> Hmm, not sure why is that?
> > > > > > > > > > > > > >> You are talking about freeing m1, right?
> > > > > > > > > > > > > >> rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > > > > > > > > > >> {
> > > > > > > > > > > > > >> ...
> > > > > > > > > > > > > >> if (m->next != NULL) {
> > > > > > > > > > > > > >> m->next = NULL;
> > > > > > > > > > > > > >> m->nb_segs = 1;
> > > > > > > > > > > > > >> }
> > > > > > > > > > > > > >>
> > > > > > > > > > > > > >> m1->next != NULL, so it will enter the if()
> > > block,
> > > > > > > > > > > > > >> and will reset both next and nb_segs.
> > > > > > > > > > > > > >> What I am missing here?
> > > > > > > > > > > > > >> Thinking in more generic way, that change:
> > > > > > > > > > > > > >> - if (m->next != NULL) {
> > > > > > > > > > > > > >> - m->next = NULL;
> > > > > > > > > > > > > >> - m->nb_segs = 1;
> > > > > > > > > > > > > >> - }
> > > > > > > > > > > > > >> + m->next = NULL;
> > > > > > > > > > > > > >> + m->nb_segs = 1;
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Ah, sorry. I oversimplified the example and
> now
> > > it
> > > > > does
> > > > > > > not
> > > > > > > > > > > > > > show the issue...
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > The full example also adds a split() to break
> the
> > > > > mbuf
> > > > > > > chain
> > > > > > > > > > > > > > between m1 and m2. The kind of thing that
> would
> > > be
> > > > > done
> > > > > > > for
> > > > > > > > > > > > > > software TCP segmentation.
> > > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > If so, may be the right solution is to care
> about
> > > > > nb_segs
> > > > > > > > > > > > > when next is set to NULL on split? Any place
> when
> > > next
> > > > > is
> > > > > > > set
> > > > > > > > > > > > > to NULL. Just to keep the optimization in a
> more
> > > > > generic
> > > > > > > place.
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > > The problem with that approach is that there are
> > > already
> > > > > > > several
> > > > > > > > > > > > existing split() or trim() implementations in
> > > different
> > > > > DPDK-
> > > > > > > based
> > > > > > > > > > > > applications. For instance, we have some in
> > > 6WINDGate. If
> > > > > we
> > > > > > > force
> > > > > > > > > > > > applications to set nb_seg to 1 when resetting
> next,
> > > it
> > > > > has
> > > > > > > to be
> > > > > > > > > > > > documented because it is not straightforward.
> > > > > > > > > > >
> > > > > > > > > > > I think it is better to go that way.
> > > > > > > > > > > From my perspective it seems natural to reset
> nb_seg at
> > > > > same
> > > > > > > time
> > > > > > > > > > > we reset next, otherwise inconsistency will occur.
> > > > > > > > > >
> > > > > > > > > > While it is not explicitly stated for nb_segs, to me
> it
> > > was
> > > > > clear
> > > > > > > that
> > > > > > > > > > nb_segs is only valid in the first segment, like for
> many
> > > > > fields
> > > > > > > (port,
> > > > > > > > > > ol_flags, vlan, rss, ...).
> > > > > > > > > >
> > > > > > > > > > If we say that nb_segs has to be valid in any
> segments,
> > > it
> > > > > means
> > > > > > > that
> > > > > > > > > > chain() or split() will have to update it in all
> > > segments,
> > > > > which
> > > > > > > is not
> > > > > > > > > > efficient.
> > > > > > > > >
> > > > > > > > > Why in all?
> > > > > > > > > We can state that nb_segs on non-first segment should
> > > always
> > > > > equal
> > > > > > > 1.
> > > > > > > > > As I understand in that case, both split() and chain()
> have
> > > to
> > > > > > > update nb_segs
> > > > > > > > > only for head mbufs, rest ones will remain untouched.
> > > > > > > >
> > > > > > > > Well, anyway, I think it's strange to have a constraint
> on m-
> > > > > >nb_segs
> > > > > > > for
> > > > > > > > non-first segment. We don't have that kind of constraints
> for
> > > > > other
> > > > > > > fields.
> > > > > > >
> > > > > > > True, we don't. But this is one of the fields we consider
> > > critical
> > > > > > > for proper work of mbuf alloc/free mechanism.
> > > > > > >
> > > > > >
> > > > > > I am not sure that requiring m->nb_segs == 1 on non-first
> > > segments
> > > > > will provide any benefits.
> > > > >
> > > > > It would make this patch unneeded.
> > > > > So, for direct, non-segmented mbufs pktmbuf_free() will remain
> > > write-
> > > > > free.
> > > >
> > > > I see. Then I agree with Konstantin that alternative solutions
> should
> > > be considered.
> > > >
> > > > The benefit regarding free()'ing non-segmented mbufs - which is a
> > > very common operation - certainly outweighs the cost of requiring
> > > split()/chain() operations to set the new head mbuf's nb_segs = 1.
> > > >
> > > > Nonetheless, the bug needs to be fixed somehow.
> > > >
> > > > If we can't come up with a better solution that doesn't break the
> > > ABI, we are forced to accept the patch.
> > > >
> > > > Unless the techboard accepts to break the ABI in order to avoid
> the
> > > performance cost of this patch.
> > >
> > > Did someone notice a performance drop with this patch?
> > > On my side, I don't see any regression on an L3 use case.
> >
> > I am afraid that the DPDK performance regression tests are based on
> TX immediately following RX, so cache misses in TX may go by unnoticed
> because RX warmed up the cache for TX already. And similarly for RX
> reusing mbufs that have been warmed up by the preceding free() at TX.
> >
> > Please consider testing the performance difference with the mbuf
> being completely cold at TX, and going completely cold again before
> being reused for RX.
> >
> > >
> > > Let's summarize: splitting a mbuf chain and freeing it causes subsequent
> > > mbuf allocation to return a mbuf which is not correctly initialized. There
> > > are 2 options to fix it:
> > >
> > > 1/ change the mbuf free function (this patch)
> > >
> > > - m->nb_segs would behave like many other fields: valid in the
> first
> > > segment, ignored in other segments
> > > - may impact performance (suspected)
> > >
> > > 2/ change all places where a mbuf chain is split, or trimmed
> > >
> > > - m->nb_segs would have a specific behavior: count the number of
> > > segments in the first mbuf, should be 1 in the last segment,
> > > ignored in other ones.
> > > - no code change in mbuf library, so no performance impact
> > > - need to patch all places where we do a mbuf split or trim.
> From
> > > afar,
> > > I see at least mbuf_cut_seg_ofs() in DPDK. Some external
> > > applications
> > > may have to be patched (for instance, I already found 3 places
> in
> > > 6WIND code base without a deep search).
> > >
> > > In my opinion, 1/ is better, unless we notice a significant
> > > performance drop,
> > > because the (implicit) behavior is unchanged.
> > >
> > > Whatever the solution, some documentation has to be added.
> > >
> > > Olivier
> > >
> >
> > Unfortunately, I don't think that anything but the first option will
> go into 20.11 and stable releases of older versions, so I stand by my
> acknowledgment of the patch.
>
> If we are afraid about 20.11 performance (it is legitimate, a few days
> before the release), we can target 21.02. After all, everybody has lived
> with this bug since 2017, so there is no urgency. If accepted and well
> tested, it can be backported to the stable branches.
+1
Good thinking, Olivier!
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
2020-11-06 8:50 0% ` Morten Brørup
@ 2020-11-06 10:04 0% ` Olivier Matz
2020-11-06 10:07 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2020-11-06 10:04 UTC (permalink / raw)
To: Morten Brørup; +Cc: Ananyev, Konstantin, Andrew Rybchenko, dev
On Fri, Nov 06, 2020 at 09:50:45AM +0100, Morten Brørup wrote:
> > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > Sent: Friday, November 6, 2020 9:21 AM
> >
> > On Fri, Nov 06, 2020 at 08:52:58AM +0100, Morten Brørup wrote:
> > > > From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
> > > > Sent: Friday, November 6, 2020 12:55 AM
> > > >
> > > > > > > > > > > > >> Hi Olivier,
> > > > > > > > > > > > >>
> > > > > > > > > > > > >>> m->nb_seg must be reset on mbuf free whatever
> > the
> > > > value
> > > > > > of m->next,
> > > > > > > > > > > > >>> because it can happen that m->nb_seg is != 1.
> > For
> > > > > > instance in this
> > > > > > > > > > > > >>> case:
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> m1 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > >>> rte_pktmbuf_append(m1, 500);
> > > > > > > > > > > > >>> m2 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > >>> rte_pktmbuf_append(m2, 500);
> > > > > > > > > > > > >>> rte_pktmbuf_chain(m1, m2);
> > > > > > > > > > > > >>> m0 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > > >>> rte_pktmbuf_append(m0, 500);
> > > > > > > > > > > > >>> rte_pktmbuf_chain(m0, m1);
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> As rte_pktmbuf_chain() does not reset nb_seg in
> > the
> > > > > > initial m1
> > > > > > > > > > > > >>> segment (this is not required), after this code
> > the
> > > > > > mbuf chain
> > > > > > > > > > > > >>> have 3 segments:
> > > > > > > > > > > > >>> - m0: next=m1, nb_seg=3
> > > > > > > > > > > > >>> - m1: next=m2, nb_seg=2
> > > > > > > > > > > > >>> - m2: next=NULL, nb_seg=1
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> Freeing this mbuf chain will not restore
> > nb_seg=1
> > > > in
> > > > > > the second
> > > > > > > > > > > > >>> segment.
> > > > > > > > > > > > >>
> > > > > > > > > > > > >> Hmm, not sure why is that?
> > > > > > > > > > > > >> You are talking about freeing m1, right?
> > > > > > > > > > > > >> rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > > > > > > > > >> {
> > > > > > > > > > > > >> ...
> > > > > > > > > > > > >> if (m->next != NULL) {
> > > > > > > > > > > > >> m->next = NULL;
> > > > > > > > > > > > >> m->nb_segs = 1;
> > > > > > > > > > > > >> }
> > > > > > > > > > > > >>
> > > > > > > > > > > > >> m1->next != NULL, so it will enter the if()
> > block,
> > > > > > > > > > > > >> and will reset both next and nb_segs.
> > > > > > > > > > > > >> What I am missing here?
> > > > > > > > > > > > >> Thinking in more generic way, that change:
> > > > > > > > > > > > >> - if (m->next != NULL) {
> > > > > > > > > > > > >> - m->next = NULL;
> > > > > > > > > > > > >> - m->nb_segs = 1;
> > > > > > > > > > > > >> - }
> > > > > > > > > > > > >> + m->next = NULL;
> > > > > > > > > > > > >> + m->nb_segs = 1;
> > > > > > > > > > > > >
> > > > > > > > > > > > > Ah, sorry. I oversimplified the example and now
> > it
> > > > does
> > > > > > not
> > > > > > > > > > > > > show the issue...
> > > > > > > > > > > > >
> > > > > > > > > > > > > The full example also adds a split() to break the
> > > > mbuf
> > > > > > chain
> > > > > > > > > > > > > between m1 and m2. The kind of thing that would
> > be
> > > > done
> > > > > > for
> > > > > > > > > > > > > software TCP segmentation.
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > If so, may be the right solution is to care about
> > > > nb_segs
> > > > > > > > > > > > when next is set to NULL on split? Any place when
> > next
> > > > is
> > > > > > set
> > > > > > > > > > > > to NULL. Just to keep the optimization in a more
> > > > generic
> > > > > > place.
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > The problem with that approach is that there are
> > already
> > > > > > several
> > > > > > > > > > > existing split() or trim() implementations in
> > different
> > > > DPDK-
> > > > > > based
> > > > > > > > > > > applications. For instance, we have some in
> > 6WINDGate. If
> > > > we
> > > > > > force
> > > > > > > > > > > applications to set nb_seg to 1 when resetting next,
> > it
> > > > has
> > > > > > to be
> > > > > > > > > > > documented because it is not straightforward.
> > > > > > > > > >
> > > > > > > > > > I think it is better to go that way.
> > > > > > > > > > From my perspective it seems natural to reset nb_seg at
> > > > same
> > > > > > time
> > > > > > > > > > we reset next, otherwise inconsistency will occur.
> > > > > > > > >
> > > > > > > > > While it is not explicitly stated for nb_segs, to me it
> > was
> > > > clear
> > > > > > that
> > > > > > > > > nb_segs is only valid in the first segment, like for many
> > > > fields
> > > > > > (port,
> > > > > > > > > ol_flags, vlan, rss, ...).
> > > > > > > > >
> > > > > > > > > If we say that nb_segs has to be valid in any segments,
> > it
> > > > means
> > > > > > that
> > > > > > > > > chain() or split() will have to update it in all
> > segments,
> > > > which
> > > > > > is not
> > > > > > > > > efficient.
> > > > > > > >
> > > > > > > > Why in all?
> > > > > > > > We can state that nb_segs on non-first segment should
> > always
> > > > equal
> > > > > > 1.
> > > > > > > > As I understand in that case, both split() and chain() have
> > to
> > > > > > update nb_segs
> > > > > > > > only for head mbufs, rest ones will remain untouched.
> > > > > > >
> > > > > > > Well, anyway, I think it's strange to have a constraint on m-
> > > > >nb_segs
> > > > > > for
> > > > > > > non-first segment. We don't have that kind of constraints for
> > > > other
> > > > > > fields.
> > > > > >
> > > > > > True, we don't. But this is one of the fields we consider
> > critical
> > > > > > for proper work of mbuf alloc/free mechanism.
> > > > > >
> > > > >
> > > > > I am not sure that requiring m->nb_segs == 1 on non-first
> > segments
> > > > will provide any benefits.
> > > >
> > > > It would make this patch unneeded.
> > > > So, for direct, non-segmented mbufs pktmbuf_free() will remain
> > write-
> > > > free.
> > >
> > > I see. Then I agree with Konstantin that alternative solutions should
> > be considered.
> > >
> > > The benefit regarding free()'ing non-segmented mbufs - which is a
> > very common operation - certainly outweighs the cost of requiring
> > split()/chain() operations to set the new head mbuf's nb_segs = 1.
> > >
> > > Nonetheless, the bug needs to be fixed somehow.
> > >
> > > If we can't come up with a better solution that doesn't break the
> > ABI, we are forced to accept the patch.
> > >
> > > Unless the techboard accepts to break the ABI in order to avoid the
> > performance cost of this patch.
> >
> > Did someone notice a performance drop with this patch?
> > On my side, I don't see any regression on an L3 use case.
>
> I am afraid that the DPDK performance regression tests are based on TX immediately following RX, so cache misses in TX may go by unnoticed because RX warmed up the cache for TX already. And similarly for RX reusing mbufs that have been warmed up by the preceding free() at TX.
>
> Please consider testing the performance difference with the mbuf being completely cold at TX, and going completely cold again before being reused for RX.
>
> >
> > Let's summarize: splitting a mbuf chain and freeing it causes subsequent
> > mbuf
> > allocation to return a mbuf which is not correctly initialized. There
> > are 2
> > options to fix it:
> >
> > 1/ change the mbuf free function (this patch)
> >
> > - m->nb_segs would behave like many other fields: valid in the first
> > segment, ignored in other segments
> > - may impact performance (suspected)
> >
> > 2/ change all places where a mbuf chain is split, or trimmed
> >
> > - m->nb_segs would have a specific behavior: count the number of
> > segments in the first mbuf, should be 1 in the last segment,
> > ignored in other ones.
> > - no code change in mbuf library, so no performance impact
> > - need to patch all places where we do a mbuf split or trim. From
> > afar,
> > I see at least mbuf_cut_seg_ofs() in DPDK. Some external
> > applications
> > may have to be patched (for instance, I already found 3 places in
> > 6WIND code base without a deep search).
> >
> > In my opinion, 1/ is better, unless we notice a significant
> > performance drop,
> > because the (implicit) behavior is unchanged.
> >
> > Whatever the solution, some documentation has to be added.
> >
> > Olivier
> >
>
> Unfortunately, I don't think that anything but the first option will go into 20.11 and stable releases of older versions, so I stand by my acknowledgment of the patch.
If we are afraid about 20.11 performance (it is legitimate, a few days
before the release), we can target 21.02. After all, everybody has lived
with this bug since 2017, so there is no urgency. If accepted and well
tested, it can be backported to the stable branches.
> Which reminds me: Consider reporting the bug in Bugzilla.
>
> >
> > >
> > > >
> > > > >
> > > > > E.g. the second segment of a three-segment chain will still have
> > m-
> > > > >next != NULL, so it cannot be used as a gate to prevent accessing
> > m-
> > > > > >next.
> > > > >
> > > > > I might have overlooked something, though.
> > > > >
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Saying that nb_segs has to be valid for the first and
> > last
> > > > > > segment seems
> > > > > > > > > really odd to me. What is the logic behind that except
> > > > keeping
> > > > > > this test
> > > > > > > > > as is?
> > > > > > > > >
> > > > > > > > > In any case, it has to be better documented.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Olivier
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > > I think the approach from
> > > > > > > > > > > this patch is safer.
> > > > > > > > > >
> > > > > > > > > > It might be easier from perspective that changes in
> > less
> > > > places
> > > > > > are required,
> > > > > > > > > > Though I think that this patch will introduce some
> > > > performance
> > > > > > drop.
> > > > > > > > > > As now each mbuf_prefree_seg() will cause update of 2
> > cache
> > > > > > lines unconditionally.
> > > > > > > > > >
> > > > > > > > > > > By the way, for 21.11, if we are able to do some
> > > > > > optimizations and have
> > > > > > > > > > > both pool (index?) and next in the first cache line,
> > we
> > > > may
> > > > > > reconsider
> > > > > > > > > > > the fact that next and nb_segs are already set for
> > new
> > > > > > allocated mbufs,
> > > > > > > > > > > because it is not straightforward either.
> > > > > > > > > >
> > > > > > > > > > My suggestion - let's put future optimization
> > discussion
> > > > aside
> > > > > > for now,
> > > > > > > > > > and concentrate on that particular patch.
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > > > After this operation, we have 2 mbuf chain:
> > > > > > > > > > > > > - m0 with 2 segments, the last one has next=NULL
> > but
> > > > > > nb_seg=2
> > > > > > > > > > > > > - new_m with 1 segment
> > > > > > > > > > > > >
> > > > > > > > > > > > > Freeing m0 will not restore nb_seg=1 in the
> > second
> > > > > > segment.
> > > > > > > > > > > > >
> > > > > > > > > > > > >> Assumes that it is ok to have an mbuf with
> > > > > > > > > > > > >> nb_seg > 1 and next == NULL.
> > > > > > > > > > > > >> Which seems wrong to me.
> > > > > > > > > > > > >
> > > > > > > > > > > > > I don't think it is wrong: nb_seg is just ignored
> > > > when
> > > > > > not in the first
> > > > > > > > > > > > > segment, and there is nothing saying it should be
> > set
> > > > to
> > > > > > 1. Typically,
> > > > > > > > > > > > > rte_pktmbuf_chain() does not change it, and I
> > guess
> > > > it's
> > > > > > the same for
> > > > > > > > > > > > > many similar functions in applications.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Olivier
> > > > > > > > > > > > >
> > > > > > > > > > > > >>
> > > > > > > > > > > > >>
> > > > > > > > > > > > >>> This is expected that mbufs stored in pool have
> > > > their
> > > > > > > > > > > > >>> nb_seg field set to 1.
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> Fixes: 8f094a9ac5d7 ("mbuf: set mbuf fields
> > while
> > > > in
> > > > > > pool")
> > > > > > > > > > > > >>> Cc: stable@dpdk.org
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> Signed-off-by: Olivier Matz
> > > > <olivier.matz@6wind.com>
> > > > > > > > > > > > >>> ---
> > > > > > > > > > > > >>> lib/librte_mbuf/rte_mbuf.c | 6 ++----
> > > > > > > > > > > > >>> lib/librte_mbuf/rte_mbuf.h | 12 ++++--------
> > > > > > > > > > > > >>> 2 files changed, 6 insertions(+), 12
> > deletions(-)
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> diff --git a/lib/librte_mbuf/rte_mbuf.c
> > > > > > b/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > > > > >>> index 8a456e5e64..e632071c23 100644
> > > > > > > > > > > > >>> --- a/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > > > > >>> +++ b/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > > > > >>> @@ -129,10 +129,8 @@
> > > > > > rte_pktmbuf_free_pinned_extmem(void *addr, void *opaque)
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> rte_mbuf_ext_refcnt_set(m->shinfo, 1);
> > > > > > > > > > > > >>> m->ol_flags = EXT_ATTACHED_MBUF;
> > > > > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > > > > >>> - }
> > > > > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > > > > >>> rte_mbuf_raw_free(m);
> > > > > > > > > > > > >>> }
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> diff --git a/lib/librte_mbuf/rte_mbuf.h
> > > > > > b/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > > > > >>> index a1414ed7cd..ef5800c8ef 100644
> > > > > > > > > > > > >>> --- a/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > > > > >>> +++ b/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > > > > >>> @@ -1329,10 +1329,8 @@
> > > > rte_pktmbuf_prefree_seg(struct
> > > > > > rte_mbuf *m)
> > > > > > > > > > > > >>> return NULL;
> > > > > > > > > > > > >>> }
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > > > > >>> - }
> > > > > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> return m;
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> @@ -1346,10 +1344,8 @@
> > > > rte_pktmbuf_prefree_seg(struct
> > > > > > rte_mbuf *m)
> > > > > > > > > > > > >>> return NULL;
> > > > > > > > > > > > >>> }
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > > > > >>> - }
> > > > > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > > > > >>> rte_mbuf_refcnt_set(m, 1);
> > > > > > > > > > > > >>>
> > > > > > > > > > > > >>> return m;
> > > > > > > > > > > > >>> --
> > > > > > > > > > > > >>> 2.25.1
> > > > > > > > > > > > >>
> > > > > > > > > > > >
> > >
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
2020-11-06 8:20 0% ` Olivier Matz
@ 2020-11-06 8:50 0% ` Morten Brørup
2020-11-06 10:04 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-06 8:50 UTC (permalink / raw)
To: Olivier Matz; +Cc: Ananyev, Konstantin, Andrew Rybchenko, dev
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Friday, November 6, 2020 9:21 AM
>
> On Fri, Nov 06, 2020 at 08:52:58AM +0100, Morten Brørup wrote:
> > > From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
> > > Sent: Friday, November 6, 2020 12:55 AM
> > >
> > > > > > > > > > > >> Hi Olivier,
> > > > > > > > > > > >>
> > > > > > > > > > > >>> m->nb_seg must be reset on mbuf free whatever
> the
> > > value
> > > > > of m->next,
> > > > > > > > > > > >>> because it can happen that m->nb_seg is != 1.
> For
> > > > > instance in this
> > > > > > > > > > > >>> case:
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> m1 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > >>> rte_pktmbuf_append(m1, 500);
> > > > > > > > > > > >>> m2 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > >>> rte_pktmbuf_append(m2, 500);
> > > > > > > > > > > >>> rte_pktmbuf_chain(m1, m2);
> > > > > > > > > > > >>> m0 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > > >>> rte_pktmbuf_append(m0, 500);
> > > > > > > > > > > >>> rte_pktmbuf_chain(m0, m1);
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> As rte_pktmbuf_chain() does not reset nb_seg in
> the
> > > > > initial m1
> > > > > > > > > > > >>> segment (this is not required), after this code
> the
> > > > > mbuf chain
> > > > > > > > > > > >>> have 3 segments:
> > > > > > > > > > > >>> - m0: next=m1, nb_seg=3
> > > > > > > > > > > >>> - m1: next=m2, nb_seg=2
> > > > > > > > > > > >>> - m2: next=NULL, nb_seg=1
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> Freeing this mbuf chain will not restore
> nb_seg=1
> > > in
> > > > > the second
> > > > > > > > > > > >>> segment.
> > > > > > > > > > > >>
> > > > > > > > > > > >> Hmm, not sure why is that?
> > > > > > > > > > > >> You are talking about freeing m1, right?
> > > > > > > > > > > >> rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > > > > > > > >> {
> > > > > > > > > > > >> ...
> > > > > > > > > > > >> if (m->next != NULL) {
> > > > > > > > > > > >> m->next = NULL;
> > > > > > > > > > > >> m->nb_segs = 1;
> > > > > > > > > > > >> }
> > > > > > > > > > > >>
> > > > > > > > > > > >> m1->next != NULL, so it will enter the if()
> block,
> > > > > > > > > > > >> and will reset both next and nb_segs.
> > > > > > > > > > > >> What I am missing here?
> > > > > > > > > > > >> Thinking in more generic way, that change:
> > > > > > > > > > > >> - if (m->next != NULL) {
> > > > > > > > > > > >> - m->next = NULL;
> > > > > > > > > > > >> - m->nb_segs = 1;
> > > > > > > > > > > >> - }
> > > > > > > > > > > >> + m->next = NULL;
> > > > > > > > > > > >> + m->nb_segs = 1;
> > > > > > > > > > > >
> > > > > > > > > > > > Ah, sorry. I oversimplified the example and now
> it
> > > does
> > > > > not
> > > > > > > > > > > > show the issue...
> > > > > > > > > > > >
> > > > > > > > > > > > The full example also adds a split() to break the
> > > mbuf
> > > > > chain
> > > > > > > > > > > > between m1 and m2. The kind of thing that would
> be
> > > done
> > > > > for
> > > > > > > > > > > > software TCP segmentation.
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > If so, may be the right solution is to care about
> > > nb_segs
> > > > > > > > > > > when next is set to NULL on split? Any place when
> next
> > > is
> > > > > set
> > > > > > > > > > > to NULL. Just to keep the optimization in a more
> > > generic
> > > > > place.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > The problem with that approach is that there are
> already
> > > > > several
> > > > > > > > > > existing split() or trim() implementations in
> different
> > > DPDK-
> > > > > based
> > > > > > > > > > applications. For instance, we have some in
> 6WINDGate. If
> > > we
> > > > > force
> > > > > > > > > > applications to set nb_seg to 1 when resetting next,
> it
> > > has
> > > > > to be
> > > > > > > > > > documented because it is not straightforward.
> > > > > > > > >
> > > > > > > > > I think it is better to go that way.
> > > > > > > > > From my perspective it seems natural to reset nb_seg at
> > > same
> > > > > time
> > > > > > > > > we reset next, otherwise inconsistency will occur.
> > > > > > > >
> > > > > > > > While it is not explicitly stated for nb_segs, to me it
> was
> > > clear
> > > > > that
> > > > > > > > nb_segs is only valid in the first segment, like for many
> > > fields
> > > > > (port,
> > > > > > > > ol_flags, vlan, rss, ...).
> > > > > > > >
> > > > > > > > If we say that nb_segs has to be valid in any segments,
> it
> > > means
> > > > > that
> > > > > > > > chain() or split() will have to update it in all
> segments,
> > > which
> > > > > is not
> > > > > > > > efficient.
> > > > > > >
> > > > > > > Why in all?
> > > > > > > We can state that nb_segs on non-first segment should
> always
> > > equal
> > > > > 1.
> > > > > > > As I understand in that case, both split() and chain() have
> to
> > > > > update nb_segs
> > > > > > > only for head mbufs, rest ones will remain untouched.
> > > > > >
> > > > > > Well, anyway, I think it's strange to have a constraint on m-
> > > >nb_segs
> > > > > for
> > > > > > non-first segment. We don't have that kind of constraints for
> > > other
> > > > > fields.
> > > > >
> > > > > True, we don't. But this is one of the fields we consider
> critical
> > > > > for proper work of mbuf alloc/free mechanism.
> > > > >
> > > >
> > > > I am not sure that requiring m->nb_segs == 1 on non-first
> segments
> > > will provide any benefits.
> > >
> > > It would make this patch unneeded.
> > > So, for direct, non-segmented mbufs pktmbuf_free() will remain
> write-
> > > free.
> >
> > I see. Then I agree with Konstantin that alternative solutions should
> be considered.
> >
> > The benefit regarding free()'ing non-segmented mbufs - which is a
> very common operation - certainly outweighs the cost of requiring
> split()/chain() operations to set the new head mbuf's nb_segs = 1.
> >
> > Nonetheless, the bug needs to be fixed somehow.
> >
> > If we can't come up with a better solution that doesn't break the
> ABI, we are forced to accept the patch.
> >
> > Unless the techboard accepts to break the ABI in order to avoid the
> performance cost of this patch.
>
> Did someone notice a performance drop with this patch?
> On my side, I don't see any regression on an L3 use case.
I am afraid that the DPDK performance regression tests are based on TX immediately following RX, so cache misses in TX may go by unnoticed because RX warmed up the cache for TX already. And similarly for RX reusing mbufs that have been warmed up by the preceding free() at TX.
Please consider testing the performance difference with the mbuf being completely cold at TX, and going completely cold again before being reused for RX.
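To make that concrete, a cold-cache measurement could be shaped along these lines (a rough sketch only - the scratch-buffer eviction trick, the 64 MB size and the helper name are assumptions, not a validated benchmark):

#include <stddef.h>
#include <stdint.h>
#include <rte_cycles.h>
#include <rte_mbuf.h>

#define SCRATCH_SIZE (64u * 1024 * 1024) /* assumed larger than the LLC */
static uint8_t scratch[SCRATCH_SIZE];

/* Measure rte_pktmbuf_free() on a cache-cold mbuf. */
static uint64_t
time_cold_free(struct rte_mempool *mp)
{
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        volatile uint8_t sink = 0;
        uint64_t start;
        size_t i;

        if (m == NULL)
                return 0;
        /* Stream over a large buffer to push the mbuf out of all
         * cache levels before timing the free. */
        for (i = 0; i < SCRATCH_SIZE; i += 64)
                sink += scratch[i];
        (void)sink;
        start = rte_rdtsc_precise();
        rte_pktmbuf_free(m);
        return rte_rdtsc_precise() - start;
}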
>
> Let's summarize: splitting a mbuf chain and freeing it causes subsequent
> mbuf
> allocation to return a mbuf which is not correctly initialized. There
> are 2
> options to fix it:
>
> 1/ change the mbuf free function (this patch)
>
> - m->nb_segs would behave like many other fields: valid in the first
> segment, ignored in other segments
> - may impact performance (suspected)
>
> 2/ change all places where a mbuf chain is split, or trimmed
>
> - m->nb_segs would have a specific behavior: count the number of
> segments in the first mbuf, should be 1 in the last segment,
> ignored in other ones.
> - no code change in mbuf library, so no performance impact
> - need to patch all places where we do a mbuf split or trim. From
> afar,
> I see at least mbuf_cut_seg_ofs() in DPDK. Some external
> applications
> may have to be patched (for instance, I already found 3 places in
> 6WIND code base without a deep search).
>
> In my opinion, 1/ is better, unless we notice a significant
> performance drop,
> because the (implicit) behavior is unchanged.
>
> Whatever the solution, some documentation has to be added.
>
> Olivier
>
Unfortunately, I don't think that anything but the first option will go into 20.11 and stable releases of older versions, so I stand by my acknowledgment of the patch.
Which reminds me: Consider reporting the bug in Bugzilla.
>
> >
> > >
> > > >
> > > > E.g. the second segment of a three-segment chain will still have
> m-
> > > >next != NULL, so it cannot be used as a gate to prevent accessing
> m-
> > > > >next.
> > > >
> > > > I might have overlooked something, though.
> > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > Saying that nb_segs has to be valid for the first and
> last
> > > > > segment seems
> > > > > > > > really odd to me. What is the logic behind that except
> > > keeping
> > > > > this test
> > > > > > > > as is?
> > > > > > > >
> > > > > > > > In any case, it has to be better documented.
> > > > > > > >
> > > > > > > >
> > > > > > > > Olivier
> > > > > > > >
> > > > > > > >
> > > > > > > > > > I think the approach from
> > > > > > > > > > this patch is safer.
> > > > > > > > >
> > > > > > > > > It might be easier from perspective that changes in
> less
> > > places
> > > > > are required,
> > > > > > > > > Though I think that this patch will introduce some
> > > performance
> > > > > drop.
> > > > > > > > > As now each mbuf_prefree_seg() will cause update of 2
> cache
> > > > > lines unconditionally.
> > > > > > > > >
> > > > > > > > > > By the way, for 21.11, if we are able to do some
> > > > > optimizations and have
> > > > > > > > > > both pool (index?) and next in the first cache line,
> we
> > > may
> > > > > reconsider
> > > > > > > > > > the fact that next and nb_segs are already set for
> new
> > > > > allocated mbufs,
> > > > > > > > > > because it is not straightforward either.
> > > > > > > > >
> > > > > > > > > My suggestion - let's put future optimization
> discussion
> > > aside
> > > > > for now,
> > > > > > > > > and concentrate on that particular patch.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > > After this operation, we have 2 mbuf chain:
> > > > > > > > > > > > - m0 with 2 segments, the last one has next=NULL
> but
> > > > > nb_seg=2
> > > > > > > > > > > > - new_m with 1 segment
> > > > > > > > > > > >
> > > > > > > > > > > > Freeing m0 will not restore nb_seg=1 in the
> second
> > > > > segment.
> > > > > > > > > > > >
> > > > > > > > > > > >> Assumes that it is ok to have an mbuf with
> > > > > > > > > > > >> nb_seg > 1 and next == NULL.
> > > > > > > > > > > >> Which seems wrong to me.
> > > > > > > > > > > >
> > > > > > > > > > > > I don't think it is wrong: nb_seg is just ignored
> > > when
> > > > > not in the first
> > > > > > > > > > > > segment, and there is nothing saying it should be
> set
> > > to
> > > > > 1. Typically,
> > > > > > > > > > > > rte_pktmbuf_chain() does not change it, and I
> guess
> > > it's
> > > > > the same for
> > > > > > > > > > > > many similar functions in applications.
> > > > > > > > > > > >
> > > > > > > > > > > > Olivier
> > > > > > > > > > > >
> > > > > > > > > > > >>
> > > > > > > > > > > >>
> > > > > > > > > > > >>> This is expected that mbufs stored in pool have
> > > their
> > > > > > > > > > > >>> nb_seg field set to 1.
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> Fixes: 8f094a9ac5d7 ("mbuf: set mbuf fields
> while
> > > in
> > > > > pool")
> > > > > > > > > > > >>> Cc: stable@dpdk.org
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> Signed-off-by: Olivier Matz
> > > <olivier.matz@6wind.com>
> > > > > > > > > > > >>> ---
> > > > > > > > > > > >>> lib/librte_mbuf/rte_mbuf.c | 6 ++----
> > > > > > > > > > > >>> lib/librte_mbuf/rte_mbuf.h | 12 ++++--------
> > > > > > > > > > > >>> 2 files changed, 6 insertions(+), 12
> deletions(-)
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> diff --git a/lib/librte_mbuf/rte_mbuf.c
> > > > > b/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > > > >>> index 8a456e5e64..e632071c23 100644
> > > > > > > > > > > >>> --- a/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > > > >>> +++ b/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > > > >>> @@ -129,10 +129,8 @@
> > > > > rte_pktmbuf_free_pinned_extmem(void *addr, void *opaque)
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> rte_mbuf_ext_refcnt_set(m->shinfo, 1);
> > > > > > > > > > > >>> m->ol_flags = EXT_ATTACHED_MBUF;
> > > > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > > > >>> - }
> > > > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > > > >>> rte_mbuf_raw_free(m);
> > > > > > > > > > > >>> }
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> diff --git a/lib/librte_mbuf/rte_mbuf.h
> > > > > b/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > > > >>> index a1414ed7cd..ef5800c8ef 100644
> > > > > > > > > > > >>> --- a/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > > > >>> +++ b/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > > > >>> @@ -1329,10 +1329,8 @@
> > > rte_pktmbuf_prefree_seg(struct
> > > > > rte_mbuf *m)
> > > > > > > > > > > >>> return NULL;
> > > > > > > > > > > >>> }
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > > > >>> - }
> > > > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> return m;
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> @@ -1346,10 +1344,8 @@
> > > rte_pktmbuf_prefree_seg(struct
> > > > > rte_mbuf *m)
> > > > > > > > > > > >>> return NULL;
> > > > > > > > > > > >>> }
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > > > >>> - }
> > > > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > > > >>> rte_mbuf_refcnt_set(m, 1);
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> return m;
> > > > > > > > > > > >>> --
> > > > > > > > > > > >>> 2.25.1
> > > > > > > > > > > >>
> > > > > > > > > > >
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
2020-11-06 7:52 4% ` Morten Brørup
@ 2020-11-06 8:20 0% ` Olivier Matz
2020-11-06 8:50 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2020-11-06 8:20 UTC (permalink / raw)
To: Morten Brørup; +Cc: Ananyev, Konstantin, Andrew Rybchenko, dev
On Fri, Nov 06, 2020 at 08:52:58AM +0100, Morten Brørup wrote:
> > From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
> > Sent: Friday, November 6, 2020 12:55 AM
> >
> > > > > > > > > > >> Hi Olivier,
> > > > > > > > > > >>
> > > > > > > > > > >>> m->nb_seg must be reset on mbuf free whatever the
> > value
> > > > of m->next,
> > > > > > > > > > >>> because it can happen that m->nb_seg is != 1. For
> > > > instance in this
> > > > > > > > > > >>> case:
> > > > > > > > > > >>>
> > > > > > > > > > >>> m1 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > >>> rte_pktmbuf_append(m1, 500);
> > > > > > > > > > >>> m2 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > >>> rte_pktmbuf_append(m2, 500);
> > > > > > > > > > >>> rte_pktmbuf_chain(m1, m2);
> > > > > > > > > > >>> m0 = rte_pktmbuf_alloc(mp);
> > > > > > > > > > >>> rte_pktmbuf_append(m0, 500);
> > > > > > > > > > >>> rte_pktmbuf_chain(m0, m1);
> > > > > > > > > > >>>
> > > > > > > > > > >>> As rte_pktmbuf_chain() does not reset nb_seg in the
> > > > initial m1
> > > > > > > > > > >>> segment (this is not required), after this code the
> > > > mbuf chain
> > > > > > > > > > >>> have 3 segments:
> > > > > > > > > > >>> - m0: next=m1, nb_seg=3
> > > > > > > > > > >>> - m1: next=m2, nb_seg=2
> > > > > > > > > > >>> - m2: next=NULL, nb_seg=1
> > > > > > > > > > >>>
> > > > > > > > > > >>> Freeing this mbuf chain will not restore nb_seg=1
> > in
> > > > the second
> > > > > > > > > > >>> segment.
> > > > > > > > > > >>
> > > > > > > > > > >> Hmm, not sure why is that?
> > > > > > > > > > >> You are talking about freeing m1, right?
> > > > > > > > > > >> rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > > > > > > >> {
> > > > > > > > > > >> ...
> > > > > > > > > > >> if (m->next != NULL) {
> > > > > > > > > > >> m->next = NULL;
> > > > > > > > > > >> m->nb_segs = 1;
> > > > > > > > > > >> }
> > > > > > > > > > >>
> > > > > > > > > > >> m1->next != NULL, so it will enter the if() block,
> > > > > > > > > > >> and will reset both next and nb_segs.
> > > > > > > > > > >> What I am missing here?
> > > > > > > > > > >> Thinking in more generic way, that change:
> > > > > > > > > > >> - if (m->next != NULL) {
> > > > > > > > > > >> - m->next = NULL;
> > > > > > > > > > >> - m->nb_segs = 1;
> > > > > > > > > > >> - }
> > > > > > > > > > >> + m->next = NULL;
> > > > > > > > > > >> + m->nb_segs = 1;
> > > > > > > > > > >
> > > > > > > > > > > Ah, sorry. I oversimplified the example and now it
> > does
> > > > not
> > > > > > > > > > > show the issue...
> > > > > > > > > > >
> > > > > > > > > > > The full example also adds a split() to break the
> > mbuf
> > > > chain
> > > > > > > > > > > between m1 and m2. The kind of thing that would be
> > done
> > > > for
> > > > > > > > > > > software TCP segmentation.
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > If so, may be the right solution is to care about
> > nb_segs
> > > > > > > > > > when next is set to NULL on split? Any place when next
> > is
> > > > set
> > > > > > > > > > to NULL. Just to keep the optimization in a more
> > generic
> > > > place.
> > > > > > > >
> > > > > > > >
> > > > > > > > > The problem with that approach is that there are already
> > > > several
> > > > > > > > > existing split() or trim() implementations in different
> > DPDK-
> > > > based
> > > > > > > > > applications. For instance, we have some in 6WINDGate. If
> > we
> > > > force
> > > > > > > > > applications to set nb_seg to 1 when resetting next, it
> > has
> > > > to be
> > > > > > > > > documented because it is not straightforward.
> > > > > > > >
> > > > > > > > I think it is better to go that way.
> > > > > > > > From my perspective it seems natural to reset nb_seg at
> > same
> > > > time
> > > > > > > > we reset next, otherwise inconsistency will occur.
> > > > > > >
> > > > > > > While it is not explicitly stated for nb_segs, to me it was
> > clear
> > > > that
> > > > > > > nb_segs is only valid in the first segment, like for many
> > fields
> > > > (port,
> > > > > > > ol_flags, vlan, rss, ...).
> > > > > > >
> > > > > > > If we say that nb_segs has to be valid in any segments, it
> > means
> > > > that
> > > > > > > chain() or split() will have to update it in all segments,
> > which
> > > > is not
> > > > > > > efficient.
> > > > > >
> > > > > > Why in all?
> > > > > > We can state that nb_segs on non-first segment should always
> > equal
> > > > 1.
> > > > > > As I understand in that case, both split() and chain() have to
> > > > update nb_segs
> > > > > > only for head mbufs, rest ones will remain untouched.
> > > > >
> > > > > Well, anyway, I think it's strange to have a constraint on m-
> > >nb_segs
> > > > for
> > > > > non-first segment. We don't have that kind of constraints for
> > other
> > > > fields.
> > > >
> > > > True, we don't. But this is one of the fields we consider critical
> > > > for proper work of mbuf alloc/free mechanism.
> > > >
> > >
> > > I am not sure that requiring m->nb_segs == 1 on non-first segments
> > will provide any benefits.
> >
> > It would make this patch unneeded.
> > So, for direct, non-segmented mbufs pktmbuf_free() will remain write-
> > free.
>
> I see. Then I agree with Konstantin that alternative solutions should be considered.
>
> The benefit regarding free()'ing non-segmented mbufs - which is a very common operation - certainly outweighs the cost of requiring split()/chain() operations to set the new head mbuf's nb_segs = 1.
>
> Nonetheless, the bug needs to be fixed somehow.
>
> If we can't come up with a better solution that doesn't break the ABI, we are forced to accept the patch.
>
> Unless the techboard accepts to break the ABI in order to avoid the performance cost of this patch.
Did someone notice a performance drop with this patch?
On my side, I don't see any regression on an L3 use case.
Let's summarize: splitting a mbuf chain and freeing it causes subsequent mbuf
allocation to return a mbuf which is not correctly initialized. There are 2
options to fix it (a minimal reproduction sketch follows the two options):
1/ change the mbuf free function (this patch)
- m->nb_segs would behave like many other fields: valid in the first
segment, ignored in other segments
- may impact performance (suspected)
2/ change all places where a mbuf chain is split, or trimmed
- m->nb_segs would have a specific behavior: count the number of
segments in the first mbuf, should be 1 in the last segment,
ignored in other ones.
- no code change in mbuf library, so no performance impact
- need to patch all places where we do a mbuf split or trim. From afar,
I see at least mbuf_cut_seg_ofs() in DPDK. Some external applications
may have to be patched (for instance, I already found 3 places in
6WIND code base without a deep search).
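For reference, a minimal reproduction of the scenario could look like this
(a sketch with error handling omitted; mp is assumed to be an initialized
mbuf mempool, and the split is done by hand since the mbuf library has no
generic split helper):

struct rte_mbuf *m0, *m1, *m2;

m1 = rte_pktmbuf_alloc(mp);
rte_pktmbuf_append(m1, 500);
m2 = rte_pktmbuf_alloc(mp);
rte_pktmbuf_append(m2, 500);
rte_pktmbuf_chain(m1, m2);  /* m1: nb_segs=2 */
m0 = rte_pktmbuf_alloc(mp);
rte_pktmbuf_append(m0, 500);
rte_pktmbuf_chain(m0, m1);  /* m0: nb_segs=3, m1 keeps nb_segs=2 */

/* Hand-made split between m1 and m2, bookkeeping simplified.
 * Option 2/ above would additionally require m1->nb_segs = 1 here. */
m1->next = NULL;                /* m1 still carries nb_segs == 2 */
m0->nb_segs = 2;
m0->pkt_len -= m2->data_len;
m2->pkt_len = m2->data_len;     /* m2 is now a 1-segment packet */

rte_pktmbuf_free(m0);   /* with the old "if (m->next != NULL)" test, the
                         * reset is skipped for m1, which goes back to
                         * the pool with nb_segs == 2 */
rte_pktmbuf_free(m2);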
In my opinion, 1/ is better, unless we notice a significant performance drop,
because the (implicit) behavior is unchanged.
Whatever the solution, some documentation has to be added.
Olivier
>
> >
> > >
> > > E.g. the second segment of a three-segment chain will still have m-
> > >next != NULL, so it cannot be used as a gate to prevent accessing m-
> > > >next.
> > >
> > > I might have overlooked something, though.
> > >
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > Saying that nb_segs has to be valid for the first and last
> > > > segment seems
> > > > > > > really odd to me. What is the logic behind that except
> > keeping
> > > > this test
> > > > > > > as is?
> > > > > > >
> > > > > > > In any case, it has to be better documented.
> > > > > > >
> > > > > > >
> > > > > > > Olivier
> > > > > > >
> > > > > > >
> > > > > > > > > I think the approach from
> > > > > > > > > this patch is safer.
> > > > > > > >
> > > > > > > > It might be easier from perspective that changes in less
> > places
> > > > are required,
> > > > > > > > Though I think that this patch will introduce some
> > performance
> > > > drop.
> > > > > > > > As now each mbuf_prefree_seg() will cause update of 2 cache
> > > > lines unconditionally.
> > > > > > > >
> > > > > > > > > By the way, for 21.11, if we are able to do some
> > > > optimizations and have
> > > > > > > > > both pool (index?) and next in the first cache line, we
> > may
> > > > reconsider
> > > > > > > > > the fact that next and nb_segs are already set for new
> > > > allocated mbufs,
> > > > > > > > > because it is not straightforward either.
> > > > > > > >
> > > > > > > > My suggestion - let's put future optimization discussion
> > aside
> > > > for now,
> > > > > > > > and concentrate on that particular patch.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > > > After this operation, we have 2 mbuf chain:
> > > > > > > > > > > - m0 with 2 segments, the last one has next=NULL but
> > > > nb_seg=2
> > > > > > > > > > > - new_m with 1 segment
> > > > > > > > > > >
> > > > > > > > > > > Freeing m0 will not restore nb_seg=1 in the second
> > > > segment.
> > > > > > > > > > >
> > > > > > > > > > >> Assumes that it is ok to have an mbuf with
> > > > > > > > > > >> nb_seg > 1 and next == NULL.
> > > > > > > > > > >> Which seems wrong to me.
> > > > > > > > > > >
> > > > > > > > > > > I don't think it is wrong: nb_seg is just ignored
> > when
> > > > not in the first
> > > > > > > > > > > segment, and there is nothing saying it should be set
> > to
> > > > 1. Typically,
> > > > > > > > > > > rte_pktmbuf_chain() does not change it, and I guess
> > it's
> > > > the same for
> > > > > > > > > > > many similar functions in applications.
> > > > > > > > > > >
> > > > > > > > > > > Olivier
> > > > > > > > > > >
> > > > > > > > > > >>
> > > > > > > > > > >>
> > > > > > > > > > >>> This is expected that mbufs stored in pool have
> > their
> > > > > > > > > > >>> nb_seg field set to 1.
> > > > > > > > > > >>>
> > > > > > > > > > >>> Fixes: 8f094a9ac5d7 ("mbuf: set mbuf fields while
> > in
> > > > pool")
> > > > > > > > > > >>> Cc: stable@dpdk.org
> > > > > > > > > > >>>
> > > > > > > > > > >>> Signed-off-by: Olivier Matz
> > <olivier.matz@6wind.com>
> > > > > > > > > > >>> ---
> > > > > > > > > > >>> lib/librte_mbuf/rte_mbuf.c | 6 ++----
> > > > > > > > > > >>> lib/librte_mbuf/rte_mbuf.h | 12 ++++--------
> > > > > > > > > > >>> 2 files changed, 6 insertions(+), 12 deletions(-)
> > > > > > > > > > >>>
> > > > > > > > > > >>> diff --git a/lib/librte_mbuf/rte_mbuf.c
> > > > b/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > > >>> index 8a456e5e64..e632071c23 100644
> > > > > > > > > > >>> --- a/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > > >>> +++ b/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > > >>> @@ -129,10 +129,8 @@
> > > > rte_pktmbuf_free_pinned_extmem(void *addr, void *opaque)
> > > > > > > > > > >>>
> > > > > > > > > > >>> rte_mbuf_ext_refcnt_set(m->shinfo, 1);
> > > > > > > > > > >>> m->ol_flags = EXT_ATTACHED_MBUF;
> > > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > > >>> - }
> > > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > > >>> rte_mbuf_raw_free(m);
> > > > > > > > > > >>> }
> > > > > > > > > > >>>
> > > > > > > > > > >>> diff --git a/lib/librte_mbuf/rte_mbuf.h
> > > > b/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > > >>> index a1414ed7cd..ef5800c8ef 100644
> > > > > > > > > > >>> --- a/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > > >>> +++ b/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > > >>> @@ -1329,10 +1329,8 @@
> > rte_pktmbuf_prefree_seg(struct
> > > > rte_mbuf *m)
> > > > > > > > > > >>> return NULL;
> > > > > > > > > > >>> }
> > > > > > > > > > >>>
> > > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > > >>> - }
> > > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > > >>>
> > > > > > > > > > >>> return m;
> > > > > > > > > > >>>
> > > > > > > > > > >>> @@ -1346,10 +1344,8 @@
> > rte_pktmbuf_prefree_seg(struct
> > > > rte_mbuf *m)
> > > > > > > > > > >>> return NULL;
> > > > > > > > > > >>> }
> > > > > > > > > > >>>
> > > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > > >>> - }
> > > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > > >>> rte_mbuf_refcnt_set(m, 1);
> > > > > > > > > > >>>
> > > > > > > > > > >>> return m;
> > > > > > > > > > >>> --
> > > > > > > > > > >>> 2.25.1
> > > > > > > > > > >>
> > > > > > > > > >
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free
@ 2020-11-06 7:52 4% ` Morten Brørup
2020-11-06 8:20 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-06 7:52 UTC (permalink / raw)
To: Ananyev, Konstantin, Olivier Matz; +Cc: Andrew Rybchenko, dev
> From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
> Sent: Friday, November 6, 2020 12:55 AM
>
> > > > > > > > > >> Hi Olivier,
> > > > > > > > > >>
> > > > > > > > > >>> m->nb_seg must be reset on mbuf free whatever the
> value
> > > of m->next,
> > > > > > > > > >>> because it can happen that m->nb_seg is != 1. For
> > > instance in this
> > > > > > > > > >>> case:
> > > > > > > > > >>>
> > > > > > > > > >>> m1 = rte_pktmbuf_alloc(mp);
> > > > > > > > > >>> rte_pktmbuf_append(m1, 500);
> > > > > > > > > >>> m2 = rte_pktmbuf_alloc(mp);
> > > > > > > > > >>> rte_pktmbuf_append(m2, 500);
> > > > > > > > > >>> rte_pktmbuf_chain(m1, m2);
> > > > > > > > > >>> m0 = rte_pktmbuf_alloc(mp);
> > > > > > > > > >>> rte_pktmbuf_append(m0, 500);
> > > > > > > > > >>> rte_pktmbuf_chain(m0, m1);
> > > > > > > > > >>>
> > > > > > > > > >>> As rte_pktmbuf_chain() does not reset nb_seg in the
> > > initial m1
> > > > > > > > > >>> segment (this is not required), after this code the
> > > mbuf chain
> > > > > > > > > >>> have 3 segments:
> > > > > > > > > >>> - m0: next=m1, nb_seg=3
> > > > > > > > > >>> - m1: next=m2, nb_seg=2
> > > > > > > > > >>> - m2: next=NULL, nb_seg=1
> > > > > > > > > >>>
> > > > > > > > > >>> Freeing this mbuf chain will not restore nb_seg=1
> in
> > > the second
> > > > > > > > > >>> segment.
> > > > > > > > > >>
> > > > > > > > > >> Hmm, not sure why is that?
> > > > > > > > > >> You are talking about freeing m1, right?
> > > > > > > > > >> rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> > > > > > > > > >> {
> > > > > > > > > >> ...
> > > > > > > > > >> if (m->next != NULL) {
> > > > > > > > > >> m->next = NULL;
> > > > > > > > > >> m->nb_segs = 1;
> > > > > > > > > >> }
> > > > > > > > > >>
> > > > > > > > > >> m1->next != NULL, so it will enter the if() block,
> > > > > > > > > >> and will reset both next and nb_segs.
> > > > > > > > > >> What I am missing here?
> > > > > > > > > >> Thinking in more generic way, that change:
> > > > > > > > > >> - if (m->next != NULL) {
> > > > > > > > > >> - m->next = NULL;
> > > > > > > > > >> - m->nb_segs = 1;
> > > > > > > > > >> - }
> > > > > > > > > >> + m->next = NULL;
> > > > > > > > > >> + m->nb_segs = 1;
> > > > > > > > > >
> > > > > > > > > > Ah, sorry. I oversimplified the example and now it
> does
> > > not
> > > > > > > > > > show the issue...
> > > > > > > > > >
> > > > > > > > > > The full example also adds a split() to break the
> mbuf
> > > chain
> > > > > > > > > > between m1 and m2. The kind of thing that would be
> done
> > > for
> > > > > > > > > > software TCP segmentation.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > If so, may be the right solution is to care about
> nb_segs
> > > > > > > > > when next is set to NULL on split? Any place when next
> is
> > > set
> > > > > > > > > to NULL. Just to keep the optimization in a more
> generic
> > > place.
> > > > > > >
> > > > > > >
> > > > > > > > The problem with that approach is that there are already
> > > several
> > > > > > > > existing split() or trim() implementations in different
> DPDK-
> > > based
> > > > > > > > applications. For instance, we have some in 6WINDGate. If
> we
> > > force
> > > > > > > > applications to set nb_seg to 1 when resetting next, it
> has
> > > to be
> > > > > > > > documented because it is not straightforward.
> > > > > > >
> > > > > > > I think it is better to go that way.
> > > > > > > From my perspective it seems natural to reset nb_seg at
> same
> > > time
> > > > > > > we reset next, otherwise inconsistency will occur.
> > > > > >
> > > > > > While it is not explicitly stated for nb_segs, to me it was
> clear
> > > that
> > > > > > nb_segs is only valid in the first segment, like for many
> fields
> > > (port,
> > > > > > ol_flags, vlan, rss, ...).
> > > > > >
> > > > > > If we say that nb_segs has to be valid in any segments, it
> means
> > > that
> > > > > > chain() or split() will have to update it in all segments,
> which
> > > is not
> > > > > > efficient.
> > > > >
> > > > > Why in all?
> > > > > We can state that nb_segs on non-first segment should always
> equal
> > > 1.
> > > > > As I understand in that case, both split() and chain() have to
> > > update nb_segs
> > > > > only for head mbufs, rest ones will remain untouched.
> > > >
> > > > Well, anyway, I think it's strange to have a constraint on m-
> >nb_segs
> > > for
> > > > non-first segment. We don't have that kind of constraints for
> other
> > > fields.
> > >
> > > True, we don't. But this is one of the fields we consider critical
> > > for proper work of mbuf alloc/free mechanism.
> > >
> >
> > I am not sure that requiring m->nb_segs == 1 on non-first segments
> will provide any benefits.
>
> It would make this patch unneeded.
> So, for direct, non-segmented mbufs pktmbuf_free() will remain write-
> free.
I see. Then I agree with Konstantin that alternative solutions should be considered.
The benefit regarding free()'ing non-segmented mbufs - which is a very common operation - certainly outweighs the cost of requiring split()/chain() operations to set the new head mbuf's nb_segs = 1.
Nonetheless, the bug needs to be fixed somehow.
If we can't come up with a better solution that doesn't break the ABI, we are forced to accept the patch.
Unless the techboard accepts to break the ABI in order to avoid the performance cost of this patch.
>
> >
> > E.g. the second segment of a three-segment chain will still have m-
> >next != NULL, so it cannot be used as a gate to prevent accessing m-
> > >next.
> >
> > I might have overlooked something, though.
> >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > Saying that nb_segs has to be valid for the first and last
> > > segment seems
> > > > > > really odd to me. What is the logic behind that except
> keeping
> > > this test
> > > > > > as is?
> > > > > >
> > > > > > In any case, it has to be better documented.
> > > > > >
> > > > > >
> > > > > > Olivier
> > > > > >
> > > > > >
> > > > > > > > I think the approach from
> > > > > > > > this patch is safer.
> > > > > > >
> > > > > > > It might be easier from perspective that changes in less
> places
> > > are required,
> > > > > > > Though I think that this patch will introduce some
> performance
> > > drop.
> > > > > > > As now each mbuf_prefree_seg() will cause update of 2 cache
> > > lines unconditionally.
> > > > > > >
> > > > > > > > By the way, for 21.11, if we are able to do some
> > > optimizations and have
> > > > > > > > both pool (index?) and next in the first cache line, we
> may
> > > reconsider
> > > > > > > > the fact that next and nb_segs are already set for new
> > > allocated mbufs,
> > > > > > > > because it is not straightforward either.
> > > > > > >
> > > > > > > My suggestion - let's put future optimization discussion
> aside
> > > for now,
> > > > > > > and concentrate on that particular patch.
> > > > > > >
> > > > > > > >
> > > > > > > > > > After this operation, we have 2 mbuf chain:
> > > > > > > > > > - m0 with 2 segments, the last one has next=NULL but
> > > nb_seg=2
> > > > > > > > > > - new_m with 1 segment
> > > > > > > > > >
> > > > > > > > > > Freeing m0 will not restore nb_seg=1 in the second
> > > segment.
> > > > > > > > > >
> > > > > > > > > >> Assumes that it is ok to have an mbuf with
> > > > > > > > > >> nb_seg > 1 and next == NULL.
> > > > > > > > > >> Which seems wrong to me.
> > > > > > > > > >
> > > > > > > > > > I don't think it is wrong: nb_seg is just ignored
> when
> > > not in the first
> > > > > > > > > > segment, and there is nothing saying it should be set
> to
> > > 1. Typically,
> > > > > > > > > > rte_pktmbuf_chain() does not change it, and I guess
> it's
> > > the same for
> > > > > > > > > > many similar functions in applications.
> > > > > > > > > >
> > > > > > > > > > Olivier
> > > > > > > > > >
> > > > > > > > > >>
> > > > > > > > > >>
> > > > > > > > > >>> This is expected that mbufs stored in pool have
> their
> > > > > > > > > >>> nb_seg field set to 1.
> > > > > > > > > >>>
> > > > > > > > > >>> Fixes: 8f094a9ac5d7 ("mbuf: set mbuf fields while
> in
> > > pool")
> > > > > > > > > >>> Cc: stable@dpdk.org
> > > > > > > > > >>>
> > > > > > > > > >>> Signed-off-by: Olivier Matz
> <olivier.matz@6wind.com>
> > > > > > > > > >>> ---
> > > > > > > > > >>> lib/librte_mbuf/rte_mbuf.c | 6 ++----
> > > > > > > > > >>> lib/librte_mbuf/rte_mbuf.h | 12 ++++--------
> > > > > > > > > >>> 2 files changed, 6 insertions(+), 12 deletions(-)
> > > > > > > > > >>>
> > > > > > > > > >>> diff --git a/lib/librte_mbuf/rte_mbuf.c
> > > b/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > >>> index 8a456e5e64..e632071c23 100644
> > > > > > > > > >>> --- a/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > >>> +++ b/lib/librte_mbuf/rte_mbuf.c
> > > > > > > > > >>> @@ -129,10 +129,8 @@
> > > rte_pktmbuf_free_pinned_extmem(void *addr, void *opaque)
> > > > > > > > > >>>
> > > > > > > > > >>> rte_mbuf_ext_refcnt_set(m->shinfo, 1);
> > > > > > > > > >>> m->ol_flags = EXT_ATTACHED_MBUF;
> > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > >>> - }
> > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > >>> rte_mbuf_raw_free(m);
> > > > > > > > > >>> }
> > > > > > > > > >>>
> > > > > > > > > >>> diff --git a/lib/librte_mbuf/rte_mbuf.h
> > > b/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > >>> index a1414ed7cd..ef5800c8ef 100644
> > > > > > > > > >>> --- a/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > >>> +++ b/lib/librte_mbuf/rte_mbuf.h
> > > > > > > > > >>> @@ -1329,10 +1329,8 @@
> rte_pktmbuf_prefree_seg(struct
> > > rte_mbuf *m)
> > > > > > > > > >>> return NULL;
> > > > > > > > > >>> }
> > > > > > > > > >>>
> > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > >>> - }
> > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > >>>
> > > > > > > > > >>> return m;
> > > > > > > > > >>>
> > > > > > > > > >>> @@ -1346,10 +1344,8 @@
> rte_pktmbuf_prefree_seg(struct
> > > rte_mbuf *m)
> > > > > > > > > >>> return NULL;
> > > > > > > > > >>> }
> > > > > > > > > >>>
> > > > > > > > > >>> - if (m->next != NULL) {
> > > > > > > > > >>> - m->next = NULL;
> > > > > > > > > >>> - m->nb_segs = 1;
> > > > > > > > > >>> - }
> > > > > > > > > >>> + m->next = NULL;
> > > > > > > > > >>> + m->nb_segs = 1;
> > > > > > > > > >>> rte_mbuf_refcnt_set(m, 1);
> > > > > > > > > >>>
> > > > > > > > > >>> return m;
> > > > > > > > > >>> --
> > > > > > > > > >>> 2.25.1
> > > > > > > > > >>
> > > > > > > > >
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v9 6/6] doc: update release notes now for block allow changes
@ 2020-11-05 22:36 4% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-11-05 22:36 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Remove the deprecation notice and add a description to the release notes.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/rel_notes/deprecation.rst | 23 -----------------------
doc/guides/rel_notes/release_20_11.rst | 11 +++++++++++
2 files changed, 11 insertions(+), 23 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index f3258eb3f725..d459a25eabe3 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -28,29 +28,6 @@ Deprecation Notices
* kvargs: The function ``rte_kvargs_process`` will get a new parameter
for returning key match count. It will ease handling of no-match case.
-* eal: The terms blacklist and whitelist to describe devices used
- by DPDK will be replaced in the 20.11 relase.
- This will apply to command line arguments as well as macros.
-
- The macro ``RTE_DEV_BLACKLISTED`` will be replaced with ``RTE_DEV_EXCLUDED``
- and ``RTE_DEV_WHITELISTED`` will be replaced with ``RTE_DEV_INCLUDED``
- ``RTE_BUS_SCAN_BLACKLIST`` and ``RTE_BUS_SCAN_WHITELIST`` will be
- replaced with ``RTE_BUS_SCAN_EXCLUDED`` and ``RTE_BUS_SCAN_INCLUDED``
- respectively. Likewise ``RTE_DEVTYPE_BLACKLISTED_PCI`` and
- ``RTE_DEVTYPE_WHITELISTED_PCI`` will be replaced with
- ``RTE_DEVTYPE_EXCLUDED`` and ``RTE_DEVTYPE_INCLUDED``.
-
- The old macros will be marked as deprecated in 20.11 and any
- usage will cause a compile warning. They will be removed in
- a future release.
-
- The command line arguments to ``rte_eal_init`` will change from
- ``-b, --pci-blacklist`` to ``-x, --exclude`` and
- ``-w, --pci-whitelist`` to ``-i, --include``.
- The old command line arguments will continue to be accepted in 20.11
- but will cause a runtime warning message. The old arguments will
- be removed in a future release.
-
* eal: The function ``rte_eal_remote_launch`` will return new error codes
after read or write error on the pipe, instead of calling ``rte_panic``.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6bbd6ee93922..df955e2214c4 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -644,6 +644,17 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The selection of devices on the EAL command line has been
+ changed from ``--pci-blacklist`` and ``--pci-whitelist``
+ to ``--block`` and ``--allow``. The short form option for
+ using a device is now ``-a`` instead of ``-w``.
+
+ The internal macros for ``RTE_DEV_BLACKLISTED`` and ``RTE_DEV_WHITELISTED``
+ have been replaced with ``RTE_DEV_BLOCKED`` and ``RTE_DEV_ALLOWED``.
+
+ There are compatibility macros and command line mappings to accept
+ the old values but applications and scripts are strongly encouraged
+ to migrate to the new names.
ABI Changes
-----------
--
2.27.0
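For illustration, the same device selection before and after the rename
(the PCI addresses are placeholders):

    # 20.08 and earlier style (still accepted in 20.11, with a warning):
    dpdk-testpmd -w 0000:03:00.0 -w 0000:03:00.1 -- -i
    # 20.11 onwards:
    dpdk-testpmd -a 0000:03:00.0 -a 0000:03:00.1 -- -i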
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-05 9:35 3% ` Morten Brørup
@ 2020-11-05 10:29 0% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-11-05 10:29 UTC (permalink / raw)
To: Morten Brørup
Cc: Ananyev, Konstantin, Olivier Matz, Slava Ovsiienko,
NBU-Contact-Thomas Monjalon, dev, techboard, Ajit Khaparde,
Andrew Rybchenko, Yigit, Ferruh, david.marchand, jerinj,
honnappa.nagarahalli, maxime.coquelin, stephen, hemant.agrawal,
Matan Azrad, Shahaf Shuler
On Thu, Nov 05, 2020 at 10:35:45AM +0100, Morten Brørup wrote:
> There is a simple alternative for applications with a single mbuf pool to avoid accessing m->pool.
>
> We could add a global variable pointing to the single mbuf pool.
>
> It would be NULL by default.
>
> It would be set by rte_pktmbuf_pool_create() on first invocation, and reset back to NULL on following invocations. (There would need to be a counter too, to prevent setting it again on the third invocation.)
>
> All functions accessing m->pool would use the global mbuf pool pointer if set, and otherwise use the m->pool pointer, like this:
>
> - rte_mempool_put(m->pool, m);
> + rte_mempool_put(global_mbuf_pool ? global_mbuf_pool : m->pool, m);
>
> This optimization can be implemented without ABI breakage:
>
> Since m->pool is initialized as always, functions that are not modified to use the global_mbuf_pool will simply continue using m->pool, not knowing that a global mbuf pool exists.
>
Very interesting idea. Definitely worth considering. A TX function would
only have to check the global variable once at the start of cleanup too,
and if set, it can use bulk frees without any additional work.
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-05 0:25 0% ` Ananyev, Konstantin
@ 2020-11-05 9:35 3% ` Morten Brørup
2020-11-05 10:29 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-05 9:35 UTC (permalink / raw)
To: Ananyev, Konstantin, Olivier Matz
Cc: Slava Ovsiienko, NBU-Contact-Thomas Monjalon, dev, techboard,
Ajit Khaparde, Andrew Rybchenko, Yigit, Ferruh, david.marchand,
Richardson, Bruce, jerinj, honnappa.nagarahalli, maxime.coquelin,
stephen, hemant.agrawal, Matan Azrad, Shahaf Shuler
There is a simple alternative for applications with a single mbuf pool to avoid accessing m->pool.
We could add a global variable pointing to the single mbuf pool.
It would be NULL by default.
It would be set by rte_pktmbuf_pool_create() on first invocation, and reset back to NULL on following invocations. (There would need to be a counter too, to prevent setting it again on the third invocation.)
All functions accessing m->pool would use the global mbuf pool pointer if set, and otherwise use the m->pool pointer, like this:
- rte_mempool_put(m->pool, m);
+ rte_mempool_put(global_mbuf_pool ? global_mbuf_pool : m->pool, m);
This optimization can be implemented without ABI breakage:
Since m->pool is initialized as always, functions that are not modified to use the global_mbuf_pool will simply continue using m->pool, not knowing that a global mbuf pool exists.
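A minimal sketch of the bookkeeping side of this idea (the variable and
helper names are illustrative; in the proposal the hook would live in
rte_pktmbuf_pool_create()):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Illustrative names; NULL by default, i.e. the fast path is off. */
static struct rte_mempool *global_mbuf_pool;
static unsigned int pktmbuf_pool_count;

/* Hypothetical hook called from rte_pktmbuf_pool_create(): */
static void
pktmbuf_pool_created(struct rte_mempool *mp)
{
	if (++pktmbuf_pool_count == 1)
		global_mbuf_pool = mp;   /* exactly one pool: fast path on */
	else
		global_mbuf_pool = NULL; /* two or more pools: fast path off */
}

/* Free paths can then often skip the second-cache-line read of m->pool: */
static inline void
pktmbuf_put(struct rte_mbuf *m)
{
	rte_mempool_put(global_mbuf_pool ? global_mbuf_pool : m->pool, m);
}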
Med venlig hilsen / kind regards
- Morten Brørup
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-04 15:00 0% ` Olivier Matz
@ 2020-11-05 0:25 0% ` Ananyev, Konstantin
2020-11-05 9:35 3% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2020-11-05 0:25 UTC (permalink / raw)
To: Olivier Matz, Morten Brørup
Cc: Slava Ovsiienko, NBU-Contact-Thomas Monjalon, dev, techboard,
Ajit Khaparde, Andrew Rybchenko, Yigit, Ferruh, david.marchand,
Richardson, Bruce, jerinj, honnappa.nagarahalli, maxime.coquelin,
stephen, hemant.agrawal, Matan Azrad, Shahaf Shuler
>
> Hi,
>
> On Tue, Nov 03, 2020 at 04:03:46PM +0100, Morten Brørup wrote:
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Slava Ovsiienko
> > > Sent: Tuesday, November 3, 2020 3:03 PM
> > >
> > > Hi, Morten
> > >
> > > > From: Morten Brørup <mb@smartsharesystems.com>
> > > > Sent: Tuesday, November 3, 2020 14:10
> > > >
> > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > > Sent: Monday, November 2, 2020 4:58 PM
> > > > >
> > > > > +Cc techboard
> > > > >
> > > > > We need benchmark numbers in order to take a decision.
> > > > > Please all, prepare some arguments and numbers so we can discuss
> > > the
> > > > > mbuf layout in the next techboard meeting.
>
> I did some quick tests, and it appears to me that just moving the pool
> pointer to the first cache line has not a significant impact.
Hmm, as I remember Thomas mentioned about 5%+ improvement
with that change. Though I suppose a lot depends on the actual test case.
Would be good to know when it does help and when it doesn't.
>
> However, I agree with Morten that there is some room for optimization
> around m->pool: I did a hack in the ixgbe driver to assume there is only
> one mbuf pool. This simplifies a lot the freeing of mbufs in Tx, because
> we don't have to group them in bulks that shares the same pool (see
> ixgbe_tx_free_bufs()). The impact of this hack is quite good: +~5% on a
> real-life forwarding use case.
I think we already have such an optimization ability within DPDK:
#define DEV_TX_OFFLOAD_MBUF_FAST_FREE 0x00010000
/**< Device supports optimization for fast release of mbufs.
* When set application must guarantee that per-queue all mbufs comes from
* the same mempool and has refcnt = 1.
*/
Seems over-optimistic to me, but many PMDs do support it.
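For reference, a minimal sketch of how an application opts in to this
offload (assuming all mbufs transmitted on each queue come from one
mempool and have refcnt == 1):

#include <rte_ethdev.h>

/* Sketch: enable fast mbuf free on a port when the PMD supports it. */
static int
configure_fast_free(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = { 0 };
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;
	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}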
>
> It is maybe possible to store the pool in the sw ring to avoid a later
> access to m->pool. Having a pool index as suggested by Morten would also
> help to reduce used room in sw ring in this case. But this is a bit
> off-topic :)
>
>
>
> > > > I propose that the techboard considers this from two angels:
> > > >
> > > > 1. Long term goals and their relative priority. I.e. what can be
> > > achieved with
> > > > wide-ranging modifications, requiring yet another ABI break and due
> > > notices.
> > > >
> > > > 2. Short term goals, i.e. what can be achieved for this release.
> > > >
> > > >
> > > > My suggestions follow...
> > > >
> > > > 1. Regarding long term goals:
> > > >
> > > > I have argued that simple forwarding of non-segmented packets using
> > > only the
> > > > first mbuf cache line can be achieved by making three
> > > > modifications:
> > > >
> > > > a) Move m->tx_offload to the first cache line.
> > > Not all PMDs use this field on Tx. HW might support the checksum
> > > offloads
> > > directly, not requiring these fields at all.
>
> To me, a driver should use m->tx_offload, because the application
> specifies the offset where the checksum has to be done, in case the hw
> is not able to recognize the protocol.
>
> > > > b) Use an 8 bit pktmbuf mempool index in the first cache line,
> > > > instead of the 64 bit m->pool pointer in the second cache line.
> > > 256 mpool looks enough, as for me. Regarding the indirect access to the
> > > pool
> > > (via some table) - it might introduce some performance impact.
> >
> > It might, but I hope that it is negligible, so the benefits outweigh the disadvantages.
> >
> > It would have to be measured, though.
> >
> > And m->pool is only used for free()'ing (and detach()'ing) mbufs.
> >
> > > For example,
> > > mlx5 PMD strongly relies on pool field for allocating mbufs in Rx
> > > datapath.
> > > We're going to update (o-o, we found point to optimize), but for now it
> > > does.
> >
> > Without looking at the source code, I don't think the PMD is using m->pool in the RX datapath, I think it is using a pool dedicated to a
> receive queue used for RX descriptors in the PMD (i.e. driver->queue->pool).
> >
> > >
> > > > c) Do not access m->next when we know that it is NULL.
> > > > We can use m->nb_segs == 1 or some other invariant as the gate.
> > > > It can be implemented by adding an m->next accessor function:
> > > > struct rte_mbuf * rte_mbuf_next(struct rte_mbuf * m)
> > > > {
> > > > return m->nb_segs == 1 ? NULL : m->next;
> > > > }
> > >
> > > Sorry, not sure about this. IIRC, nb_segs is valid in the first
> > > segment/mbuf only.
> > > If we have the 4 segments in the pkt we see nb_seg=4 in the first one,
> > > and the nb_seg=1
> > > in the others. The next field is NULL in the last mbuf only. Am I wrong
> > > and miss something ?
> >
> > You are correct.
> >
> > This would have to be updated too. Either by increasing m->nb_seg in the following segments, or by splitting up relevant functions into
> functions for working on first segments (incl. non-segmented packets), and functions for working on following segments of segmented
> packets.
>
> Instead of maintaining a valid nb_segs, a HAS_NEXT flag would be easier
> to implement. However it means that an accessor needs to be used instead
> of any m->next access.
>
> > > > Regarding the priority of this goal, I guess that simple forwarding
> > > of non-
> > > > segmented packets is probably the path taken by the majority of
> > > packets
> > > > handled by DPDK.
> > > >
> > > > An alternative goal could be:
> > > > Do not touch the second cache line during RX.
> > > > A comment in the mbuf structure says so, but it is not true anymore.
> > > >
> > > > (I guess that regression testing didn't catch this because the tests
> > > perform TX
> > > > immediately after RX, so the cache miss just moves from the TX to the
> > > RX part
> > > > of the test application.)
> > > >
> > > >
> > > > 2. Regarding short term goals:
> > > >
> > > > The current DPDK source code looks to me like m->next is the most
> > > frequently
> > > > accessed field in the second cache line, so it makes sense moving
> > > this to the
> > > > first cache line, rather than m->pool.
> > > > Benchmarking may help here.
> > >
> > > Moreover, for the segmented packets the packet size is supposed to be
> > > large,
> > > and it imposes the relatively low packet rate, so probably optimization
> > > of
> > > moving next to the 1st cache line might be negligible at all. Just
> > > compare 148Mpps of
> > > 64B pkts and 4Mpps of 3000B pkts over 100Gbps link. Currently we are on
> > > benchmarking
> > > and did not succeed yet on difference finding. The benefit can't be
> > > expressed in mpps delta,
> > > we should measure CPU clocks, but Rx queue is almost always empty - we
> > > have an empty
> > > loops. So, if we have the boost - it is extremely hard to catch one.
> >
> > Very good point regarding the value of such an optimization, Slava!
> >
> > And when free()'ing packets, both m->next and m->pool are touched.
> >
> > So perhaps the free()/detach() functions in the mbuf library can be modified to handle first segments (and non-segmented packets) and
> following segments differently, so accessing m->next can be avoided for non-segmented packets. Then m->pool should be moved to the
> first cache line.
> >
>
> I also think that Moving m->pool without doing something else about
> m->next is probably useless. And it's too late for 20.11 to do
> additionnal changes, so I suggest to postpone the field move to 21.11,
> once we have a clearer view of possible optimizations.
>
> Olivier
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3] mbuf: minor cleanup
2020-10-20 11:55 0% ` Thomas Monjalon
@ 2020-11-04 22:17 0% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2020-11-04 22:17 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Olivier Matz
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Tuesday, October 20, 2020 1:56 PM
>
> Hi Morten,
> Any update about this patch please?
Thomas,
v4 contains all the modifications suggested by Olivier:
https://patchwork.dpdk.org/patch/82198/
Except for a deprecation notice about the old names without RTE_. I'm not sure about the formal requirements for deprecation notices. If there are no formal requirements, please feel free to add a notice when merging.
Changes regarding missing RTE_ prefix:
* Converted the MBUF_RAW_ALLOC_CHECK() macro to an
__rte_mbuf_raw_sanity_check() inline function.
Added backwards compatible macro with the original name.
* Renamed the MBUF_INVALID_PORT definition to RTE_MBUF_PORT_INVALID.
Added backwards compatible definition with the original name.
>
> 07/10/2020 11:16, Olivier Matz:
> > Hi Morten,
> >
> > Thanks for this cleanup. Please see some comments below.
> >
> > On Wed, Sep 16, 2020 at 12:40:13PM +0200, Morten Brørup wrote:
> > > The mbuf header files had some commenting style errors that affected
> the
> > > API documentation.
> > > Also, the RTE_ prefix was missing on a macro and a definition.
> > >
> > > Note: This patch does not touch the offload and attachment flags that
> are
> > > also missing the RTE_ prefix.
> > >
> > > Changes only affecting documentation:
> > > * Removed the MBUF_INVALID_PORT definition from rte_mbuf.h; it is
> > > already defined in rte_mbuf_core.h.
> > > This removal also reestablished the description of the
> > > rte_pktmbuf_reset() function.
> > > * Corrected the comment related to RTE_MBUF_MAX_NB_SEGS.
> > > * Corrected the comment related to PKT_TX_QINQ_PKT.
> > >
> > > Changes regarding missing RTE_ prefix:
> > > * Converted the MBUF_RAW_ALLOC_CHECK() macro to an
> > > __rte_mbuf_raw_sanity_check() inline function.
> > > Added backwards compatible macro with the original name.
> > > * Renamed the MBUF_INVALID_PORT definition to RTE_MBUF_PORT_INVALID.
> > > Added backwards compatible definition with the original name.
> > >
> > > v2:
> > > * Use RTE_MBUF_PORT_INVALID instead of MBUF_INVALID_PORT in rte_mbuf.c.
> > >
> > > v3:
> > > * The functions/macros used in __rte_mbuf_raw_sanity_check() require
> > > RTE_ENABLE_ASSERT or RTE_LIBRTE_MBUF_DEBUG, or they don't use the
> mbuf
> > > parameter, which generates a compiler waning. So mark the mbuf
> parameter
> > > __rte_unused if none of them are defined.
> > >
> > > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > > ---
> > > doc/guides/rel_notes/deprecation.rst | 7 ----
> > > lib/librte_mbuf/rte_mbuf.c | 4 +-
> > > lib/librte_mbuf/rte_mbuf.h | 55 +++++++++++++++++++---------
> > > lib/librte_mbuf/rte_mbuf_core.h | 9 +++--
> > > 4 files changed, 45 insertions(+), 30 deletions(-)
> > >
> > > diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> > > index 279eccb04..88d7d0761 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -294,13 +294,6 @@ Deprecation Notices
> > > - https://patches.dpdk.org/patch/71457/
> > > - https://patches.dpdk.org/patch/71456/
> > >
> > > -* rawdev: The rawdev APIs which take a device-specific structure as
> > > - parameter directly, or indirectly via a "private" pointer inside
> another
> > > - structure, will be modified to take an additional parameter of the
> > > - structure size. The affected APIs will include
> ``rte_rawdev_info_get``,
> > > - ``rte_rawdev_configure``, ``rte_rawdev_queue_conf_get`` and
> > > - ``rte_rawdev_queue_setup``.
> > > -
> > > * acl: ``RTE_ACL_CLASSIFY_NUM`` enum value will be removed.
> > > This enum value is not used inside DPDK, while it prevents to add
> new
> > > classify algorithms without causing an ABI breakage.
> >
> > I think this change is not related.
> >
> > This makes me think that a deprecation notice could be done for the
> > old names without the RTE_ prefix, to be removed in 21.11.
> >
> >
> > > diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> > > index 8a456e5e6..53a015311 100644
> > > --- a/lib/librte_mbuf/rte_mbuf.c
> > > +++ b/lib/librte_mbuf/rte_mbuf.c
> > > @@ -104,7 +104,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
> > > /* init some constant fields */
> > > m->pool = mp;
> > > m->nb_segs = 1;
> > > - m->port = MBUF_INVALID_PORT;
> > > + m->port = RTE_MBUF_PORT_INVALID;
> > > rte_mbuf_refcnt_set(m, 1);
> > > m->next = NULL;
> > > }
> > > @@ -207,7 +207,7 @@ __rte_pktmbuf_init_extmem(struct rte_mempool *mp,
> > > /* init some constant fields */
> > > m->pool = mp;
> > > m->nb_segs = 1;
> > > - m->port = MBUF_INVALID_PORT;
> > > + m->port = RTE_MBUF_PORT_INVALID;
> > > m->ol_flags = EXT_ATTACHED_MBUF;
> > > rte_mbuf_refcnt_set(m, 1);
> > > m->next = NULL;
> > > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > > index 7259575a7..406d3abb2 100644
> > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > @@ -554,12 +554,36 @@ __rte_experimental
> > > int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
> > > const char **reason);
> > >
> > > -#define MBUF_RAW_ALLOC_CHECK(m) do { \
> > > - RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1); \
> > > - RTE_ASSERT((m)->next == NULL); \
> > > - RTE_ASSERT((m)->nb_segs == 1); \
> > > - __rte_mbuf_sanity_check(m, 0); \
> > > -} while (0)
> > > +#if defined(RTE_ENABLE_ASSERT) || defined(RTE_LIBRTE_MBUF_DEBUG)
> >
> > I don't see why this #if is needed. Wouldn't it work to have only
> > one function definition with the __rte_unused attribute?
> >
> > > +/**
> > > + * Sanity checks on a reinitialized mbuf.
> > > + *
> > > + * Check the consistency of the given reinitialized mbuf.
> > > + * The function will cause a panic if corruption is detected.
> > > + *
> > > + * Check that the mbuf is properly reinitialized (refcnt=1, next=NULL,
> > > + * nb_segs=1), as done by rte_pktmbuf_prefree_seg().
> > > + *
> >
> > Maybe indicate that these checks are only done when debug is on.
> >
> > > + * @param m
> > > + * The mbuf to be checked.
> > > + */
> > > +static __rte_always_inline void
> > > +__rte_mbuf_raw_sanity_check(const struct rte_mbuf *m)
> > > +{
> > > + RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
> > > + RTE_ASSERT(m->next == NULL);
> > > + RTE_ASSERT(m->nb_segs == 1);
> > > + __rte_mbuf_sanity_check(m, 0);
> > > +}
> > > +#else
> > > +static __rte_always_inline void
> > > +__rte_mbuf_raw_sanity_check(const struct rte_mbuf *m __rte_unused)
> > > +{
> > > + /* Nothing here. */
> > > +}
> > > +#endif
> > > +/** For backwards compatibility. */
> > > +#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_sanity_check(m)
> >
> > It looks that MBUF_RAW_ALLOC_CHECK() is also used in drivers/net/sfc,
> > I think it should be updated too.
> >
> > >
> > > /**
> > > * Allocate an uninitialized mbuf from mempool *mp*.
> > > @@ -586,7 +610,7 @@ static inline struct rte_mbuf
> *rte_mbuf_raw_alloc(struct rte_mempool *mp)
> > >
> > > if (rte_mempool_get(mp, (void **)&m) < 0)
> > > return NULL;
> > > - MBUF_RAW_ALLOC_CHECK(m);
> > > + __rte_mbuf_raw_sanity_check(m);
> > > return m;
> > > }
> > >
> > > @@ -609,10 +633,7 @@ rte_mbuf_raw_free(struct rte_mbuf *m)
> > > {
> > > RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
> > > (!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
> > > - RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
> > > - RTE_ASSERT(m->next == NULL);
> > > - RTE_ASSERT(m->nb_segs == 1);
> > > - __rte_mbuf_sanity_check(m, 0);
> > > + __rte_mbuf_raw_sanity_check(m);
> > > rte_mempool_put(m->pool, m);
> > > }
> > >
> > > @@ -858,8 +879,6 @@ static inline void
> rte_pktmbuf_reset_headroom(struct rte_mbuf *m)
> > > * @param m
> > > * The packet mbuf to be reset.
> > > */
> > > -#define MBUF_INVALID_PORT UINT16_MAX
> > > -
> > > static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
> > > {
> > > m->next = NULL;
> > > @@ -868,7 +887,7 @@ static inline void rte_pktmbuf_reset(struct
> rte_mbuf *m)
> > > m->vlan_tci = 0;
> > > m->vlan_tci_outer = 0;
> > > m->nb_segs = 1;
> > > - m->port = MBUF_INVALID_PORT;
> > > + m->port = RTE_MBUF_PORT_INVALID;
> > >
> > > m->ol_flags &= EXT_ATTACHED_MBUF;
> > > m->packet_type = 0;
> > > @@ -931,22 +950,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct
> rte_mempool *pool,
> > > switch (count % 4) {
> > > case 0:
> > > while (idx != count) {
> > > - MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> > > + __rte_mbuf_raw_sanity_check(mbufs[idx]);
> > > rte_pktmbuf_reset(mbufs[idx]);
> > > idx++;
> > > /* fall-through */
> > > case 3:
> > > - MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> > > + __rte_mbuf_raw_sanity_check(mbufs[idx]);
> > > rte_pktmbuf_reset(mbufs[idx]);
> > > idx++;
> > > /* fall-through */
> > > case 2:
> > > - MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> > > + __rte_mbuf_raw_sanity_check(mbufs[idx]);
> > > rte_pktmbuf_reset(mbufs[idx]);
> > > idx++;
> > > /* fall-through */
> > > case 1:
> > > - MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> > > + __rte_mbuf_raw_sanity_check(mbufs[idx]);
> > > rte_pktmbuf_reset(mbufs[idx]);
> > > idx++;
> > > /* fall-through */
> > > diff --git a/lib/librte_mbuf/rte_mbuf_core.h
> b/lib/librte_mbuf/rte_mbuf_core.h
> > > index 8cd7137ac..4ac5609e3 100644
> > > --- a/lib/librte_mbuf/rte_mbuf_core.h
> > > +++ b/lib/librte_mbuf/rte_mbuf_core.h
> > > @@ -272,7 +272,7 @@ extern "C" {
> > > * mbuf 'vlan_tci' & 'vlan_tci_outer' must be valid when this flag is
> set.
> > > */
> > > #define PKT_TX_QINQ (1ULL << 49)
> > > -/* this old name is deprecated */
> > > +/** This old name is deprecated. */
> > > #define PKT_TX_QINQ_PKT PKT_TX_QINQ
> > >
> > > /**
> > > @@ -686,7 +686,7 @@ struct rte_mbuf_ext_shared_info {
> > > };
> > > };
> > >
> > > -/**< Maximum number of nb_segs allowed. */
> > > +/** Maximum number of nb_segs allowed. */
> > > #define RTE_MBUF_MAX_NB_SEGS UINT16_MAX
> > >
> > > /**
> > > @@ -714,7 +714,10 @@ struct rte_mbuf_ext_shared_info {
> > > #define RTE_MBUF_DIRECT(mb) \
> > > (!((mb)->ol_flags & (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF)))
> > >
> > > -#define MBUF_INVALID_PORT UINT16_MAX
> > > +/** NULL value for the uint16_t port type. */
> > > +#define RTE_MBUF_PORT_INVALID UINT16_MAX
> >
> > I don't really like talking about "NULL". What do you think instead of
> > this wording?
> >
> > /** Uninitialized or unspecified port */
> >
> > > +/** For backwards compatibility. */
> > > +#define MBUF_INVALID_PORT RTE_MBUF_PORT_INVALID
> > >
> > > /**
> > > * A macro that points to an offset into the data in the mbuf.
> >
> > Thanks,
> > Olivier
> >
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-03 15:03 0% ` Morten Brørup
@ 2020-11-04 15:00 0% ` Olivier Matz
2020-11-05 0:25 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2020-11-04 15:00 UTC (permalink / raw)
To: Morten Brørup
Cc: Slava Ovsiienko, NBU-Contact-Thomas Monjalon, dev, techboard,
Ajit Khaparde, Ananyev, Konstantin, Andrew Rybchenko, Yigit,
Ferruh, david.marchand, Richardson, Bruce, jerinj,
honnappa.nagarahalli, maxime.coquelin, stephen, hemant.agrawal,
Matan Azrad, Shahaf Shuler
Hi,
On Tue, Nov 03, 2020 at 04:03:46PM +0100, Morten Brørup wrote:
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Slava Ovsiienko
> > Sent: Tuesday, November 3, 2020 3:03 PM
> >
> > Hi, Morten
> >
> > > From: Morten Brørup <mb@smartsharesystems.com>
> > > Sent: Tuesday, November 3, 2020 14:10
> > >
> > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > Sent: Monday, November 2, 2020 4:58 PM
> > > >
> > > > +Cc techboard
> > > >
> > > > We need benchmark numbers in order to take a decision.
> > > > Please all, prepare some arguments and numbers so we can discuss
> > the
> > > > mbuf layout in the next techboard meeting.
I did some quick tests, and it appears to me that just moving the pool
pointer to the first cache line does not have a significant impact.
However, I agree with Morten that there is some room for optimization
around m->pool: I did a hack in the ixgbe driver to assume there is only
one mbuf pool. This simplifies a lot the freeing of mbufs in Tx, because
we don't have to group them in bulks that shares the same pool (see
ixgbe_tx_free_bufs()). The impact of this hack is quite good: +~5% on a
real-life forwarding use case.
It is maybe possible to store the pool in the sw ring to avoid a later
access to m->pool. Having a pool index as suggested by Morten would also
help to reduce used room in sw ring in this case. But this is a bit
off-topic :)
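In the same spirit, a simplified sketch of what a Tx cleanup can look like
once a single pool is assumed (names and the batch size are illustrative;
this is not the actual ixgbe code):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Free up to 64 transmitted mbufs assumed to share one pool, so they
 * can be returned with a single bulk operation. */
static void
tx_free_single_pool(struct rte_mbuf **txep, uint16_t n,
		    struct rte_mempool *mp)
{
	void *stash[64]; /* the caller guarantees n <= 64 in this sketch */
	uint16_t i, nb = 0;

	for (i = 0; i < n; i++) {
		struct rte_mbuf *m = rte_pktmbuf_prefree_seg(txep[i]);

		if (m != NULL)
			stash[nb++] = m;
	}
	if (nb != 0)
		rte_mempool_put_bulk(mp, stash, nb);
}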
> > > I propose that the techboard considers this from two angels:
> > >
> > > 1. Long term goals and their relative priority. I.e. what can be
> > achieved with
> > > wide-ranging modifications, requiring yet another ABI break and due
> > notices.
> > >
> > > 2. Short term goals, i.e. what can be achieved for this release.
> > >
> > >
> > > My suggestions follow...
> > >
> > > 1. Regarding long term goals:
> > >
> > > I have argued that simple forwarding of non-segmented packets using
> > only the
> > > first mbuf cache line can be achieved by making three
> > > modifications:
> > >
> > > a) Move m->tx_offload to the first cache line.
> > Not all PMDs use this field on Tx. HW might support the checksum
> > offloads
> > directly, not requiring these fields at all.
To me, a driver should use m->tx_offload, because the application
specifies the offset where the checksum has to be done, in case the hw
is not able to recognize the protocol.
> > > b) Use an 8 bit pktmbuf mempool index in the first cache line,
> > > instead of the 64 bit m->pool pointer in the second cache line.
> > 256 mpool looks enough, as for me. Regarding the indirect access to the
> > pool
> > (via some table) - it might introduce some performance impact.
>
> It might, but I hope that it is negligible, so the benefits outweigh the disadvantages.
>
> It would have to be measured, though.
>
> And m->pool is only used for free()'ing (and detach()'ing) mbufs.
>
> > For example,
> > mlx5 PMD strongly relies on pool field for allocating mbufs in Rx
> > datapath.
> > We're going to update (o-o, we found point to optimize), but for now it
> > does.
>
> Without looking at the source code, I don't think the PMD is using m->pool in the RX datapath, I think it is using a pool dedicated to a receive queue used for RX descriptors in the PMD (i.e. driver->queue->pool).
>
> >
> > > c) Do not access m->next when we know that it is NULL.
> > > We can use m->nb_segs == 1 or some other invariant as the gate.
> > > It can be implemented by adding an m->next accessor function:
> > > struct rte_mbuf * rte_mbuf_next(struct rte_mbuf * m)
> > > {
> > > return m->nb_segs == 1 ? NULL : m->next;
> > > }
> >
> > Sorry, not sure about this. IIRC, nb_segs is valid in the first
> > segment/mbuf only.
> > If we have the 4 segments in the pkt we see nb_seg=4 in the first one,
> > and the nb_seg=1
> > in the others. The next field is NULL in the last mbuf only. Am I wrong
> > and miss something ?
>
> You are correct.
>
> This would have to be updated too. Either by increasing m->nb_seg in the following segments, or by splitting up relevant functions into functions for working on first segments (incl. non-segmented packets), and functions for working on following segments of segmented packets.
Instead of maintaining a valid nb_segs, a HAS_NEXT flag would be easier
to implement. However it means that an accessor needs to be used instead
of any m->next access.
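For what it's worth, a sketch of what such an accessor could look like (the
flag name and bit are invented and assume a free bit in ol_flags):

#include <rte_mbuf.h>

/* Hypothetical flag marking "this segment has a successor". */
#define PKT_HAS_NEXT (1ULL << 63)

/* All m->next reads would go through this instead of direct access,
 * so the second cache line is only touched for segmented packets. */
static inline struct rte_mbuf *
mbuf_next(const struct rte_mbuf *m)
{
	return (m->ol_flags & PKT_HAS_NEXT) ? m->next : NULL;
}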
> > > Regarding the priority of this goal, I guess that simple forwarding
> > of non-
> > > segmented packets is probably the path taken by the majority of
> > packets
> > > handled by DPDK.
> > >
> > > An alternative goal could be:
> > > Do not touch the second cache line during RX.
> > > A comment in the mbuf structure says so, but it is not true anymore.
> > >
> > > (I guess that regression testing didn't catch this because the tests
> > perform TX
> > > immediately after RX, so the cache miss just moves from the TX to the
> > RX part
> > > of the test application.)
> > >
> > >
> > > 2. Regarding short term goals:
> > >
> > > The current DPDK source code looks to me like m->next is the most
> > frequently
> > > accessed field in the second cache line, so it makes sense moving
> > this to the
> > > first cache line, rather than m->pool.
> > > Benchmarking may help here.
> >
> > Moreover, for the segmented packets the packet size is supposed to be
> > large,
> > and it imposes the relatively low packet rate, so probably optimization
> > of
> > moving next to the 1st cache line might be negligible at all. Just
> > compare 148Mpps of
> > 64B pkts and 4Mpps of 3000B pkts over 100Gbps link. Currently we are on
> > benchmarking
> > and did not succeed yet on difference finding. The benefit can't be
> > expressed in mpps delta,
> > we should measure CPU clocks, but Rx queue is almost always empty - we
> > have an empty
> > loops. So, if we have the boost - it is extremely hard to catch one.
>
> Very good point regarding the value of such an optimization, Slava!
>
> And when free()'ing packets, both m->next and m->pool are touched.
>
> So perhaps the free()/detach() functions in the mbuf library can be modified to handle first segments (and non-segmented packets) and following segments differently, so accessing m->next can be avoided for non-segmented packets. Then m->pool should be moved to the first cache line.
>
I also think that moving m->pool without doing something else about
m->next is probably useless. And it's too late for 20.11 to do
additional changes, so I suggest to postpone the field move to 21.11,
once we have a clearer view of possible optimizations.
Olivier
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [RFC v3 2/2] ethdev: introduce sft lib
@ 2020-11-04 13:17 1% ` Ori Kam
0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2020-11-04 13:17 UTC (permalink / raw)
To: andreyv, mdr
Cc: alexr, andrey.vesnovaty, arybchenko, dev, elibr, ferruh.yigit,
orika, ozsh, roniba, thomas, viacheslavo
Defines RTE SFT (Stateful Flow Table) APIs for Stateful Flow Table library.
Currently, DPDK enables only stateless offloading, using the rte_flow.
Stateless means that each packet is handled without any knowledge of
previous or future packets.
As we look at the industry, there is much demand to save a context across
packets that belong to a connection.
Examples for such applications:
- Next-generation firewalls
- Intrusion detection/prevention systems (IDS/IPS): Suricata, Snort
- SW/Virtual Switching: OVS
The goals of the SFT library:
- Accelerate flow recognition & its context retrieval for further
lookaside processing.
- Enable context-aware flow handling offload.
The solution suggested is to create a lib that will enable saving states
between different packets that belong to the same connection.
The solution will also enable better HW abstraction than the one we get
from using the rte_flow. The reason for this is that saving states is
not an atomic action like what we have in rte_flow, and it also can't be
done fully in HW (the first packets must be seen by the application).
Given the above, this lib is based on interacting with the rte_flow, but
it doesn't replace or encapsulate it.
Key design points:
- The SFT should offload as much as possible to HW.
- The SFT is designed to work alongside the rte_flow.
- The SFT has its own ops that the PMD needs to implement.
- The SFT works on a 5-tuple + zone (a user-defined value)
Basic usage flow:
1. Application inserts a flow that matches all eth traffic and has the
sft action along with a jump action. (In the future this jump can be
avoided, saving some jumps, but for the most generic and complete
solution we think that allowing the application full control of the
packet processing using rte_flow is better.)
2. Application inserts a flow in the target group that matches the packet
state. Based on this state the application performs the needed
actions. This flow can also be merged with other matching criteria.
The application will also add a flow in the target group that will
upload to the application any packet with a miss state.
3. The first eth packet arrives and is routed to the SFT HW component.
Since this is the first packet, the SFT will have a miss, mark the
packet with the miss state and forward it to the target group.
4. The application will pull the packet from the queue and will send it to
be processed by the sft lib (a sketch of steps 4-6 follows this list).
5. The SFT will extract the HW packet state and, if valid, the zone or
the flow-id, and report them back to the application.
6. Application will see that this is a new connection, so it will issue
an SFT command to create a new connection with a selected state.
The SFT will create a HW flow that matches the 5-tuple + zone and
sets the state of the packet. The state can be any u8 value; it is
the responsibility of the application to match on the value.
7. When the next packet arrives at the HW, it will jump to the SFT, and
in the SFT HW there will be a match, which will result in setting
the packet state and ID according to the application's request.
8. In case of a later miss (at some other group), or when the application
logic decides that the packet should be routed back to the application,
the application will call the SFT lib with the new mbuf,
which will result in the flow-id being returned to the application
along with the context attached to this connection.
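As a rough illustration of steps 4.-6., a sketch in C (only the rte_sft_*
names come from this RFC; the prototypes and the status fields below are
assumptions made for the example):

#include <stdint.h>
#include <rte_mbuf.h>

/* Assumed status layout and prototypes, for illustration only; this RFC
 * excerpt does not define the actual signatures. */
struct sft_status_sketch {
	uint32_t fid; /* flow id; 0 == flow not recognized (FIF) */
};

int rte_sft_process_mbuf(struct rte_mbuf *m, struct sft_status_sketch *st);
int rte_sft_flow_activate(struct rte_mbuf *m, uint8_t state,
			  struct sft_status_sketch *st);

static void
handle_upstream_packet(struct rte_mbuf *m)
{
	struct sft_status_sketch st;

	if (rte_sft_process_mbuf(m, &st) != 0)
		return;
	if (st.fid == 0)
		/* Step 6.: new connection, offload it with an initial state. */
		rte_sft_flow_activate(m, 1, &st);
	/* else: known flow, use st.fid and the attached context. */
}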
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Signed-off-by: Ori Kam <orika@nvidia.com>
---
lib/librte_ethdev/meson.build | 3 +
lib/librte_ethdev/rte_ethdev_version.map | 19 +
lib/librte_ethdev/rte_sft.c | 9 +
lib/librte_ethdev/rte_sft.h | 877 +++++++++++++++++++++++
lib/librte_ethdev/rte_sft_driver.h | 201 ++++++
5 files changed, 1109 insertions(+)
create mode 100644 lib/librte_ethdev/rte_sft.c
create mode 100644 lib/librte_ethdev/rte_sft.h
create mode 100644 lib/librte_ethdev/rte_sft_driver.h
diff --git a/lib/librte_ethdev/meson.build b/lib/librte_ethdev/meson.build
index 8fc24e8c8a..064e3c9443 100644
--- a/lib/librte_ethdev/meson.build
+++ b/lib/librte_ethdev/meson.build
@@ -9,6 +9,7 @@ sources = files('ethdev_private.c',
'rte_ethdev.c',
'rte_flow.c',
'rte_mtr.c',
+ 'rte_sft.c',
'rte_tm.c')
headers = files('rte_ethdev.h',
@@ -24,6 +25,8 @@ headers = files('rte_ethdev.h',
'rte_flow_driver.h',
'rte_mtr.h',
'rte_mtr_driver.h',
+ 'rte_sft.h',
+ 'rte_sft_driver.h',
'rte_tm.h',
'rte_tm_driver.h')
diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index f8a0945812..e3c829b494 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -232,6 +232,25 @@ EXPERIMENTAL {
rte_eth_fec_get_capability;
rte_eth_fec_get;
rte_eth_fec_set;
+ rte_sft_drain_mbuf;
+ rte_sft_fini;
+ rte_sft_flow_activate;
+ rte_sft_flow_create;
+ rte_sft_flow_destroy;
+ rte_sft_flow_get_client_obj;
+ rte_sft_flow_get_status;
+ rte_sft_flow_query;
+ rte_sft_flow_set_aging;
+ rte_sft_flow_set_client_obj;
+ rte_sft_flow_set_data;
+ rte_sft_flow_set_offload;
+ rte_sft_flow_set_state;
+ rte_sft_flow_touch;
+ rte_sft_init;
+ rte_sft_process_mbuf;
+ rte_sft_process_mbuf_with_zone;
+
+
};
INTERNAL {
diff --git a/lib/librte_ethdev/rte_sft.c b/lib/librte_ethdev/rte_sft.c
new file mode 100644
index 0000000000..f3d3945545
--- /dev/null
+++ b/lib/librte_ethdev/rte_sft.c
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+
+#include "rte_sft.h"
+#include "rte_sft_driver.h"
+
+/* Placeholder for RTE SFT library APIs implementation */
diff --git a/lib/librte_ethdev/rte_sft.h b/lib/librte_ethdev/rte_sft.h
new file mode 100644
index 0000000000..d295bb0b7a
--- /dev/null
+++ b/lib/librte_ethdev/rte_sft.h
@@ -0,0 +1,877 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef _RTE_SFT_H_
+#define _RTE_SFT_H_
+
+/**
+ * @file
+ *
+ * RTE SFT API
+ *
+ * Defines RTE SFT APIs for Stateful Flow Table library.
+ *
+ * The SFT lib is part of the ethdev class; the reason for this is that the main
+ * idea is to leverage the HW offload that the ethdev allows using the rte_flow.
+ *
+ * SFT General description:
+ * SFT library provides a framework for applications that need to maintain
+ * context across different packets of the connection.
+ * Examples for such applications:
+ * - Next-generation firewalls
+ * - Intrusion detection/prevention systems (IDS/IPS): Suricata, Snort
+ * - SW/Virtual Switching: OVS
+ * The goals of the SFT library:
+ * - Accelerate flow recognition & its context retrieval for further look-aside
+ * processing.
+ * - Enable context-aware flow handling offload.
+ *
+ * The SFT is designed to use HW offload to get the best performance.
+ * This is done on two levels. The first one is marking the packet with flow id
+ * to speed the lookup of the flow in the data structure.
+ * The second is done by connecting the SFT results to the rte_flow for
+ * continuing the packet processing.
+ *
+ * Definitions and Abbreviations:
+ * - 5-tuple: defined by:
+ * -- Source IP address
+ * -- Source port
+ * -- Destination IP address
+ * -- Destination port
+ * -- IP protocol number
+ * - 7-tuple: 5-tuple, zone and port (see struct rte_sft_7tuple)
+ * - 5/7-tuple: 5/7-tuple of the packet from connection initiator
+ * - reverse 5/7-tuple: 5/7-tuple of the packet from the connection responder
+ * - application: SFT library API consumer
+ * - APP: see application
+ * - CID: client ID
+ * - CT: connection tracking
+ * - FID: Flow identifier
+ * - FIF: First In Flow
+ * - Flow: defined by 7-tuple and its reverse i.e. flow is bidirectional
+ * - SFT: Stateful Flow Table
+ * - user: see application
+ * - zone: additional user defined value used as differentiator for
+ * connections having same 5-tuple (for example different VXLAN
+ * connections with same inner 5-tuple).
+ *
+ * SFT components:
+ *
+ * +-----------------------------------+
+ * | RTE flow |
+ * | |
+ * | +-------------------------------+ | +----------------+
+ * | | group X | | | RTE_SFT |
+ * | | | | | |
+ * | | +---------------------------+ | | | |
+ * | | | rule ... | | | | |
+ * | | | . | | | +-----------+----+
+ * | | | . | | | |
+ * | | | . | | | entry
+ * | | +---------------------------+ | | create
+ * | | | rule | | | |
+ * | | | patterns ... +---------+ |
+ * | | | actions | | | | |
+ * | | | SFT (zone=Z) | | | | |
+ * | | | JUMP (group=Y) | | | lookup |
+ * | | +---------------------------+ | | zone=Z, |
+ * | | | rule ... | | | 5tuple |
+ * | | | . | | | | |
+ * | | | . | | | +--v-------------+
+ * | | | . | | | | SFT | |
+ * | | | | | | | | |
+ * | | +---------------------------+ | | | +--v--+ |
+ * | | | | | | | |
+ * | +-------------------------------+ | | | PMD | |
+ * | | | | | |
+ * | | | +-----+ |
+ * | +-------------------------------+ | | |
+ * | | group Y | | | |
+ * | | | | | set state |
+ * | | +---------------------------+ | | | set data |
+ * | | | rule | | | +--------+-------+
+ * | | | patterns | | | |
+ * | | | SFT (state=UNDEFINED) | | | |
+ * | | | actions RSS | | | |
+ * | | +---------------------------+ | | |
+ * | | | rule | | | |
+ * | | | patterns | | | |
+ * | | | SFT (state=INVALID) | <-------------+
+ * | | | actions DROP | | | forward
+ * | | +---------------------------+ | | group=Y
+ * | | | rule | | |
+ * | | | patterns | | |
+ * | | | SFT (state=ACCEPTED) | | |
+ * | | | actions PORT | | |
+ * | | +---------------------------+ | |
+ * | | ... | |
+ * | | | |
+ * | +-------------------------------+ |
+ * | ... |
+ * | |
+ * +-----------------------------------+
+ *
+ * SFT as a data structure:
+ * SFT can be treated as a data structure maintaining flow context across its
+ * lifetime. An SFT flow entry represents a bidirectional network flow and is
+ * defined by the 7-tuple & its reverse 7-tuple.
+ * Each entry in SFT has:
+ * - FID: 1:1 mapped & used as entry handle & encapsulating internal
+ * implementation of the entry.
+ * - State: user-defined value attached to each entry; the only value
+ * reserved by the library is the unset state (the actual value is defined
+ * by the SFT configuration). The application should define its flow state
+ * encodings and set the state for a flow via rte_sft_flow_set_state(), then
+ * the actions to apply on packets can be defined via a related RTE flow
+ * rule matching the SFT state (see rules in the SFT components diagram
+ * above).
+ * - Timestamp: of the last packet seen in the flow, used for the flow aging
+ * mechanism implementation.
+ * - Client Objects: user-defined flow contexts attached as opaques to flow.
+ * - Acceleration & offloading - utilize RTE flow capabilities, when supported
+ * (see action ``SFT``), for flow lookup acceleration and further
+ * context-aware flow handling offload.
+ * - CT state: optionally for TCP connections CT state can be maintained
+ * (see enum rte_sft_flow_ct_state).
+ * - Out of order TCP packets: optionally SFT can keep out of order TCP
+ * packets aside the flow context till the arrival of the missing in-order
+ * packet.
+ *
+ * RTE flow changes:
+ * The SFT flow state (or context) for RTE flow is defined by fields of
+ * struct rte_flow_item_sft.
+ * To utilize SFT capabilities new item and action types introduced:
+ * - item SFT: matching on SFT flow state (see RTE_FLOW_ITEM_TYPE_SFT).
+ * - action SFT: retrieve SFT flow context and attach it to the processed
+ * packet (see RTE_FLOW_ACTION_TYPE_SFT).
+ *
+ * The contents of the per port SFT serving RTE flow action ``SFT`` are
+ * managed via the SFT PMD APIs (see struct rte_sft_ops).
+ * The SFT flow state/context retrieval is performed using the user-defined
+ * zone argument of the ``SFT`` action and the processed packet 5-tuple.
+ * If, in the scope of action ``SFT``, there is no context/state for the flow
+ * in SFT, the undefined state is attached to the packet, meaning that the
+ * flow is not recognized by SFT and is most probably a FIF packet.
+ *
+ * Once the SFT state is set for a packet it can match on item SFT
+ * (see RTE_FLOW_ITEM_TYPE_SFT) and a forwarding decision can be made for the
+ * packet, for example:
+ * - if state value == x then queue for further processing by the application
+ * - if state value == y then forward it to eth port (full offload)
+ * - if state value == 'undefined' then queue for further processing by
+ * the application (handle FIF packets)
+ *
+ * Processing packets with SFT library:
+ *
+ * FIF packet:
+ * To recognize upcoming packets of the SFT flow every FIF packet should be
+ * forwarded to the application utilizing the SFT library. Non-FIF packets can
+ * be processed by the application or their processing can be fully offloaded.
+ * Processing of packets in the SFT library starts with rte_sft_process_mbuf
+ * or rte_sft_process_mbuf_with_zone. If the mbuf is recognized as FIF, the
+ * application should make a decision to destroy the flow or to complete the
+ * flow creation process in SFT using rte_sft_flow_activate, as sketched
+ * below.
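+ *
+ * A minimal sketch of this sequence (illustrative only; it assumes a single
+ * queue, ignores fragmentation and reordering, omits error handling, and
+ * the reverse tuple, state, data, event dev/port IDs and action specs are
+ * application-defined):
+ *
+ * @code
+ * struct rte_sft_flow_status status = { 0 };
+ * struct rte_sft_error error;
+ * struct rte_mbuf *out = NULL;
+ *
+ * if (rte_sft_process_mbuf(queue, mbuf, &out, &status, &error) == 0 &&
+ *     !status.activated) {
+ *         // FIF packet: complete the bidirectional flow creation.
+ *         rte_sft_flow_activate(queue, mbuf, &reverse_tuple, state, data,
+ *                               1, dev_id, port_id, RTE_SFT_ACTION_AGE,
+ *                               &action_specs, &out, &status, &error);
+ * }
+ * @endcode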
+ *
+ * Recognized SFT flow:
+ * Once a struct rte_sft_flow_status with a valid fid field is possessed by
+ * the application it can:
+ * - manage client objects on it (see client_obj field in
+ * struct rte_sft_flow_status) using rte_sft_flow_<OP>_client_obj APIs
+ * - analyze user-defined flow state and CT state.
+ * - set flow state to be attached to the upcoming packets by action ``SFT``
+ * via struct rte_sft_flow_status API.
+ * - decide to destroy the flow via the rte_sft_flow_destroy API.
+ *
+ * Flow aging:
+ *
+ * The SFT library manages the aging for each flow. On flow creation, the flow
+ * is assigned an aging value, the maximal number of seconds allowed to pass
+ * since the last flow packet arrived; once exceeded, the flow is considered
+ * aged.
+ * The application is notified of aged flows asynchronously via event queues.
+ * The device and port ID tuple identifying the event queue to enqueue
+ * flow aged events to is passed on flow creation as arguments
+ * (see rte_sft_flow_activate). It's the application's responsibility to
+ * initialize event queues and assign them to each flow for EOF event
+ * notifications.
+ * Aged EOF event handling:
+ * - Should be considered as application responsibility.
+ * - The last stage should be the release of the flow resources via
+ * rte_sft_flow_destroy API.
+ * - All client objects should be removed from flow before the
+ * rte_sft_flow_destroy API call.
+ * See the description of rte_sft_flow_destroy and the sketch below for an
+ * example of aged flow handling.
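+ *
+ * A teardown sketch for an aged flow (illustrative only; it assumes that
+ * setting a NULL client object detaches it, that client IDs 0..n-1 are in
+ * use, and that app_release() is application-defined):
+ *
+ * @code
+ * for (uint8_t cid = 0; cid < n; cid++) {
+ *         void *obj = rte_sft_flow_get_client_obj(queue, fid, cid, &error);
+ *         if (obj == NULL)
+ *                 continue;
+ *         app_release(obj);
+ *         rte_sft_flow_set_client_obj(queue, fid, cid, NULL, &error);
+ * }
+ * rte_sft_flow_destroy(queue, fid, &error);
+ * @endcode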
+ *
+ * SFT API thread safety:
+ *
+ * Since the SFT lib is designed to work as part of the fast path, the SFT
+ * is not thread safe. In order to enable better operation with multiple
+ * threads, the SFT lib uses the queue approach, where each queue can only be
+ * accessed by one thread while one thread can access multiple queues.
+ *
+ * SFT Library initialization and cleanup:
+ *
+ * SFT library should be considered as a single instance, preconfigured and
+ * initialized via rte_sft_init() API.
+ * SFT library resource deallocation and cleanup should be done via the
+ * rte_sft_fini() API as a stage of the application termination procedure.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_errno.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_flow.h>
+
+/**
+ * L3/L4 5-tuple - src/dest IP and port and IP protocol.
+ *
+ * Used for flow/connection identification.
+ */
+RTE_STD_C11
+struct rte_sft_5tuple {
+ union {
+ struct {
+ rte_be32_t src_addr; /**< IPv4 source address. */
+ rte_be32_t dst_addr; /**< IPv4 destination address. */
+ } ipv4;
+ struct {
+ uint8_t src_addr[16]; /**< IPv6 source address. */
+ uint8_t dst_addr[16]; /**< IPv6 destination address. */
+ } ipv6;
+ };
+ rte_be16_t src_port; /**< Source port. */
+ rte_be16_t dst_port; /**< Destination port. */
+ uint8_t proto; /**< IP protocol. */
+ uint8_t is_ipv6: 1; /**< True for valid IPv6 fields. Otherwise IPv4. */
+};
+
+/**
+ * Port flow identification.
+ *
+ * @p zone used for setups where 5-tuple is not enough to identify flow.
+ * For example different VLANs/VXLANs may have similar 5-tuples.
+ */
+struct rte_sft_7tuple {
+ struct rte_sft_5tuple flow_5tuple; /**< L3/L4 5-tuple. */
+ uint32_t zone; /**< Zone assigned to flow. */
+ uint16_t port_id; /**< Port identifier of Ethernet device. */
+};
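+
+/*
+ * Example (illustrative only): a 7-tuple identifying an IPv4 TCP connection
+ * in zone 0 on port 0. RTE_IPV4(), RTE_BE32() and RTE_BE16() come from
+ * rte_ip.h and rte_byteorder.h:
+ *
+ * struct rte_sft_7tuple tuple = {
+ *         .flow_5tuple = {
+ *                 .ipv4.src_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 1)),
+ *                 .ipv4.dst_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 2)),
+ *                 .src_port = RTE_BE16(12345),
+ *                 .dst_port = RTE_BE16(80),
+ *                 .proto = IPPROTO_TCP,
+ *         },
+ *         .zone = 0,
+ *         .port_id = 0,
+ * };
+ */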
+
+/**
+ * Structure describes SFT library configuration
+ */
+struct rte_sft_conf {
+ uint16_t nb_queues; /**< Preferred number of queues. */
+ uint32_t udp_aging; /**< UDP proto default aging in sec. */
+ uint32_t tcp_aging; /**< TCP proto default aging in sec. */
+ uint32_t tcp_syn_aging; /**< TCP SYN default aging in sec. */
+ uint32_t default_aging; /**< All unlisted proto default aging in sec. */
+ uint32_t nb_max_entries; /**< Max entries in SFT. */
+ uint8_t app_data_len; /**< Length of app data, in uint32_t units. */
+ uint32_t support_partial_match: 1;
+ /**< App can partial match on the data. */
+ uint32_t reorder_enable: 1;
+ /**< TCP packet reordering feature enabled bit. */
+ uint32_t tcp_ct_enable: 1;
+ /**< TCP connection tracking based on standard. */
+ uint32_t reserved: 30;
+};
+
+/**
+ * Structure that holds the action configuration.
+ */
+struct rte_sft_actions_specs {
+ struct rte_sft_5tuple *initiator_nat;
+ /**< The NAT configuration for the initiator flow. */
+ struct rte_sft_5tuple *reverse_nat;
+ /**< The NAT configuration for the reverse flow. */
+ uint64_t aging; /**< the aging time out in sec. */
+};
+
+#define RTE_SFT_ACTION_INITIATOR_NAT (1ul << 0)
+/**< NAT action should be done on the initiator traffic. */
+#define RTE_SFT_ACTION_REVERSE_NAT (1ul << 1)
+/**< NAT action should be done on the reverse traffic. */
+#define RTE_SFT_ACTION_COUNT (1ul << 2) /**< Enable count action. */
+#define RTE_SFT_ACTION_AGE (1ul << 3) /**< Enable ageing action. */
+
+
+/**
+ * Structure that holds the count data.
+ */
+struct rte_sft_query_data {
+ uint64_t nb_bytes; /**< Number of bytes that passed in the flow. */
+ uint64_t nb_packets; /**< Number of packets that passed in the flow. */
+ uint32_t age; /**< Seconds passed since last seen packet. */
+ uint32_t aging;
+ /**< Flow considered aged once this age (seconds) reached. */
+ uint32_t nb_bytes_valid: 1; /**< Number of bytes is valid. */
+ uint32_t nb_packets_valid: 1; /**< Number of packets is valid. */
+ uint32_t nb_age_valid: 1; /**< Age is valid. */
+ uint32_t nb_aging_valid: 1; /**< Aging is valid. */
+ uint32_t reserved: 28;
+};
+
+/**
+ * Structure describes the state of the flow in SFT.
+ */
+struct rte_sft_flow_status {
+ uint32_t fid; /**< SFT flow id. */
+ uint32_t zone; /**< Zone for lookup in SFT */
+ uint8_t state; /**< Application defined bidirectional flow state. */
+ uint8_t proto_state;
+ /**< Connection tracking flow state, based on the protocol standard. */
+ uint16_t proto; /**< L4 protocol. */
+ uint32_t nb_in_order_mbufs;
+ /**< Number of in-order mbufs available for drain */
+ uint32_t activated: 1; /**< Flow was activated. */
+ uint32_t zone_valid: 1; /**< Zone field is valid. */
+ uint32_t proto_state_change: 1; /**< Protocol state was changed. */
+ uint32_t fragmented: 1; /**< Last flow mbuf was fragmented. */
+ uint32_t out_of_order: 1; /**< Last flow mbuf was out of order (TCP). */
+ uint32_t offloaded: 1;
+ /**< The connection is offload and no packet should be stored. */
+ uint32_t initiator: 1; /**< marks if the mbuf is from the initiator. */
+ uint32_t reserved: 25;
+ uint32_t data[];
+ /**< Application data. The length is defined by the configuration. */
+};
+
+/**
+ * Verbose error types.
+ *
+ * Most of them provide the type of the object referenced by struct
+ * rte_flow_error.cause.
+ */
+enum rte_sft_error_type {
+ RTE_SFT_ERROR_TYPE_NONE, /**< No error. */
+ RTE_SFT_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
+ RTE_SFT_ERROR_TYPE_FLOW_NOT_DEFINED, /**< The FID is not defined. */
+};
+
+/**
+ * Verbose error structure definition.
+ *
+ * This object is normally allocated by applications and set by SFT, the
+ * message points to a constant string which does not need to be freed by
+ * the application, however its pointer can be considered valid only as long
+ * as its associated DPDK port remains configured. Closing the underlying
+ * device or unloading the PMD invalidates it.
+ *
+ * Both cause and message may be NULL regardless of the error type.
+ */
+struct rte_sft_error {
+ enum rte_sft_error_type type; /**< Cause field and error types. */
+ const void *cause; /**< Object responsible for the error. */
+ const char *message; /**< Human-readable error message. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get SFT flow status, based on the fid.
+ *
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param[out] status
+ * Structure to dump actual SFT flow status.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_get_status(const uint16_t queue, const uint32_t fid,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set user defined data.
+ *
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param data
+ * User defined data. The length is defined at configuration time.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_data(uint16_t queue, uint32_t fid, const uint32_t *data,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set user defined state.
+ *
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param state
+ * User state.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_state(uint16_t queue, uint32_t fid, const uint8_t state,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set whether the flow is offloaded.
+ *
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param offload
+ * Set to true if the flow is offloaded.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_offload(uint16_t queue, uint32_t fid, bool offload,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Initialize SFT library instance.
+ *
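+ * A minimal configuration sketch (the values below are illustrative only,
+ * not recommendations):
+ *
+ * @code
+ * struct rte_sft_conf conf = {
+ *         .nb_queues = 1,
+ *         .udp_aging = 30,
+ *         .tcp_aging = 300,
+ *         .tcp_syn_aging = 60,
+ *         .default_aging = 60,
+ *         .nb_max_entries = 1 << 20,
+ *         .app_data_len = 1,
+ *         .tcp_ct_enable = 1,
+ * };
+ * struct rte_sft_error error;
+ *
+ * int ret = rte_sft_init(&conf, &error); // ret < 0 means failure
+ * @endcode
+ *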
+ * @param conf
+ * SFT library instance configuration.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_init(const struct rte_sft_conf *conf, struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Finalize SFT library instance.
+ * Cleanup & release allocated resources.
+ */
+__rte_experimental
+void
+rte_sft_fini(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process mbuf received on RX queue.
+ *
+ * This function checks the mbuf against the SFT database and return the
+ * connection status that this mbuf belongs to.
+ *
+ * If status.activated = 1 and status.offloaded = 0 the input mbuf is
+ * considered consumed and the application is not allowed to use it or free it,
+ * instead the application should use the mbuf pointed to by mbuf_out.
+ * In case the mbuf is out of order or fragmented the mbuf_out will be NULL.
+ *
+ * If status.activated = 0 or status.offloaded = 1, the input mbuf is not
+ * consumed and the mbuf_out will always be NULL.
+ *
+ * This function doesn't create a new entry in the SFT.
+ *
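+ * A sketch of the mbuf ownership rule described above (illustrative only;
+ * handle_pkt() stands for application-defined processing):
+ *
+ * @code
+ * struct rte_mbuf *out = NULL;
+ * struct rte_sft_flow_status status = { 0 };
+ * struct rte_sft_error error;
+ *
+ * rte_sft_process_mbuf(queue, mbuf, &out, &status, &error);
+ * if (status.activated && !status.offloaded) {
+ *         // mbuf was consumed; use out, which may be NULL for
+ *         // fragmented or out of order packets.
+ *         if (out != NULL)
+ *                 handle_pkt(out);
+ * } else {
+ *         // mbuf was not consumed and is still owned by the caller.
+ *         handle_pkt(mbuf);
+ * }
+ * @endcode
+ *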
+ * @param queue
+ * The sft queue number.
+ * @param[in] mbuf_in
+ * mbuf to process; the mbuf pointer is considered 'consumed' and should not
+ * be used if status.activated = 1 and status.offloaded = 0.
+ * @param[out] mbuf_out
+ * last processed not fragmented and in order mbuf.
+ * @param[out] status
+ * Connection status based on the last in mbuf.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialize in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_process_mbuf(uint16_t queue, struct rte_mbuf *mbuf_in,
+ struct rte_mbuf **mbuf_out,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process mbuf received on RX queue with the zone value provided by the caller.
+ *
+ * The behaviour of this function is similar to rte_sft_process_mbuf except
+ * for the lookup procedure. The lookup in SFT is always done by the *zone*
+ * arg and the 5-tuple, extracted from the mbuf outer header contents.
+ *
+ * @see rte_sft_process_mbuf
+ *
+ * @param queue
+ * The sft queue number.
+ * @param[in] mbuf_in
+ * mbuf to process; mbuf pointer considered 'consumed' and should not be used
+ * after successful call to this function.
+ * @param zone
+ * Zone value to use for the lookup in SFT.
+ * @param[out] mbuf_out
+ * last processed not fragmented and in order mbuf.
+ * @param[out] status
+ * Connection status based on the last in mbuf.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialize in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_process_mbuf_with_zone(uint16_t queue, struct rte_mbuf *mbuf_in,
+ uint32_t zone, struct rte_mbuf **mbuf_out,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Drain next in order mbuf.
+ *
+ * This function behaves similarly to rte_sft_process_mbuf() but acts on
+ * packets accumulated in an SFT flow due to a missing in-order packet.
+ * Processing is done on a single mbuf at a time and `in order`. Other than
+ * the above, the behavior is the same as that of rte_sft_process_mbuf for a
+ * flow that is defined & activated & an mbuf that isn't fragmented & is
+ * 'in order'. This function should be called when rte_sft_process_mbuf or
+ * rte_sft_process_mbuf_with_zone sets the status->nb_in_order_mbufs output
+ * param != 0 and until status->nb_in_order_mbufs == 0.
+ * Flow should be locked by caller (see rte_sft_flow_lock).
+ *
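+ * A drain loop sketch (illustrative only; BURST is an application-chosen
+ * array size and error handling is omitted):
+ *
+ * @code
+ * struct rte_mbuf *mbufs[BURST];
+ *
+ * while (status.nb_in_order_mbufs > 0) {
+ *         int n = rte_sft_drain_mbuf(queue, status.fid, mbufs, BURST,
+ *                                    status.initiator, &status, &error);
+ *         if (n < 0)
+ *                 break;
+ *         // process the n drained in-order mbufs here
+ * }
+ * @endcode
+ *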
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param[out] mbuf_out
+ * last processed not fragmented and in order mbuf.
+ * @param nb_out
+ * Number of buffers to be drained.
+ * @param initiator
+ * True if the packets to be drained belong to the initiator direction.
+ * @param[out] status
+ * Connection status based on the last mbuf that was drained.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialize in case of
+ * error only.
+ *
+ * @return
+ * The number of mbufs that were drained, negative value in case
+ * of error and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_drain_mbuf(uint16_t queue, uint32_t fid, struct rte_mbuf **mbuf_out,
+ uint16_t nb_out, bool initiator,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Activate flow in SFT.
+ *
+ * This function creates an entry in the SFT for this connection.
+ * The reasons for the 2 phase flow creation procedure:
+ * 1. Missing reverse flow - flow context is shared for both flow directions
+ * i.e. in order to maintain bidirectional flow context in RTE SFT, packets
+ * arriving from both directions should be identified as packets of the
+ * RTE SFT flow. Consequently, before the creation of the SFT flow the caller
+ * should provide the reverse flow direction 7-tuple.
+ * 2. The caller of rte_sft_process_mbuf/rte_sft_process_mbuf_with_zone should
+ * be notified that the arrived mbuf is first in flow & decide whether to
+ * create a new flow or disregard this packet.
+ * This function completes the creation of the bidirectional SFT flow & creates
+ * an entry for the 7-tuple on the SFT PMD defined by the tuple port, for both
+ * initiator/responder 7-tuples.
+ * Flow aging, connection tracking state & out of order handling will be
+ * initialized according to the content of the *mbuf_in* passed to
+ * rte_sft_process_mbuf/_with_zone during phase 1 of flow creation.
+ * Once this function returns upcoming calls rte_sft_process_mbuf/_with_zone
+ * with 7-tuple or its reverse will return the handle to this flow.
+ * Flow should be locked by the caller (see rte_sft_flow_lock).
+ *
+ * @param queue
+ * The SFT queue.
+ * @param[in] mbuf_in
+ * mbuf to process; mbuf pointer considered 'consumed' and should not be used
+ * after successful call to this function.
+ * @param reverse_tuple
+ * Expected response flow 7-tuple.
+ * @param state
+ * User defined state to set.
+ * @param data
+ * User defined data, the len is configured during sft init.
+ * @param proto_enable
+ * Enables maintenance of the status->proto_state connection tracking value
+ * for the flow. Otherwise status->proto_state will be initialized with zeros.
+ * @param dev_id
+ * Event dev ID to enqueue end of flow event.
+ * @param port_id
+ * Event port ID to enqueue end of flow event.
+ * @param actions
+ * Flags that indicate which actions should be done on the packet before
+ * returning it to the rte_flow.
+ * @param action_specs
+ * Hold the actions configuration.
+ * @param[out] mbuf_out
+ * last processed not fragmented and in order mbuf.
+ * @param[out] status
+ * Structure to dump SFT flow status once activated.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_activate(uint16_t queue, struct rte_mbuf *mbuf_in,
+ const struct rte_sft_7tuple *reverse_tuple,
+ uint8_t state, uint32_t *data, uint8_t proto_enable,
+ uint8_t dev_id, uint8_t port_id, uint64_t actions,
+ const struct rte_sft_actions_specs *action_specs,
+ struct rte_mbuf **mbuf_out,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Artificially create SFT flow.
+ *
+ * Function to create SFT flow before reception of the first flow packet.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param tuple
+ * Expected initiator flow 7-tuple.
+ * @param reverse_tuple
+ * Expected responder flow 7-tuple.
+ * @param ctx
+ * User defined state and data to attach to the flow
+ * (see struct rte_flow_item_sft).
+ * @param ct_enable
+ * Enables maintenance of the status->proto_state connection tracking value
+ * for the flow. Otherwise status->proto_state will be initialized with zeros.
+ * @param[out] status
+ * Connection status.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. PMDs initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * - on success: 0, locked SFT flow recognized by status->fid.
+ * - on error: a negative errno value, and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_create(uint16_t queue, const struct rte_sft_7tuple *tuple,
+ const struct rte_sft_7tuple *reverse_tuple,
+ const struct rte_flow_item_sft *ctx,
+ uint8_t ct_enable,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Removes flow from SFT.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID to destroy.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_destroy(uint16_t queue, uint32_t fid, struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query counter and aging data.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param[out] data
+ * Structure to dump the queried counter and aging data.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_query(uint16_t queue, uint32_t fid,
+ struct rte_sft_query_data *data,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Reset flow age to zero.
+ *
+ * Simulates last flow packet with timestamp set to just now.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_touch(uint16_t queue, uint32_t fid, struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set flow aging to specific value.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param aging
+ * New flow aging value.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_aging(uint16_t queue, uint32_t fid, uint32_t aging,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set client object for given client ID.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param client_id
+ * Client ID to set object for.
+ * @param client_obj
+ * Pointer to opaque client object structure.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_client_obj(uint16_t queue, uint32_t fid, uint8_t client_id,
+ void *client_obj, struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get client object for given client ID.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param client_id
+ * Client ID to get object for.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * A valid client object opaque pointer in case of success, NULL otherwise
+ * and rte_sft_error is set.
+ */
+__rte_experimental
+void *
+rte_sft_flow_get_client_obj(uint16_t queue, const uint32_t fid,
+ uint8_t client_id, struct rte_sft_error *error);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SFT_H_ */
diff --git a/lib/librte_ethdev/rte_sft_driver.h b/lib/librte_ethdev/rte_sft_driver.h
new file mode 100644
index 0000000000..6ae3c4b997
--- /dev/null
+++ b/lib/librte_ethdev/rte_sft_driver.h
@@ -0,0 +1,201 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef RTE_SFT_DRIVER_H_
+#define RTE_SFT_DRIVER_H_
+
+/**
+ * @file
+ * RTE generic SFT API (driver side)
+ *
+ * This file provides implementation helpers for internal use by PMDs, they
+ * are not intended to be exposed to applications and are not subject to ABI
+ * versioning.
+ */
+
+#include <stdint.h>
+
+#include "rte_ethdev.h"
+#include "rte_ethdev_driver.h"
+#include "rte_sft.h"
+#include "rte_flow.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_sft_entry;
+
+#define RTE_SFT_STATE_FLAG_FID_VALID (1 << 0)
+#define RTE_SFT_STATE_FLAG_ZONE_VALID (1 << 1)
+#define RTE_SFT_STATE_FLAG_FLOW_MISS (1 << 2)
+
+#define RTE_SFT_MISS_TCP_FLAGS (1 << 0)
+
+RTE_STD_C11
+struct rte_sft_decode_info {
+ union {
+ uint32_t fid; /**< The fid value. */
+ uint32_t zone; /**< The zone value. */
+ };
+ uint32_t state;
+ /**< Flags that mark the packet state. see RTE_SFT_STATE_FLAG_*. */
+};
+
+/**
+ * @internal
+ * Insert a flow to the SFT HW component.
+ *
+ * @param dev
+ * ethdev handle of port.
+ * @param fid
+ * Flow ID.
+ * @param queue
+ * The sft working queue.
+ * @param pattern
+ * The matching pattern.
+ * @param miss_conditions
+ * The conditions that force a miss even if the 5-tuple was matched,
+ * see RTE_SFT_MISS_*.
+ * @param actions
+ * Set of actions to apply in case the flow was hit. If no terminating action
+ * (queue, rss, drop, port) was given, the terminating action should be taken
+ * from the flow that resulted in the SFT.
+ * @param miss_actions
+ * Set of actions to apply in case the flow was hit but the miss conditions
+ * were also met (6-tuple matched but TCP flags are on). If no terminating
+ * action (queue, rss, drop, port) was given, the terminating action should
+ * be taken from the flow that resulted in the SFT.
+ * @param data
+ * The application data to attach to the flow.
+ * @param data_len
+ * The length of the data in uint32_t increments.
+ * @param state
+ * The application state to set.
+ * @param[out] error
+ * Verbose of the error.
+ *
+ * @return
+ * Pointer to sft_entry in case of success, null otherwise and rte_sft_error
+ * is set.
+ */
+typedef struct rte_sft_entry *(*sft_entry_create_t)
+ (struct rte_eth_dev *dev, uint32_t fid, uint16_t queue,
+ const struct rte_flow_item *pattern, uint64_t miss_conditions,
+ const struct rte_flow_action *actions,
+ const struct rte_flow_action *miss_actions,
+ const uint32_t *data, uint16_t data_len, uint8_t state,
+ struct rte_sft_error *error);
+
+/**
+ * @internal
+ * Modify the state and the data of SFT flow in HW component.
+ *
+ * @param dev
+ * ethdev handle of port.
+ * @param entry
+ * The entry to modify.
+ * @param queue
+ * The sft working queue.
+ * @param data
+ * The application data to attach to the flow.
+ * @param data_len
+ * The length of the data in uint32_t increments.
+ * @param state
+ * The application state to set.
+ * @param[out] error
+ * Verbose of the error.
+ *
+ * @return
+ * Negative errno value on error, 0 on success.
+ */
+typedef int (*sft_entry_modify_t)(struct rte_eth_dev *dev,
+ struct rte_sft_entry *entry, uint16_t queue,
+ const uint32_t *data, uint16_t data_len,
+ uint8_t state, struct rte_sft_error *error);
+
+/**
+ * @internal
+ * Destroy SFT flow in HW component.
+ *
+ * @param dev
+ * ethdev handle of port.
+ * @param entry
+ * The entry to destroy.
+ * @param queue
+ * The sft working queue.
+ * @param[out] error
+ * Verbose of the error.
+ *
+ * @return
+ * Negative errno value on error, 0 on success.
+ */
+typedef int (*sft_entry_destroy_t)(struct rte_eth_dev *dev,
+ struct rte_sft_entry *entry, uint16_t queue,
+ struct rte_sft_error *error);
+
+/**
+ * @internal
+ * Decode sft state and FID from mbuf.
+ *
+ * @param dev
+ * ethdev handle of port.
+ * @param entry
+ * The SFT entry associated with the mbuf.
+ * @param queue
+ * The sft working queue.
+ * @param mbuf
+ * The input mbuf.
+ * @param[out] info
+ * The decoded sft data.
+ * @param[out] error
+ * Verbose of the error.
+ *
+ * @return
+ * Negative errno value on error, 0 on success.
+ */
+typedef int (*sft_entry_decode_t)(struct rte_eth_dev *dev,
+ struct rte_sft_entry *entry, uint16_t queue,
+ struct rte_mbuf *mbuf,
+ struct rte_sft_decode_info *info,
+ struct rte_sft_error *error);
+
+/**
+ * Generic sft operations structure implemented and returned by PMDs.
+ *
+ * If successful, rte_sft_ops_get() results in a pointer to a PMD-specific
+ * instance of this structure.
+ *
+ * See also rte_sft_ops_get().
+ *
+ * These callback functions are not supposed to be used by applications
+ * directly, which must rely on the API defined in rte_sft.h.
+ */
+struct rte_sft_ops {
+ sft_entry_create_t sft_create_entry;
+ sft_entry_modify_t sft_entry_modify;
+ sft_entry_destroy_t sft_entry_destroy;
+ sft_entry_decode_t sft_entry_decode;
+};
+
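+/*
+ * A hypothetical PMD would typically expose its callbacks as a constant
+ * ops table retrieved via rte_sft_ops_get(); the callback names below are
+ * illustrative only:
+ *
+ * static const struct rte_sft_ops pmd_sft_ops = {
+ *         .sft_create_entry = pmd_sft_entry_create,
+ *         .sft_entry_modify = pmd_sft_entry_modify,
+ *         .sft_entry_destroy = pmd_sft_entry_destroy,
+ *         .sft_entry_decode = pmd_sft_entry_decode,
+ * };
+ */
+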
+/**
+ * Get generic sft operations structure from a port.
+ *
+ * @param port_id
+ * Port identifier to query.
+ * @param[out] error
+ * Pointer to flow error structure.
+ *
+ * @return
+ * The sft operations structure associated with port_id, NULL in case of
+ * error, in which case rte_errno is set and the error structure contains
+ * additional details.
+ */
+const struct rte_sft_ops *
+rte_sft_ops_get(uint16_t port_id, struct rte_sft_error *error);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_SFT_DRIVER_H_ */
--
2.25.1
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v2 2/2] ethdev: introduce sft lib
@ 2020-11-04 12:59 1% ` Ori Kam
0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2020-11-04 12:59 UTC (permalink / raw)
To: andreyv, mdr
Cc: alexr, andrey.vesnovaty, arybchenko, dev, elibr, ferruh.yigit,
orika, ozsh, roniba, thomas, viacheslavo
Defines RTE SFT (Stateful Flow Table) APIs for the Stateful Flow Table library.
Currently, DPDK enables only stateless offloading, using the rte_flow.
Stateless means that each packet is handled without any knowledge of
previous or future packets.
As we look at the industry, there is much demand to save a context across
packets that belong to a connection.
Examples for such applications:
- Next-generation firewalls
- Intrusion detection/prevention systems (IDS/IPS): Suricata, Snort
- SW/Virtual Switching: OVS
The goals of the SFT library:
- Accelerate flow recognition & its context retrieval for further
lookaside processing.
- Enable context-aware flow handling offload.
The solution suggested is to create a lib that will enable saving states
between different packets that belong to the same connection.
The solution will also enable better HW abstraction than the one we get
from using the rte_flow. The reason for this is that saving states is
not an atomic action like we have in rte_flow and also can't be done fully
in HW (the first packets must be seen by the application).
That said, this lib is based on interacting with the rte_flow but it
doesn't replace it or encapsulate it.
Key design points.
- The SFT should offload as much as possible to HW.
- The SFT is designed to work alongside the rte_flow.
- The SFT has its own ops that the PMD needs to implement.
- The SFT works on 5-tuple + zone (a user-defined value)
Basic usage flow:
1. The application inserts a flow that matches all eth traffic and has an
sft action along with a jump action. (In the future this jump can be
avoided, saving some hops, but for the most generic and complete
solution we think that allowing the application full control of the
packet processing using rte_flow is better.)
2. The application inserts a flow in the target group that matches the packet
state. Based on this state the application performs the needed
actions. This flow can also be merged with other matching criteria.
The application will also add a flow in the target group that will
upload to the application any packet with a miss state.
3. First eth packet arrives and is routed to the SFT HW component.
Since this is the first packet the SFT will have a miss and will
mark the packet with miss state and forward it to the target group.
4. The application will pull the packet from the queue and will send it to
be processed by the sft lib.
5. The SFT will extract the HW packet state and, if valid, the zone or the
flow-id, and report them back to the application.
6. Application will see that this is a new connection, so it will issue
SFT command to create a new connection with a selected state.
The SFT will create a HW flow that matches the 5-tuple + zone and
sets the state of the packet. The state can be any u8 value; it is
the responsibility of the application to match on the value.
7. The next packet arriving at the HW will jump to the SFT, and
in the SFT HW there will be a match, which will result in setting
the packet state and ID according to the application request.
8. In case of a later miss (at some other group), or when the application
logic requires the packet to be routed back to the application,
the application will call the SFT lib with the new mbuf,
which will result in the flow-id being returned to the application
along with the context attached to this connection (see the sketch below).
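
In application code, the flow above might look roughly like the sketch
below (illustrative only; queue, tuple, state, data and action setup as
well as error handling are omitted, and handle_state() stands for an
application-defined dispatcher):

        struct rte_sft_flow_status status = { 0 };
        struct rte_mbuf *out = NULL;

        rte_sft_process_mbuf(queue, mbuf, &out, &status, &error);
        if (!status.activated)
                /* Miss state: a new connection, create the SFT entry. */
                rte_sft_flow_activate(queue, mbuf, &reverse_tuple, state,
                                      data, 1, dev_id, port_id, actions,
                                      &action_specs, &out, &status, &error);
        else
                handle_state(out, &status);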
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Signed-off-by: Ori Kam <orika@nvidia.com>
---
lib/librte_ethdev/meson.build | 3 +
lib/librte_ethdev/rte_ethdev_version.map | 19 +
lib/librte_ethdev/rte_sft.c | 9 +
lib/librte_ethdev/rte_sft.h | 878 +++++++++++++++++++++++
lib/librte_ethdev/rte_sft_driver.h | 201 ++++++
5 files changed, 1110 insertions(+)
create mode 100644 lib/librte_ethdev/rte_sft.c
create mode 100644 lib/librte_ethdev/rte_sft.h
create mode 100644 lib/librte_ethdev/rte_sft_driver.h
diff --git a/lib/librte_ethdev/meson.build b/lib/librte_ethdev/meson.build
index 8fc24e8c8a..064e3c9443 100644
--- a/lib/librte_ethdev/meson.build
+++ b/lib/librte_ethdev/meson.build
@@ -9,6 +9,7 @@ sources = files('ethdev_private.c',
'rte_ethdev.c',
'rte_flow.c',
'rte_mtr.c',
+ 'rte_sft.c',
'rte_tm.c')
headers = files('rte_ethdev.h',
@@ -24,6 +25,8 @@ headers = files('rte_ethdev.h',
'rte_flow_driver.h',
'rte_mtr.h',
'rte_mtr_driver.h',
+ 'rte_sft.h',
+ 'rte_sft_driver.h',
'rte_tm.h',
'rte_tm_driver.h')
diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index f8a0945812..e3c829b494 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -232,6 +232,25 @@ EXPERIMENTAL {
rte_eth_fec_get_capability;
rte_eth_fec_get;
rte_eth_fec_set;
+ rte_sft_drain_mbuf;
+ rte_sft_fini;
+ rte_sft_flow_activate;
+ rte_sft_flow_create;
+ rte_sft_flow_destroy;
+ rte_sft_flow_get_client_obj;
+ rte_sft_flow_get_status;
+ rte_sft_flow_query;
+ rte_sft_flow_set_aging;
+ rte_sft_flow_set_client_obj;
+ rte_sft_flow_set_data;
+ rte_sft_flow_set_offload;
+ rte_sft_flow_set_state;
+ rte_sft_flow_touch;
+ rte_sft_init;
+ rte_sft_process_mbuf;
+ rte_sft_process_mbuf_with_zone;
+
+
};
INTERNAL {
diff --git a/lib/librte_ethdev/rte_sft.c b/lib/librte_ethdev/rte_sft.c
new file mode 100644
index 0000000000..f3d3945545
--- /dev/null
+++ b/lib/librte_ethdev/rte_sft.c
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+
+#include "rte_sft.h"
+#include "rte_sft_driver.h"
+
+/* Placeholder for RTE SFT library APIs implementation */
diff --git a/lib/librte_ethdev/rte_sft.h b/lib/librte_ethdev/rte_sft.h
new file mode 100644
index 0000000000..edd8671cad
--- /dev/null
+++ b/lib/librte_ethdev/rte_sft.h
@@ -0,0 +1,878 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef _RTE_SFT_H_
+#define _RTE_SFT_H_
+
+/**
+ * @file
+ *
+ * RTE SFT API
+ *
+ * Defines RTE SFT APIs for the Stateful Flow Table library.
+ *
+ * The SFT lib is part of the ethdev class; the reason for this is that the main
+ * idea is to leverage the HW offload that the ethdev allows using the rte_flow.
+ *
+ * SFT General description:
+ * SFT library provides a framework for applications that need to maintain
+ * context across different packets of the connection.
+ * Examples for such applications:
+ * - Next-generation firewalls
+ * - Intrusion detection/prevention systems (IDS/IPS): Suricata, Snort
+ * - SW/Virtual Switching: OVS
+ * The goals of the SFT library:
+ * - Accelerate flow recognition & its context retrieval for further look-aside
+ * processing.
+ * - Enable context-aware flow handling offload.
+ *
+ * The SFT is designed to use HW offload to get the best performance.
+ * This is done on two levels. The first one is marking the packet with flow id
+ * to speed up the lookup of the flow in the data structure.
+ * The second is done by connecting the SFT results to the rte_flow for
+ * continuing the packet processing.
+ *
+ * Definitions and Abbreviations:
+ * - 5-tuple: defined by:
+ * -- Source IP address
+ * -- Source port
+ * -- Destination IP address
+ * -- Destination port
+ * -- IP protocol number
+ * - 7-tuple: 5-tuple, zone and port (see struct rte_sft_7tuple)
+ * - 5/7-tuple: 5/7-tuple of the packet from connection initiator
+ * - reverse 5/7-tuple: 5/7-tuple of the packet from the connection responder
+ * - application: SFT library API consumer
+ * - APP: see application
+ * - CID: client ID
+ * - CT: connection tracking
+ * - FID: Flow identifier
+ * - FIF: First In Flow
+ * - Flow: defined by 7-tuple and its reverse i.e. flow is bidirectional
+ * - SFT: Stateful Flow Table
+ * - user: see application
+ * - zone: additional user defined value used as differentiator for
+ * connections having same 5-tuple (for example different VXLAN
+ * connections with same inner 5-tuple).
+ *
+ * SFT components:
+ *
+ * +-----------------------------------+
+ * | RTE flow |
+ * | |
+ * | +-------------------------------+ | +----------------+
+ * | | group X | | | RTE_SFT |
+ * | | | | | |
+ * | | +---------------------------+ | | | |
+ * | | | rule ... | | | | |
+ * | | | . | | | +-----------+----+
+ * | | | . | | | |
+ * | | | . | | | entry
+ * | | +---------------------------+ | | create
+ * | | | rule | | | |
+ * | | | patterns ... +---------+ |
+ * | | | actions | | | | |
+ * | | | SFT (zone=Z) | | | | |
+ * | | | JUMP (group=Y) | | | lookup |
+ * | | +---------------------------+ | | zone=Z, |
+ * | | | rule ... | | | 5tuple |
+ * | | | . | | | | |
+ * | | | . | | | +--v-------------+
+ * | | | . | | | | SFT | |
+ * | | | | | | | | |
+ * | | +---------------------------+ | | | +--v--+ |
+ * | | | | | | | |
+ * | +-------------------------------+ | | | PMD | |
+ * | | | | | |
+ * | | | +-----+ |
+ * | +-------------------------------+ | | |
+ * | | group Y | | | |
+ * | | | | | set state |
+ * | | +---------------------------+ | | | set data |
+ * | | | rule | | | +--------+-------+
+ * | | | patterns | | | |
+ * | | | SFT (state=UNDEFINED) | | | |
+ * | | | actions RSS | | | |
+ * | | +---------------------------+ | | |
+ * | | | rule | | | |
+ * | | | patterns | | | |
+ * | | | SFT (state=INVALID) | <-------------+
+ * | | | actions DROP | | | forward
+ * | | +---------------------------+ | | group=Y
+ * | | | rule | | |
+ * | | | patterns | | |
+ * | | | SFT (state=ACCEPTED) | | |
+ * | | | actions PORT | | |
+ * | | +---------------------------+ | |
+ * | | ... | |
+ * | | | |
+ * | +-------------------------------+ |
+ * | ... |
+ * | |
+ * +-----------------------------------+
+ *
+ * SFT as a data structure:
+ * SFT can be treated as a data structure maintaining flow context across its
+ * lifetime. An SFT flow entry represents a bidirectional network flow and is
+ * defined by a 7-tuple & its reverse 7-tuple.
+ * Each entry in SFT has:
+ * - FID: 1:1 mapped & used as entry handle & encapsulating internal
+ * implementation of the entry.
+ * - State: user-defined value attached to each entry; the only value
+ * reserved by the library is the unset state (the actual value is defined
+ * by the SFT configuration). The application should define its flow state
+ * encodings and set the state for a flow via rte_sft_flow_set_state(), then
+ * the actions to apply on packets can be defined via a related RTE flow
+ * rule matching the SFT state (see rules in the SFT components diagram
+ * above).
+ * - Timestamp: of the last packet seen in the flow, used for the flow aging
+ * mechanism implementation.
+ * - Client Objects: user-defined flow contexts attached as opaques to flow.
+ * - Acceleration & offloading - utilize RTE flow capabilities, when supported
+ * (see action ``SFT``), for flow lookup acceleration and further
+ * context-aware flow handling offload.
+ * - CT state: optionally for TCP connections CT state can be maintained
+ * (see enum rte_sft_flow_ct_state).
+ * - Out of order TCP packets: optionally SFT can keep out of order TCP
+ * packets aside the flow context till the arrival of the missing in-order
+ * packet.
+ *
+ * RTE flow changes:
+ * The SFT flow state (or context) for RTE flow is defined by fields of
+ * struct rte_flow_item_sft.
+ * To utilize SFT capabilities new item and action types introduced:
+ * - item SFT: matching on SFT flow state (see RTE_FLOW_ITEM_TYPE_SFT).
+ * - action SFT: retrieve SFT flow context and attach it to the processed
+ * packet (see RTE_FLOW_ACTION_TYPE_SFT).
+ *
+ * The contents of the per port SFT serving RTE flow action ``SFT`` are
+ * managed via the SFT PMD APIs (see struct rte_sft_ops).
+ * The SFT flow state/context retrieval is performed using the user-defined
+ * zone argument of the ``SFT`` action and the processed packet 5-tuple.
+ * If, in the scope of action ``SFT``, there is no context/state for the flow
+ * in SFT, the undefined state is attached to the packet, meaning that the
+ * flow is not recognized by SFT and is most probably a FIF packet.
+ *
+ * Once the SFT state is set for a packet it can match on item SFT
+ * (see RTE_FLOW_ITEM_TYPE_SFT) and a forwarding decision can be made for the
+ * packet, for example:
+ * - if state value == x then queue for further processing by the application
+ * - if state value == y then forward it to eth port (full offload)
+ * - if state value == 'undefined' then queue for further processing by
+ * the application (handle FIF packets)
+ *
+ * Processing packets with SFT library:
+ *
+ * FIF packet:
+ * To recognize upcoming packets of the SFT flow every FIF packet should be
+ * forwarded to the application utilizing the SFT library. Non-FIF packets can
+ * be processed by the application or their processing can be fully offloaded.
+ * Processing of packets in the SFT library starts with rte_sft_process_mbuf
+ * or rte_sft_process_mbuf_with_zone. If the mbuf is recognized as FIF, the
+ * application should make a decision to destroy the flow or to complete the
+ * flow creation process in SFT using rte_sft_flow_activate.
+ *
+ * Recognized SFT flow:
+ * Once a struct rte_sft_flow_status with a valid fid field is possessed by
+ * the application it can:
+ * - manage client objects on it (see client_obj field in
+ * struct rte_sft_flow_status) using rte_sft_flow_<OP>_client_obj APIs
+ * - analyze user-defined flow state and CT state (see state & ct_state fields
+ * in struct rte_sft_flow_status).
+ * - set flow state to be attached to the upcoming packets by action ``SFT``
+ * via struct rte_sft_flow_status API.
+ * - decide to destroy the flow via the rte_sft_flow_destroy API.
+ *
+ * Flow aging:
+ *
+ * The SFT library manages the aging for each flow. On flow creation, the flow
+ * is assigned an aging value, the maximal number of seconds allowed to pass
+ * since the last flow packet arrived; once exceeded, the flow is considered
+ * aged.
+ * The application is notified of aged flows asynchronously via event queues.
+ * The device and port ID tuple identifying the event queue to enqueue
+ * flow aged events to is passed on flow creation as arguments
+ * (see rte_sft_flow_activate). It's the application's responsibility to
+ * initialize event queues and assign them to each flow for EOF event
+ * notifications.
+ * Aged EOF event handling:
+ * - Should be considered as application responsibility.
+ * - The last stage should be the release of the flow resources via
+ * rte_sft_flow_destroy API.
+ * - All client objects should be removed from flow before the
+ * rte_sft_flow_destroy API call.
+ * See the description of rte_sft_flow_destroy for an example of aged flow
+ * handling.
+ *
+ * SFT API thread safety:
+ *
+ * Since the SFT lib is designed to work as part of the fast path, the SFT
+ * is not thread safe. In order to enable better operation with multiple
+ * threads, the SFT lib uses the queue approach, where each queue can only be
+ * accessed by one thread while one thread can access multiple queues.
+ *
+ * SFT Library initialization and cleanup:
+ *
+ * SFT library should be considered as a single instance, preconfigured and
+ * initialized via rte_sft_init() API.
+ * SFT library resource deallocation and cleanup should be done via the
+ * rte_sft_fini() API as a stage of the application termination procedure.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_errno.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_flow.h>
+
+/**
+ * L3/L4 5-tuple - src/dest IP and port and IP protocol.
+ *
+ * Used for flow/connection identification.
+ */
+RTE_STD_C11
+struct rte_sft_5tuple {
+ union {
+ struct {
+ rte_be32_t src_addr; /**< IPv4 source address. */
+ rte_be32_t dst_addr; /**< IPv4 destination address. */
+ } ipv4;
+ struct {
+ uint8_t src_addr[16]; /**< IPv6 source address. */
+ uint8_t dst_addr[16]; /**< IPv6 destination address. */
+ } ipv6;
+ };
+ rte_be16_t src_port; /**< Source port. */
+ rte_be16_t dst_port; /**< Destination port. */
+ uint8_t proto; /**< IP protocol. */
+ uint8_t is_ipv6: 1; /**< True for valid IPv6 fields. Otherwise IPv4. */
+};
+
+/**
+ * Port flow identification.
+ *
+ * @p zone used for setups where 5-tuple is not enough to identify flow.
+ * For example different VLANs/VXLANs may have similar 5-tuples.
+ */
+struct rte_sft_7tuple {
+ struct rte_sft_5tuple flow_5tuple; /**< L3/L4 5-tuple. */
+ uint32_t zone; /**< Zone assigned to flow. */
+ uint16_t port_id; /**< Port identifier of Ethernet device. */
+};
+
+/**
+ * Structure describes SFT library configuration
+ */
+struct rte_sft_conf {
+ uint16_t nb_queues; /**< Preferred number of queues. */
+ uint32_t udp_aging; /**< UDP proto default aging in sec. */
+ uint32_t tcp_aging; /**< TCP proto default aging in sec. */
+ uint32_t tcp_syn_aging; /**< TCP SYN default aging in sec. */
+ uint32_t default_aging; /**< All unlisted proto default aging in sec. */
+ uint32_t nb_max_entries; /**< Max entries in SFT. */
+ uint8_t app_data_len; /**< Length of app data, in uint32_t units. */
+ uint32_t support_partial_match: 1;
+ /**< App can partial match on the data. */
+ uint32_t reorder_enable: 1;
+ /**< TCP packet reordering feature enabled bit. */
+ uint32_t tcp_ct_enable: 1;
+ /**< TCP connection tracking based on standard. */
+ uint32_t reserved: 30;
+};
+
+/**
+ * Structure that holds the action configuration.
+ */
+struct rte_sft_actions_specs {
+ struct rte_sft_5tuple *initiator_nat;
+ /**< The NAT configuration for the initiator flow. */
+ struct rte_sft_5tuple *reverse_nat;
+ /**< The NAT configuration for the reverse flow. */
+ uint64_t aging; /**< the aging time out in sec. */
+};
+
+#define RTE_SFT_ACTION_INITIATOR_NAT (1ul << 0)
+/**< NAT action should be done on the initiator traffic. */
+#define RTE_SFT_ACTION_REVERSE_NAT (1ul << 1)
+/**< NAT action should be done on the reverse traffic. */
+#define RTE_SFT_ACTION_COUNT (1ul << 2) /**< Enable count action. */
+#define RTE_SFT_ACTION_AGE (1ul << 3) /**< Enable ageing action. */
+
+
+/**
+ * Structure that holds the count data.
+ */
+struct rte_sft_query_data {
+ uint64_t nb_bytes; /**< Number of bytes that passed in the flow. */
+ uint64_t nb_packets; /**< Number of packets that passed in the flow. */
+ uint32_t age; /**< Seconds passed since last seen packet. */
+ uint32_t aging;
+ /**< Flow considered aged once this age (seconds) reached. */
+ uint32_t nb_bytes_valid: 1; /**< Number of bytes is valid. */
+ uint32_t nb_packets_valid: 1; /**< Number of packets is valid. */
+ uint32_t nb_age_valid: 1; /**< Age is valid. */
+ uint32_t nb_aging_valid: 1; /**< Aging is valid. */
+ uint32_t reserved: 28;
+};
+
+/**
+ * Structure describes the state of the flow in SFT.
+ */
+struct rte_sft_flow_status {
+ uint32_t fid; /**< SFT flow id. */
+ uint32_t zone; /**< Zone for lookup in SFT */
+ uint8_t state; /**< Application defined bidirectional flow state. */
+ uint8_t proto_state;
+ /**< Connection tracking flow state, based on the protocol standard. */
+ uint16_t proto; /**< L4 protocol. */
+ uint32_t nb_in_order_mbufs;
+ /**< Number of in-order mbufs available for drain */
+ uint32_t activated: 1; /**< Flow was activated. */
+ uint32_t zone_valid: 1; /**< Zone field is valid. */
+ uint32_t proto_state_change: 1; /**< Protocol state was changed. */
+ uint32_t fragmented: 1; /**< Last flow mbuf was fragmented. */
+ uint32_t out_of_order: 1; /**< Last flow mbuf was out of order (TCP). */
+ uint32_t offloaded: 1;
+ /**< The connection is offload and no packet should be stored. */
+ uint32_t initiator: 1; /**< marks if the mbuf is from the initiator. */
+ uint32_t reserved: 25;
+ uint32_t data[];
+ /**< Application data. The length is defined by the configuration. */
+};
+
+/**
+ * Verbose error types.
+ *
+ * Most of them provide the type of the object referenced by struct
+ * rte_flow_error.cause.
+ */
+enum rte_sft_error_type {
+ RTE_SFT_ERROR_TYPE_NONE, /**< No error. */
+ RTE_SFT_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
+ RTE_SFT_ERROR_TYPE_FLOW_NOT_DEFINED, /**< The FID is not defined. */
+};
+
+/**
+ * Verbose error structure definition.
+ *
+ * This object is normally allocated by applications and set by SFT, the
+ * message points to a constant string which does not need to be freed by
+ * the application, however its pointer can be considered valid only as long
+ * as its associated DPDK port remains configured. Closing the underlying
+ * device or unloading the PMD invalidates it.
+ *
+ * Both cause and message may be NULL regardless of the error type.
+ */
+struct rte_sft_error {
+ enum rte_sft_error_type type; /**< Cause field and error types. */
+ const void *cause; /**< Object responsible for the error. */
+ const char *message; /**< Human-readable error message. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get SFT flow status, based on the fid.
+ *
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param[out] status
+ * Structure to dump actual SFT flow status.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_get_status(const uint16_t queue, const uint32_t fid,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set user defined data.
+ *
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param data
+ * User defined data. The length is defined at configuration time.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_data(uint16_t queue, uint32_t fid, const uint32_t *data,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set user defined state.
+ *
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param state
+ * User state.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_state(uint16_t queue, uint32_t fid, const uint8_t state,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set whether the flow is offloaded.
+ *
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param offload
+ * Set to true if the flow is offloaded.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_offload(uint16_t queue, uint32_t fid, bool offload,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Initialize SFT library instance.
+ *
+ * @param conf
+ * SFT library instance configuration.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_init(const struct rte_sft_conf *conf, struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Finalize SFT library instance.
+ * Cleanup & release allocated resources.
+ */
+__rte_experimental
+void
+rte_sft_fini(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process mbuf received on RX queue.
+ *
+ * This function checks the mbuf against the SFT database and return the
+ * connection status that this mbuf belongs to.
+ *
+ * If status.activated = 1 and status.offloaded = 0 the input mbuf is
+ * considered consumed and the application is not allowed to use it or free it,
+ * instead the application should use the mbuf pointed to by mbuf_out.
+ * In case the mbuf is out of order or fragmented the mbuf_out will be NULL.
+ *
+ * If status.activated = 0 or status.offloaded = 1, the input mbuf is not
+ * consumed and the mbuf_out will always be NULL.
+ *
+ * This function doesn't create a new entry in the SFT.
+ *
+ * @param queue
+ * The sft queue number.
+ * @param[in] mbuf_in
+ * mbuf to process; the mbuf pointer is considered 'consumed' and should not
+ * be used if status.activated = 1 and status.offloaded = 0.
+ * @param[out] mbuf_out
+ * last processed not fragmented and in order mbuf.
+ * @param[out] status
+ * Connection status based on the last in mbuf.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialize in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_process_mbuf(uint16_t queue, struct rte_mbuf *mbuf_in,
+ struct rte_mbuf **mbuf_out,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process mbuf received on RX queue with the zone value provided by the caller.
+ *
+ * The behaviour of this function is similar to rte_sft_process_mbuf() except
+ * for the SFT lookup procedure. The lookup in SFT is always done by the
+ * *zone* arg and the 5-tuple extracted from the mbuf outer header contents.
+ *
+ * @see rte_sft_process_mbuf
+ *
+ * @param queue
+ * The sft queue number.
+ * @param[in] mbuf_in
+ * mbuf to process; mbuf pointer considered 'consumed' and should not be used
+ * after successful call to this function.
+ * @param[out] mbuf_out
+ * Last processed non-fragmented and in-order mbuf.
+ * @param[out] status
+ * Connection status based on the last input mbuf.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_process_mbuf_with_zone(uint16_t queue, struct rte_mbuf *mbuf_in,
+ uint32_t zone, struct rte_mbuf **mbuf_out,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Drain next in order mbuf.
+ *
+ * This function behaves similarly to rte_sft_process_mbuf() but acts on
+ * packets accumulated in the SFT flow due to a missing in-order packet.
+ * Processing is done on a single mbuf at a time and in order. Other than
+ * that, the behavior is the same as rte_sft_process_mbuf() for a defined &
+ * activated flow with a non-fragmented, in-order mbuf. This function should
+ * be called when rte_sft_process_mbuf or rte_sft_process_mbuf_with_zone
+ * sets the status->nb_in_order_mbufs output param != 0, and repeatedly
+ * until status->nb_in_order_mbufs == 0.
+ * Flow should be locked by the caller (see rte_sft_flow_lock).
+ *
+ * @param queue
+ * The sft queue number.
+ * @param fid
+ * SFT flow ID.
+ * @param[out] mbuf_out
+ * Last processed non-fragmented and in-order mbuf.
+ * @param nb_out
+ * Number of buffers to be drained.
+ * @param initiator
+ * True if the packets to be drained belong to the initiator.
+ * @param[out] status
+ * Connection status based on the last mbuf that was drained.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * The number of mbufs that were drained, negative value in case
+ * of error and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_drain_mbuf(uint16_t queue, uint32_t fid, struct rte_mbuf **mbuf_out,
+ uint16_t nb_out, bool initiator,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
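+
+/*
+ * Illustrative drain loop sketch, not part of the formal API definition.
+ * BURST and handle_packet() are application-side; status.fid and
+ * status.initiator are assumed to have been reported by the preceding
+ * rte_sft_process_mbuf() call:
+ *
+ * while (status.nb_in_order_mbufs != 0) {
+ *     struct rte_mbuf *bufs[BURST];
+ *     int n = rte_sft_drain_mbuf(queue, status.fid, bufs, BURST,
+ *                                status.initiator, &status, &error);
+ *     if (n < 0)
+ *         break; // error, see rte_sft_error
+ *     for (int i = 0; i < n; i++)
+ *         handle_packet(bufs[i], &status);
+ * }
+ */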
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Activate flow in SFT.
+ *
+ * This function creates an entry in the SFT for this connection.
+ * The reasons for the 2-phase flow creation procedure:
+ * 1. Missing reverse flow - flow context is shared for both flow directions,
+ * i.e. in order to maintain a bidirectional flow context in RTE SFT, packets
+ * arriving from both directions should be identified as packets of the
+ * RTE SFT flow. Consequently, before the creation of the SFT flow the caller
+ * should provide the reverse flow direction 7-tuple.
+ * 2. The caller of rte_sft_process_mbuf/rte_sft_process_mbuf_with_zone should
+ * be notified that the arrived mbuf is the first in its flow & decide whether
+ * to create a new flow or disregard this packet.
+ * This function completes the creation of the bidirectional SFT flow & creates
+ * entries for the 7-tuples on the SFT PMD defined by the tuple port, for both
+ * the initiator and initiate (reverse) 7-tuples.
+ * Flow aging, connection tracking state & out of order handling will be
+ * initialized according to the content of the *mbuf_in* passed to
+ * rte_sft_process_mbuf/_with_zone during phase 1 of flow creation.
+ * Once this function returns, subsequent calls to
+ * rte_sft_process_mbuf/_with_zone with this 7-tuple or its reverse will
+ * return the handle to this flow.
+ * Flow should be locked by the caller (see rte_sft_flow_lock).
+ *
+ * @param queue
+ * The SFT queue.
+ * @param[in] mbuf_in
+ * mbuf to process; mbuf pointer considered 'consumed' and should not be used
+ * after successful call to this function.
+ * @param reverse_tuple
+ * Expected response flow 7-tuple.
+ * @param state
+ * User defined state to set.
+ * @param data
+ * User defined data; the length is configured during SFT init.
+ * @param proto_enable
+ * Enables maintenance of status->proto_state connection tracking value
+ * for the flow; otherwise status->proto_state will be initialized with zeros.
+ * @param dev_id
+ * Event dev ID to enqueue end of flow event.
+ * @param port_id
+ * Event port ID to enqueue end of flow event.
+ * @param actions
+ * Flags that indicate which actions should be done on the packet before
+ * returning it to the rte_flow.
+ * @param action_specs
+ * Hold the actions configuration.
+ * @param[out] mbuf_out
+ * Last processed non-fragmented and in-order mbuf.
+ * @param[out] status
+ * Structure to dump SFT flow status once activated.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initializes this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_activate(uint16_t queue, struct rte_mbuf *mbuf_in,
+ const struct rte_sft_7tuple *reverse_tuple,
+ uint8_t state, uint32_t *data, uint8_t proto_enable,
+ uint8_t dev_id, uint8_t port_id, uint64_t actions,
+ const struct rte_sft_actions_specs *action_specs,
+ struct rte_mbuf **mbuf_out,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
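+
+/*
+ * Illustrative 2-phase creation sketch, not part of the formal API
+ * definition. build_reverse_7tuple(), APP_STATE_NEW, app_data and the
+ * event dev/port values are application-side placeholders:
+ *
+ * rte_sft_process_mbuf(queue, m, &out, &status, &error);
+ * if (!status.activated) {
+ *     // phase 1 reported the first packet of a not yet activated flow
+ *     struct rte_sft_7tuple reverse;
+ *     build_reverse_7tuple(m, &reverse);
+ *     // phase 2: complete the bidirectional flow; proto_enable = 1,
+ *     // no extra actions
+ *     rte_sft_flow_activate(queue, m, &reverse, APP_STATE_NEW, app_data,
+ *                           1, ev_dev_id, ev_port_id, 0, NULL,
+ *                           &out, &status, &error);
+ * }
+ */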
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Artificially create SFT flow.
+ *
+ * Function to create SFT flow before reception of the first flow packet.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param tuple
+ * Expected initiator flow 7-tuple.
+ * @param reverse_tuple
+ * Expected initiate (reverse) flow 7-tuple.
+ * @param state
+ * User defined state to set.
+ * @param data
+ * User defined data; the length is configured during SFT init.
+ * @param proto_enable
+ * Enables maintenance of status->proto_state connection tracking value
+ * for the flow; otherwise status->proto_state will be initialized with zeros.
+ * @param[out] status
+ * Connection status.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initializes this
+ * structure in case of error only.
+ *
+ * @return
+ * - on success: 0, locked SFT flow recognized by status->fid.
+ * - on error: a negative errno value, and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_create(uint16_t queue, const struct rte_sft_7tuple *tuple,
+ const struct rte_sft_7tuple *reverse_tuple,
+ uint8_t state, uint32_t *data, uint8_t proto_enable,
+ struct rte_sft_flow_status *status,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Removes flow from SFT.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID to destroy.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initializes this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_destroy(uint16_t queue, uint32_t fid, struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query counter and aging data.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param[out] data
+ * Structure filled with the flow's counter and aging data.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initializes this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_query(uint16_t queue, uint32_t fid,
+ struct rte_sft_query_data *data,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Reset flow age to zero.
+ *
+ * Simulates last flow packet with timestamp set to just now.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initializes this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_touch(uint16_t queue, uint32_t fid, struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set flow aging to specific value.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param aging
+ * New flow aging value.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initializes this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_aging(uint16_t queue, uint32_t fid, uint32_t aging,
+ struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set client object for given client ID.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param client_id
+ * Client ID to set object for.
+ * @param client_obj
+ * Pointer to opaque client object structure.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initializes this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_sft_error is set.
+ */
+__rte_experimental
+int
+rte_sft_flow_set_client_obj(uint16_t queue, uint32_t fid, uint8_t client_id,
+ void *client_obj, struct rte_sft_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get client object for given client ID.
+ *
+ * @param queue
+ * The SFT queue.
+ * @param fid
+ * SFT flow ID.
+ * @param client_id
+ * Client ID to get object for.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. SFT initializes this
+ * structure in case of error only.
+ *
+ * @return
+ * A valid client object opaque pointer in case of success, NULL otherwise
+ * and rte_sft_error is set.
+ */
+__rte_experimental
+void *
+rte_sft_flow_get_client_obj(uint16_t queue, const uint32_t fid,
+ uint8_t client_id, struct rte_sft_error *error);
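+
+/*
+ * Illustrative sketch of attaching per-flow application context, not part
+ * of the formal API definition. struct app_flow_ctx, app_ctx_alloc() and
+ * APP_CLIENT_ID are application-defined:
+ *
+ * struct app_flow_ctx *ctx = app_ctx_alloc();
+ * rte_sft_flow_set_client_obj(queue, fid, APP_CLIENT_ID, ctx, &error);
+ * // ... later, e.g. on the next packet of the same flow:
+ * ctx = rte_sft_flow_get_client_obj(queue, fid, APP_CLIENT_ID, &error);
+ */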
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SFT_H_ */
diff --git a/lib/librte_ethdev/rte_sft_driver.h b/lib/librte_ethdev/rte_sft_driver.h
new file mode 100644
index 0000000000..4f1964dab6
--- /dev/null
+++ b/lib/librte_ethdev/rte_sft_driver.h
@@ -0,0 +1,201 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef RTE_SFT_DRIVER_H_
+#define RTE_SFT_DRIVER_H_
+
+/**
+ * @file
+ * RTE generic SFT API (driver side)
+ *
+ * This file provides implementation helpers for internal use by PMDs. They
+ * are not intended to be exposed to applications and are not subject to ABI
+ * versioning.
+ */
+
+#include <stdint.h>
+
+#include "rte_ethdev.h"
+#include "rte_ethdev_driver.h"
+#include "rte_sft.h"
+#include "rte_flow.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_sft_entry;
+
+#define RTE_SFT_STATE_FLAG_FID_VALID (1 << 0)
+#define RTE_SFT_STATE_FLAG_ZONE_VALID (1 << 1)
+#define RTE_SFT_STATE_FLAG_FLOW_MISS (1 << 2)
+
+#define RTE_SFT_MISS_TCP_FLAGS (1 << 0)
+
+RTE_STD_C11
+struct rte_sft_decode_info {
+ union {
+ uint32_t fid; /**< The fid value. */
+ uint32_t zone; /**< The zone value. */
+ };
+ uint32_t state;
+ /**< Flags that mark the packet state, see RTE_SFT_STATE_FLAG_*. */
+};
+
+/**
+ * @internal
+ * Insert a flow to the SFT HW component.
+ *
+ * @param dev
+ * ethdev handle of port.
+ * @param fid
+ * Flow ID.
+ * @param queue
+ * The sft working queue.
+ * @param pattern
+ * The matching pattern.
+ * @param miss_conditions
+ * The conditions that force a miss even if the 5-tuple was matched,
+ * see RTE_SFT_MISS_*.
+ * @param actions
+ * Set of actions to apply in case the flow was hit. If no terminating action
+ * (queue, rss, drop, port) was given, the terminating action should be taken
+ * from the flow that resulted in the SFT.
+ * @param miss_actions
+ * Set of actions to apply in case the flow was matched but the miss
+ * conditions were also met (e.g. 5-tuple match but TCP flags are set).
+ * If no terminating action
+ * (queue, rss, drop, port) was given, the terminating action should be taken
+ * from the flow that resulted in the SFT.
+ * @param data
+ * The application data to attach to the flow.
+ * @param data_len
+ * The length of the data in uint32_t increments.
+ * @param state
+ * The application state to set.
+ * @param error[out]
+ * Verbose description of the error.
+ *
+ * @return
+ * Pointer to sft_entry in case of success, null otherwise and rte_sft_error
+ * is set.
+ */
+typedef struct rte_sft_entry *(*sft_entry_create_t)
+ (struct rte_eth_dev *dev, uint32_t fid, uint16_t queue,
+ const struct rte_flow_item *pattern, uint64_t miss_conditions,
+ const struct rte_flow_action *actions,
+ const struct rte_flow_action *miss_actions,
+ const uint32_t *data, uint16_t data_len, uint8_t state,
+ struct rte_sft_error *error);
+
+/**
+ * @internal
+ * Modify the state and the data of SFT flow in HW component.
+ *
+ * @param dev
+ * ethdev handle of port.
+ * @param entry
+ * The entry to modify.
+ * @param queue
+ * The sft working queue.
+ * @param data
+ * The application data to attach to the flow.
+ * @param data_len
+ * The length of the data in uint32_t increments.
+ * @param state
+ * The application state to set.
+ * @param error[out]
+ * Verbose description of the error.
+ *
+ * @return
+ * Negative errno value on error, 0 on success.
+ */
+typedef int (*sft_entry_modify_t)(struct rte_eth_dev *dev,
+ struct rte_sft_entry *entry, uint16_t queue,
+ const uint32_t *data, uint16_t data_len,
+ uint8_t state, struct rte_sft_error *error);
+
+/**
+ * @internal
+ * Destroy SFT flow in HW component.
+ *
+ * @param dev
+ * ethdev handle of port.
+ * @param entry
+ * The entry to destroy.
+ * @param queue
+ * The sft working queue.
+ * @param error[out]
+ * Verbose description of the error.
+ *
+ * @return
+ * Negative errno value on error, 0 on success.
+ */
+typedef int (*sft_entry_destroy_t)(struct rte_eth_dev *dev,
+ struct rte_sft_entry *entry, uint16_t queue,
+ struct rte_sft_error *error);
+
+/**
+ * @internal
+ * Decode sft state and FID from mbuf.
+ *
+ * @param dev
+ * ethdev handle of port.
+ * @param entry
+ * The entry used for decoding.
+ * @param queue
+ * The sft working queue.
+ * @param mbuf
+ * The input mbuf.
+ * @param info[out]
+ * The decoded sft data.
+ * @param error[out]
+ * Verbose description of the error.
+ *
+ * @return
+ * Negative errno value on error, 0 on success.
+ */
+typedef int (*sft_entry_decode_t)(struct rte_eth_dev *dev,
+ struct rte_sft_entry *entry, uint16_t queue,
+ struct rte_mbuf *mbuf,
+ struct rte_sft_decode_info *info,
+ struct rte_sft_error *error);
+
+/**
+ * Generic sft operations structure implemented and returned by PMDs.
+ *
+ * If successful, rte_sft_ops_get() must result in a pointer to a PMD-specific
+ * operations structure.
+ *
+ * See also rte_sft_ops_get().
+ *
+ * These callback functions are not supposed to be used by applications
+ * directly, which must rely on the API defined in rte_sft.h.
+ */
+struct rte_sft_ops {
+ sft_entry_create_t sft_entry_create;
+ sft_entry_modify_t sft_entry_modify;
+ sft_entry_destroy_t sft_entry_destroy;
+ sft_entry_decode_t sft_entry_decode;
+};
+
+/**
+ * Get generic sft operations structure from a port.
+ *
+ * @param port_id
+ * Port identifier to query.
+ * @param[out] error
+ * Pointer to flow error structure.
+ *
+ * @return
+ * The sft operations structure associated with port_id, NULL in case of
+ * error, in which case rte_errno is set and the error structure contains
+ * additional details.
+ */
+const struct rte_sft_ops *
+rte_sft_ops_get(uint16_t port_id, struct rte_sft_error *error);
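+
+/*
+ * Illustrative dispatch sketch for the SFT library side, not a normative
+ * implementation. dev, entry, queue and mbuf come from the caller's
+ * context:
+ *
+ * const struct rte_sft_ops *ops = rte_sft_ops_get(port_id, &error);
+ * if (ops == NULL || ops->sft_entry_decode == NULL)
+ *     return -rte_errno;
+ * struct rte_sft_decode_info info;
+ * int ret = ops->sft_entry_decode(dev, entry, queue, mbuf, &info, &error);
+ * if (ret == 0 && (info.state & RTE_SFT_STATE_FLAG_FID_VALID))
+ *     fid = info.fid;
+ */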
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_SFT_DRIVER_H_ */
--
2.25.1
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH] ethdev: deprecate shared counters using action attribute
2020-11-03 17:21 3% ` Thomas Monjalon
@ 2020-11-03 17:26 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2020-11-03 17:26 UTC (permalink / raw)
To: Thomas Monjalon, Ori Kam
Cc: dev, Andrew Rybchenko, Andrey Vesnovaty, Ferruh Yigit,
Ray Kinsella, Neil Horman, techboard
On 11/3/20 8:21 PM, Thomas Monjalon wrote:
> +Cc techboard
>
> There is an interesting case here that we should decide
> how to manage in general. Please see below.
>
> 01/11/2020 08:49, Ori Kam:
>> From: Thomas Monjalon <thomas@monjalon.net>
>>> 29/10/2020 15:39, Ori Kam:
>>>>> struct rte_flow_action_count {
>>>>> - uint32_t shared:1; /**< Share counter ID with other flow rules. */
>>>>> + /** @deprecated Share counter ID with other flow rules. */
>>>>> + uint32_t shared:1;
>>>>> uint32_t reserved:31; /**< Reserved, must be zero. */
>>>>> uint32_t id; /**< Counter ID. */
>>>>> };
>>>>
>>>> As much as I agree with your patch, I don't think we should push it since
>>>> not all PMD made the move to support count action, so the application still
>>> needs
>>>> to use this API.
>>>>
>>>> I think this patch should be done but in next LTS release.
>>>
>>> The patch is not removing the field,
>>> it is just warning it will be removed in next year.
>>
>> Yes I know, but I don't think it is correct to issue such a warning without support.
>> The application still must use this API, the warning should be added as soon as
>> at least one PMD support shared counters with the new API.
>
> It should be replaced with shared actions API,
> but you claim it is not supported yet. Right?
> What are the criteria to define the replacement as supported?
>
> What to do in such case?
> Can we warn about a deprecation without having a proper replacement?
> What is the pre-condition to warn about a deprecation?
>
> About the complete removal, it has already been decided by the techboard
> that we cannot remove an API until its replacement is stable.
> In other words, the new experimental API must be promoted
> in the stable ABI, before removing the deprecated API.
>
Maybe the right way here is to remove the deprecation markup,
but add a deprecation notice that it will be deprecated in
21.02 and that PMDs are encouraged to switch to shared actions.
Anyway, the questions above about the criteria are still valid
even in this case and should be answered.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] ethdev: deprecate shared counters using action attribute
@ 2020-11-03 17:21 3% ` Thomas Monjalon
2020-11-03 17:26 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-11-03 17:21 UTC (permalink / raw)
To: Andrew Rybchenko, Ori Kam
Cc: dev, Andrew Rybchenko, Andrey Vesnovaty, Ferruh Yigit,
Ray Kinsella, Neil Horman, techboard
+Cc techboard
There is an interesting case here that we should decide
how to manage in general. Please see below.
01/11/2020 08:49, Ori Kam:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 29/10/2020 15:39, Ori Kam:
> > > > struct rte_flow_action_count {
> > > > - uint32_t shared:1; /**< Share counter ID with other flow rules. */
> > > > + /** @deprecated Share counter ID with other flow rules. */
> > > > + uint32_t shared:1;
> > > > uint32_t reserved:31; /**< Reserved, must be zero. */
> > > > uint32_t id; /**< Counter ID. */
> > > > };
> > >
> > > As much as I agree with your patch, I don't think we should push it since
> > > not all PMD made the move to support count action, so the application still
> > needs
> > > to use this API.
> > >
> > > I think this patch should be done but in next LTS release.
> >
> > The patch is not removing the field,
> > it is just warning it will be removed in next year.
>
> Yes I know, but I don't think it is correct to issue such a warning without support.
> The application still must use this API, the warning should be added as soon as
> at least one PMD support shared counters with the new API.
It should be replaced with shared actions API,
but you claim it is not supported yet. Right?
What are the criteria to define the replacement as supported?
What to do in such case?
Can we warn about a deprecation without having a proper replacement?
What is the pre-condition to warn about a deprecation?
About the complete removal, it has already been decided by the techboard
that we cannot remove an API until its replacement is stable.
In other words, the new experimental API must be promoted
in the stable ABI, before removing the deprecated API.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-03 14:02 0% ` Slava Ovsiienko
@ 2020-11-03 15:03 0% ` Morten Brørup
2020-11-04 15:00 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-03 15:03 UTC (permalink / raw)
To: Slava Ovsiienko, NBU-Contact-Thomas Monjalon, dev, techboard
Cc: Ajit Khaparde, Ananyev, Konstantin, Andrew Rybchenko, dev, Yigit,
Ferruh, david.marchand, Richardson, Bruce, olivier.matz, jerinj,
honnappa.nagarahalli, maxime.coquelin, stephen, hemant.agrawal,
Matan Azrad, Shahaf Shuler
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Slava Ovsiienko
> Sent: Tuesday, November 3, 2020 3:03 PM
>
> Hi, Morten
>
> > From: Morten Brørup <mb@smartsharesystems.com>
> > Sent: Tuesday, November 3, 2020 14:10
> >
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > Sent: Monday, November 2, 2020 4:58 PM
> > >
> > > +Cc techboard
> > >
> > > We need benchmark numbers in order to take a decision.
> > > Please all, prepare some arguments and numbers so we can discuss
> the
> > > mbuf layout in the next techboard meeting.
> >
> > I propose that the techboard considers this from two angels:
> >
> > 1. Long term goals and their relative priority. I.e. what can be
> achieved with
> > wide-ranging modifications, requiring yet another ABI break and due
> notices.
> >
> > 2. Short term goals, i.e. what can be achieved for this release.
> >
> >
> > My suggestions follow...
> >
> > 1. Regarding long term goals:
> >
> > I have argued that simple forwarding of non-segmented packets using
> only the
> > first mbuf cache line can be achieved by making three
> > modifications:
> >
> > a) Move m->tx_offload to the first cache line.
> Not all PMDs use this field on Tx. HW might support the checksum
> offloads
> directly, not requiring these fields at all.
>
>
> > b) Use an 8 bit pktmbuf mempool index in the first cache line,
> > instead of the 64 bit m->pool pointer in the second cache line.
> 256 mpool looks enough, as for me. Regarding the indirect access to the
> pool
> (via some table) - it might introduce some performance impact.
It might, but I hope that it is negligible, so the benefits outweigh the disadvantages.
It would have to be measured, though.
And m->pool is only used for free()'ing (and detach()'ing) mbufs.
> For example,
> mlx5 PMD strongly relies on pool field for allocating mbufs in Rx
> datapath.
> We're going to update (o-o, we found point to optimize), but for now it
> does.
Without looking at the source code, I don't think the PMD is using m->pool in the RX datapath; I think it is using a pool dedicated to a receive queue used for RX descriptors in the PMD (i.e. driver->queue->pool).
>
> > c) Do not access m->next when we know that it is NULL.
> > We can use m->nb_segs == 1 or some other invariant as the gate.
> > It can be implemented by adding an m->next accessor function:
> > struct rte_mbuf * rte_mbuf_next(struct rte_mbuf * m)
> > {
> > return m->nb_segs == 1 ? NULL : m->next;
> > }
>
> Sorry, not sure about this. IIRC, nb_segs is valid in the first
> segment/mbuf only.
> If we have the 4 segments in the pkt we see nb_seg=4 in the first one,
> and the nb_seg=1
> in the others. The next field is NULL in the last mbuf only. Am I wrong
> and miss something ?
You are correct.
This would have to be updated too, either by increasing m->nb_segs in the following segments, or by splitting up the relevant functions into functions for working on first segments (incl. non-segmented packets) and functions for working on following segments of segmented packets.
>
> > Regarding the priority of this goal, I guess that simple forwarding
> of non-
> > segmented packets is probably the path taken by the majority of
> packets
> > handled by DPDK.
> >
> > An alternative goal could be:
> > Do not touch the second cache line during RX.
> > A comment in the mbuf structure says so, but it is not true anymore.
> >
> > (I guess that regression testing didn't catch this because the tests
> perform TX
> > immediately after RX, so the cache miss just moves from the TX to the
> RX part
> > of the test application.)
> >
> >
> > 2. Regarding short term goals:
> >
> > The current DPDK source code looks to me like m->next is the most
> frequently
> > accessed field in the second cache line, so it makes sense moving
> this to the
> > first cache line, rather than m->pool.
> > Benchmarking may help here.
>
> Moreover, for the segmented packets the packet size is supposed to be
> large,
> and it imposes the relatively low packet rate, so probably optimization
> of
> moving next to the 1st cache line might be negligible at all. Just
> compare 148Mpps of
> 64B pkts and 4Mpps of 3000B pkts over 100Gbps link. Currently we are on
> benchmarking
> and did not succeed yet on difference finding. The benefit can't be
> expressed in mpps delta,
> we should measure CPU clocks, but Rx queue is almost always empty - we
> have an empty
> loops. So, if we have the boost - it is extremely hard to catch one.
Very good point regarding the value of such an optimization, Slava!
And when free()'ing packets, both m->next and m->pool are touched.
So perhaps the free()/detach() functions in the mbuf library can be modified to handle first segments (and non-segmented packets) and following segments differently, so accessing m->next can be avoided for non-segmented packets. Then m->pool should be moved to the first cache line.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] Minutes of Technical Board Meeting, 2020-10-21
@ 2020-11-03 14:25 4% Jerin Jacob Kollanukkaran
0 siblings, 0 replies; 200+ results
From: Jerin Jacob Kollanukkaran @ 2020-11-03 14:25 UTC (permalink / raw)
To: dev; +Cc: techboard, Ori Kam, Guy Kaneti, Dovrat Zifroni
Minutes of Technical Board Meeting, 2020-10-21
Members Attending
-----------------
-Bruce
-Ferruh
-Hemant
-Honnappa
-Jerin (Chair)
-Kevin
-Konstantin
-Maxime
-Olivier
-Stephen
-Thomas
NOTE: The technical board meets every second Wednesday at https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.
NOTE: Next meeting will be on Wednesday 2020-11-04 @3pm UTC, and will be chaired by Kevin
# Feedback from TB on l3fwd-regexdev example application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- TB agreed to have an example application that exercises the regex device and network device as a separate application.
- The example shall be used for both benchmarking and functional verification of regex in the forwarding path.
- Based on the discussions, it is agreed that l3fwd might not be a realistic use case to consider for a regex + network application.
- Instead, TB recommends having a Deep Packet Inspection (DPI) style application to showcase the regex + networking use case.
- Following is a simple DPI style application definition, which can be considered as a candidate example of a regex + network application (a rough C sketch follows the list):
1) Create or import a rule database.
- Rule database will have an index as rule_id and pattern to search
2) Create or import rule_id to the action table
- Action could be -1 for a drop, or 0..N for a specific port to forward to upon a match
3) Enqueue all the packets from ethdev to regexdev
4) If there is a match, then do the action based on the table created in step 2 using struct rte_regexdev_match::rule_id
5) If there is no match, forward back to the source port
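A rough C sketch of the per-burst logic described above (illustrative only;
regexdev/queue setup, rte_regex_ops allocation and the pkts[i]/ops[i]
pairing are application-side, and action_table[] is the table from step 2):
nb = rte_eth_rx_burst(port, q, pkts, BURST);
/* step 3: wrap the mbufs into regex ops and enqueue for matching */
rte_regexdev_enqueue_burst(regex_dev, qp, ops, nb);
n = rte_regexdev_dequeue_burst(regex_dev, qp, ops, BURST);
for (i = 0; i < n; i++) {
    if (ops[i]->nb_matches == 0) {
        /* step 5: no match, forward back to the source port */
        rte_eth_tx_burst(port, q, &pkts[i], 1);
        continue;
    }
    /* step 4: act on the rule_id of the first match */
    act = action_table[ops[i]->matches[0].rule_id];
    if (act < 0)
        rte_pktmbuf_free(pkts[i]); /* drop */
    else
        rte_eth_tx_burst(act, q, &pkts[i], 1); /* forward */
}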
# Approval for ABI exception request for 20.11
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1) LPM
- Agreed to merge the patch with ABI changes
- Reference: http://git.dpdk.org/dpdk/commit/?id=ced5a6ce244323435d9b0c0cb8ff98adc07fc6bd
# Update on DTS usability
~~~~~~~~~~~~~~~~~~~~~~~~~
- Waiting for the feedback from community. Revisit after 20.11 release.
# Update Security process issue updates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Call for volunteers to review and to be part of Security process team.
# DMARC mitigation in the mailing list
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- No mitigation plan as of now to support DMARC in the mailing list.
- TB may revisit this topic once popular open source communities such as
the Linux kernel have migrated or have mitigations to support DMARC.
# Prepare the CFP text for a virtual Asia event in January 2021
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Following is the TB views on this proposal
1) Since it is a virtual event, why does it need to be specific to Asia?
Can attendees in similar time zones join the virtual connect?
2) Language will be a real barrier to hosting an Asia specific event.
3) TB recommends that if there are enough participants for the Mandarin
language, it is better to have a virtual connect targeting the
Mandarin-speaking audience, like past connects.
4) Also, based on country-specific community feedback, other language-specific
events/days can be planned if needed.
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-03 13:50 0% ` Bruce Richardson
@ 2020-11-03 14:03 0% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2020-11-03 14:03 UTC (permalink / raw)
To: Bruce Richardson
Cc: Thomas Monjalon, dev, techboard, Ajit Khaparde, Ananyev,
Konstantin, Andrew Rybchenko, Yigit, Ferruh, david.marchand,
olivier.matz, jerinj, viacheslavo, honnappa.nagarahalli,
maxime.coquelin, stephen, hemant.agrawal, Matan Azrad,
Shahaf Shuler
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Tuesday, November 3, 2020 2:50 PM
>
> On Tue, Nov 03, 2020 at 02:46:17PM +0100, Morten Brørup wrote:
> > > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > > Sent: Tuesday, November 3, 2020 1:26 PM
> > >
> > > On Tue, Nov 03, 2020 at 01:10:05PM +0100, Morten Brørup wrote:
> > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > > Sent: Monday, November 2, 2020 4:58 PM
> > > > >
> > > > > +Cc techboard
> > > > >
> > > > > We need benchmark numbers in order to take a decision.
> > > > > Please all, prepare some arguments and numbers so we can
> discuss
> > > > > the mbuf layout in the next techboard meeting.
> > > >
> > > > I propose that the techboard considers this from two angels:
> > > >
> > > > 1. Long term goals and their relative priority. I.e. what can be
> > > > achieved with wide-ranging modifications, requiring yet another
> ABI
> > > > break and due notices.
> > > >
> > > > 2. Short term goals, i.e. what can be achieved for this release.
> > > >
> > > >
> > > > My suggestions follow...
> > > >
> > > > 1. Regarding long term goals:
> > > >
> > > > I have argued that simple forwarding of non-segmented packets
> using
> > > > only the first mbuf cache line can be achieved by making three
> > > > modifications:
> > > >
> > > > a) Move m->tx_offload to the first cache line.
> > > > b) Use an 8 bit pktmbuf mempool index in the first cache line,
> > > > instead of the 64 bit m->pool pointer in the second cache
> line.
> > > > c) Do not access m->next when we know that it is NULL.
> > > > We can use m->nb_segs == 1 or some other invariant as the
> gate.
> > > > It can be implemented by adding an m->next accessor function:
> > > > struct rte_mbuf * rte_mbuf_next(struct rte_mbuf * m)
> > > > {
> > > > return m->nb_segs == 1 ? NULL : m->next;
> > > > }
> > > >
> > > > Regarding the priority of this goal, I guess that simple
> forwarding
> > > > of non-segmented packets is probably the path taken by the
> majority
> > > > of packets handled by DPDK.
> > > >
> > > >
> > > > An alternative goal could be:
> > > > Do not touch the second cache line during RX.
> > > > A comment in the mbuf structure says so, but it is not true
> anymore.
> > > >
> > >
> > > The comment should be true for non-scattered RX, I believe.
> >
> > You are correct.
> >
> > My suggestion was unclear: Extend this remark to include segmented
> packets.
> >
> > This could be a priority if the techboard considers RX segmented
> packets more important than my suggestion for single cache line
> forwarding of non-segmented packets.
> >
> >
> > > I'm not aware of any use of second cacheline for the fast-path RXs
> for many drivers.
> > > Am I missing something that has changed recently here?
> >
> > Check out eth_igb_recv_pkts() in the E1000 driver: rxm->next = NULL;
> > Or pmd_rx_burst() in the TAP driver: new_tail->next = seg->next;
> >
> > Perhaps the documentation should describe best practices for
> implementing RX and TX functions in drivers, including
> allocating/freeing mbufs. Or an example dummy Ethernet driver could do
> it.
> >
>
> Yes, perhaps I should be clearer about the "fast-path", because I was
> thinking of the optimized RX/TX paths for those nics at 10G and above.
> Probably the documentation should indeed have an update clarifying
> things a
> bit, since using the first cacheline only possible but not mandatory
> for
> simple RX.
I sometimes look at the source code of the simple drivers for reference, as they are easier to understand than the advanced vector drivers.
I suppose new PMD developers also would. :-)
Anyway, it is probably a good idea to add a clarifying note to the documentation, thus reflecting reality.
Just make sure that it says that the second cache line is supposed to be untouched by RX of high performance drivers, so application developers still consider it cold.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-03 12:10 4% ` Morten Brørup
2020-11-03 12:25 0% ` Bruce Richardson
@ 2020-11-03 14:02 0% ` Slava Ovsiienko
2020-11-03 15:03 0% ` Morten Brørup
1 sibling, 1 reply; 200+ results
From: Slava Ovsiienko @ 2020-11-03 14:02 UTC (permalink / raw)
To: Morten Brørup, NBU-Contact-Thomas Monjalon, dev, techboard
Cc: Ajit Khaparde, Ananyev, Konstantin, Andrew Rybchenko, dev, Yigit,
Ferruh, david.marchand, Richardson, Bruce, olivier.matz, jerinj,
honnappa.nagarahalli, maxime.coquelin, stephen, hemant.agrawal,
Matan Azrad, Shahaf Shuler
Hi, Morten
> -----Original Message-----
> From: Morten Brørup <mb@smartsharesystems.com>
> Sent: Tuesday, November 3, 2020 14:10
> To: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; dev@dpdk.org;
> techboard@dpdk.org
> Cc: Ajit Khaparde <ajit.khaparde@broadcom.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; dev@dpdk.org; Yigit, Ferruh
> <ferruh.yigit@intel.com>; david.marchand@redhat.com; Richardson, Bruce
> <bruce.richardson@intel.com>; olivier.matz@6wind.com; jerinj@marvell.com;
> Slava Ovsiienko <viacheslavo@nvidia.com>; honnappa.nagarahalli@arm.com;
> maxime.coquelin@redhat.com; stephen@networkplumber.org;
> hemant.agrawal@nxp.com; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan
> Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>
> Subject: RE: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst
> half
>
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > Sent: Monday, November 2, 2020 4:58 PM
> >
> > +Cc techboard
> >
> > We need benchmark numbers in order to take a decision.
> > Please all, prepare some arguments and numbers so we can discuss the
> > mbuf layout in the next techboard meeting.
>
> I propose that the techboard considers this from two angels:
>
> 1. Long term goals and their relative priority. I.e. what can be achieved with
> wide-ranging modifications, requiring yet another ABI break and due notices.
>
> 2. Short term goals, i.e. what can be achieved for this release.
>
>
> My suggestions follow...
>
> 1. Regarding long term goals:
>
> I have argued that simple forwarding of non-segmented packets using only the
> first mbuf cache line can be achieved by making three
> modifications:
>
> a) Move m->tx_offload to the first cache line.
Not all PMDs use this field on Tx. HW might support the checksum offloads
directly, not requiring these fields at all.
> b) Use an 8 bit pktmbuf mempool index in the first cache line,
> instead of the 64 bit m->pool pointer in the second cache line.
256 mpool looks enough, as for me. Regarding the indirect access to the pool
(via some table) - it might introduce some performance impact. For example,
mlx5 PMD strongly relies on pool field for allocating mbufs in Rx datapath.
We're going to update (o-o, we found point to optimize), but for now it does.
> c) Do not access m->next when we know that it is NULL.
> We can use m->nb_segs == 1 or some other invariant as the gate.
> It can be implemented by adding an m->next accessor function:
> struct rte_mbuf * rte_mbuf_next(struct rte_mbuf * m)
> {
> return m->nb_segs == 1 ? NULL : m->next;
> }
Sorry, not sure about this. IIRC, nb_segs is valid in the first segment/mbuf only.
If we have the 4 segments in the pkt we see nb_seg=4 in the first one, and the nb_seg=1
in the others. The next field is NULL in the last mbuf only. Am I wrong and miss something ?
> Regarding the priority of this goal, I guess that simple forwarding of non-
> segmented packets is probably the path taken by the majority of packets
> handled by DPDK.
>
> An alternative goal could be:
> Do not touch the second cache line during RX.
> A comment in the mbuf structure says so, but it is not true anymore.
>
> (I guess that regression testing didn't catch this because the tests perform TX
> immediately after RX, so the cache miss just moves from the TX to the RX part
> of the test application.)
>
>
> 2. Regarding short term goals:
>
> The current DPDK source code looks to me like m->next is the most frequently
> accessed field in the second cache line, so it makes sense moving this to the
> first cache line, rather than m->pool.
> Benchmarking may help here.
Moreover, for the segmented packets the packet size is supposed to be large,
and it imposes the relatively low packet rate, so probably optimization of
moving next to the 1st cache line might be negligible at all. Just compare 148Mpps of
64B pkts and 4Mpps of 3000B pkts over 100Gbps link. Currently we are on benchmarking
and did not succeed yet on difference finding. The benefit can't be expressed in mpps delta,
we should measure CPU clocks, but Rx queue is almost always empty - we have an empty
loops. So, if we have the boost - it is extremely hard to catch one.
With best regards, Slava
>
>
> If we - without breaking the ABI - can introduce a gate to avoid accessing m-
> >next when we know that it is NULL, we should keep it in the second cache
> line.
>
> In this case, I would prefer to move m->tx_offload to the first cache line,
> thereby providing a field available for application use, until the application
> prepares the packet for transmission.
>
>
> >
> >
> > 01/11/2020 21:59, Morten Brørup:
> > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > Sent: Sunday, November 1, 2020 5:38 PM
> > > >
> > > > 01/11/2020 10:12, Morten Brørup:
> > > > > One thing has always puzzled me:
> > > > > Why do we use 64 bits to indicate which memory pool an mbuf
> > > > > belongs to?
> > > > > The portid only uses 16 bits and an indirection index.
> > > > > Why don't we use the same kind of indirection index for mbuf
> > pools?
> > > >
> > > > I wonder what would be the cost of indirection. Probably
> > neglectible.
> > >
> > > Probably. The portid does it, and that indirection is heavily used
> > everywhere.
> > >
> > > The size of mbuf memory pool indirection array should be compile
> > > time
> > configurable, like the size of the portid indirection array.
> > >
> > > And for reference, the indirection array will fit into one cache
> > > line
> > if we default to 8 mbuf pools, thus supporting an 8 CPU socket system
> > with one mbuf pool per CPU socket, or a 4 CPU socket system with two
> > mbuf pools per CPU socket.
> > >
> > > (And as a side note: Our application is optimized for single-socket
> > systems, and we only use one mbuf pool. I guess many applications were
> > developed without carefully optimizing for multi-socket systems, and
> > also just use one mbuf pool. In these cases, the mbuf structure
> > doesn't really need a pool field. But it is still there, and the DPDK
> > libraries use it, so we didn't bother removing it.)
> > >
> > > > I think it is a good proposal...
> > > > ... for next year, after a deprecation notice.
> > > >
> > > > > I can easily imagine using one mbuf pool (or perhaps a few
> > > > > pools) per CPU socket (or per physical memory bus closest to an
> > > > > attached
> > NIC),
> > > > > but not more than 256 mbuf memory pools in total.
> > > > > So, let's introduce an mbufpoolid like the portid, and cut this
> > > > > mbuf field down from 64 to 8 bits.
> >
> > We will need to measure the perf of the solution.
> > There is a chance for the cost to be too much high.
> >
> >
> > > > > If we also cut down m->pkt_len from 32 to 24 bits,
> > > >
> > > > Who is using packets larger than 64k? Are 16 bits enough?
> > >
> > > I personally consider 64k a reasonable packet size limit. Exotic
> > applications with even larger packets would have to live with this
> > constraint. But let's see if there are any objections. For reference,
> > 64k corresponds to ca. 44 Ethernet (1500 byte) packets.
> > >
> > > (The limit could be 65535 bytes, to avoid translation of the value 0
> > into 65536 bytes.)
> > >
> > > This modification would go nicely hand in hand with the mbuf pool
> > indirection modification.
> > >
> > > ... after yet another round of ABI stability discussions,
> > depreciation notices, and so on. :-)
> >
> > After more thoughts, I'm afraid 64k is too small in some cases.
> > And 24-bit manipulation would probably break performance.
> > I'm afraid we are stuck with 32-bit length.
>
> Yes, 24 bit manipulation would probably break performance.
>
> Perhaps a solution exists with 16 bits (least significant bits) for the common
> cases, and 8 bits more (most significant bits) for the less common cases. Just
> thinking out loud here...
>
> >
> > > > > we can get the 8 bit mbuf pool index into the first cache line
> > > > > at no additional cost.
> > > >
> > > > I like the idea.
> > > > It means we don't need to move the pool pointer now, i.e. it does
> > > > not have to replace the timestamp field.
> > >
> > > Agreed! Don't move m->pool to the first cache line; it is not used
> > for RX.
> > >
> > > >
> > > > > In other words: This would free up another 64 bit field in the
> > mbuf
> > > > structure!
> > > >
> > > > That would be great!
> > > >
> > > >
> > > > > And even though the m->next pointer for scattered packets
> > > > > resides in the second cache line, the libraries and application
> > > > > knows that m->next is NULL when m->nb_segs is 1.
> > > > > This proves that my suggestion would make touching the second
> > > > > cache line unnecessary (in simple cases), even for
> > > > > re-initializing the mbuf.
> > > >
> > > > So you think the "next" pointer should stay in the second half of
> > mbuf?
> > > >
> > > > I feel you would like to move the Tx offloads in the first half to
> > > > improve performance of very simple apps.
> > >
> > > "Very simple apps" sounds like a minority of apps. I would rather
> > > say
> > "very simple packet handling scenarios", e.g. forwarding of normal
> > size non-segmented packets. I would guess that the vast majority of
> > packets handled by DPDK applications actually match this scenario. So
> > I'm proposing to optimize for what I think is the most common scenario.
> > >
> > > If segmented packets are common, then m->next could be moved to the
> > first cache line. But it will only improve the pure RX steps of the
> > pipeline. When preparing the packet for TX, m->tx_offloads will need
> > to be set, and the second cache line comes into play. So I'm wondering
> > how big the benefit of having m->next in the first cache line really
> > is - assuming that m->nb_segs will be checked before accessing m->next.
> > >
> > > > I am thinking the opposite: we could have some dynamic fields
> > > > space in the first half to improve performance of complex Rx.
> > > > Note: we can add a flag hint for field registration in this first
> > half.
> > > >
> > >
> > > I have had the same thoughts. However, I would prefer being able to
> > forward ordinary packets without using the second mbuf cache line at
> > all (although only in specific scenarios like my example above).
> > >
> > > Furthermore, the application can abuse the 64 bit m->tx_offload
> > > field
> > for private purposes until it is time to prepare the packet for TX and
> > pass it on to the driver. This hack somewhat resembles a dynamic field
> > in the first cache line, and will not be possible if the m->pool or m-
> > >next field is moved there.
> >
> >
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-03 13:46 0% ` Morten Brørup
@ 2020-11-03 13:50 0% ` Bruce Richardson
2020-11-03 14:03 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2020-11-03 13:50 UTC (permalink / raw)
To: Morten Brørup
Cc: Thomas Monjalon, dev, techboard, Ajit Khaparde, Ananyev,
Konstantin, Andrew Rybchenko, Yigit, Ferruh, david.marchand,
olivier.matz, jerinj, viacheslavo, honnappa.nagarahalli,
maxime.coquelin, stephen, hemant.agrawal, Matan Azrad,
Shahaf Shuler
On Tue, Nov 03, 2020 at 02:46:17PM +0100, Morten Brørup wrote:
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > Sent: Tuesday, November 3, 2020 1:26 PM
> >
> > On Tue, Nov 03, 2020 at 01:10:05PM +0100, Morten Brørup wrote:
> > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > Sent: Monday, November 2, 2020 4:58 PM
> > > >
> > > > +Cc techboard
> > > >
> > > > We need benchmark numbers in order to take a decision.
> > > > Please all, prepare some arguments and numbers so we can discuss
> > > > the mbuf layout in the next techboard meeting.
> > >
> > > I propose that the techboard considers this from two angels:
> > >
> > > 1. Long term goals and their relative priority. I.e. what can be
> > > achieved with wide-ranging modifications, requiring yet another ABI
> > > break and due notices.
> > >
> > > 2. Short term goals, i.e. what can be achieved for this release.
> > >
> > >
> > > My suggestions follow...
> > >
> > > 1. Regarding long term goals:
> > >
> > > I have argued that simple forwarding of non-segmented packets using
> > > only the first mbuf cache line can be achieved by making three
> > > modifications:
> > >
> > > a) Move m->tx_offload to the first cache line.
> > > b) Use an 8 bit pktmbuf mempool index in the first cache line,
> > > instead of the 64 bit m->pool pointer in the second cache line.
> > > c) Do not access m->next when we know that it is NULL.
> > > We can use m->nb_segs == 1 or some other invariant as the gate.
> > > It can be implemented by adding an m->next accessor function:
> > > struct rte_mbuf * rte_mbuf_next(struct rte_mbuf * m)
> > > {
> > > return m->nb_segs == 1 ? NULL : m->next;
> > > }
> > >
> > > Regarding the priority of this goal, I guess that simple forwarding
> > > of non-segmented packets is probably the path taken by the majority
> > > of packets handled by DPDK.
> > >
> > >
> > > An alternative goal could be:
> > > Do not touch the second cache line during RX.
> > > A comment in the mbuf structure says so, but it is not true anymore.
> > >
> >
> > The comment should be true for non-scattered RX, I believe.
>
> You are correct.
>
> My suggestion was unclear: Extend this remark to include segmented packets.
>
> This could be a priority if the techboard considers RX segmented packets more important than my suggestion for single cache line forwarding of non-segmented packets.
>
>
> > I'm not aware of any use of second cacheline for the fast-path RXs for many drivers.
> > Am I missing something that has changed recently here?
>
> Check out eth_igb_recv_pkts() in the E1000 driver: rxm->next = NULL;
> Or pmd_rx_burst() in the TAP driver: new_tail->next = seg->next;
>
> Perhaps the documentation should describe best practices for implementing RX and TX functions in drivers, including allocating/freeing mbufs. Or an example dummy Ethernet driver could do it.
>
Yes, perhaps I should be clearer about the "fast-path", because I was
thinking of the optimized RX/TX paths for those nics at 10G and above.
Probably the documentation should indeed have an update clarifying things a
bit, since using the first cacheline only possible but not mandatory for
simple RX.
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-03 12:25 0% ` Bruce Richardson
@ 2020-11-03 13:46 0% ` Morten Brørup
2020-11-03 13:50 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-03 13:46 UTC (permalink / raw)
To: Bruce Richardson
Cc: Thomas Monjalon, dev, techboard, Ajit Khaparde, Ananyev,
Konstantin, Andrew Rybchenko, Yigit, Ferruh, david.marchand,
olivier.matz, jerinj, viacheslavo, honnappa.nagarahalli,
maxime.coquelin, stephen, hemant.agrawal, Matan Azrad,
Shahaf Shuler
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Tuesday, November 3, 2020 1:26 PM
>
> On Tue, Nov 03, 2020 at 01:10:05PM +0100, Morten Brørup wrote:
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > Sent: Monday, November 2, 2020 4:58 PM
> > >
> > > +Cc techboard
> > >
> > > We need benchmark numbers in order to take a decision.
> > > Please all, prepare some arguments and numbers so we can discuss
> > > the mbuf layout in the next techboard meeting.
> >
> > I propose that the techboard considers this from two angels:
> >
> > 1. Long term goals and their relative priority. I.e. what can be
> > achieved with wide-ranging modifications, requiring yet another ABI
> > break and due notices.
> >
> > 2. Short term goals, i.e. what can be achieved for this release.
> >
> >
> > My suggestions follow...
> >
> > 1. Regarding long term goals:
> >
> > I have argued that simple forwarding of non-segmented packets using
> > only the first mbuf cache line can be achieved by making three
> > modifications:
> >
> > a) Move m->tx_offload to the first cache line.
> > b) Use an 8 bit pktmbuf mempool index in the first cache line,
> > instead of the 64 bit m->pool pointer in the second cache line.
> > c) Do not access m->next when we know that it is NULL.
> > We can use m->nb_segs == 1 or some other invariant as the gate.
> > It can be implemented by adding an m->next accessor function:
> > struct rte_mbuf * rte_mbuf_next(struct rte_mbuf * m)
> > {
> > return m->nb_segs == 1 ? NULL : m->next;
> > }
> >
> > Regarding the priority of this goal, I guess that simple forwarding
> > of non-segmented packets is probably the path taken by the majority
> > of packets handled by DPDK.
> >
> >
> > An alternative goal could be:
> > Do not touch the second cache line during RX.
> > A comment in the mbuf structure says so, but it is not true anymore.
> >
>
> The comment should be true for non-scattered RX, I believe.
You are correct.
My suggestion was unclear: Extend this remark to include segmented packets.
This could be a priority if the techboard considers RX segmented packets more important than my suggestion for single cache line forwarding of non-segmented packets.
> I'm not aware of any use of second cacheline for the fast-path RXs for many drivers.
> Am I missing something that has changed recently here?
Check out eth_igb_recv_pkts() in the E1000 driver: rxm->next = NULL;
Or pmd_rx_burst() in the TAP driver: new_tail->next = seg->next;
Perhaps the documentation should describe best practices for implementing RX and TX functions in drivers, including allocating/freeing mbufs. Or an example dummy Ethernet driver could do it.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-03 12:10 4% ` Morten Brørup
@ 2020-11-03 12:25 0% ` Bruce Richardson
2020-11-03 13:46 0% ` Morten Brørup
2020-11-03 14:02 0% ` Slava Ovsiienko
1 sibling, 1 reply; 200+ results
From: Bruce Richardson @ 2020-11-03 12:25 UTC (permalink / raw)
To: Morten Brørup
Cc: Thomas Monjalon, dev, techboard, Ajit Khaparde, Ananyev,
Konstantin, Andrew Rybchenko, Yigit, Ferruh, david.marchand,
olivier.matz, jerinj, viacheslavo, honnappa.nagarahalli,
maxime.coquelin, stephen, hemant.agrawal, Matan Azrad,
Shahaf Shuler
On Tue, Nov 03, 2020 at 01:10:05PM +0100, Morten Brørup wrote:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > Sent: Monday, November 2, 2020 4:58 PM
> >
> > +Cc techboard
> >
> > We need benchmark numbers in order to take a decision.
> > Please all, prepare some arguments and numbers so we can discuss
> > the mbuf layout in the next techboard meeting.
>
> I propose that the techboard considers this from two angels:
>
> 1. Long term goals and their relative priority. I.e. what can be
> achieved with wide-ranging modifications, requiring yet another ABI
> break and due notices.
>
> 2. Short term goals, i.e. what can be achieved for this release.
>
>
> My suggestions follow...
>
> 1. Regarding long term goals:
>
> I have argued that simple forwarding of non-segmented packets using
> only the first mbuf cache line can be achieved by making three
> modifications:
>
> a) Move m->tx_offload to the first cache line.
> b) Use an 8 bit pktmbuf mempool index in the first cache line,
> instead of the 64 bit m->pool pointer in the second cache line.
> c) Do not access m->next when we know that it is NULL.
> We can use m->nb_segs == 1 or some other invariant as the gate.
> It can be implemented by adding an m->next accessor function:
> struct rte_mbuf * rte_mbuf_next(struct rte_mbuf * m)
> {
> return m->nb_segs == 1 ? NULL : m->next;
> }
>
> Regarding the priority of this goal, I guess that simple forwarding
> of non-segmented packets is probably the path taken by the majority
> of packets handled by DPDK.
>
>
> An alternative goal could be:
> Do not touch the second cache line during RX.
> A comment in the mbuf structure says so, but it is not true anymore.
>
The comment should be true for non-scattered RX, I believe. I'm not aware
of any use of second cacheline for the fast-path RXs for many drivers. Am I
missing something that has changed recently here?
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotterfirst half
2020-11-02 15:58 0% ` Thomas Monjalon
@ 2020-11-03 12:10 4% ` Morten Brørup
2020-11-03 12:25 0% ` Bruce Richardson
2020-11-03 14:02 0% ` Slava Ovsiienko
0 siblings, 2 replies; 200+ results
From: Morten Brørup @ 2020-11-03 12:10 UTC (permalink / raw)
To: Thomas Monjalon, dev, techboard
Cc: Ajit Khaparde, Ananyev, Konstantin, Andrew Rybchenko, dev, Yigit,
Ferruh, david.marchand, Richardson, Bruce, olivier.matz, jerinj,
viacheslavo, honnappa.nagarahalli, maxime.coquelin, stephen,
hemant.agrawal, viacheslavo, Matan Azrad, Shahaf Shuler
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Monday, November 2, 2020 4:58 PM
>
> +Cc techboard
>
> We need benchmark numbers in order to take a decision.
> Please all, prepare some arguments and numbers so we can discuss
> the mbuf layout in the next techboard meeting.
I propose that the techboard considers this from two angels:
1. Long term goals and their relative priority. I.e. what can be
achieved with wide-ranging modifications, requiring yet another ABI
break and due notices.
2. Short term goals, i.e. what can be achieved for this release.
My suggestions follow...
1. Regarding long term goals:
I have argued that simple forwarding of non-segmented packets using
only the first mbuf cache line can be achieved by making three
modifications:
a) Move m->tx_offload to the first cache line.
b) Use an 8 bit pktmbuf mempool index in the first cache line,
instead of the 64 bit m->pool pointer in the second cache line.
c) Do not access m->next when we know that it is NULL.
We can use m->nb_segs == 1 or some other invariant as the gate.
It can be implemented by adding an m->next accessor function:
struct rte_mbuf * rte_mbuf_next(struct rte_mbuf * m)
{
    return m->nb_segs == 1 ? NULL : m->next;
}
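As a usage sketch - keeping in mind that nb_segs is only guaranteed
valid on the head mbuf, so the gate applies there and the rest of the
chain is walked via m->next as usual:

    /* Toy example: recompute pkt_len. Only touches the second
     * cache line when the packet is actually segmented. */
    uint32_t total = m->data_len;
    struct rte_mbuf *seg = rte_mbuf_next(m);

    while (seg != NULL) {
        total += seg->data_len;
        seg = seg->next;
    }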
Regarding the priority of this goal, I guess that simple forwarding
of non-segmented packets is probably the path taken by the majority
of packets handled by DPDK.
An alternative goal could be:
Do not touch the second cache line during RX.
A comment in the mbuf structure says so, but it is not true anymore.
(I guess that regression testing didn't catch this because the tests
perform TX immediately after RX, so the cache miss just moves from
the TX to the RX part of the test application.)
2. Regarding short term goals:
The current DPDK source code looks to me like m->next is the most
frequently accessed field in the second cache line, so it makes sense
to move this to the first cache line, rather than m->pool.
Benchmarking may help here.
If we - without breaking the ABI - can introduce a gate to avoid
accessing m->next when we know that it is NULL, we should keep it in
the second cache line.
In this case, I would prefer to move m->tx_offload to the first cache
line, thereby providing a field available for application use, until
the application prepares the packet for transmission.
>
>
> 01/11/2020 21:59, Morten Brørup:
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > Sent: Sunday, November 1, 2020 5:38 PM
> > >
> > > 01/11/2020 10:12, Morten Brørup:
> > > > One thing has always puzzled me:
> > > > Why do we use 64 bits to indicate which memory pool
> > > > an mbuf belongs to?
> > > > The portid only uses 16 bits and an indirection index.
> > > > Why don't we use the same kind of indirection index for mbuf
> pools?
> > >
> > > I wonder what would be the cost of indirection. Probably
> negligible.
> >
> > Probably. The portid does it, and that indirection is heavily used
> everywhere.
> >
> > The size of mbuf memory pool indirection array should be compile time
> configurable, like the size of the portid indirection array.
> >
> > And for reference, the indirection array will fit into one cache line
> if we default to 8 mbuf pools, thus supporting an 8 CPU socket system
> with one mbuf pool per CPU socket, or a 4 CPU socket system with two
> mbuf pools per CPU socket.
> >
> > (And as a side note: Our application is optimized for single-socket
> systems, and we only use one mbuf pool. I guess many applications were
> developed without carefully optimizing for multi-socket systems, and
> also just use one mbuf pool. In these cases, the mbuf structure doesn't
> really need a pool field. But it is still there, and the DPDK libraries
> use it, so we didn't bother removing it.)
> >
> > > I think it is a good proposal...
> > > ... for next year, after a deprecation notice.
> > >
> > > > I can easily imagine using one mbuf pool (or perhaps a few pools)
> > > > per CPU socket (or per physical memory bus closest to an attached
> NIC),
> > > > but not more than 256 mbuf memory pools in total.
> > > > So, let's introduce an mbufpoolid like the portid,
> > > > and cut this mbuf field down from 64 to 8 bits.
>
> We will need to measure the perf of the solution.
> There is a chance for the cost to be too high.
>
>
> > > > If we also cut down m->pkt_len from 32 to 24 bits,
> > >
> > > Who is using packets larger than 64k? Are 16 bits enough?
> >
> > I personally consider 64k a reasonable packet size limit. Exotic
> applications with even larger packets would have to live with this
> constraint. But let's see if there are any objections. For reference,
> 64k corresponds to ca. 44 Ethernet (1500 byte) packets.
> >
> > (The limit could be 65535 bytes, to avoid translation of the value 0
> into 65536 bytes.)
> >
> > This modification would go nicely hand in hand with the mbuf pool
> indirection modification.
> >
> > ... after yet another round of ABI stability discussions,
> deprecation notices, and so on. :-)
>
> After more thought, I'm afraid 64k is too small in some cases.
> And 24-bit manipulation would probably break performance.
> I'm afraid we are stuck with 32-bit length.
Yes, 24 bit manipulation would probably break performance.
Perhaps a solution exists with 16 bits (least significant bits) for
the common cases, and 8 bits more (most significant bits) for the less
common cases. Just thinking out loud here...
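Purely to illustrate the encoding (pkt_len_lo/pkt_len_hi are
hypothetical field names, not an actual layout proposal):

    /* 16 LSBs in the first cache line, 8 MSBs elsewhere; both are
     * naturally aligned loads, so no 24-bit manipulation is needed. */
    uint32_t pkt_len = ((uint32_t)m->pkt_len_hi << 16) | m->pkt_len_lo;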
>
> > > > we can get the 8 bit mbuf pool index into the first cache line
> > > > at no additional cost.
> > >
> > > I like the idea.
> > > It means we don't need to move the pool pointer now,
> > > i.e. it does not have to replace the timestamp field.
> >
> > Agreed! Don't move m->pool to the first cache line; it is not used
> for RX.
> >
> > >
> > > > In other words: This would free up another 64 bit field in the
> mbuf
> > > structure!
> > >
> > > That would be great!
> > >
> > >
> > > > And even though the m->next pointer for scattered packets resides
> > > > in the second cache line, the libraries and application knows
> > > > that m->next is NULL when m->nb_segs is 1.
> > > > This proves that my suggestion would make touching
> > > > the second cache line unnecessary (in simple cases),
> > > > even for re-initializing the mbuf.
> > >
> > > So you think the "next" pointer should stay in the second half of
> mbuf?
> > >
> > > I feel you would like to move the Tx offloads in the first half
> > > to improve performance of very simple apps.
> >
> > "Very simple apps" sounds like a minority of apps. I would rather say
> "very simple packet handling scenarios", e.g. forwarding of normal size
> non-segmented packets. I would guess that the vast majority of packets
> handled by DPDK applications actually match this scenario. So I'm
> proposing to optimize for what I think is the most common scenario.
> >
> > If segmented packets are common, then m->next could be moved to the
> first cache line. But it will only improve the pure RX steps of the
> pipeline. When preparing the packet for TX, m->tx_offloads will need to
> be set, and the second cache line comes into play. So I'm wondering how
> big the benefit of having m->next in the first cache line really is -
> assuming that m->nb_segs will be checked before accessing m->next.
> >
> > > I am thinking the opposite: we could have some dynamic fields space
> > > in the first half to improve performance of complex Rx.
> > > Note: we can add a flag hint for field registration in this first
> half.
> > >
> >
> > I have had the same thoughts. However, I would prefer being able to
> forward ordinary packets without using the second mbuf cache line at
> all (although only in specific scenarios like my example above).
> >
> > Furthermore, the application can abuse the 64 bit m->tx_offload field
> for private purposes until it is time to prepare the packet for TX and
> pass it on to the driver. This hack somewhat resembles a dynamic field
> in the first cache line, and will not be possible if the m->pool or m-
> >next field is moved there.
>
>
>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v7 0/4] devtools: abi breakage checks
[not found] ` <CAJFAV8wmpft6XLRg1RAL+d4ibbJVrR9C0ghkE-kqyig_q_Meeg@mail.gmail.com>
@ 2020-11-03 10:07 9% ` Kinsella, Ray
2020-11-10 12:53 8% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-11-03 10:07 UTC (permalink / raw)
To: David Marchand
Cc: Walsh, Conor, dpdk-dev, Luca Boccassi, Dodji Seketeli, Mcnamara, John
Hi David,
Came across an issue with this.
Essentially what is happening is that an ABI dump file generated with a newer version of libabigail
is not guaranteed to be 100% compatible with older versions.
That then adds a wrinkle: we may need to look at maintaining ABI dump archives per distro release,
or per libabigail version, depending on how you look at it.
An alternative approach suggested by Dodji would be to just archive the binaries somewhere instead,
and regenerate the dumps at build time. That _may_ be feasible,
but you lose some of the benefit (build-time saving) compared to archiving the ABI dumps.
The most sensible approach to archiving the binaries
is to use DPDK release OS packaging for this, installed to a fs sandbox.
So the next steps are figuring out which is the better option:
maintaining multiple ABI dump archives, one per supported OS distro,
or looking at what needs to happen with DPDK OS packaging.
So some work still to do here.
Thanks,
Ray K
^ permalink raw reply [relevance 9%]
* Re: [dpdk-dev] Ionic PMD - can we still get patches into a 20.02 stable?
2020-11-02 16:17 3% [dpdk-dev] Ionic PMD - can we still get patches into a 20.02 stable? Andrew Boyer
2020-11-02 16:25 0% ` Burakov, Anatoly
@ 2020-11-02 16:31 0% ` Ferruh Yigit
1 sibling, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-11-02 16:31 UTC (permalink / raw)
To: Andrew Boyer, dev
On 11/2/2020 4:17 PM, Andrew Boyer wrote:
> Hello DPDK folks,
> I am ready to start submitting some patches to bring the Pensando ionic PMD up to speed. The first batch will be a practice run of some minor things, if that’s acceptable.
>
Hi Andrew,
As you may have noticed, 'ionic' is marked as 'UNMAINTAINED' in the current tree.
I would like to see it to get proper updates and be back up to date.
> It appears that the 20.02 release is no longer being maintained. Is that correct? Is it possible for us to get patches into a new stable release of 20.02? They would only affect our PMD and not affect the ABI or anything.
>
20.02 is no longer maintained.
xx.11 releases are long term stable releases and they are maintained for two years.
v18.11 is almost at end of life; once the current release, v20.11, is out,
v19.11 & v20.11 will be the active LTS releases.
> It looks like I have just about missed the boat on 20.11 - would you prefer patches this week or should I hold them until December?
>
As said above, I am for getting updates; -rc2 is tomorrow, so most probably it is
already too late.
But I am OK to get 'ionic' specific patches for -rc3 too, this gives an
additional week for this release, if it is enough for you.
But let me warn you: we are very close to the release. If something goes wrong
with this last-minute code, you may not have enough time to detect and fix it,
and the driver may be released broken, which won't put you in a better situation.
If there are issues that you can fix and verify with confidence in this short
time frame, let's try to squeeze them in, but if the changes are big I suggest
waiting for the next release.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] Ionic PMD - can we still get patches into a 20.02 stable?
2020-11-02 16:17 3% [dpdk-dev] Ionic PMD - can we still get patches into a 20.02 stable? Andrew Boyer
@ 2020-11-02 16:25 0% ` Burakov, Anatoly
2020-11-02 16:31 0% ` Ferruh Yigit
1 sibling, 0 replies; 200+ results
From: Burakov, Anatoly @ 2020-11-02 16:25 UTC (permalink / raw)
To: Andrew Boyer, dev
On 02-Nov-20 4:17 PM, Andrew Boyer wrote:
> Hello DPDK folks,
> I am ready to start submitting some patches to bring the Pensando ionic PMD up to speed. The first batch will be a practice run of some minor things, if that’s acceptable.
>
> It appears that the 20.02 release is no longer being maintained. Is that correct? Is it possible for us to get patches into a new stable release of 20.02? They would only affect our PMD and not affect the ABI or anything.
>
> It looks like I have just about missed the boat on 20.11 - would you prefer patches this week or should I hold them until December?
>
> Thank you,
> Andrew
> Pensando
>
20.02 is not an LTS so there won't be any more releases.
You can send the patches any time, but at this stage they won't be
pulled in to 20.11. You may help maintainers by creating an account at
patchwork [1] and marking your patches as "Deferred" once you send them.
They will be marked as "New" once the development of the new release
commences.
[1] https://patchwork.dpdk.org
--
Thanks,
Anatoly
^ permalink raw reply [relevance 0%]
* [dpdk-dev] Ionic PMD - can we still get patches into a 20.02 stable?
@ 2020-11-02 16:17 3% Andrew Boyer
2020-11-02 16:25 0% ` Burakov, Anatoly
2020-11-02 16:31 0% ` Ferruh Yigit
0 siblings, 2 replies; 200+ results
From: Andrew Boyer @ 2020-11-02 16:17 UTC (permalink / raw)
To: dev
Hello DPDK folks,
I am ready to start submitting some patches to bring the Pensando ionic PMD up to speed. The first batch will be a practice run of some minor things, if that’s acceptable.
It appears that the 20.02 release is no longer being maintained. Is that correct? Is it possible for us to get patches into a new stable release of 20.02? They would only affect our PMD and not affect the ABI or anything.
It looks like I have just about missed the boat on 20.11 - would you prefer patches this week or should I hold them until December?
Thank you,
Andrew
Pensando
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotter first half
2020-11-01 20:59 3% ` Morten Brørup
@ 2020-11-02 15:58 0% ` Thomas Monjalon
2020-11-03 12:10 4% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-11-02 15:58 UTC (permalink / raw)
To: dev, techboard
Cc: Ajit Khaparde, Ananyev, Konstantin, Andrew Rybchenko, dev, Yigit,
Ferruh, david.marchand, Richardson, Bruce, olivier.matz, jerinj,
viacheslavo, honnappa.nagarahalli, maxime.coquelin, stephen,
hemant.agrawal, viacheslavo, Matan Azrad, Shahaf Shuler,
Morten Brørup
+Cc techboard
We need benchmark numbers in order to take a decision.
Please all, prepare some arguments and numbers so we can discuss
the mbuf layout in the next techboard meeting.
01/11/2020 21:59, Morten Brørup:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > Sent: Sunday, November 1, 2020 5:38 PM
> >
> > 01/11/2020 10:12, Morten Brørup:
> > > One thing has always puzzled me:
> > > Why do we use 64 bits to indicate which memory pool
> > > an mbuf belongs to?
> > > The portid only uses 16 bits and an indirection index.
> > > Why don't we use the same kind of indirection index for mbuf pools?
> >
> > I wonder what would be the cost of indirection. Probably negligible.
>
> Probably. The portid does it, and that indirection is heavily used everywhere.
>
> The size of mbuf memory pool indirection array should be compile time configurable, like the size of the portid indirection array.
>
> And for reference, the indirection array will fit into one cache line if we default to 8 mbuf pools, thus supporting an 8 CPU socket system with one mbuf pool per CPU socket, or a 4 CPU socket system with two mbuf pools per CPU socket.
>
> (And as a side note: Our application is optimized for single-socket systems, and we only use one mbuf pool. I guess many applications were developed without carefully optimizing for multi-socket systems, and also just use one mbuf pool. In these cases, the mbuf structure doesn't really need a pool field. But it is still there, and the DPDK libraries use it, so we didn't bother removing it.)
>
> > I think it is a good proposal...
> > ... for next year, after a deprecation notice.
> >
> > > I can easily imagine using one mbuf pool (or perhaps a few pools)
> > > per CPU socket (or per physical memory bus closest to an attached NIC),
> > > but not more than 256 mbuf memory pools in total.
> > > So, let's introduce an mbufpoolid like the portid,
> > > and cut this mbuf field down from 64 to 8 bits.
We will need to measure the perf of the solution.
There is a chance for the cost to be too high.
> > > If we also cut down m->pkt_len from 32 to 24 bits,
> >
> > Who is using packets larger than 64k? Are 16 bits enough?
>
> I personally consider 64k a reasonable packet size limit. Exotic applications with even larger packets would have to live with this constraint. But let's see if there are any objections. For reference, 64k corresponds to ca. 44 Ethernet (1500 byte) packets.
>
> (The limit could be 65535 bytes, to avoid translation of the value 0 into 65536 bytes.)
>
> This modification would go nicely hand in hand with the mbuf pool indirection modification.
>
> ... after yet another round of ABI stability discussions, deprecation notices, and so on. :-)
After more thought, I'm afraid 64k is too small in some cases.
And 24-bit manipulation would probably break performance.
I'm afraid we are stuck with 32-bit length.
> > > we can get the 8 bit mbuf pool index into the first cache line
> > > at no additional cost.
> >
> > I like the idea.
> > It means we don't need to move the pool pointer now,
> > i.e. it does not have to replace the timestamp field.
>
> Agreed! Don't move m->pool to the first cache line; it is not used for RX.
>
> >
> > > In other words: This would free up another 64 bit field in the mbuf
> > structure!
> >
> > That would be great!
> >
> >
> > > And even though the m->next pointer for scattered packets resides
> > > in the second cache line, the libraries and application knows
> > > that m->next is NULL when m->nb_segs is 1.
> > > This proves that my suggestion would make touching
> > > the second cache line unnecessary (in simple cases),
> > > even for re-initializing the mbuf.
> >
> > So you think the "next" pointer should stay in the second half of mbuf?
> >
> > I feel you would like to move the Tx offloads in the first half
> > to improve performance of very simple apps.
>
> "Very simple apps" sounds like a minority of apps. I would rather say "very simple packet handling scenarios", e.g. forwarding of normal size non-segmented packets. I would guess that the vast majority of packets handled by DPDK applications actually match this scenario. So I'm proposing to optimize for what I think is the most common scenario.
>
> If segmented packets are common, then m->next could be moved to the first cache line. But it will only improve the pure RX steps of the pipeline. When preparing the packet for TX, m->tx_offloads will need to be set, and the second cache line comes into play. So I'm wondering how big the benefit of having m->next in the first cache line really is - assuming that m->nb_segs will be checked before accessing m->next.
>
> > I am thinking the opposite: we could have some dynamic fields space
> > in the first half to improve performance of complex Rx.
> > Note: we can add a flag hint for field registration in this first half.
> >
>
> I have had the same thoughts. However, I would prefer being able to forward ordinary packets without using the second mbuf cache line at all (although only in specific scenarios like my example above).
>
> Furthermore, the application can abuse the 64 bit m->tx_offload field for private purposes until it is time to prepare the packet for TX and pass it on to the driver. This hack somewhat resembles a dynamic field in the first cache line, and will not be possible if the m->pool or m->next field is moved there.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v16 00/23] Add DLB PMD
2020-11-01 23:29 3% ` [dpdk-dev] [PATCH v16 " Timothy McDaniel
@ 2020-11-02 14:07 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2020-11-02 14:07 UTC (permalink / raw)
To: Timothy McDaniel
Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
Jerin Jacob, Thomas Monjalon
On Mon, Nov 2, 2020 at 4:58 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> The following patch series adds support for a new eventdev PMD. The DLB
> PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
> The DLB is a PCIe device that provides load-balanced, prioritized
> scheduling of core-to-core communication. The device consists of
> queues and arbiters that connect producer and consumer cores, and
> implements load-balanced queueing features including:
> - Lock-free multi-producer/multi-consumer operation.
> - Multiple priority levels for varying traffic types.
> - 'Direct' traffic (i.e. multi-producer/single-consumer)
> - Simple unordered load-balanced distribution.
> - Atomic lock-free load balancing across multiple consumers.
> - Queue element reordering feature allowing ordered load-balanced
> distribution.
>
> The DLB hardware supports both load balanced and directed ports and
> queues. Unlike other eventdev devices already in the repo, not all
> DLB ports and queues are equally capable. In particular, directed
> ports are limited to a single link, and must be connected to a directed
> queue.
> Additionally, even though LDB ports may link multiple queues, the
> number of queues that may be linked is limited by hardware. Another
> difference is that DLB does not have a straightforward way of carrying
> the flow_id in the queue elements (QE) that the hardware operates on.
>
> While reviewing the code, please be aware that this PMD has full
> control over the DLB hardware. Intel will be extending the DLB PMD
> in the future (not as part of this first series) with a mode that we
> refer to as the bifurcated PMD. The bifurcated PMD communicates with a
> kernel driver to configure the device, ports, and queues, and memory
> maps device MMIO so datapath operations occur purely in user-space.
>
> The framework to support both the PF PMD and bifurcated PMD exists in
> this patchset, and is why the iface.[ch] layer is present.
Series applied to dpdk-next-eventdev/for-main with the following fix. Thanks.
diff --git a/doc/guides/eventdevs/dlb.rst b/doc/guides/eventdevs/dlb.rst
index d44afcdcf..4c4f56b2b 100644
--- a/doc/guides/eventdevs/dlb.rst
+++ b/doc/guides/eventdevs/dlb.rst
@@ -2,7 +2,7 @@
Copyright(c) 2020 Intel Corporation.
Driver for the Intel® Dynamic Load Balancer (DLB)
-==================================================
+=================================================
The DPDK dlb poll mode driver supports the Intel® Dynamic Load Balancer.
>
> Major changes in V16
> ====================
> Address additional comments from David Marchand:
> - converted printfs in dlb/pf/dlb_main.c to DLB_LOG
> - fixed a repeated word error in dlb/pf/base/osdep_bitmap.h
> - caught up with marking the patches that Gage reviewed
>
> Major changes in V15
> ====================
> Address comments from David Marchand:
> - this patch-set is based on Nov 1, 2020 dpdk-next-eventdev
> - fix docs build (doxy-api.conf.in and doxy-api-index.md)
> - restore blank line in MAINTAINERS file
> - move dlb announcement in release_20_11.rst after ethdev
> - use headers = files() for exported meson public headers
> - fix a typo in 'add documentation ..." commit message
> - use eal version of cldemote
> - convert a couple of printfs to LOG messages
> - fix missing "~" in dlb documentation
> - delay introduction of _delayed token pop functions to
> token pop commit (fixes 8 or so unused function errors)
> - all patches build incrementally (gcc), and checkpatches reports
> success
> - I am not able to run clang locally. If clang errors are still
> present I will ask IT to install clang on a build server tomorrow.
>
> Major changes in V14
> ====================
> - Fixed format errors in doc/api/doxy-api-index.md
> - Delayed introduction of dlb2_consume_qe_immediate until
> add-dequeue-and-its-burst-variants.patch
> - Delayed introduction of dlb2_construct_token_pop_qe until
> add-PMD-s-token-pop-public-interface.patch
> - Delayed introduction of dlb_equeue_*_delayed until
> add dequeue and its burst variants.patch
>
> Major changes in V13
> ====================
> - removed now unused functions dlb_umwait and dlb_umonitor
>
> Major changes in V12
> ====================
> - Fix CENTOS build error: use __m128i instead of __v2di with
> _mm_stream_si128
>
> Major changes in V11
> ====================
> - removed unused function, fixing build error
> - fixed typo in port_setup commit message
> - this patch series is based on dpdk-next-eventdev
>
> Major changes in v10
> =====================
> - convert to use rte_power_monitor patches
> - replace __builtin_ia32_movntdq() with _mm_stream_si128()
> - remove unused functions in dlb_selftest.c
>
> Major changes in v9
> =====================
> - fixed a build error due to __rte_cache_aligned being placed after
> the ";" character, instead of before it.
>
> Major changes in v8 after dpdk reviews
> =====================
> - moved introduction of dlb in relnotes_20_11 to first patch in series
> - fixed underlines in dlb.rst that were too short
> - note that the code still uses its private byte-encoded versions of
> umonitor/umwait, rather than the new functions in the power
> patch that are built on top of those intrinsics. This is intentional.
>
> Major changes in v7 after dpdk reviews
> =====================
> - updated MAINTAINERS file to alphabetically insert DLB
> - don't create RTE_ symbols in PMD
> - converted to use version.map scheme
> - converted to use .._master_lcore instead of .._main_lcore
> - this patch set is based on dpdk-next-eventdev
>
> Major changes in v6 after dpdk reviews:
> =====================
> - fixed meson conditional build. Moved test into driver’s meson.build
> file instead of event/meson.build
> - documentation is populated as associated code is introduced
> - add log_register in add dynamic logging patch
> - rename RTE_xxx symbol(s) as DLB2_xxx
> - replaced function ptr enqueue_four with direct call to movdir64b
> - remove unused port_pages
> - broke up probe patch into 3 smaller patches for easier review
> - changed param order of movdir64b/movntdq to match intrinsics
> - added self to MAINTAINERS files
> - squashed announcement of availability into last patch in series
> - correct spelling errors and delete repeated words
> - DPDK_21.0 -> DPDK 21 in map file
> - add experimental banner to public structs and APIs
> - implemented other suggestions from code reviews of DLB2 PMD. The
> software is very similar in form so some DLB2 reviews comments
> were applicable to DLB as well
>
> Major changes in v5 after dpdk reviews and additional internal reviews
> by colleagues at Intel:
> ================
> - implement changes requested in code reviews by Gage Eads and Mike Chen
> - fix a memzone leak
> - convert to use eal rte-cpuflags patch from Liang Ma
>
> Major changes in v4 after dpdk reviews and additional internal reviews
> by colleagues at Intel:
> ================
> - Remove make infrastructure
> - shared code (pf/base) is now added incrementally
> - flexible interface (iface.[ch]) is now added incrementally
> - removed calls to rte_panic
> - do not call pthread_create directly
> - remove unused internal API, os_time
> - convert rte_atomic to __atomic builtins
> - broke out eventdev ABI changes, test/api changes, and new internal PCI
> named probe API
> - relocated enqueue logic to enqueue patch
>
> Major Changes in V3:
> ================
> - Fixed a memory corruption issue due to not allocating enough CQ
> memory for depths < 8. Hardware requires minimum allocation to be
> at least 8 entries.
> - Address review comments from Gage and Mattias.
> - Remove versioning
> - minor formatting changes
>
> Major changes in V2:
> ================
> - Correct ABI break that was present in V1.
> - Address some of the review comments received from Mattias.
> I will address the remaining items identified by Mattias in the next
> patch delivery.
> - General code cleanup based on internal code reviews
>
> Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
>
> Timothy McDaniel (23):
> event/dlb: add documentation and meson infrastructure
> event/dlb: add dynamic logging
> event/dlb: add private data structures and constants
> event/dlb: add definitions shared with LKM or shared code
> event/dlb: add inline functions
> event/dlb: add eventdev probe
> event/dlb: add flexible interface
> event/dlb: add probe-time hardware init
> event/dlb: add xstats
> event/dlb: add infos get and configure
> event/dlb: add queue and port default conf
> event/dlb: add queue setup
> event/dlb: add port setup
> event/dlb: add port link
> event/dlb: add port unlink and port unlinks in progress
> event/dlb: add eventdev start
> event/dlb: add enqueue and its burst variants
> event/dlb: add dequeue and its burst variants
> event/dlb: add eventdev stop and close
> event/dlb: add PMD's token pop public interface
> event/dlb: add PMD self-tests
> event/dlb: add queue and port release
> event/dlb: add timeout ticks entry point
>
> MAINTAINERS | 5 +
> app/test/test_eventdev.c | 7 +
> config/rte_config.h | 6 +
> doc/api/doxy-api-index.md | 3 +-
> doc/api/doxy-api.conf.in | 1 +
> doc/guides/eventdevs/dlb.rst | 341 ++
> doc/guides/eventdevs/index.rst | 1 +
> doc/guides/rel_notes/release_20_11.rst | 5 +
> drivers/event/dlb/dlb.c | 4079 +++++++++++++++
> drivers/event/dlb/dlb_iface.c | 79 +
> drivers/event/dlb/dlb_iface.h | 82 +
> drivers/event/dlb/dlb_inline_fns.h | 36 +
> drivers/event/dlb/dlb_log.h | 25 +
> drivers/event/dlb/dlb_priv.h | 513 ++
> drivers/event/dlb/dlb_selftest.c | 1539 ++++++
> drivers/event/dlb/dlb_user.h | 814 +++
> drivers/event/dlb/dlb_xstats.c | 1217 +++++
> drivers/event/dlb/meson.build | 22 +
> drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
> drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
> drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
> drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
> drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
> drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
> drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
> drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
> drivers/event/dlb/pf/dlb_main.c | 586 +++
> drivers/event/dlb/pf/dlb_main.h | 47 +
> drivers/event/dlb/pf/dlb_pf.c | 750 +++
> drivers/event/dlb/rte_pmd_dlb.c | 38 +
> drivers/event/dlb/rte_pmd_dlb.h | 77 +
> drivers/event/dlb/version.map | 9 +
> drivers/event/meson.build | 2 +-
> 33 files changed, 21677 insertions(+), 2 deletions(-)
> create mode 100644 doc/guides/eventdevs/dlb.rst
> create mode 100644 drivers/event/dlb/dlb.c
> create mode 100644 drivers/event/dlb/dlb_iface.c
> create mode 100644 drivers/event/dlb/dlb_iface.h
> create mode 100644 drivers/event/dlb/dlb_inline_fns.h
> create mode 100644 drivers/event/dlb/dlb_log.h
> create mode 100644 drivers/event/dlb/dlb_priv.h
> create mode 100644 drivers/event/dlb/dlb_selftest.c
> create mode 100644 drivers/event/dlb/dlb_user.h
> create mode 100644 drivers/event/dlb/dlb_xstats.c
> create mode 100644 drivers/event/dlb/meson.build
> create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
> create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
> create mode 100644 drivers/event/dlb/pf/dlb_main.c
> create mode 100644 drivers/event/dlb/pf/dlb_main.h
> create mode 100644 drivers/event/dlb/pf/dlb_pf.c
> create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
> create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
> create mode 100644 drivers/event/dlb/version.map
>
> --
> 2.6.4
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v16 00/23] Add DLB PMD
` (8 preceding siblings ...)
2020-11-01 19:26 3% ` [dpdk-dev] [PATCH v15 " Timothy McDaniel
@ 2020-11-01 23:29 3% ` Timothy McDaniel
2020-11-02 14:07 0% ` Jerin Jacob
9 siblings, 1 reply; 200+ results
From: Timothy McDaniel @ 2020-11-01 23:29 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
Major changes in V16
====================
Address additional comments from David Marchand:
- converted printfs in dlb/pf/dlb_main.c to DLB_LOG
- fixed a repeated word error in dlb/pf/base/osdep_bitmap.h
- caught up with marking the patches that Gage reviewed
Major changes in V15
====================
Address comments from David Marchand:
- this patch-set is based on Nov 1, 2020 dpdk-next-eventdev
- fix docs build (doxy-api.conf.in and doxy-api-index.md)
- restore blank line in MAINTAINERS file
- move dlb announcement in release_20_11.rst after ethdev
- use headers = files() for exported meson public headers
- fix a typo in 'add documentation ..." commit message
- use eal version of cldemote
- convert a couple of printfs to LOG messages
- fix missing "~" in dlb documentation
- delay introduction of _delayed token pop functions to
token pop commit (fixes 8 or so unused function errors)
- all patches build incrementally (gcc), and checkpatches reports
success
- I am not able to run clang locally. If clang errors are still
present I will ask IT to install clang on a build server tomorrow.
Major changes in V14
====================
- Fixed format errors in doc/api/doxy-api-index.md
- Delayed introduction of dlb2_consume_qe_immediate until
add-dequeue-and-its-burst-variants.patch
- Delayed introduction of dlb2_construct_token_pop_qe until
add-PMD-s-token-pop-public-interface.patch
- Delayed introduction of dlb_equeue_*_delayed until
add dequeue and its burst variants.patch
Major changes in V13
====================
- removed now unused functions dlb_umwait and dlb_umonitor
Major changes in V12
====================
- Fix CENTOS build error: use __m128i instead of __v2di with
_mm_stream_si128
Major changes in V11
====================
- removed unused function, fixing build error
- fixed typo in port_setup commit message
- this patch series is based on dpdk-next-eventdev
Major changes in v10
=====================
- convert to use rte_power_monitor patches
- replace __builtin_ia32_movntdq() with _mm_stream_si128()
- remove unused functions in dlb_selftest.c
Major changes in v9
=====================
- fixed a build error due to __rte_cache_aligned being placed after
the ";" character, instead of before it.
Major changes in v8 after dpdk reviews
=====================
- moved introduction of dlb in relnotes_20_11 to first patch in series
- fixed underlines in dlb.rst that were too short
- note that the code still uses its private byte-encoded versions of
umonitor/umwait, rather than the new functions in the power
patch that are built on top of those intrinsics. This is intentional.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._master_lcore instead of .._main_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form so some DLB2 reviews comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. Hardware requires minimum allocation to be
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 5 +
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4079 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 36 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1539 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1217 +++++
drivers/event/dlb/meson.build | 22 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
33 files changed, 21677 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotter first half
@ 2020-11-01 20:59 3% ` Morten Brørup
2020-11-02 15:58 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2020-11-01 20:59 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Ajit Khaparde, Ananyev, Konstantin, Andrew Rybchenko, dev,
Yigit, Ferruh, david.marchand, Richardson, Bruce, olivier.matz,
jerinj, viacheslavo, honnappa.nagarahalli, maxime.coquelin,
stephen, hemant.agrawal, viacheslavo, Matan Azrad, Shahaf Shuler
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Sunday, November 1, 2020 5:38 PM
>
> 01/11/2020 10:12, Morten Brørup:
> > One thing has always puzzled me:
> > Why do we use 64 bits to indicate which memory pool
> > an mbuf belongs to?
> > The portid only uses 16 bits and an indirection index.
> > Why don't we use the same kind of indirection index for mbuf pools?
>
> I wonder what would be the cost of indirection. Probably negligible.
Probably. The portid does it, and that indirection is heavily used everywhere.
The size of mbuf memory pool indirection array should be compile time configurable, like the size of the portid indirection array.
And for reference, the indirection array will fit into one cache line if we default to 8 mbuf pools, thus supporting an 8 CPU socket system with one mbuf pool per CPU socket, or a 4 CPU socket system with two mbuf pools per CPU socket.
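A minimal sketch of the lookup, mirroring how the portid indirection works
(RTE_MAX_MBUF_POOLS, rte_mbuf_pools[] and m->pool_id are hypothetical names,
not existing DPDK symbols):

    #define RTE_MAX_MBUF_POOLS 8 /* compile time configurable */

    extern struct rte_mempool *rte_mbuf_pools[RTE_MAX_MBUF_POOLS];

    static inline struct rte_mempool *
    rte_mbuf_pool(const struct rte_mbuf *m)
    {
        /* m->pool_id would be the proposed 8 bit index. */
        return rte_mbuf_pools[m->pool_id];
    }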
(And as a side note: Our application is optimized for single-socket systems, and we only use one mbuf pool. I guess many applications were developed without carefully optimizing for multi-socket systems, and also just use one mbuf pool. In these cases, the mbuf structure doesn't really need a pool field. But it is still there, and the DPDK libraries use it, so we didn't bother removing it.)
> I think it is a good proposal...
> ... for next year, after a deprecation notice.
>
> > I can easily imagine using one mbuf pool (or perhaps a few pools)
> > per CPU socket (or per physical memory bus closest to an attached NIC),
> > but not more than 256 mbuf memory pools in total.
> > So, let's introduce an mbufpoolid like the portid,
> > and cut this mbuf field down from 64 to 8 bits.
> >
> > If we also cut down m->pkt_len from 32 to 24 bits,
>
> Who is using packets larger than 64k? Are 16 bits enough?
I personally consider 64k a reasonable packet size limit. Exotic applications with even larger packets would have to live with this constraint. But let's see if there are any objections. For reference, 64k corresponds to ca. 44 Ethernet (1500 byte) packets.
(The limit could be 65535 bytes, to avoid translation of the value 0 into 65536 bytes.)
This modification would go nicely hand in hand with the mbuf pool indirection modification.
... after yet another round of ABI stability discussions, deprecation notices, and so on. :-)
>
> > we can get the 8 bit mbuf pool index into the first cache line
> > at no additional cost.
>
> I like the idea.
> It means we don't need to move the pool pointer now,
> i.e. it does not have to replace the timestamp field.
Agreed! Don't move m->pool to the first cache line; it is not used for RX.
>
> > In other words: This would free up another 64 bit field in the mbuf
> structure!
>
> That would be great!
>
>
> > And even though the m->next pointer for scattered packets resides
> > in the second cache line, the libraries and application knows
> > that m->next is NULL when m->nb_segs is 1.
> > This proves that my suggestion would make touching
> > the second cache line unnecessary (in simple cases),
> > even for re-initializing the mbuf.
>
> So you think the "next" pointer should stay in the second half of mbuf?
>
> I feel you would like to move the Tx offloads in the first half
> to improve performance of very simple apps.
"Very simple apps" sounds like a minority of apps. I would rather say "very simple packet handling scenarios", e.g. forwarding of normal size non-segmented packets. I would guess that the vast majority of packets handled by DPDK applications actually match this scenario. So I'm proposing to optimize for what I think is the most common scenario.
If segmented packets are common, then m->next could be moved to the first cache line. But it will only improve the pure RX steps of the pipeline. When preparing the packet for TX, m->tx_offloads will need to be set, and the second cache line comes into play. So I'm wondering how big the benefit of having m->next in the first cache line really is - assuming that m->nb_segs will be checked before accessing m->next.
> I am thinking the opposite: we could have some dynamic fields space
> in the first half to improve performance of complex Rx.
> Note: we can add a flag hint for field registration in this first half.
>
I have had the same thoughts. However, I would prefer being able to forward ordinary packets without using the second mbuf cache line at all (although only in specific scenarios like my example above).
Furthermore, the application can abuse the 64 bit m->tx_offload field for private purposes until it is time to prepare the packet for TX and pass it on to the driver. This hack somewhat resembles a dynamic field in the first cache line, and will not be possible if the m->pool or m->next field is moved there.
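To illustrate the kind of private use I mean (flow_hash and queue_hint are
hypothetical application variables, not DPDK fields):

    /* Stash application-private data in the otherwise unused
     * tx_offload field between RX and TX preparation. */
    m->tx_offload = (uint64_t)flow_hash << 32 | queue_hint;

    /* ... packet processing ... */

    /* TX preparation overwrites the scratch data with the real
     * offload values before handing the mbuf to the driver. */
    m->tx_offload = 0;
    m->l2_len = sizeof(struct rte_ether_hdr);
    m->l3_len = sizeof(struct rte_ipv4_hdr);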
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v15 00/23] Add DLB PMD
` (7 preceding siblings ...)
2020-10-31 18:17 3% ` [dpdk-dev] [PATCH v14 " Timothy McDaniel
@ 2020-11-01 19:26 3% ` Timothy McDaniel
2020-11-01 23:29 3% ` [dpdk-dev] [PATCH v16 " Timothy McDaniel
9 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-11-01 19:26 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
Major changes in V15
====================
Address comments from David Marchand:
- this patch-set is based on Nov 1, 2020 dpdk-next-eventdev
- fix docs build (doxy-api.conf.in and doxy-api-index.md)
- restore blank line in MAINTAINERS file
- move dlb announcement in release_20_11.rst after ethdev
- use headers = files() for exported meson public headers
- fix a typo in 'add documentation ..." commit message
- use eal version of cldemote
- convert a couple of printfs to LOG messages
- fix missing "~" in dlb documentation
- delay introduction of _delayed token pop functions to
token pop commit (fixes 8 or so unused function errors)
- all patches build incrementally (gcc), and checkpatches reports
success
- I am not able to run clang locally. If clang errors are still
present I will ask IT to install clang on a build server tomorrow.
Major changes in V14
====================
- Fixed format errors in doc/api/doxy-api-index.md
- Delayed introduction of dlb2_consume_qe_immediate until
add-dequeue-and-its-burst-variants.patch
- Delayed introduction of dlb2_construct_token_pop_qe until
add-PMD-s-token-pop-public-interface.patch
- Delayed introduction of dlb_equeue_*_delayed until
add dequeue and its burst variants.patch
Major changes in V13
====================
- removed now unused functions dlb_umwait and dlb_umonitor
Major changes in V12
====================
- Fix CENTOS build error: use __m128i instead of __v2di with
_mm_stream_si128
Major changes in V11
====================
- removed unused function, fixing build error
- fixed typo in port_setup commit message
- this patch series is based on dpdk-next-eventdev
Major changes in v10
=====================
- convert to use rte_power_monitor patches
- replace __builtin_ia32_movntdq() with _mm_stream_si128()
- remove unused functions in dlb_selftest.c
Major changes in v9
=====================
- fixed a build error due to __rte_cache_aligned being placed after
the ";" character, instead of before it.
Major changes in v8 after dpdk reviews
=====================
- moved introduction of dlb in relnotes_20_11 to first patch in series
- fixed underlines in dlb.rst that were too short
- note that the code still uses its private byte-encoded versions of
umonitor/umwait, rather than the new functions in the power
patch that are built on top of those intrinsics. This is intentional.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._master_lcore instead of .._main_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form so some DLB2 reviews comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. Hardware requires minimum allocation to be
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 5 +
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4079 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 36 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1539 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1217 +++++
drivers/event/dlb/meson.build | 22 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
33 files changed, 21677 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate ABI version
2020-10-19 9:41 9% ` [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate " David Marchand
2020-10-27 12:13 4% ` David Marchand
2020-11-01 14:48 4% ` Thomas Monjalon
@ 2020-11-01 15:09 4% ` Raslan Darawsheh
2 siblings, 0 replies; 200+ results
From: Raslan Darawsheh @ 2020-11-01 15:09 UTC (permalink / raw)
To: David Marchand, dev
Cc: NBU-Contact-Thomas Monjalon, Matan Azrad, Shahaf Shuler, Slava Ovsiienko
Hi,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of David Marchand
> Sent: Monday, October 19, 2020 12:42 PM
> To: dev@dpdk.org
> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>
> Subject: [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate
> ABI version
>
> The glue libraries are tightly bound to the mlx drivers of a dpdk version
> and are packaged with them.
>
> Keeping a separate ABI version prevents us from installing two versions of
> dpdk.
> Maintaining this separate version just adds confusion.
> Align the glue library ABI version to the global ABI version.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> drivers/common/mlx5/linux/meson.build | 2 +-
> drivers/common/mlx5/linux/mlx5_glue.h | 1 -
> drivers/net/mlx4/meson.build | 2 +-
> drivers/net/mlx4/mlx4_glue.h | 1 -
> 4 files changed, 2 insertions(+), 4 deletions(-)
Patch applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate ABI version
2020-11-01 14:48 4% ` Thomas Monjalon
@ 2020-11-01 15:02 7% ` Slava Ovsiienko
0 siblings, 0 replies; 200+ results
From: Slava Ovsiienko @ 2020-11-01 15:02 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon, David Marchand
Cc: dev, Matan Azrad, Shahaf Shuler, Raslan Darawsheh, Asaf Penso
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Sunday, November 1, 2020 16:49
> To: David Marchand <david.marchand@redhat.com>
> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Shahaf Shuler
> <shahafs@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan
> Darawsheh <rasland@nvidia.com>; Asaf Penso <asafp@nvidia.com>
> Subject: Re: [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate
> ABI version
>
> 19/10/2020 11:41, David Marchand:
> > The glue libraries are tightly bound to the mlx drivers of a dpdk
> > version and are packaged with them.
> >
> > Keeping a separate ABI version prevents us from installing two
> > versions of dpdk.
> > Maintaining this separate version just adds confusion.
> > Align the glue library ABI version to the global ABI version.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
>
> There was no comment after 2 weeks, it should be merged.
>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>
Looks safe and provides an automatic ABI version update for mlx*_glue modules.
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
^ permalink raw reply [relevance 7%]
* Re: [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate ABI version
2020-10-19 9:41 9% ` [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate " David Marchand
2020-10-27 12:13 4% ` David Marchand
@ 2020-11-01 14:48 4% ` Thomas Monjalon
2020-11-01 15:02 7% ` Slava Ovsiienko
2020-11-01 15:09 4% ` Raslan Darawsheh
2 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-11-01 14:48 UTC (permalink / raw)
To: David Marchand
Cc: dev, Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko, rasland, asafp
19/10/2020 11:41, David Marchand:
> The glue libraries are tightly bound to the mlx drivers of a dpdk version
> and are packaged with them.
>
> Keeping a separate ABI version prevents us from installing two versions of
> dpdk.
> Maintaining this separate version just adds confusion.
> Align the glue library ABI version to the global ABI version.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
There was no comment after 2 weeks, it should be merged.
Acked-by: Thomas Monjalon <thomas@monjalon.net>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v14 00/23] Add DLB PMD
` (6 preceding siblings ...)
2020-10-31 2:12 3% ` [dpdk-dev] [PATCH v13 " Timothy McDaniel
@ 2020-10-31 18:17 3% ` Timothy McDaniel
2020-11-01 19:26 3% ` [dpdk-dev] [PATCH v15 " Timothy McDaniel
2020-11-01 23:29 3% ` [dpdk-dev] [PATCH v16 " Timothy McDaniel
9 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-31 18:17 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
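As a concrete illustration of the single-link constraint on directed
ports, a minimal sketch using the generic eventdev API; the IDs are
hypothetical and this is not code from the PMD:

#include <rte_eventdev.h>

static int
link_directed_port(uint8_t dev_id, uint8_t port_id, uint8_t queue_id)
{
        uint8_t queues[1] = { queue_id };
        uint8_t prios[1] = { RTE_EVENT_DEV_PRIORITY_NORMAL };

        /* A directed port accepts exactly one link, so request just one;
         * rte_event_port_link() returns the number of links established.
         */
        return rte_event_port_link(dev_id, port_id, queues, prios, 1);
}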
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
Major changes in V14
====================
- Fixed format errors in doc/api/doxy-api-index.md
- Delayed introduction of dlb2_consume_qe_immediate until
add-dequeue-and-its-burst-variants.patch
- Delayed introduction of dlb2_construct_token_pop_qe until
add-PMD-s-token-pop-public-interface.patch
- Delayed introduction of dlb_enqueue_*_delayed until
add dequeue and its burst variants.patch
Major changes in V13
====================
- removed now unused functions dlb_umwait and dlb_umonitor
Major changes in V12
====================
- Fix CENTOS build error: use __m128i instead of __v2di with
_mm_stream_si128
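For reference, a minimal sketch of the portable form of that store, not
the actual driver code; __v2di is a GCC-internal vector type, while the
SSE2 intrinsic is declared in terms of __m128i:

#include <emmintrin.h>

static inline void
stream_store_16b(void *pp_addr, __m128i qe)
{
        /* Non-temporal 16-byte store; pp_addr must be 16-byte aligned. */
        _mm_stream_si128((__m128i *)pp_addr, qe);
}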
Major changes in V11
====================
- removed unused function, fixing build error
- fixed typo in port_setup commit message
- this patch series is based on dpdk-next-eventdev
Major changes in v10
=====================
- convert to use rte_power_monitor patches
- replace __builtin_ia32_movntdq() with _mm_stream_si128()
- remove unused functions in dlb_selftest.c
Major changes in v9
=====================
- fixed a build error due to __rte_cache_aligned being placed after
the ";" character, instead of before it.
Major changes in v8 after dpdk reviews
=====================
- moved introduction of dlb in relnotes_20_11 to first patch in series
- fixed underlines in dlb.rst that were too short
- note that the code still uses its private byte-encoded versions of
umonitor/umwait, rather than the new functions in the power
patch that are built on top of those intrinsics. This is intentional.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._master_lcore instead of .._main_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form, so some DLB2 review comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. The hardware requires a minimum allocation of
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 6 +-
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4079 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 40 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1539 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 +++++
drivers/event/dlb/meson.build | 21 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
32 files changed, 21684 insertions(+), 3 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v13 00/23] Add DLB PMD
2020-10-31 2:12 3% ` [dpdk-dev] [PATCH v13 " Timothy McDaniel
@ 2020-10-31 12:49 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2020-10-31 12:49 UTC (permalink / raw)
To: Timothy McDaniel
Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
Jerin Jacob, Thomas Monjalon
On Sat, Oct 31, 2020 at 7:41 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> The following patch series adds support for a new eventdev PMD. The DLB
> PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
> The DLB is a PCIe device that provides load-balanced, prioritized
> scheduling of core-to-core communication. The device consists of
> queues and arbiters that connect producer and consumer cores, and
> implements load-balanced queueing features including:
> - Lock-free multi-producer/multi-consumer operation.
> - Multiple priority levels for varying traffic types.
> - 'Direct' traffic (i.e. multi-producer/single-consumer)
> - Simple unordered load-balanced distribution.
> - Atomic lock-free load balancing across multiple consumers.
> - Queue element reordering feature allowing ordered load-balanced
> distribution.
>
> The DLB hardware supports both load balanced and directed ports and
> queues. Unlike other eventdev devices already in the repo, not all
> DLB ports and queues are equally capable. In particular, directed
> ports are limited to a single link, and must be connected to a directed
> queue.
> Additionally, even though LDB ports may link multiple queues, the
> number of queues that may be linked is limited by hardware. Another
> difference is that DLB does not have a straightforward way of carrying
> the flow_id in the queue elements (QE) that the hardware operates on.
>
> While reviewing the code, please be aware that this PMD has full
> control over the DLB hardware. Intel will be extending the DLB PMD
> in the future (not as part of this first series) with a mode that we
> refer to as the bifurcated PMD. The bifurcated PMD communicates with a
> kernel driver to configure the device, ports, and queues, and memory
> maps device MMIO so datapath operations occur purely in user-space.
>
> The framework to support both the PF PMD and bifurcated PMD exists in
> this patchset, and is why the iface.[ch] layer is present.
>
> Major changes in V13
> ====================
> - removed now unused functions dlb_umwait and dlb_umonitor
There is a build error with clang in the "event/dlb: add enqueue and its
burst variants" patch. Please make sure each patch builds to avoid delays
in merging.
Also, please address David's comment on the doc in the next version.
FAILED: drivers/libtmp_rte_event_dlb.a.p/event_dlb_dlb.c.o
ccache clang -Idrivers/libtmp_rte_event_dlb.a.p -Idrivers -I../drivers
-Idrivers/event/dlb -I../drivers/event/dlb -Ilib/librte_eventdev
-I../lib/librte_eventdev -I. -I.. -Iconfig -I../config
-Ilib/librte_eal/include -I../lib/librte_eal/include
-Ilib/librte_eal/linux/include -I../lib/librte_eal/linux/include
-Ilib/librte_eal/x86/include -I../lib/librte_eal/x86/include
-Ilib/librte_eal/common -I../lib/librte_eal/common -Ilib/librte_eal
-I../lib/librte_eal -Ilib/librte_kvargs -I../lib/librte_kvargs
-Ilib/librte_metrics -I../lib/librte_metrics -Ilib/librte_telemetry
-I../lib/librte_telemetry -Ilib/librte_ring -I../lib/librte_ring
-Ilib/librte_ethdev -I../lib/librte_ethdev -Ilib/librte_net
-I../lib/librte_net -Ilib/librte_mbuf -I../lib/librte_mbuf
-Ilib/librte_mempool -I../lib/librte_mempool -Ilib/librte_meter
-I../lib/librte_meter -Ilib/librte_hash -I../lib/librte_hash
-Ilib/librte_rcu -I../lib/librte_rcu -Ilib/librte_timer
-I../lib/librte_timer -Ilib/librte_cryptodev -I../lib/librte_cryptodev
-Ilib/librte_pci -I../lib/librte_pci -Idrivers/bus/pci -I../drivers/bus/pci
-I../drivers/bus/pci/linux -Xclang -fcolor-diagnostics -pipe
-D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Werror -O2 -g
-include rte_config.h -Wextra -Wcast-qual -Wdeprecated
-Wformat-nonliteral -Wformat-security -Wmissing-declarations
-Wmissing-prototypes -Wnested-externs -Wold-style-definition
-Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
-Wwrite-strings -Wno-address-of-packed-member
-Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native
-DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -MD -MQ
drivers/libtmp_rte_event_dlb.a.p/event_dlb_dlb.c.o -MF
drivers/libtmp_rte_event_dlb.a.p/event_dlb_dlb.c.o.d -o
drivers/libtmp_rte_event_dlb.a.p/event_dlb_dlb.c.o -c
../drivers/event/dlb/dlb.c
../drivers/event/dlb/dlb.c:2777:1: error: unused function
'dlb_event_enqueue_delayed' [-Werror,-Wunused-function]
dlb_event_enqueue_delayed(void *event_port,
^
../drivers/event/dlb/dlb.c:2762:1: error: unused function
'dlb_event_enqueue_burst_delayed' [-Werror,-Wunused-function]
dlb_event_enqueue_burst_delayed(void *event_port,
^
../drivers/event/dlb/dlb.c:2792:1: error: unused function
'dlb_event_enqueue_new_burst_delayed' [-Werror,-Wunused-function]
dlb_event_enqueue_new_burst_delayed(void *event_port,
^
../drivers/event/dlb/dlb.c:2808:1: error: unused function
'dlb_event_enqueue_forward_burst_delayed' [-Werror,-Wunused-function]
dlb_event_enqueue_forward_burst_delayed(void *event_port,
^
../drivers/event/dlb/dlb.c:2605:1: error: unused function
'dlb_construct_token_pop_qe' [-Werror,-Wunused-function]
dlb_construct_token_pop_qe(struct dlb_port *qm_port, int idx)
^
../drivers/event/dlb/dlb.c:2653:1: error: unused function
'dlb_consume_qe_immediate' [-Werror,-Wunused-function]
dlb_consume_qe_immediate(struct dlb_port *qm_port, int num)
^
6 errors generated.
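For context, a short sketch of the failure mode, with hypothetical
names; a static function introduced in one patch but only called in a
later patch trips -Wunused-function under -Werror. Either introduce the
helper together with its first caller, or temporarily mark it
__rte_unused:

#include <rte_common.h>

/* Fails the build under -Werror,-Wunused-function until a caller
 * exists, unless the __rte_unused attribute below is applied.
 */
static __rte_unused int
dlb_helper_not_yet_called(int x)
{
        return x + 1;
}

/* Introduced in the same patch as its caller: no warning. */
static int
dlb_helper_called(int x)
{
        return x - 1;
}

int
dlb_use_helper(int x)
{
        return dlb_helper_called(x);
}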
>
> Major changes in V12
> ====================
> - Fix CENTOS build error: use __m128i instead of __v2di with
> _mm_stream_si128
>
> Major changes in V11
> ====================
> - removed unused function, fixing build error
> - fixed typo in port_setup commit message
> - this patch series is based on dpdk-next-eventdev
>
> Major changes in v10
> =====================
> - convert to use rte_power_monitor patches
> - replace __builtin_ia32_movntdq() with _mm_stream_si128()
> - remove unused functions in dlb_selftest.c
>
> Major changes in v9
> =====================
> - fixed a build error due to __rte_cache_aligned being placed after
> the ";" character, instead of before it.
>
> Major changes in v8 after dpdk reviews
> =====================
> - moved introduction of dlb in relnotes_20_11 to first patch in series
> - fixed underlines in dlb.rst that were too short
> - note that the code still uses its private byte-encoded versions of
> umonitor/umwait, rather than the new functions in the power
> patch that are built on top of those intrinsics. This is intentional.
>
> Major changes in v7 after dpdk reviews
> =====================
> - updated MAINTAINERS file to alphabetically insert DLB
> - don't create RTE_ symbols in PMD
> - converted to use version.map scheme
> - converted to use .._master_lcore instead of .._main_lcore
> - this patch set is based on dpdk-next-eventdev
>
> Major changes in v6 after dpdk reviews:
> =====================
> - fixed meson conditional build. Moved test into driver’s meson.build
> file instead of event/meson.build
> - documentation is populated as associated code is introduced
> - add log_register in add dynamic logging patch
> - rename RTE_xxx symbol(s) as DLB2_xxx
> - replaced function ptr enqueue_four with direct call to movdir64b
> - remove unused port_pages
> - broke up probe patch into 3 smaller patches for easier review
> - changed param order of movdir64b/movntdq to match intrinsics
> - added self to MAINTAINERS files
> - squashed announcement of availability into last patch in series
> - correct spelling errors and delete repeated words
> - DPDK_21.0 -> DPDK 21 in map file
> - add experimental banner to public structs and APIs
> - implemented other suggestions from code reviews of DLB2 PMD. The
> software is very similar in form, so some DLB2 review comments
> were applicable to DLB as well
>
> Major changes in v5 after dpdk reviews and additional internal reviews
> by colleagues at Intel:
> ================
> - implement changes requested in code reviews by Gage Eads and Mike Chen
> - fix a memzone leak
> - convert to use eal rte-cpuflags patch from Liang Ma
>
> Major changes in v4 after dpdk reviews and additional internal reviews
> by colleagues at Intel:
> ================
> - Remove make infrastructure
> - shared code (pf/base) is now added incrementally
> - flexible interface (iface.[ch]) is now added incrementally
> - removed calls to rte_panic
> - do not call pthread_create directly
> - remove unused internal API, os_time
> - convert rte_atomic to __atomic builtins
> - broke out eventdev ABI changes, test/api changes, and new internal PCI
> named probe API
> - relocated enqueue logic to enqueue patch
>
> Major Changes in V3:
> ================
> - Fixed a memory corruption issue due to not allocating enough CQ
> memory for depths < 8. The hardware requires a minimum allocation of
> at least 8 entries.
> - Address review comments from Gage and Mattias.
> - Remove versioning
> - minor formatting changes
>
> Major changes in V2:
> ================
> - Correct ABI break that was present in V1.
> - Address some of the review comments received from Mattias.
> I will address the remaining items identified by Mattias in the next
> patch delivery.
> - General code cleanup based on internal code reviews
>
> Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
>
> Timothy McDaniel (23):
> event/dlb: add documentation and meson infrastructure
> event/dlb: add dynamic logging
> event/dlb: add private data structures and constants
> event/dlb: add definitions shared with LKM or shared code
> event/dlb: add inline functions
> event/dlb: add eventdev probe
> event/dlb: add flexible interface
> event/dlb: add probe-time hardware init
> event/dlb: add xstats
> event/dlb: add infos get and configure
> event/dlb: add queue and port default conf
> event/dlb: add queue setup
> event/dlb: add port setup
> event/dlb: add port link
> event/dlb: add port unlink and port unlinks in progress
> event/dlb: add eventdev start
> event/dlb: add enqueue and its burst variants
> event/dlb: add dequeue and its burst variants
> event/dlb: add eventdev stop and close
> event/dlb: add PMD's token pop public interface
> event/dlb: add PMD self-tests
> event/dlb: add queue and port release
> event/dlb: add timeout ticks entry point
>
> MAINTAINERS | 6 +-
> app/test/test_eventdev.c | 7 +
> config/rte_config.h | 6 +
> doc/api/doxy-api-index.md | 1 +
> doc/guides/eventdevs/dlb.rst | 341 ++
> doc/guides/eventdevs/index.rst | 1 +
> doc/guides/rel_notes/release_20_11.rst | 5 +
> drivers/event/dlb/dlb.c | 4080 +++++++++++++++
> drivers/event/dlb/dlb_iface.c | 79 +
> drivers/event/dlb/dlb_iface.h | 82 +
> drivers/event/dlb/dlb_inline_fns.h | 40 +
> drivers/event/dlb/dlb_log.h | 25 +
> drivers/event/dlb/dlb_priv.h | 513 ++
> drivers/event/dlb/dlb_selftest.c | 1539 ++++++
> drivers/event/dlb/dlb_user.h | 814 +++
> drivers/event/dlb/dlb_xstats.c | 1222 +++++
> drivers/event/dlb/meson.build | 21 +
> drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
> drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
> drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
> drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
> drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
> drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
> drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
> drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
> drivers/event/dlb/pf/dlb_main.c | 586 +++
> drivers/event/dlb/pf/dlb_main.h | 47 +
> drivers/event/dlb/pf/dlb_pf.c | 750 +++
> drivers/event/dlb/rte_pmd_dlb.c | 38 +
> drivers/event/dlb/rte_pmd_dlb.h | 77 +
> drivers/event/dlb/version.map | 9 +
> drivers/event/meson.build | 2 +-
> 32 files changed, 21684 insertions(+), 2 deletions(-)
> create mode 100644 doc/guides/eventdevs/dlb.rst
> create mode 100644 drivers/event/dlb/dlb.c
> create mode 100644 drivers/event/dlb/dlb_iface.c
> create mode 100644 drivers/event/dlb/dlb_iface.h
> create mode 100644 drivers/event/dlb/dlb_inline_fns.h
> create mode 100644 drivers/event/dlb/dlb_log.h
> create mode 100644 drivers/event/dlb/dlb_priv.h
> create mode 100644 drivers/event/dlb/dlb_selftest.c
> create mode 100644 drivers/event/dlb/dlb_user.h
> create mode 100644 drivers/event/dlb/dlb_xstats.c
> create mode 100644 drivers/event/dlb/meson.build
> create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
> create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
> create mode 100644 drivers/event/dlb/pf/dlb_main.c
> create mode 100644 drivers/event/dlb/pf/dlb_main.h
> create mode 100644 drivers/event/dlb/pf/dlb_pf.c
> create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
> create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
> create mode 100644 drivers/event/dlb/version.map
>
> --
> 2.6.4
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v13 00/23] Add DLB PMD
` (5 preceding siblings ...)
2020-10-31 1:19 3% ` [dpdk-dev] [PATCH v12 " Timothy McDaniel
@ 2020-10-31 2:12 3% ` Timothy McDaniel
2020-10-31 12:49 0% ` Jerin Jacob
2020-10-31 18:17 3% ` [dpdk-dev] [PATCH v14 " Timothy McDaniel
` (2 subsequent siblings)
9 siblings, 1 reply; 200+ results
From: Timothy McDaniel @ 2020-10-31 2:12 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
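Since the flow_id handling is called out above, here is a minimal
sketch of the generic enqueue path an application uses to set a
flow_id; all IDs are hypothetical and this is not PMD code:

#include <rte_eventdev.h>

static uint16_t
enqueue_one(uint8_t dev_id, uint8_t port_id, uint8_t queue_id, void *obj)
{
        struct rte_event ev = {
                .flow_id = 42,  /* hypothetical flow identifier */
                .event_type = RTE_EVENT_TYPE_CPU,
                .op = RTE_EVENT_OP_NEW,
                .sched_type = RTE_SCHED_TYPE_ATOMIC,
                .queue_id = queue_id,
                .event_ptr = obj,
        };

        /* Returns the number of events actually enqueued (0 or 1). */
        return rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
}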
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
Major changes in V13
====================
- removed now unused functions dlb_umwait and dlb_umonitor
Major changes in V12
====================
- Fix CENTOS build error: use __m128i instead of __v2di with
_mm_stream_si128
Major changes in V11
====================
- removed unused function, fixing build error
- fixed typo in port_setup commit message
- this patch series is based on dpdk-next-eventdev
Major changes in v10
=====================
- convert to use rte_power_monitor patches
- replace __builtin_ia32_movntdq() with _mm_stream_si128()
- remove unused functions in dlb_selftest.c
Major changes in v9
=====================
- fixed a build error due to __rte_cache_aligned being placed after
the ";" character, instead of before it.
Major changes in v8 after dpdk reviews
=====================
- moved introduction of dlb in relnotes_20_11 to first patch in series
- fixed underlines in dlb.rst that were too short
- note that the code still uses its private byte-encoded versions of
umonitor/umwait, rather than the new functions in the power
patch that are built on top of those intrinsics. This is intentional.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._master_lcore instead of .._main_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form, so some DLB2 review comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. The hardware requires a minimum allocation of
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 6 +-
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4080 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 40 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1539 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 +++++
drivers/event/dlb/meson.build | 21 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
32 files changed, 21684 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v12 00/23] Add DLB PMD
` (4 preceding siblings ...)
2020-10-30 23:41 3% ` [dpdk-dev] [PATCH v11 " Timothy McDaniel
@ 2020-10-31 1:19 3% ` Timothy McDaniel
2020-10-31 2:12 3% ` [dpdk-dev] [PATCH v13 " Timothy McDaniel
` (3 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-31 1:19 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
Major changes in V12
====================
- Fix CENTOS build error: use __m128i instead of __v2di with
_mm_stream_si128
Major changes in V11
====================
- removed unused function, fixing build error
- fixed typo in port_setup commit message
- this patch series is based on dpdk-next-eventdev
Major changes in v10
=====================
- convert to use rte_power_monitor patches
- replace __builtin_ia32_movntdq() with _mm_stream_si128()
- remove unused functions in dlb_selftest.c
Major changes in v9
=====================
- fixed a build error due to __rte_cache_aligned being placed after
the ";" character, instead of before it.
Major changes in v8 after dpdk reviews
=====================
- moved introduction of dlb in relnotes_20_11 to first patch in series
- fixed underlines in dlb.rst that were too short
- note that the code still uses its private byte-encoded versions of
umonitor/umwait, rather than the new functions in the power
patch that are built on top of those intrinsics. This is intentional.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._master_lcore instead of .._main_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form, so some DLB2 review comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. The hardware requires a minimum allocation of
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 6 +-
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4080 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 59 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1539 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 +++++
drivers/event/dlb/meson.build | 21 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
32 files changed, 21703 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v11 00/23] Add DLB PMD
` (3 preceding siblings ...)
2020-10-30 18:27 3% ` [dpdk-dev] [PATCH v10 " Timothy McDaniel
@ 2020-10-30 23:41 3% ` Timothy McDaniel
2020-10-31 1:19 3% ` [dpdk-dev] [PATCH v12 " Timothy McDaniel
` (4 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-30 23:41 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
Major changes in V11
====================
- removed unused function, fixing build error
- fixed typo in port_setup commit message
- this patch series is based on dpdk-next-eventdev
Major changes in v10
=====================
- convert to use rte_power_monitor patches
- replace __builtin_ia32_movntdq() with _mm_stream_si128()
- remove unused functions in dlb_selftest.c
Major changes in v9
=====================
- fixed a build error due to __rte_cache_aligned being placed after
the ";" character, instead of before it.
Major changes in v8 after dpdk reviews
=====================
- moved introduction of dlb in relnotes_20_11 to first patch in series
- fixed underlines in dlb.rst that were too short
- note that the code still uses its private byte-encoded versions of
umonitor/umwait, rather than the new functions in the power
patch that are built on top of those intrinsics. This is intentional.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._master_lcore instead of .._main_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form, so some DLB2 review comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. The hardware requires a minimum allocation of
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 6 +-
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4080 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 59 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1539 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 +++++
drivers/event/dlb/meson.build | 21 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
32 files changed, 21703 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v10 00/23] Add DLB PMD
` (2 preceding siblings ...)
2020-10-30 12:41 3% ` [dpdk-dev] [PATCH v9 " Timothy McDaniel
@ 2020-10-30 18:27 3% ` Timothy McDaniel
2020-10-30 23:41 3% ` [dpdk-dev] [PATCH v11 " Timothy McDaniel
` (5 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-30 18:27 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
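As an illustration of the port/queue model described above, here is a
minimal sketch of configuring a directed ("single link") queue/port
pair through the generic eventdev API; it is not taken from this
patchset, and dev_id/queue_id/port_id as well as most error handling
are assumed:

#include <rte_eventdev.h>

/* Set up one directed queue/port pair. A directed port supports
 * exactly one link, matching the limitation described above.
 */
static int
setup_directed_path(uint8_t dev_id, uint8_t queue_id, uint8_t port_id)
{
	struct rte_event_queue_conf qconf = {
		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
	};
	struct rte_event_port_conf pconf;
	uint8_t prio = RTE_EVENT_DEV_PRIORITY_NORMAL;

	if (rte_event_port_default_conf_get(dev_id, port_id, &pconf) < 0)
		return -1;
	if (rte_event_queue_setup(dev_id, queue_id, &qconf) < 0)
		return -1;
	if (rte_event_port_setup(dev_id, port_id, &pconf) < 0)
		return -1;
	/* Only a single link may be made to a directed queue. */
	return rte_event_port_link(dev_id, port_id, &queue_id,
				   &prio, 1) == 1 ? 0 : -1;
}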
Major changes in v10
=====================
- convert to use rte_power_monitor patches
- replace __builtin_ia32_movntdq() with _mm_stream_si128()
- remove unused functions in dlb_selftest.c
Major changes in v9
=====================
- fixed a build error due to __rte_cache_aligned being placed after
the ";" character, instead of before it.
Major changes in v8 after dpdk reviews
=====================
- moved introduction of dlb in relnotes_20_11 to first patch in series
- fixed underlines in dlb.rst that were too short
- note that the code still uses its private byte-encoded versions of
umonitor/umwait, rather than the new functions in the power
patch that are built on top of those intrinsics. This is intentional.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._master_lcore instead of .._main_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form so some DLB2 reviews comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. Hardware requires minimum allocation to be
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Depends-on: patch-79539 ("eal: add new x86 cpuid support for WAITPKG")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 6 +-
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4092 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 59 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1539 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 +++++
drivers/event/dlb/meson.build | 21 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
32 files changed, 21715 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-30 4:24 0% ` Gujjar, Abhinandan S
@ 2020-10-30 17:18 0% ` Gujjar, Abhinandan S
0 siblings, 0 replies; 200+ results
From: Gujjar, Abhinandan S @ 2020-10-30 17:18 UTC (permalink / raw)
To: Akhil Goyal, Honnappa Nagarahalli, Richardson, Bruce,
Ray Kinsella, Thomas Monjalon
Cc: Ananyev, Konstantin, dev, Doherty, Declan, techboard, Vangati,
Narender, jerinj, nd
Hi Akhil,
I have sent the v6 patch for RC2.
As discussed, I will get the test app updated for dequeue callback for RC3.
Thanks
Abhinandan
> -----Original Message-----
> From: Gujjar, Abhinandan S
> Sent: Friday, October 30, 2020 9:54 AM
> To: Akhil Goyal <akhil.goyal@nxp.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Thomas
> Monjalon <thomas@monjalon.net>
> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; dev@dpdk.org;
> Doherty, Declan <declan.doherty@intel.com>; techboard@dpdk.org; Vangati,
> Narender <narender.vangati@intel.com>; jerinj@marvell.com; nd
> <nd@arm.com>
> Subject: RE: [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback
> functions
>
>
> Thanks Tech board & Akhil for clarifying the concern.
> Sure. I will send the new version of the patch.
>
> Regards
> Abhinandan
>
> > -----Original Message-----
> > From: Akhil Goyal <akhil.goyal@nxp.com>
> > Sent: Thursday, October 29, 2020 7:31 PM
> > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; Honnappa
> > Nagarahalli <Honnappa.Nagarahalli@arm.com>; Richardson, Bruce
> > <bruce.richardson@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Thomas
> > Monjalon <thomas@monjalon.net>
> > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; dev@dpdk.org;
> > Doherty, Declan <declan.doherty@intel.com>; techboard@dpdk.org;
> > Vangati, Narender <narender.vangati@intel.com>; jerinj@marvell.com; nd
> > <nd@arm.com>
> > Subject: RE: [dpdk-techboard] [v4 1/3] cryptodev: support enqueue
> > callback functions
> >
> > >
> > > Hi Akhil,
> > >
> > > Any updates on this?
> > >
> > There have been no objections to this patch from the techboard.
> >
> > @Thomas Monjalon: could you please review the release notes.
> > I believe there should be a bullet for API changes to add 2 new fields
> > in rte_cryptodev.
> > What do you suggest?
> >
> > @Gujjar, Abhinandan S
> > Please send a new version for comments on errno.
> > If possible add cases for deq_cbs as well. If not, send it by next week.
>
> >
> > Regards,
> > Akhil
> > > > + Ray for ABI
> > > >
> > > > <snip>
> > > >
> > > > >
> > > > > On Wed, Oct 28, 2020 at 02:28:43PM +0000, Akhil Goyal wrote:
> > > > > >
> > > > > > Hi Konstantin,
> > > > > >
> > > > > > > > > Hi Tech board members,
> > > > > > > > >
> > > > > > > > > I have a doubt about the ABI breakage in the below addition of a field.
> > > > > > > > > Could you please comment.
> > > > > > > > >
> > > > > > > > > > /** The data structure associated with each crypto device.
> > > > > > > > > > */ struct rte_cryptodev {
> > > > > > > > > > dequeue_pkt_burst_t dequeue_burst; @@ -867,6
> +922,10
> > > > > @@
> > > > > > > > > > struct rte_cryptodev {
> > > > > > > > > > __extension__
> > > > > > > > > > uint8_t attached : 1;
> > > > > > > > > > /**< Flag indicating the device is attached */
> > > > > > > > > > +
> > > > > > > > > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > > > > > > > > + /**< User application callback for pre enqueue
> > > > > > > > > > +processing */
> > > > > > > > > > +
> > > > > > > > > > } __rte_cache_aligned;
> > > > > > > > >
> > > > > > > > > Here rte_cryptodevs, a pointer to all rte_cryptodev
> > > > > > > > > instances, is defined in the stable API list in the map
> > > > > > > > > file, and the above change changes the size of the
> > > > > > > > > structure.
> > > > > > >
> > > > > > > While this patch adds new fields into rte_cryptodev
> > > > > > > structure, it doesn't change the size of it.
> > > > > > > struct rte_cryptodev is cache line aligned, so its current size:
> > > > > > > 128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
> > > > > > > So for 64-bit we have 47B implicitly reserved, and for
> > > > > > > 32-bit we have 19B reserved.
> > > > > > > That's enough to add two pointers without changing size of this
> struct.
> > > > > > >
> > > > > >
> > > > > > The structure is cache aligned, and if the cache line size is
> > > > > > 32 bytes and the compilation is done on a 64-bit machine, then we
> > > > > > will be left with 15 bytes, which is not sufficient for 2 pointers.
> > > > > > Do we have such systems? Am I missing something?
> > > > > >
> > > > >
> > > > > I don't think we support any such systems, so unless someone can
> > > > > point out a specific case where we need to support 32-byte CLs,
> > > > > I'd tend towards ignoring this as a non-issue.
> > > > Agree. I have not come across a 32B cache line.
> > > >
> > > > >
> > > > > > The reason I brought this to the techboard is to reach a
> > > > > > consensus on such a change, as rte_cryptodev is a very popular
> > > > > > and stable structure.
> > > > > > Any changes to it may have impacts that one person cannot
> > > > > > judge across all use
> > > > > cases.
> > > > > >
> > > > >
> > > > > Haven't been tracking this discussion much, but from what I read
> > > > > here, this doesn't look like an ABI break and should be ok.
> > > > If we are filling the holes in the cache line with new fields, it
> > > > should not be an ABI break.
> > > >
> > > > >
> > > > > Regards,
> > > > > /Bruce
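The padding argument above can be checked at compile time. A small,
illustrative layout (not the actual rte_cryptodev definition) assuming
a 64-byte cache line:

#include <assert.h>
#include <stdint.h>

/* An aligned struct is padded up to a multiple of its alignment, so
 * pointers that fit in the tail padding do not change sizeof().
 */
struct dev_before {
	uint64_t existing[10];  /* stand-in for the current fields */
	uint8_t attached : 1;
} __attribute__((aligned(64)));

struct dev_after {
	uint64_t existing[10];
	uint8_t attached : 1;
	void *enq_cbs;          /* new pointers land in former padding */
	void *deq_cbs;
} __attribute__((aligned(64)));

static_assert(sizeof(struct dev_before) == sizeof(struct dev_after),
	      "adding the two pointers must not change the struct size");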
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v9 00/23] Add DLB PMD
2020-10-29 14:57 3% ` [dpdk-dev] [PATCH v7 00/23] Add DLB PMD Timothy McDaniel
2020-10-30 9:40 3% ` [dpdk-dev] [PATCH v8 " Timothy McDaniel
@ 2020-10-30 12:41 3% ` Timothy McDaniel
2020-10-30 18:27 3% ` [dpdk-dev] [PATCH v10 " Timothy McDaniel
` (6 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-30 12:41 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
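To make the scheduling model above concrete, here is a hedged sketch
of a single-event enqueue through the generic eventdev API (not code
from this patchset); port and queue setup, dev_id and obj are assumed:

#include <rte_eventdev.h>

/* Enqueue one event; shows where the priority and the atomic/ordered
 * scheduling types mentioned above are expressed.
 */
static inline int
send_event(uint8_t dev_id, uint8_t port_id, uint8_t queue_id, void *obj)
{
	struct rte_event ev;

	ev.event = 0;               /* clear flow_id/event_type bits */
	ev.queue_id = queue_id;
	ev.op = RTE_EVENT_OP_NEW;
	ev.sched_type = RTE_SCHED_TYPE_ATOMIC; /* or RTE_SCHED_TYPE_ORDERED */
	ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
	ev.event_ptr = obj;

	return rte_event_enqueue_burst(dev_id, port_id, &ev, 1) == 1 ? 0 : -1;
}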
Major changes in v9
=====================
- fixed a build error due to __rte_cache_aligned being placed after
the ";" character, instead of before it.
Major changes in v8 after dpdk reviews
=====================
- moved introduction of dlb in relnotes_20_11 to first patch in series
- fixed underlines in dlb.rst that were too short
- note that the code still uses its private byte-encoded versions of
umonitor/umwait, rather than the new functions in the power
patch that are built on top of those intrinsics. This is intentional.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._master_lcore instead of .._main_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form so some DLB2 reviews comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. Hardware requires minimum allocation to be
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Depends-on: patch-79539 ("eal: add new x86 cpuid support for WAITPKG")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 6 +-
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4129 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 59 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1551 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 +++++
drivers/event/dlb/meson.build | 21 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
32 files changed, 21764 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v8 00/23] Add DLB PMD
2020-10-29 14:57 3% ` [dpdk-dev] [PATCH v7 00/23] Add DLB PMD Timothy McDaniel
@ 2020-10-30 9:40 3% ` Timothy McDaniel
2020-10-30 12:41 3% ` [dpdk-dev] [PATCH v9 " Timothy McDaniel
` (7 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-30 9:40 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
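For the consumer side of the model above, a sketch (again using the
generic eventdev API rather than code from this patchset) of a worker
draining its port; a timeout of 0 means no wait:

#include <rte_eventdev.h>

/* Dequeue a burst, process it, and release the atomic/ordered
 * scheduling contexts by enqueuing RTE_EVENT_OP_RELEASE ops.
 */
static void
drain_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event evs[32];
	uint16_t i, n;

	n = rte_event_dequeue_burst(dev_id, port_id, evs, RTE_DIM(evs), 0);
	for (i = 0; i < n; i++) {
		/* ... process evs[i].event_ptr ... */
		evs[i].op = RTE_EVENT_OP_RELEASE;
	}
	if (n > 0)
		rte_event_enqueue_burst(dev_id, port_id, evs, n);
}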
Major changes in v8 after dpdk reviews
=====================
- moved introduction of dlb in relnotes_20_11 to first patch in series
- fixed underlines in dlb.rst that were too short
- note that the code still uses its private byte-encoded versions of
umonitor/umwait, rather than the new functions in the power
patch that are built on top of those intrinsics. This is intentional.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._master_lcore instead of .._main_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form so some DLB2 reviews comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. Hardware requires minimum allocation to be
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Depends-on: patch-79539 ("eal: add new x86 cpuid support for WAITPKG")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 6 +-
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4129 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 59 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1551 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 +++++
drivers/event/dlb/meson.build | 21 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
32 files changed, 21764 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [v6 1/2] cryptodev: support enqueue & dequeue callback functions
@ 2020-10-28 23:10 2% ` Abhinandan Gujjar
0 siblings, 0 replies; 200+ results
From: Abhinandan Gujjar @ 2020-10-28 23:10 UTC (permalink / raw)
To: dev, declan.doherty, akhil.goyal, Honnappa.Nagarahalli,
konstantin.ananyev
Cc: narender.vangati, jerinj, abhinandan.gujjar
This patch adds APIs to add/remove callback functions on crypto
enqueue/dequeue burst. The callback function will be called for
each burst of crypto ops received/sent on a given crypto device
queue pair.
Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
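A minimal usage sketch for reviewers (hypothetical names; not part of
the patch itself):

#include <rte_common.h>
#include <rte_errno.h>
#include <rte_cryptodev.h>

/* Count ops seen at enqueue; returns how many ops are passed on. */
static uint16_t
count_cb(uint16_t dev_id, uint16_t qp_id, struct rte_crypto_op **ops,
	 uint16_t nb_ops, void *user_param)
{
	uint64_t *count = user_param;

	RTE_SET_USED(dev_id);
	RTE_SET_USED(qp_id);
	RTE_SET_USED(ops);
	*count += nb_ops;
	return nb_ops;
}

/* To be called after rte_cryptodev_configure()/queue pair setup. */
static int
install_counter(uint8_t dev_id, uint16_t qp_id, uint64_t *counter,
		struct rte_cryptodev_cb **cb)
{
	*cb = rte_cryptodev_add_enq_callback(dev_id, qp_id, count_cb, counter);
	if (*cb == NULL)
		return -rte_errno;
	return 0;
	/* later: rte_cryptodev_remove_enq_callback(dev_id, qp_id, *cb); */
}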
config/rte_config.h | 1 +
doc/guides/prog_guide/cryptodev_lib.rst | 47 ++++
doc/guides/rel_notes/release_20_11.rst | 9 +
lib/librte_cryptodev/meson.build | 2 +-
lib/librte_cryptodev/rte_cryptodev.c | 404 +++++++++++++++++++++++++++++++-
lib/librte_cryptodev/rte_cryptodev.h | 260 +++++++++++++++++++-
lib/librte_cryptodev/version.map | 4 +
7 files changed, 724 insertions(+), 3 deletions(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index 8aa46a1..0e3ddbb 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -62,6 +62,7 @@
/* cryptodev defines */
#define RTE_CRYPTO_MAX_DEVS 64
#define RTE_CRYPTODEV_NAME_LEN 64
+#define RTE_CRYPTO_CALLBACKS 1
/* compressdev defines */
#define RTE_COMPRESS_MAX_DEVS 64
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 72129e4..9d0ff59 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -338,6 +338,53 @@ start of private data information. The offset is counted from the start of the
rte_crypto_op including other crypto information such as the IVs (since there can
be an IV also for authentication).
+User callback APIs
+~~~~~~~~~~~~~~~~~~
+The add APIs configure a user callback function to be called for each burst of crypto
+ops received/sent on a given crypto device queue pair. The return value is a pointer
+that can be used later to remove the callback. The application is expected to register
+a callback function of type ``rte_cryptodev_callback_fn``. Multiple callback functions
+can be added for a given queue pair; the API does not limit the number of
+callbacks.
+
+Callbacks registered by the application do not survive rte_cryptodev_configure() as it
+reinitializes the callback list. It is the user's responsibility to remove all installed
+callbacks before calling rte_cryptodev_configure() to avoid possible memory leakage.
+
+The application is therefore expected to add user callbacks after rte_cryptodev_configure().
+The callbacks can also be added at runtime. These callbacks get executed when
+``rte_cryptodev_enqueue_burst``/``rte_cryptodev_dequeue_burst`` is called.
+
+.. code-block:: c
+
+ struct rte_cryptodev_cb *
+ rte_cryptodev_add_enq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ rte_cryptodev_callback_fn cb_fn,
+ void *cb_arg);
+
+ struct rte_cryptodev_cb *
+ rte_cryptodev_add_deq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ rte_cryptodev_callback_fn cb_fn,
+ void *cb_arg);
+
+ uint16_t (*rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t qp_id,
+ struct rte_crypto_op **ops, uint16_t nb_ops, void *user_param);
+
+The remove API removes a callback function added by
+``rte_cryptodev_add_enq_callback``/``rte_cryptodev_add_deq_callback``.
+
+.. code-block:: c
+
+ int rte_cryptodev_remove_enq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ struct rte_cryptodev_cb *cb);
+
+ int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ struct rte_cryptodev_cb *cb);
+
Enqueue / Dequeue Burst APIs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index bae39b2..33e190a 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -235,6 +235,12 @@ New Features
enqueue/dequeue operations but does not necessarily depends on
mbufs and cryptodev operation mempools.
+* **Added enqueue & dequeue callback APIs for cryptodev library.**
+
+ The cryptodev library now has enqueue & dequeue callback APIs that
+ enable applications to add/remove user callbacks which get called
+ for every enqueue/dequeue operation.
+
* **Updated the aesni_mb crypto PMD.**
* Added support for AES-ECB 128, 192 and 256.
@@ -492,6 +498,9 @@ API Changes
``RTE_CRYPTO_AUTH_LIST_END`` from ``enum rte_crypto_auth_algorithm``
are removed to avoid future ABI breakage while adding new algorithms.
+* cryptodev: The structure ``rte_cryptodev`` has been updated with pointers
+ for adding enqueue and dequeue callbacks.
+
* scheduler: Renamed functions ``rte_cryptodev_scheduler_slave_attach``,
``rte_cryptodev_scheduler_slave_detach`` and
``rte_cryptodev_scheduler_slaves_get`` to
diff --git a/lib/librte_cryptodev/meson.build b/lib/librte_cryptodev/meson.build
index c4c6b3b..8c5493f 100644
--- a/lib/librte_cryptodev/meson.build
+++ b/lib/librte_cryptodev/meson.build
@@ -9,4 +9,4 @@ headers = files('rte_cryptodev.h',
'rte_crypto.h',
'rte_crypto_sym.h',
'rte_crypto_asym.h')
-deps += ['kvargs', 'mbuf']
+deps += ['kvargs', 'mbuf', 'rcu']
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 3d95ac6..eb8c832 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -448,6 +448,124 @@ struct rte_cryptodev_sym_session_pool_private_data {
return 0;
}
+#ifdef RTE_CRYPTO_CALLBACKS
+/* spinlock for crypto device enq callbacks */
+static rte_spinlock_t rte_cryptodev_callback_lock = RTE_SPINLOCK_INITIALIZER;
+
+static void
+cryptodev_cb_cleanup(struct rte_cryptodev *dev)
+{
+ struct rte_cryptodev_cb_rcu *list;
+ struct rte_cryptodev_cb *cb, *next;
+ uint16_t qp_id;
+
+ if (dev->enq_cbs == NULL && dev->deq_cbs == NULL)
+ return;
+
+ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+ list = &dev->enq_cbs[qp_id];
+ cb = list->next;
+ while (cb != NULL) {
+ next = cb->next;
+ rte_free(cb);
+ cb = next;
+ }
+
+ rte_free(list->qsbr);
+ }
+
+ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+ list = &dev->deq_cbs[qp_id];
+ cb = list->next;
+ while (cb != NULL) {
+ next = cb->next;
+ rte_free(cb);
+ cb = next;
+ }
+
+ rte_free(list->qsbr);
+ }
+
+ rte_free(dev->enq_cbs);
+ dev->enq_cbs = NULL;
+ rte_free(dev->deq_cbs);
+ dev->deq_cbs = NULL;
+}
+
+static int
+cryptodev_cb_init(struct rte_cryptodev *dev)
+{
+ struct rte_cryptodev_cb_rcu *list;
+ struct rte_rcu_qsbr *qsbr;
+ uint16_t qp_id;
+ size_t size;
+
+ /* Max thread set to 1, as one DP thread accessing a queue-pair */
+ const uint32_t max_threads = 1;
+
+ dev->enq_cbs = rte_zmalloc(NULL,
+ sizeof(struct rte_cryptodev_cb_rcu) *
+ dev->data->nb_queue_pairs, 0);
+ if (dev->enq_cbs == NULL) {
+ CDEV_LOG_ERR("Failed to allocate memory for enq callbacks");
+ return -ENOMEM;
+ }
+
+ dev->deq_cbs = rte_zmalloc(NULL,
+ sizeof(struct rte_cryptodev_cb_rcu) *
+ dev->data->nb_queue_pairs, 0);
+ if (dev->deq_cbs == NULL) {
+ CDEV_LOG_ERR("Failed to allocate memory for deq callbacks");
+ rte_free(dev->enq_cbs);
+ return -ENOMEM;
+ }
+
+ /* Create RCU QSBR variable */
+ size = rte_rcu_qsbr_get_memsize(max_threads);
+
+ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+ list = &dev->enq_cbs[qp_id];
+ qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+ if (qsbr == NULL) {
+ CDEV_LOG_ERR("Failed to allocate memory for RCU on "
+ "queue_pair_id=%d", qp_id);
+ goto cb_init_err;
+ }
+
+ if (rte_rcu_qsbr_init(qsbr, max_threads)) {
+ CDEV_LOG_ERR("Failed to initialize for RCU on "
+ "queue_pair_id=%d", qp_id);
+ goto cb_init_err;
+ }
+
+ list->qsbr = qsbr;
+ }
+
+ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+ list = &dev->deq_cbs[qp_id];
+ qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+ if (qsbr == NULL) {
+ CDEV_LOG_ERR("Failed to allocate memory for RCU on "
+ "queue_pair_id=%d", qp_id);
+ goto cb_init_err;
+ }
+
+ if (rte_rcu_qsbr_init(qsbr, max_threads)) {
+ CDEV_LOG_ERR("Failed to initialize for RCU on "
+ "queue_pair_id=%d", qp_id);
+ goto cb_init_err;
+ }
+
+ list->qsbr = qsbr;
+ }
+
+ return 0;
+
+cb_init_err:
+ cryptodev_cb_cleanup(dev);
+ return -ENOMEM;
+}
+#endif
const char *
rte_cryptodev_get_feature_name(uint64_t flag)
@@ -927,6 +1045,11 @@ struct rte_cryptodev *
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+#ifdef RTE_CRYPTO_CALLBACKS
+ rte_spinlock_lock(&rte_cryptodev_callback_lock);
+ cryptodev_cb_cleanup(dev);
+ rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+#endif
/* Setup new number of queue pairs and reconfigure device. */
diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
config->socket_id);
@@ -936,11 +1059,19 @@ struct rte_cryptodev *
return diag;
}
+#ifdef RTE_CRYPTO_CALLBACKS
+ rte_spinlock_lock(&rte_cryptodev_callback_lock);
+ diag = cryptodev_cb_init(dev);
+ rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+ if (diag) {
+ CDEV_LOG_ERR("Callback init failed for dev_id=%d", dev_id);
+ return diag;
+ }
+#endif
rte_cryptodev_trace_configure(dev_id, config);
return (*dev->dev_ops->dev_configure)(dev, config);
}
-
int
rte_cryptodev_start(uint8_t dev_id)
{
@@ -1136,6 +1267,277 @@ struct rte_cryptodev *
socket_id);
}
+#ifdef RTE_CRYPTO_CALLBACKS
+struct rte_cryptodev_cb *
+rte_cryptodev_add_enq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ rte_cryptodev_callback_fn cb_fn,
+ void *cb_arg)
+{
+ struct rte_cryptodev *dev;
+ struct rte_cryptodev_cb_rcu *list;
+ struct rte_cryptodev_cb *cb, *tail;
+
+ if (!cb_fn) {
+ CDEV_LOG_ERR("Callback is NULL on dev_id=%d", dev_id);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+ rte_errno = ENODEV;
+ return NULL;
+ }
+
+ dev = &rte_crypto_devices[dev_id];
+ if (qp_id >= dev->data->nb_queue_pairs) {
+ CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+ rte_errno = ENODEV;
+ return NULL;
+ }
+
+ cb = rte_zmalloc(NULL, sizeof(*cb), 0);
+ if (cb == NULL) {
+ CDEV_LOG_ERR("Failed to allocate memory for callback on "
+ "dev=%d, queue_pair_id=%d", dev_id, qp_id);
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_spinlock_lock(&rte_cryptodev_callback_lock);
+
+ cb->fn = cb_fn;
+ cb->arg = cb_arg;
+
+ /* Add the callbacks in fifo order. */
+ list = &dev->enq_cbs[qp_id];
+ tail = list->next;
+
+ if (tail) {
+ while (tail->next)
+ tail = tail->next;
+ /* Stores to cb->fn and cb->param should complete before
+ * cb is visible to data plane.
+ */
+ __atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
+ } else {
+ /* Stores to cb->fn and cb->param should complete before
+ * cb is visible to data plane.
+ */
+ __atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
+ }
+
+ rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+
+ return cb;
+}
+
+int
+rte_cryptodev_remove_enq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ struct rte_cryptodev_cb *cb)
+{
+ struct rte_cryptodev *dev;
+ struct rte_cryptodev_cb **prev_cb, *curr_cb;
+ struct rte_cryptodev_cb_rcu *list;
+ int ret;
+
+ ret = -EINVAL;
+
+ if (!cb) {
+ CDEV_LOG_ERR("Callback is NULL");
+ return -EINVAL;
+ }
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+ return -ENODEV;
+ }
+
+ dev = &rte_crypto_devices[dev_id];
+ if (qp_id >= dev->data->nb_queue_pairs) {
+ CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+ return -ENODEV;
+ }
+
+ rte_spinlock_lock(&rte_cryptodev_callback_lock);
+ if (dev->enq_cbs == NULL) {
+ CDEV_LOG_ERR("Callback not initialized");
+ goto cb_err;
+ }
+
+ list = &dev->enq_cbs[qp_id];
+ if (list == NULL) {
+ CDEV_LOG_ERR("Callback list is NULL");
+ goto cb_err;
+ }
+
+ if (list->qsbr == NULL) {
+ CDEV_LOG_ERR("Rcu qsbr is NULL");
+ goto cb_err;
+ }
+
+ prev_cb = &list->next;
+ for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
+ curr_cb = *prev_cb;
+ if (curr_cb == cb) {
+ /* Remove the user cb from the callback list. */
+ __atomic_store_n(prev_cb, curr_cb->next,
+ __ATOMIC_RELAXED);
+ ret = 0;
+ break;
+ }
+ }
+
+ if (!ret) {
+ /* Call sync with invalid thread id as this is part of
+ * control plane API
+ */
+ rte_rcu_qsbr_synchronize(list->qsbr, RTE_QSBR_THRID_INVALID);
+ rte_free(cb);
+ }
+
+cb_err:
+ rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+ return ret;
+}
+
+struct rte_cryptodev_cb *
+rte_cryptodev_add_deq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ rte_cryptodev_callback_fn cb_fn,
+ void *cb_arg)
+{
+ struct rte_cryptodev *dev;
+ struct rte_cryptodev_cb_rcu *list;
+ struct rte_cryptodev_cb *cb, *tail;
+
+ if (!cb_fn) {
+ CDEV_LOG_ERR("Callback is NULL on dev_id=%d", dev_id);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+ rte_errno = ENODEV;
+ return NULL;
+ }
+
+ dev = &rte_crypto_devices[dev_id];
+ if (qp_id >= dev->data->nb_queue_pairs) {
+ CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+ rte_errno = ENODEV;
+ return NULL;
+ }
+
+ cb = rte_zmalloc(NULL, sizeof(*cb), 0);
+ if (cb == NULL) {
+ CDEV_LOG_ERR("Failed to allocate memory for callback on "
+ "dev=%d, queue_pair_id=%d", dev_id, qp_id);
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_spinlock_lock(&rte_cryptodev_callback_lock);
+
+ cb->fn = cb_fn;
+ cb->arg = cb_arg;
+
+ /* Add the callbacks in fifo order. */
+ list = &dev->deq_cbs[qp_id];
+ tail = list->next;
+
+ if (tail) {
+ while (tail->next)
+ tail = tail->next;
+ /* Stores to cb->fn and cb->param should complete before
+ * cb is visible to data plane.
+ */
+ __atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
+ } else {
+ /* Stores to cb->fn and cb->param should complete before
+ * cb is visible to data plane.
+ */
+ __atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
+ }
+
+ rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+
+ return cb;
+}
+
+int
+rte_cryptodev_remove_deq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ struct rte_cryptodev_cb *cb)
+{
+ struct rte_cryptodev *dev;
+ struct rte_cryptodev_cb **prev_cb, *curr_cb;
+ struct rte_cryptodev_cb_rcu *list;
+ int ret;
+
+ ret = -EINVAL;
+
+ if (!cb) {
+ CDEV_LOG_ERR("Callback is NULL");
+ return -EINVAL;
+ }
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+ return -ENODEV;
+ }
+
+ dev = &rte_crypto_devices[dev_id];
+ if (qp_id >= dev->data->nb_queue_pairs) {
+ CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+ return -ENODEV;
+ }
+
+ rte_spinlock_lock(&rte_cryptodev_callback_lock);
+ if (dev->deq_cbs == NULL) {
+ CDEV_LOG_ERR("Callback not initialized");
+ goto cb_err;
+ }
+
+ list = &dev->deq_cbs[qp_id];
+ if (list == NULL) {
+ CDEV_LOG_ERR("Callback list is NULL");
+ goto cb_err;
+ }
+
+ if (list->qsbr == NULL) {
+ CDEV_LOG_ERR("Rcu qsbr is NULL");
+ goto cb_err;
+ }
+
+ prev_cb = &list->next;
+ for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
+ curr_cb = *prev_cb;
+ if (curr_cb == cb) {
+ /* Remove the user cb from the callback list. */
+ __atomic_store_n(prev_cb, curr_cb->next,
+ __ATOMIC_RELAXED);
+ ret = 0;
+ break;
+ }
+ }
+
+ if (!ret) {
+ /* Call sync with invalid thread id as this is part of
+ * control plane API
+ */
+ rte_rcu_qsbr_synchronize(list->qsbr, RTE_QSBR_THRID_INVALID);
+ rte_free(cb);
+ }
+
+cb_err:
+ rte_spinlock_unlock(&rte_cryptodev_callback_lock);
+ return ret;
+}
+#endif
int
rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 0935fd5..e6ab755 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -23,6 +23,7 @@
#include "rte_dev.h"
#include <rte_common.h>
#include <rte_config.h>
+#include <rte_rcu_qsbr.h>
#include "rte_cryptodev_trace_fp.h"
@@ -522,6 +523,32 @@ struct rte_cryptodev_qp_conf {
/**< The mempool for creating sess private data in sessionless mode */
};
+#ifdef RTE_CRYPTO_CALLBACKS
+/**
+ * Function type used for processing crypto ops when enqueue/dequeue burst is
+ * called.
+ *
+ * The callback function is called immediately on each enqueue/dequeue burst.
+ *
+ * @param dev_id The identifier of the device.
+ * @param qp_id The index of the queue pair on which ops are
+ * enqueued/dequeued. The value must be in the
+ * range [0, nb_queue_pairs - 1] previously
+ * supplied to *rte_cryptodev_configure*.
+ * @param ops The address of an array of *nb_ops* pointers
+ * to *rte_crypto_op* structures which contain
+ * the crypto operations to be processed.
+ * @param nb_ops The number of operations to process.
+ * @param user_param The arbitrary user parameter passed in by the
+ * application when the callback was originally
+ * registered.
+ * @return The number of ops to be enqueued to the
+ * crypto device.
+ */
+typedef uint16_t (*rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t qp_id,
+ struct rte_crypto_op **ops, uint16_t nb_ops, void *user_param);
+#endif
+
/**
* Typedef for application callback function to be registered by application
* software for notification of device events
@@ -822,7 +849,6 @@ struct rte_cryptodev_config {
enum rte_cryptodev_event_type event,
rte_cryptodev_cb_fn cb_fn, void *cb_arg);
-
typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
struct rte_crypto_op **ops, uint16_t nb_ops);
/**< Dequeue processed packets from queue pair of a device. */
@@ -839,6 +865,33 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
/** Structure to keep track of registered callbacks */
TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+#ifdef RTE_CRYPTO_CALLBACKS
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue pair on enqueue/dequeue.
+ */
+struct rte_cryptodev_cb {
+ struct rte_cryptodev_cb *next;
+ /** < Pointer to next callback */
+ rte_cryptodev_callback_fn fn;
+ /** < Pointer to callback function */
+ void *arg;
+ /** < Pointer to argument */
+};
+
+/**
+ * @internal
+ * Structure used to hold information about the RCU for a queue pair.
+ */
+struct rte_cryptodev_cb_rcu {
+ struct rte_cryptodev_cb *next;
+ /** < Pointer to next callback */
+ struct rte_rcu_qsbr *qsbr;
+ /** < RCU QSBR variable per queue pair */
+};
+#endif
+
/** The data structure associated with each crypto device. */
struct rte_cryptodev {
dequeue_pkt_burst_t dequeue_burst;
@@ -867,6 +920,12 @@ struct rte_cryptodev {
__extension__
uint8_t attached : 1;
/**< Flag indicating the device is attached */
+
+ struct rte_cryptodev_cb_rcu *enq_cbs;
+ /**< User application callback for pre enqueue processing */
+
+ struct rte_cryptodev_cb_rcu *deq_cbs;
+ /**< User application callback for post dequeue processing */
} __rte_cache_aligned;
void *
@@ -945,6 +1004,30 @@ struct rte_cryptodev_data {
{
struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+#ifdef RTE_CRYPTO_CALLBACKS
+ if (unlikely(dev->deq_cbs != NULL)) {
+ struct rte_cryptodev_cb_rcu *list;
+ struct rte_cryptodev_cb *cb;
+
+ /* __ATOMIC_RELEASE memory order was used when the
+ * callback was inserted into the list.
+ * Since there is a clear dependency between loading
+ * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+ * not required.
+ */
+ list = &dev->deq_cbs[qp_id];
+ rte_rcu_qsbr_thread_online(list->qsbr, 0);
+ cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+ while (cb != NULL) {
+ nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+ cb->arg);
+ cb = cb->next;
+ }
+
+ rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+ }
+#endif
nb_ops = (*dev->dequeue_burst)
(dev->data->queue_pairs[qp_id], ops, nb_ops);
@@ -989,6 +1072,31 @@ struct rte_cryptodev_data {
{
struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+#ifdef RTE_CRYPTO_CALLBACKS
+ if (unlikely(dev->enq_cbs != NULL)) {
+ struct rte_cryptodev_cb_rcu *list;
+ struct rte_cryptodev_cb *cb;
+
+ /* __ATOMIC_RELEASE memory order was used when the
+ * callback was inserted into the list.
+ * Since there is a clear dependency between loading
+ * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+ * not required.
+ */
+ list = &dev->enq_cbs[qp_id];
+ rte_rcu_qsbr_thread_online(list->qsbr, 0);
+ cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+ while (cb != NULL) {
+ nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+ cb->arg);
+ cb = cb->next;
+ }
+
+ rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+ }
+#endif
+
rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
return (*dev->enqueue_burst)(
dev->data->queue_pairs[qp_id], ops, nb_ops);
@@ -1730,6 +1838,156 @@ struct rte_crypto_raw_dp_ctx {
rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
uint32_t n);
+#ifdef RTE_CRYPTO_CALLBACKS
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a user callback for a given crypto device and queue pair which will be
+ * called on crypto ops enqueue.
+ *
+ * This API configures a function to be called for each burst of crypto ops
+ * received on a given crypto device queue pair. The return value is a pointer
+ * that can be used later to remove the callback using
+ * rte_cryptodev_remove_enq_callback().
+ *
+ * Callbacks registered by the application do not survive
+ * rte_cryptodev_configure() as it reinitializes the callback list.
+ * It is the user's responsibility to remove all installed callbacks before
+ * calling rte_cryptodev_configure() to avoid possible memory leakage.
+ * The application is expected to call this API after rte_cryptodev_configure().
+ *
+ * Multiple functions can be registered per queue pair and they are called
+ * in the order they were added. The API does not limit the maximum number
+ * of callbacks.
+ *
+ * @param dev_id The identifier of the device.
+ * @param qp_id The index of the queue pair on which ops are
+ * to be enqueued for processing. The value
+ * must be in the range [0, nb_queue_pairs - 1]
+ * previously supplied to
+ * *rte_cryptodev_configure*.
+ * @param cb_fn The callback function
+ * @param cb_arg A generic pointer parameter which will be passed
+ * to each invocation of the callback function on
+ * this crypto device and queue pair.
+ *
+ * @return
+ * NULL on error & rte_errno will contain the error code.
+ * On success, a pointer value which can later be used to remove the callback.
+ */
+
+__rte_experimental
+struct rte_cryptodev_cb *
+rte_cryptodev_add_enq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ rte_cryptodev_callback_fn cb_fn,
+ void *cb_arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a user callback function for a given crypto device and queue pair.
+ *
+ * This function is used to remove enqueue callbacks that were added to a
+ * crypto device queue pair using rte_cryptodev_add_enq_callback().
+ *
+ * @param dev_id The identifier of the device.
+ * @param qp_id The index of the queue pair on which ops are
+ * to be enqueued. The value must be in the
+ * range [0, nb_queue_pairs - 1] previously
+ * supplied to *rte_cryptodev_configure*.
+ * @param cb Pointer to user supplied callback created via
+ * rte_cryptodev_add_enq_callback().
+ *
+ * @return
+ * - 0: Success. Callback was removed.
+ * - <0: The dev_id or the qp_id is out of range, or the callback
+ * is NULL or not found for the crypto device queue pair.
+ */
+__rte_experimental
+int rte_cryptodev_remove_enq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ struct rte_cryptodev_cb *cb);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a user callback for a given crypto device and queue pair which will be
+ * called on crypto ops dequeue.
+ *
+ * This API configures a function to be called for each burst of crypto ops
+ * dequeued from a given crypto device queue pair. The return value is a pointer
+ * that can be used later to remove the callback using
+ * rte_cryptodev_remove_deq_callback().
+ *
+ * Callbacks registered by the application do not survive
+ * rte_cryptodev_configure() as it reinitializes the callback list.
+ * It is the user's responsibility to remove all installed callbacks before
+ * calling rte_cryptodev_configure() to avoid possible memory leaks.
+ * The application is expected to call this add API after
+ * rte_cryptodev_configure().
+ *
+ * Multiple functions can be registered per queue pair and they are called
+ * in the order they were added. The API does not restrict the maximum
+ * number of callbacks.
+ *
+ * @param dev_id The identifier of the device.
+ * @param qp_id The index of the queue pair on which ops are
+ * to be dequeued. The value must be in the
+ * range [0, nb_queue_pairs - 1] previously
+ * supplied to *rte_cryptodev_configure*.
+ * @param cb_fn The callback function
+ * @param cb_arg A generic pointer parameter which will be passed
+ * to each invocation of the callback function on
+ * this crypto device and queue pair.
+ *
+ * @return
+ * NULL on error & rte_errno will contain the error code.
+ * On success, a pointer value which can later be used to remove the callback.
+ */
+__rte_experimental
+struct rte_cryptodev_cb *
+rte_cryptodev_add_deq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ rte_cryptodev_callback_fn cb_fn,
+ void *cb_arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a user callback function for a given crypto device and queue pair.
+ *
+ * This function is used to remove dequeue callbacks that were added to a
+ * crypto device queue pair using rte_cryptodev_add_deq_callback().
+ *
+ * @param dev_id The identifier of the device.
+ * @param qp_id The index of the queue pair on which ops are
+ * to be dequeued. The value must be in the
+ * range [0, nb_queue_pairs - 1] previously
+ * supplied to *rte_cryptodev_configure*.
+ * @param cb Pointer to user supplied callback created via
+ * rte_cryptodev_add_deq_callback().
+ *
+ * @return
+ * - 0: Success. Callback was removed.
+ * - <0: The dev_id or the qp_id is out of range, or the callback
+ * is NULL or not found for the crypto device queue pair.
+ */
+__rte_experimental
+int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
+ uint16_t qp_id,
+ struct rte_cryptodev_cb *cb);
+#endif
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/version.map b/lib/librte_cryptodev/version.map
index 7e4360f..429b03e 100644
--- a/lib/librte_cryptodev/version.map
+++ b/lib/librte_cryptodev/version.map
@@ -101,6 +101,8 @@ EXPERIMENTAL {
rte_cryptodev_get_qp_status;
# added in 20.11
+ rte_cryptodev_add_deq_callback;
+ rte_cryptodev_add_enq_callback;
rte_cryptodev_configure_raw_dp_ctx;
rte_cryptodev_get_raw_dp_ctx_size;
rte_cryptodev_raw_dequeue;
@@ -109,4 +111,6 @@ EXPERIMENTAL {
rte_cryptodev_raw_enqueue;
rte_cryptodev_raw_enqueue_burst;
rte_cryptodev_raw_enqueue_done;
+ rte_cryptodev_remove_deq_callback;
+ rte_cryptodev_remove_enq_callback;
};
--
1.9.1
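For reference, a minimal usage sketch of the proposed enqueue-callback API
follows. The counting callback, its counter, and the wrapper function are
illustrative, not part of the patch; the callback signature is assumed to
match the dispatch call in the patch (nb_ops = cb->fn(dev_id, qp_id, ops,
nb_ops, cb->arg)).

#include <stdint.h>
#include <rte_cryptodev.h>
#include <rte_errno.h>

/* Illustrative user callback: count ops passing through the enqueue path.
 * Returning fewer than nb_ops would trim the burst seen by the PMD.
 */
static uint16_t
count_enq_ops(uint16_t dev_id, uint16_t qp_id,
	      struct rte_crypto_op **ops, uint16_t nb_ops, void *arg)
{
	uint64_t *counter = arg;

	(void)dev_id;
	(void)qp_id;
	(void)ops;
	*counter += nb_ops;
	return nb_ops;
}

static uint64_t enq_count;

static int
install_and_remove_cb(uint8_t dev_id, uint16_t qp_id)
{
	struct rte_cryptodev_cb *cb;

	/* Register only after rte_cryptodev_configure(), which resets the
	 * callback list (see the doc comments in the patch).
	 */
	cb = rte_cryptodev_add_enq_callback(dev_id, qp_id,
					    count_enq_ops, &enq_count);
	if (cb == NULL)
		return -rte_errno;

	/* ... datapath runs; each rte_cryptodev_enqueue_burst() on this
	 * queue pair now invokes count_enq_ops() before the PMD enqueue ...
	 */

	return rte_cryptodev_remove_enq_callback(dev_id, qp_id, cb);
}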
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-29 14:00 0% ` Akhil Goyal
@ 2020-10-30 4:24 0% ` Gujjar, Abhinandan S
2020-10-30 17:18 0% ` Gujjar, Abhinandan S
0 siblings, 1 reply; 200+ results
From: Gujjar, Abhinandan S @ 2020-10-30 4:24 UTC (permalink / raw)
To: Akhil Goyal, Honnappa Nagarahalli, Richardson, Bruce,
Ray Kinsella, Thomas Monjalon
Cc: Ananyev, Konstantin, dev, Doherty, Declan, techboard, Vangati,
Narender, jerinj, nd
Thanks Tech board & Akhil for clarifying the concern.
Sure. I will send the new version of the patch.
Regards
Abhinandan
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Thursday, October 29, 2020 7:31 PM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; Honnappa
> Nagarahalli <Honnappa.Nagarahalli@arm.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Thomas
> Monjalon <thomas@monjalon.net>
> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; dev@dpdk.org;
> Doherty, Declan <declan.doherty@intel.com>; techboard@dpdk.org; Vangati,
> Narender <narender.vangati@intel.com>; jerinj@marvell.com; nd
> <nd@arm.com>
> Subject: RE: [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback
> functions
>
> >
> > Hi Akhil,
> >
> > Any updates on this?
> >
> There has been no objections for this patch from techboard.
>
> @Thomas Monjalon: could you please review the release notes.
> I believe there should be a bullet for API changes to add 2 new fields in
> rte_cryptodev.
> What do you suggest?
>
> @Gujjar, Abhinandan S
> Please send a new version for comments on errno.
> If possible add cases for deq_cbs as well. If not, send it by next week.
>
> Regards,
> Akhil
> > > + Ray for ABI
> > >
> > > <snip>
> > >
> > > >
> > > > On Wed, Oct 28, 2020 at 02:28:43PM +0000, Akhil Goyal wrote:
> > > > >
> > > > > Hi Konstantin,
> > > > >
> > > > > > > > Hi Tech board members,
> > > > > > > >
> > > > > > > > I have a doubt about the ABI breakage in below addition of field.
> > > > > > > > Could you please comment.
> > > > > > > >
> > > > > > > > > /** The data structure associated with each crypto device.
> > > > > > > > > */ struct rte_cryptodev {
> > > > > > > > > dequeue_pkt_burst_t dequeue_burst; @@ -867,6 +922,10
> > > > @@
> > > > > > > > > struct rte_cryptodev {
> > > > > > > > > __extension__
> > > > > > > > > uint8_t attached : 1;
> > > > > > > > > /**< Flag indicating the device is attached */
> > > > > > > > > +
> > > > > > > > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > > > > > > > + /**< User application callback for pre enqueue
> > > > > > > > > +processing */
> > > > > > > > > +
> > > > > > > > > } __rte_cache_aligned;
> > > > > > > >
> > > > > > > > Here rte_cryptodevs is defined in stable API list in map
> > > > > > > > file which is a pointer To all rte_cryptodev and the above
> > > > > > > > change is changing the size of the
> > > > > > structure.
> > > > > >
> > > > > > While this patch adds new fields into rte_cryptodev structure,
> > > > > > it doesn't change the size of it.
> > > > > > struct rte_cryptodev is cache line aligned, so it's current size:
> > > > > > 128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
> > > > > > So for 64-bit we have 47B implicitly reserved, and for 32-bit
> > > > > > we have 19B reserved.
> > > > > > That's enough to add two pointers without changing size of this struct.
> > > > > >
> > > > >
> > > > > The structure is cache aligned, and if the cache line size in
> > > > > 32Byte and the compilation is done on 64bit machine, then we
> > > > > will be left with 15Bytes which is not sufficient for 2 pointers.
> > > > > Do we have such systems? Am I missing something?
> > > > >
> > > >
> > > > I don't think we support any such systems, so unless someone can
> > > > point out a specific case where we need to support 32-byte CLs,
> > > > I'd tend towards ignoring this as a non-issue.
> > > Agree. I have not come across 32B cache line.
> > >
> > > >
> > > > > The reason I brought this into techboard is to have a consensus
> > > > > on such change As rte_cryptodev is a very popular and stable structure.
> > > > > Any changes to it may Have impacts which one person cannot judge
> > > > > all use
> > > > cases.
> > > > >
> > > >
> > > > Haven't been tracking this discussion much, but from what I read
> > > > here, this doesn't look like an ABI break and should be ok.
> > > If we are filling the holes in the cache line with new fields, it
> > > should not be an ABI break.
> > >
> > > >
> > > > Regards,
> > > > /Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 03/23] event/dlb2: add private data structures and constants
2020-10-29 15:29 3% ` Stephen Hemminger
@ 2020-10-29 16:07 0% ` McDaniel, Timothy
0 siblings, 0 replies; 200+ results
From: McDaniel, Timothy @ 2020-10-29 16:07 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, Carrillo, Erik G, Eads, Gage, Van Haaren, Harry, jerinj, thomas
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Thursday, October 29, 2020 10:29 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dev@dpdk.org; Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Eads, Gage
> <gage.eads@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>;
> jerinj@marvell.com; thomas@monjalon.net
> Subject: Re: [dpdk-dev] [PATCH v4 03/23] event/dlb2: add private data
> structures and constants
>
> On Thu, 29 Oct 2020 10:24:57 -0500
> Timothy McDaniel <timothy.mcdaniel@intel.com> wrote:
>
> > +
> > + /* marker for array sizing etc. */
> > + _DLB2_NB_ENQ_TYPES
>
> Be careful with this type of array sizing value.
> It becomes a breaking point for any API/ABI changes.
Thanks for the comment, Stephen. We do not expect these enums to change in the future.
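For context, the hazard being discussed looks like the sketch below; the
enqueue-type names are made up, only the marker name is from the patch.

#include <stdint.h>

enum dlb2_enq_type {
	DLB2_ENQ_NEW,
	DLB2_ENQ_FWD,
	DLB2_ENQ_REL,
	/* Inserting a new enqueue type here silently changes the marker
	 * below, and with it the size of every array dimensioned by it.
	 */

	/* marker for array sizing etc. */
	_DLB2_NB_ENQ_TYPES
};

/* Harmless while the enum stays internal to the PMD; an ABI break if the
 * marker ever crosses a public header, which is the point of the caution.
 */
static uint64_t enq_stats[_DLB2_NB_ENQ_TYPES];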
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 03/23] event/dlb2: add private data structures and constants
@ 2020-10-29 15:29 3% ` Stephen Hemminger
2020-10-29 16:07 0% ` McDaniel, Timothy
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2020-10-29 15:29 UTC (permalink / raw)
To: Timothy McDaniel
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
On Thu, 29 Oct 2020 10:24:57 -0500
Timothy McDaniel <timothy.mcdaniel@intel.com> wrote:
> +
> + /* marker for array sizing etc. */
> + _DLB2_NB_ENQ_TYPES
Be careful with this type of array sizing value.
It becomes a breaking point for any API/ABI changes.
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v7 00/23] Add DLB PMD
@ 2020-10-29 14:57 3% ` Timothy McDaniel
2020-10-30 9:40 3% ` [dpdk-dev] [PATCH v8 " Timothy McDaniel
` (8 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-29 14:57 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj, thomas
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
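For readers less familiar with eventdev, the directed-port constraint maps
onto the generic API roughly as in the sketch below; it uses only standard
rte_event_* calls, and the specific ids and error handling are illustrative,
not taken from this series.

#include <rte_eventdev.h>

/* A SINGLE_LINK queue models a directed queue: exactly one port may be
 * linked to it. Capability queries and default-conf handling are omitted.
 */
static int
setup_directed_pair(uint8_t dev_id)
{
	struct rte_event_queue_conf qconf = {
		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
	};
	uint8_t queue_id = 0, port_id = 0;
	int ret;

	ret = rte_event_queue_setup(dev_id, queue_id, &qconf);
	if (ret < 0)
		return ret;
	ret = rte_event_port_setup(dev_id, port_id, NULL);
	if (ret < 0)
		return ret;
	/* nb_links == 1: a directed port supports exactly one link, so a
	 * second rte_event_port_link() on this port would be rejected.
	 */
	if (rte_event_port_link(dev_id, port_id, &queue_id, NULL, 1) != 1)
		return -1;
	return 0;
}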
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
Major changes in v7 after dpdk reviews
=====================
- updated MAINTAINERS file to alphabetically insert DLB
- don't create RTE_ symbols in PMD
- converted to use version.map scheme
- converted to use .._main_lcore instead of .._master_lcore
- this patch set is based on dpdk-next-eventdev
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form so some DLB2 reviews comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. Hardware requires minimum allocation to be
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-82202 ("eventdev: increase MAX QUEUES PER DEV to 255")
Depends-on: patch-79539 ("eal: add new x86 cpuid support for WAITPKG")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 6 +-
app/test/test_eventdev.c | 7 +
config/rte_config.h | 6 +
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4129 +++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 59 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1551 ++++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 +++++
drivers/event/dlb/meson.build | 21 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 +++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 ++++++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 ++++
drivers/event/dlb/pf/dlb_main.c | 586 +++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/version.map | 9 +
drivers/event/meson.build | 2 +-
32 files changed, 21764 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotter first half
@ 2020-10-29 14:42 4% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-29 14:42 UTC (permalink / raw)
To: Thomas Monjalon, dev
Cc: ferruh.yigit, david.marchand, bruce.richardson, olivier.matz,
andrew.rybchenko, jerinj, viacheslavo, Neil Horman
On 29/10/2020 09:27, Thomas Monjalon wrote:
> The mempool pointer in the mbuf struct is moved
> from the second to the first half.
> It should increase performance on most systems having 64-byte cache line,
> i.e. mbuf is split in two cache lines.
> On such system, the first half (also called first cache line) is hotter
> than the second one where the pool pointer was.
>
> Moving this field gives more space to dynfield1.
>
> This is how the mbuf layout looks like (pahole-style):
>
> word type name byte size
> 0 void * buf_addr; /* 0 + 8 */
> 1 rte_iova_t buf_iova /* 8 + 8 */
> /* --- RTE_MARKER64 rearm_data; */
> 2 uint16_t data_off; /* 16 + 2 */
> uint16_t refcnt; /* 18 + 2 */
> uint16_t nb_segs; /* 20 + 2 */
> uint16_t port; /* 22 + 2 */
> 3 uint64_t ol_flags; /* 24 + 8 */
> /* --- RTE_MARKER rx_descriptor_fields1; */
> 4 uint32_t union packet_type; /* 32 + 4 */
> uint32_t pkt_len; /* 36 + 4 */
> 5 uint16_t data_len; /* 40 + 2 */
> uint16_t vlan_tci; /* 42 + 2 */
> 5.5 uint64_t union hash; /* 44 + 8 */
> 6.5 uint16_t vlan_tci_outer; /* 52 + 2 */
> uint16_t buf_len; /* 54 + 2 */
> 7 struct rte_mempool * pool; /* 56 + 8 */
> /* --- RTE_MARKER cacheline1; */
> 8 struct rte_mbuf * next; /* 64 + 8 */
> 9 uint64_t union tx_offload; /* 72 + 8 */
> 10 uint16_t priv_size; /* 80 + 2 */
> uint16_t timesync; /* 82 + 2 */
> uint32_t seqn; /* 84 + 4 */
> 11 struct rte_mbuf_ext_shared_info * shinfo; /* 88 + 8 */
> 12 uint64_t dynfield1[4]; /* 96 + 32 */
> 16 /* --- END 128 */
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> doc/guides/rel_notes/deprecation.rst | 5 -----
> lib/librte_kni/rte_kni_common.h | 3 ++-
> lib/librte_mbuf/rte_mbuf_core.h | 5 ++---
> 3 files changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 72dbb25b83..07ca1dcbb2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -88,11 +88,6 @@ Deprecation Notices
>
> - ``seqn``
>
> - As a consequence, the layout of the ``struct rte_mbuf`` will be re-arranged,
> - avoiding impact on vectorized implementation of the driver datapaths,
> - while evaluating performance gains of a better use of the first cache line.
> -
> -
> * ethdev: the legacy filter API, including
> ``rte_eth_dev_filter_supported()``, ``rte_eth_dev_filter_ctrl()`` as well
> as filter types MACVLAN, ETHERTYPE, FLEXIBLE, SYN, NTUPLE, TUNNEL, FDIR,
> diff --git a/lib/librte_kni/rte_kni_common.h b/lib/librte_kni/rte_kni_common.h
> index 36d66e2ffa..ffb3182731 100644
> --- a/lib/librte_kni/rte_kni_common.h
> +++ b/lib/librte_kni/rte_kni_common.h
> @@ -84,10 +84,11 @@ struct rte_kni_mbuf {
> char pad2[4];
> uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
> uint16_t data_len; /**< Amount of data in segment buffer. */
> + char pad3[14];
> + void *pool;
>
> /* fields on second cache line */
> __attribute__((__aligned__(RTE_CACHE_LINE_MIN_SIZE)))
> - void *pool;
> void *next; /**< Physical address of next mbuf in kernel. */
> };
>
> diff --git a/lib/librte_mbuf/rte_mbuf_core.h b/lib/librte_mbuf/rte_mbuf_core.h
> index 52ca1c842f..ee185fa32b 100644
> --- a/lib/librte_mbuf/rte_mbuf_core.h
> +++ b/lib/librte_mbuf/rte_mbuf_core.h
> @@ -584,12 +584,11 @@ struct rte_mbuf {
>
> uint16_t buf_len; /**< Length of segment buffer. */
>
> - uint64_t unused;
> + struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
>
> /* second cache line - fields only used in slow path or on TX */
> RTE_MARKER cacheline1 __rte_cache_min_aligned;
>
> - struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
> struct rte_mbuf *next; /**< Next segment of scattered packet. */
>
> /* fields to support TX offloads */
> @@ -646,7 +645,7 @@ struct rte_mbuf {
> */
> struct rte_mbuf_ext_shared_info *shinfo;
>
> - uint64_t dynfield1[3]; /**< Reserved for dynamic fields. */
> + uint64_t dynfield1[4]; /**< Reserved for dynamic fields. */
> } __rte_cache_aligned;
>
> /**
>
I will let others chime in on the merits of repositioning the mempool pointer
within the cache lines.
From the ABI PoV, the deprecation notice has been observed, and since mbuf
affects everything, doing this outside of an ABI breakage window is impossible,
so it is now or never.
Ray K
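As a side note for reviewers, the layout claims in the quoted commit message
can be pinned down at build time. A minimal sketch for a 64-bit build with
64-byte cache lines; the exact assertions are illustrative and mirror the
pahole-style listing above.

#include <stddef.h>
#include <stdint.h>
#include <rte_mbuf_core.h>

/* pool now ends the first cache line; next still starts the second. */
_Static_assert(offsetof(struct rte_mbuf, pool) == 56,
	       "pool not at the end of the first cache line");
_Static_assert(offsetof(struct rte_mbuf, next) == 64,
	       "next does not start cacheline1");
/* dynfield1 grew from 3 to 4 words, keeping the struct at 128 bytes. */
_Static_assert(sizeof(((struct rte_mbuf *)0)->dynfield1) == 4 * sizeof(uint64_t),
	       "unexpected dynfield1 size");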
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-28 15:22 4% ` Honnappa Nagarahalli
2020-10-29 13:52 0% ` Gujjar, Abhinandan S
@ 2020-10-29 14:26 3% ` Kinsella, Ray
1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-29 14:26 UTC (permalink / raw)
To: Honnappa Nagarahalli, Bruce Richardson, Akhil.goyal@nxp.com
Cc: Ananyev, Konstantin, Gujjar, Abhinandan S, dev, Doherty, Declan,
techboard, Vangati, Narender, jerinj, nd
On 28/10/2020 15:22, Honnappa Nagarahalli wrote:
> + Ray for ABI
>
> <snip>
>
>>
>> On Wed, Oct 28, 2020 at 02:28:43PM +0000, Akhil Goyal wrote:
>>>
>>> Hi Konstantin,
>>>
>>>>>> Hi Tech board members,
>>>>>>
>>>>>> I have a doubt about the ABI breakage in below addition of field.
>>>>>> Could you please comment.
>>>>>>
>>>>>>> /** The data structure associated with each crypto device. */
>>>>>>> struct rte_cryptodev {
>>>>>>> dequeue_pkt_burst_t dequeue_burst; @@ -867,6 +922,10
>> @@
>>>>>>> struct rte_cryptodev {
>>>>>>> __extension__
>>>>>>> uint8_t attached : 1;
>>>>>>> /**< Flag indicating the device is attached */
>>>>>>> +
>>>>>>> + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
>>>>>>> + /**< User application callback for pre enqueue processing */
>>>>>>> +
>>>>>>> } __rte_cache_aligned;
>>>>>>
>>>>>> Here rte_cryptodevs is defined in stable API list in map file
>>>>>> which is a pointer To all rte_cryptodev and the above change is
>>>>>> changing the size of the
>>>> structure.
>>>>
>>>> While this patch adds new fields into rte_cryptodev structure, it
>>>> doesn't change the size of it.
>>>> struct rte_cryptodev is cache line aligned, so it's current size:
>>>> 128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
>>>> So for 64-bit we have 47B implicitly reserved, and for 32-bit we
>>>> have 19B reserved.
>>>> That's enough to add two pointers without changing size of this struct.
>>>>
>>>
>>> The structure is cache aligned, and if the cache line size in 32Byte
>>> and the compilation is done on 64bit machine, then we will be left
>>> with 15Bytes which is not sufficient for 2 pointers.
>>> Do we have such systems? Am I missing something?
>>>
>>
>> I don't think we support any such systems, so unless someone can point out
>> a specific case where we need to support 32-byte CLs, I'd tend towards
>> ignoring this as a non-issue.
> Agree. I have not come across 32B cache line.
>
>>
>>> The reason I brought this into techboard is to have a consensus on
>>> such change As rte_cryptodev is a very popular and stable structure.
>>> Any changes to it may Have impacts which one person cannot judge all use
>> cases.
>>>
>>
>> Haven't been tracking this discussion much, but from what I read here, this
>> doesn't look like an ABI break and should be ok.
> If we are filling the holes in the cache line with new fields, it should not be an ABI break.
Agreed, the risk seems minimal ... it is an ABI breakage window in any case.
>>
>> Regards,
>> /Bruce
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-29 13:52 0% ` Gujjar, Abhinandan S
@ 2020-10-29 14:00 0% ` Akhil Goyal
2020-10-30 4:24 0% ` Gujjar, Abhinandan S
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2020-10-29 14:00 UTC (permalink / raw)
To: Gujjar, Abhinandan S, Honnappa Nagarahalli, Richardson, Bruce,
Ray Kinsella, Thomas Monjalon
Cc: Ananyev, Konstantin, dev, Doherty, Declan, techboard, Vangati,
Narender, jerinj, nd
>
> Hi Akhil,
>
> Any updates on this?
>
There have been no objections to this patch from the techboard.
@Thomas Monjalon: could you please review the release notes.
I believe there should be a bullet for API changes to add 2 new fields in rte_cryptodev.
What do you suggest?
@Gujjar, Abhinandan S
Please send a new version for comments on errno.
If possible add cases for deq_cbs as well. If not, send it by next week.
Regards,
Akhil
> > + Ray for ABI
> >
> > <snip>
> >
> > >
> > > On Wed, Oct 28, 2020 at 02:28:43PM +0000, Akhil Goyal wrote:
> > > >
> > > > Hi Konstantin,
> > > >
> > > > > > > Hi Tech board members,
> > > > > > >
> > > > > > > I have a doubt about the ABI breakage in below addition of field.
> > > > > > > Could you please comment.
> > > > > > >
> > > > > > > > /** The data structure associated with each crypto device.
> > > > > > > > */ struct rte_cryptodev {
> > > > > > > > dequeue_pkt_burst_t dequeue_burst; @@ -867,6 +922,10
> > > @@
> > > > > > > > struct rte_cryptodev {
> > > > > > > > __extension__
> > > > > > > > uint8_t attached : 1;
> > > > > > > > /**< Flag indicating the device is attached */
> > > > > > > > +
> > > > > > > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > > > > > > + /**< User application callback for pre enqueue processing
> > > > > > > > +*/
> > > > > > > > +
> > > > > > > > } __rte_cache_aligned;
> > > > > > >
> > > > > > > Here rte_cryptodevs is defined in stable API list in map file
> > > > > > > which is a pointer To all rte_cryptodev and the above change
> > > > > > > is changing the size of the
> > > > > structure.
> > > > >
> > > > > While this patch adds new fields into rte_cryptodev structure, it
> > > > > doesn't change the size of it.
> > > > > struct rte_cryptodev is cache line aligned, so it's current size:
> > > > > 128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
> > > > > So for 64-bit we have 47B implicitly reserved, and for 32-bit we
> > > > > have 19B reserved.
> > > > > That's enough to add two pointers without changing size of this struct.
> > > > >
> > > >
> > > > The structure is cache aligned, and if the cache line size in 32Byte
> > > > and the compilation is done on 64bit machine, then we will be left
> > > > with 15Bytes which is not sufficient for 2 pointers.
> > > > Do we have such systems? Am I missing something?
> > > >
> > >
> > > I don't think we support any such systems, so unless someone can point
> > > out a specific case where we need to support 32-byte CLs, I'd tend
> > > towards ignoring this as a non-issue.
> > Agree. I have not come across 32B cache line.
> >
> > >
> > > > The reason I brought this into techboard is to have a consensus on
> > > > such change As rte_cryptodev is a very popular and stable structure.
> > > > Any changes to it may Have impacts which one person cannot judge all
> > > > use
> > > cases.
> > > >
> > >
> > > Haven't been tracking this discussion much, but from what I read here,
> > > this doesn't look like an ABI break and should be ok.
> > If we are filling the holes in the cache line with new fields, it should not be an
> > ABI break.
> >
> > >
> > > Regards,
> > > /Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-28 15:22 4% ` Honnappa Nagarahalli
@ 2020-10-29 13:52 0% ` Gujjar, Abhinandan S
2020-10-29 14:00 0% ` Akhil Goyal
2020-10-29 14:26 3% ` Kinsella, Ray
1 sibling, 1 reply; 200+ results
From: Gujjar, Abhinandan S @ 2020-10-29 13:52 UTC (permalink / raw)
To: Honnappa Nagarahalli, Richardson, Bruce, Akhil.goyal@nxp.com,
Ray Kinsella
Cc: Ananyev, Konstantin, dev, Doherty, Declan, techboard, Vangati,
Narender, jerinj, nd
Hi Akhil,
Any updates on this?
Thanks
Abhinandan
> -----Original Message-----
> From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Sent: Wednesday, October 28, 2020 8:52 PM
> To: Richardson, Bruce <bruce.richardson@intel.com>; Akhil.goyal@nxp.com;
> Ray Kinsella <mdr@ashroe.eu>
> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; dev@dpdk.org; Doherty, Declan
> <declan.doherty@intel.com>; techboard@dpdk.org; Vangati, Narender
> <narender.vangati@intel.com>; jerinj@marvell.com; nd <nd@arm.com>
> Subject: RE: [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback
> functions
>
> + Ray for ABI
>
> <snip>
>
> >
> > On Wed, Oct 28, 2020 at 02:28:43PM +0000, Akhil Goyal wrote:
> > >
> > > Hi Konstantin,
> > >
> > > > > > Hi Tech board members,
> > > > > >
> > > > > > I have a doubt about the ABI breakage in below addition of field.
> > > > > > Could you please comment.
> > > > > >
> > > > > > > /** The data structure associated with each crypto device.
> > > > > > > */ struct rte_cryptodev {
> > > > > > > dequeue_pkt_burst_t dequeue_burst; @@ -867,6 +922,10
> > @@
> > > > > > > struct rte_cryptodev {
> > > > > > > __extension__
> > > > > > > uint8_t attached : 1;
> > > > > > > /**< Flag indicating the device is attached */
> > > > > > > +
> > > > > > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > > > > > + /**< User application callback for pre enqueue processing
> > > > > > > +*/
> > > > > > > +
> > > > > > > } __rte_cache_aligned;
> > > > > >
> > > > > > Here rte_cryptodevs is defined in stable API list in map file
> > > > > > which is a pointer To all rte_cryptodev and the above change
> > > > > > is changing the size of the
> > > > structure.
> > > >
> > > > While this patch adds new fields into rte_cryptodev structure, it
> > > > doesn't change the size of it.
> > > > struct rte_cryptodev is cache line aligned, so it's current size:
> > > > 128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
> > > > So for 64-bit we have 47B implicitly reserved, and for 32-bit we
> > > > have 19B reserved.
> > > > That's enough to add two pointers without changing size of this struct.
> > > >
> > >
> > > The structure is cache aligned, and if the cache line size in 32Byte
> > > and the compilation is done on 64bit machine, then we will be left
> > > with 15Bytes which is not sufficient for 2 pointers.
> > > Do we have such systems? Am I missing something?
> > >
> >
> > I don't think we support any such systems, so unless someone can point
> > out a specific case where we need to support 32-byte CLs, I'd tend
> > towards ignoring this as a non-issue.
> Agree. I have not come across 32B cache line.
>
> >
> > > The reason I brought this into techboard is to have a consensus on
> > > such change As rte_cryptodev is a very popular and stable structure.
> > > Any changes to it may Have impacts which one person cannot judge all
> > > use
> > cases.
> > >
> >
> > Haven't been tracking this discussion much, but from what I read here,
> > this doesn't look like an ABI break and should be ok.
> If we are filling the holes in the cache line with new fields, it should not be an
> ABI break.
>
> >
> > Regards,
> > /Bruce
^ permalink raw reply [relevance 0%]
* [dpdk-dev] DPDK Release Status Meeting 29/10/2020
@ 2020-10-29 12:22 3% Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-10-29 12:22 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon
Meeting minutes of 29 October 2020
----------------------------------
Agenda:
* Release Dates
* -rc1 status
* Subtrees
Participants:
* Arm
* Debian/Microsoft
* Intel
* Marvell
* Nvidia
* NXP
* Red Hat
Release Dates
-------------
* v20.11 dates
* -rc2 pushed to *Tuesday, 3 November 2020*
* -rc3: Thursday, 12 November 2020
* Release: Wednesday, 25 November 2020
-rc1 status
-----------
* Intel test report shared, many issues reported but no critical/showstopper
issue:
https://mails.dpdk.org/archives/dev/2020-October/189618.html
* Red Hat sent a test report, all looks good
https://mails.dpdk.org/archives/dev/2020-October/189872.html
Subtrees
--------
* main
* Thomas was busy with mbuf patchset, please review
* https://patches.dpdk.org/project/dpdk/list/?series=13454
* https://patches.dpdk.org/project/dpdk/list/?series=13416
* The eal part of the power patchset can be merged to enable the DLB eventdev drivers
* The ethdev patch requires a new version with more documentation
* Ferruh & Jerin will try to review the ethdev patch
* fibs patchset merged
* Working on
* Ring series from Honnappa
* replace blacklist/whitelist with allow/block from Stephen
* There is a makefile for the kmods repo, please review
* https://patches.dpdk.org/patch/79999/
* There is a GSO API behavior change without deprecation notice
* https://patches.dpdk.org/patch/82163/
* Requires techboard approval
* There is a patch to add MMIO support to PIO map
* Can it be handled at the virtio level? More review please
* https://patches.dpdk.org/project/dpdk/list/?series=13220
* next-net
* Andrew was covering next-net for the past few days
* Will get ethdev patch to remove legacy filter support
* https://patches.dpdk.org/project/dpdk/list/?series=13207&state=*
* There is a dependency on the tep-term example removal
* https://patches.dpdk.org/patch/82169/
* https://patches.dpdk.org/project/dpdk/list/?series=13211&state=*
* Will pull from all sub-trees
* next-crypto
* Merged most of the patches, 7-8 left
* enqueue & dequeue callback functions series from Intel remaining
* There were suspicions that it could be an ABI break
* Will ask for a new version
* Planning to merge if there is no objection
* next-eventdev
* Waiting for a new version of the DLB PMDs
* eal libraries may be merged very soon; if so, the new version can be
rebased on top of them
* Need the new version soon (today/tomorrow), to be able to review and
merge it for the -rc2
* next-virtio
* Pull request sent for -rc2
* No more patch expected for -rc2
* next-net-intel
* Some patches already merged and waiting for pull
* next-net-mlx
* There can be more features for -rc2
* Some patches already merged and waiting for pull
* next-net-mrvl
* There will be two more patches for -rc2, they will be processed soon
* next-net-brcm
* Some patches already merged and waiting for pull
DPDK Release Status Meetings
============================
The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
status of the master tree and sub-trees, and for project managers to track
progress or milestone dates.
The meeting occurs every Thursday at 8:30 UTC on https://meet.jit.si/DPDK
If you wish to attend just send an email to
"John McNamara <john.mcnamara@intel.com>" for the invite.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-28 15:11 3% ` [dpdk-dev] [dpdk-techboard] " Bruce Richardson
@ 2020-10-28 15:22 4% ` Honnappa Nagarahalli
2020-10-29 13:52 0% ` Gujjar, Abhinandan S
2020-10-29 14:26 3% ` Kinsella, Ray
0 siblings, 2 replies; 200+ results
From: Honnappa Nagarahalli @ 2020-10-28 15:22 UTC (permalink / raw)
To: Bruce Richardson, Akhil.goyal@nxp.com, Ray Kinsella
Cc: Ananyev, Konstantin, Gujjar, Abhinandan S, dev, Doherty, Declan,
techboard, Vangati, Narender, jerinj, nd
+ Ray for ABI
<snip>
>
> On Wed, Oct 28, 2020 at 02:28:43PM +0000, Akhil Goyal wrote:
> >
> > Hi Konstantin,
> >
> > > > > Hi Tech board members,
> > > > >
> > > > > I have a doubt about the ABI breakage in below addition of field.
> > > > > Could you please comment.
> > > > >
> > > > > > /** The data structure associated with each crypto device. */
> > > > > > struct rte_cryptodev {
> > > > > > dequeue_pkt_burst_t dequeue_burst; @@ -867,6 +922,10
> @@
> > > > > > struct rte_cryptodev {
> > > > > > __extension__
> > > > > > uint8_t attached : 1;
> > > > > > /**< Flag indicating the device is attached */
> > > > > > +
> > > > > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > > > > + /**< User application callback for pre enqueue processing */
> > > > > > +
> > > > > > } __rte_cache_aligned;
> > > > >
> > > > > Here rte_cryptodevs is defined in stable API list in map file
> > > > > which is a pointer To all rte_cryptodev and the above change is
> > > > > changing the size of the
> > > structure.
> > >
> > > While this patch adds new fields into rte_cryptodev structure, it
> > > doesn't change the size of it.
> > > struct rte_cryptodev is cache line aligned, so it's current size:
> > > 128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
> > > So for 64-bit we have 47B implicitly reserved, and for 32-bit we
> > > have 19B reserved.
> > > That's enough to add two pointers without changing size of this struct.
> > >
> >
> > The structure is cache aligned, and if the cache line size in 32Byte
> > and the compilation is done on 64bit machine, then we will be left
> > with 15Bytes which is not sufficient for 2 pointers.
> > Do we have such systems? Am I missing something?
> >
>
> I don't think we support any such systems, so unless someone can point out
> a specific case where we need to support 32-byte CLs, I'd tend towards
> ignoring this as a non-issue.
Agree. I have not come across 32B cache line.
>
> > The reason I brought this into techboard is to have a consensus on
> > such change As rte_cryptodev is a very popular and stable structure.
> > Any changes to it may Have impacts which one person cannot judge all use
> cases.
> >
>
> Haven't been tracking this discussion much, but from what I read here, this
> doesn't look like an ABI break and should be ok.
If we are filling the holes in the cache line with new fields, it should not be an ABI break.
>
> Regards,
> /Bruce
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [dpdk-techboard] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-28 14:28 0% ` Akhil Goyal
2020-10-28 14:52 0% ` Ananyev, Konstantin
@ 2020-10-28 15:11 3% ` Bruce Richardson
2020-10-28 15:22 4% ` Honnappa Nagarahalli
1 sibling, 1 reply; 200+ results
From: Bruce Richardson @ 2020-10-28 15:11 UTC (permalink / raw)
To: Akhil Goyal
Cc: Ananyev, Konstantin, Gujjar, Abhinandan S, dev, Doherty, Declan,
Honnappa.Nagarahalli, techboard, Vangati, Narender, jerinj
On Wed, Oct 28, 2020 at 02:28:43PM +0000, Akhil Goyal wrote:
>
> Hi Konstantin,
>
> > > > Hi Tech board members,
> > > >
> > > > I have a doubt about the ABI breakage in below addition of field.
> > > > Could you please comment.
> > > >
> > > > > /** The data structure associated with each crypto device. */ struct
> > > > > rte_cryptodev {
> > > > > dequeue_pkt_burst_t dequeue_burst;
> > > > > @@ -867,6 +922,10 @@ struct rte_cryptodev {
> > > > > __extension__
> > > > > uint8_t attached : 1;
> > > > > /**< Flag indicating the device is attached */
> > > > > +
> > > > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > > > + /**< User application callback for pre enqueue processing */
> > > > > +
> > > > > } __rte_cache_aligned;
> > > >
> > > > Here rte_cryptodevs is defined in stable API list in map file which is a pointer
> > > > To all rte_cryptodev and the above change is changing the size of the
> > structure.
> >
> > While this patch adds new fields into rte_cryptodev structure,
> > it doesn't change the size of it.
> > struct rte_cryptodev is cache line aligned, so it's current size:
> > 128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
> > So for 64-bit we have 47B implicitly reserved, and for 32-bit we have 19B
> > reserved.
> > That's enough to add two pointers without changing size of this struct.
> >
>
> The structure is cache aligned, and if the cache line size in 32Byte and the compilation
> is done on 64bit machine, then we will be left with 15Bytes which is not sufficient for 2
> pointers.
> Do we have such systems? Am I missing something?
>
I don't think we support any such systems, so unless someone can point out
a specific case where we need to support 32-byte CLs, I'd tend towards
ignoring this as a non-issue.
> The reason I brought this into techboard is to have a consensus on such change
> As rte_cryptodev is a very popular and stable structure. Any changes to it may
> Have impacts which one person cannot judge all use cases.
>
Haven't been tracking this discussion much, but from what I read here, this
doesn't look like an ABI break and should be ok.
Regards,
/Bruce
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-28 14:28 0% ` Akhil Goyal
@ 2020-10-28 14:52 0% ` Ananyev, Konstantin
2020-10-28 15:11 3% ` [dpdk-dev] [dpdk-techboard] " Bruce Richardson
1 sibling, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2020-10-28 14:52 UTC (permalink / raw)
To: Akhil Goyal, Gujjar, Abhinandan S, dev, Doherty, Declan,
Honnappa.Nagarahalli, techboard
Cc: Vangati, Narender, jerinj
Hi Akhil,
> Hi Konstantin,
>
> > > > Hi Tech board members,
> > > >
> > > > I have a doubt about the ABI breakage in below addition of field.
> > > > Could you please comment.
> > > >
> > > > > /** The data structure associated with each crypto device. */ struct
> > > > > rte_cryptodev {
> > > > > dequeue_pkt_burst_t dequeue_burst;
> > > > > @@ -867,6 +922,10 @@ struct rte_cryptodev {
> > > > > __extension__
> > > > > uint8_t attached : 1;
> > > > > /**< Flag indicating the device is attached */
> > > > > +
> > > > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > > > + /**< User application callback for pre enqueue processing */
> > > > > +
> > > > > } __rte_cache_aligned;
> > > >
> > > > Here rte_cryptodevs is defined in stable API list in map file which is a pointer
> > > > To all rte_cryptodev and the above change is changing the size of the
> > structure.
> >
> > While this patch adds new fields into rte_cryptodev structure,
> > it doesn't change the size of it.
> > struct rte_cryptodev is cache line aligned, so it's current size:
> > 128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
> > So for 64-bit we have 47B implicitly reserved, and for 32-bit we have 19B
> > reserved.
> > That's enough to add two pointers without changing size of this struct.
> >
>
> The structure is cache aligned, and if the cache line size in 32Byte and the compilation
> is done on 64bit machine, then we will be left with 15Bytes which is not sufficient for 2
> pointers.
> Do we have such systems?
AFAIK - no, minimal supported cache-line size: 64B:
lib/librte_eal/include/rte_common.h:#define RTE_CACHE_LINE_MIN_SIZE 64
> Am I missing something?
> The reason I brought this into techboard is to have a consensus on such change
> As rte_cryptodev is a very popular and stable structure. Any changes to it may
> Have impacts which one person cannot judge all use cases.
+1 here.
I also think it would be good to get other TB members' opinions about the proposed changes.
> > > > IMO, it seems an ABI breakage, but not sure. So wanted to double check.
> > > > Now if it is an ABI breakage, then can we allow it? There was no deprecation
> > > > notice Prior to this release.
> >
> > Yes, there was no deprecation note in advance.
> > Though I think the risk is minimal - size of the struct will remain unchanged (see
> > above).
> > My vote to let it in for 20.11.
> >
> > > > Also I think if we are allowing the above change, then we should also add
> > > > another Field for deq_cbs also for post crypto processing in this patch only.
> >
> > +1 for this.
> > I think it was already addressed in v5.
> >
> > Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-28 12:55 0% ` Ananyev, Konstantin
@ 2020-10-28 14:28 0% ` Akhil Goyal
2020-10-28 14:52 0% ` Ananyev, Konstantin
2020-10-28 15:11 3% ` [dpdk-dev] [dpdk-techboard] " Bruce Richardson
0 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2020-10-28 14:28 UTC (permalink / raw)
To: Ananyev, Konstantin, Gujjar, Abhinandan S, dev, Doherty, Declan,
Honnappa.Nagarahalli, techboard
Cc: Vangati, Narender, jerinj
Hi Konstantin,
> > > Hi Tech board members,
> > >
> > > I have a doubt about the ABI breakage in below addition of field.
> > > Could you please comment.
> > >
> > > > /** The data structure associated with each crypto device. */ struct
> > > > rte_cryptodev {
> > > > dequeue_pkt_burst_t dequeue_burst;
> > > > @@ -867,6 +922,10 @@ struct rte_cryptodev {
> > > > __extension__
> > > > uint8_t attached : 1;
> > > > /**< Flag indicating the device is attached */
> > > > +
> > > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > > + /**< User application callback for pre enqueue processing */
> > > > +
> > > > } __rte_cache_aligned;
> > >
> > > Here rte_cryptodevs is defined in stable API list in map file which is a pointer
> > > To all rte_cryptodev and the above change is changing the size of the
> structure.
>
> While this patch adds new fields into rte_cryptodev structure,
> it doesn't change the size of it.
> struct rte_cryptodev is cache line aligned, so it's current size:
> 128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
> So for 64-bit we have 47B implicitly reserved, and for 32-bit we have 19B
> reserved.
> That's enough to add two pointers without changing size of this struct.
>
The structure is cache aligned, and if the cache line size is 32 bytes and the compilation
is done on a 64-bit machine, then we will be left with 15 bytes, which is not sufficient for 2
pointers.
Do we have such systems? Am I missing something?
The reason I brought this into the techboard is to have a consensus on such a change,
as rte_cryptodev is a very popular and stable structure. Any changes to it may
have impacts which one person cannot judge for all use cases.
> > > IMO, it seems an ABI breakage, but not sure. So wanted to double check.
> > > Now if it is an ABI breakage, then can we allow it? There was no deprecation
> > > notice Prior to this release.
>
> Yes, there was no deprecation note in advance.
> Though I think the risk is minimal - size of the struct will remain unchanged (see
> above).
> My vote to let it in for 20.11.
>
> > > Also I think if we are allowing the above change, then we should also add
> > > another Field for deq_cbs also for post crypto processing in this patch only.
>
> +1 for this.
> I think it was already addressed in v5.
>
> Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-28 8:20 0% ` Gujjar, Abhinandan S
@ 2020-10-28 12:55 0% ` Ananyev, Konstantin
2020-10-28 14:28 0% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2020-10-28 12:55 UTC (permalink / raw)
To: Gujjar, Abhinandan S, Akhil Goyal, dev, Doherty, Declan,
Honnappa.Nagarahalli, techboard
Cc: Vangati, Narender, jerinj
>
> Hi Tech board members,
>
> Could you please clarify the concern?
> The latest patch (https://patches.dpdk.org/patch/82536/) supports both enqueue and dequeue callback functionality.
>
> Thanks
> Abhinandan
>
> > -----Original Message-----
> > From: Akhil Goyal <akhil.goyal@nxp.com>
> > Sent: Tuesday, October 27, 2020 11:59 PM
> > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; dev@dpdk.org;
> > Doherty, Declan <declan.doherty@intel.com>;
> > Honnappa.Nagarahalli@arm.com; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>; techboard@dpdk.org
> > Cc: Vangati, Narender <narender.vangati@intel.com>; jerinj@marvell.com
> > Subject: RE: [v4 1/3] cryptodev: support enqueue callback functions
> >
> > Hi Tech board members,
> >
> > I have a doubt about the ABI breakage in below addition of field.
> > Could you please comment.
> >
> > > /** The data structure associated with each crypto device. */ struct
> > > rte_cryptodev {
> > > dequeue_pkt_burst_t dequeue_burst;
> > > @@ -867,6 +922,10 @@ struct rte_cryptodev {
> > > __extension__
> > > uint8_t attached : 1;
> > > /**< Flag indicating the device is attached */
> > > +
> > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > + /**< User application callback for pre enqueue processing */
> > > +
> > > } __rte_cache_aligned;
> >
> > Here rte_cryptodevs is defined in stable API list in map file which is a pointer
> > To all rte_cryptodev and the above change is changing the size of the structure.
While this patch adds new fields into the rte_cryptodev structure,
it doesn't change its size.
struct rte_cryptodev is cache line aligned, so its current size is:
128B for 64-bit systems, and 64B(/128B) for 32-bit systems.
So for 64-bit we have 47B implicitly reserved, and for 32-bit we have 19B reserved.
That's enough to add two pointers without changing size of this struct.
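The reasoning reduces to a toy example (not the real rte_cryptodev, just a
64-byte-aligned struct with comparable tail padding):

#include <stdint.h>

struct toy_dev {
	void (*dequeue_burst)(void);	/* 8B */
	uint8_t attached : 1;		/* 1B, then implicit tail padding */
	/* The two new pointers land in the existing padding ... */
	void *enq_cbs;
	void *deq_cbs;
} __attribute__((aligned(64)));

/* ... so the aligned size is unchanged: still a single 64B cache line. */
_Static_assert(sizeof(struct toy_dev) == 64, "size must not change");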
> > IMO, it seems an ABI breakage, but not sure. So wanted to double check.
> > Now if it is an ABI breakage, then can we allow it? There was no deprecation
> > notice Prior to this release.
Yes, there was no deprecation note in advance.
Though I think the risk is minimal - size of the struct will remain unchanged (see above).
My vote is to let it in for 20.11.
> > Also I think if we are allowing the above change, then we should also add
> > another Field for deq_cbs also for post crypto processing in this patch only.
+1 for this.
I think it was already addressed in v5.
Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-27 18:28 4% ` Akhil Goyal
@ 2020-10-28 8:20 0% ` Gujjar, Abhinandan S
2020-10-28 12:55 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Gujjar, Abhinandan S @ 2020-10-28 8:20 UTC (permalink / raw)
To: Akhil Goyal, dev, Doherty, Declan, Honnappa.Nagarahalli, Ananyev,
Konstantin, techboard
Cc: Vangati, Narender, jerinj
Hi Tech board members,
Could you please clarify the concern?
The latest patch (https://patches.dpdk.org/patch/82536/) supports both enqueue and dequeue callback functionality.
Thanks
Abhinandan
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Tuesday, October 27, 2020 11:59 PM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; dev@dpdk.org;
> Doherty, Declan <declan.doherty@intel.com>;
> Honnappa.Nagarahalli@arm.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; techboard@dpdk.org
> Cc: Vangati, Narender <narender.vangati@intel.com>; jerinj@marvell.com
> Subject: RE: [v4 1/3] cryptodev: support enqueue callback functions
>
> Hi Tech board members,
>
> I have a doubt about the ABI breakage in below addition of field.
> Could you please comment.
>
> > /** The data structure associated with each crypto device. */ struct
> > rte_cryptodev {
> > dequeue_pkt_burst_t dequeue_burst;
> > @@ -867,6 +922,10 @@ struct rte_cryptodev {
> > __extension__
> > uint8_t attached : 1;
> > /**< Flag indicating the device is attached */
> > +
> > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > + /**< User application callback for pre enqueue processing */
> > +
> > } __rte_cache_aligned;
>
> Here rte_cryptodevs is defined in stable API list in map file which is a pointer
> To all rte_cryptodev and the above change is changing the size of the structure.
> IMO, it seems an ABI breakage, but not sure. So wanted to double check.
> Now if it is an ABI breakage, then can we allow it? There was no deprecation
> notice Prior to this release.
>
> Also I think if we are allowing the above change, then we should also add
> another Field for deq_cbs also for post crypto processing in this patch only.
>
> Regards,
> Akhil
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3] gso: fix free issue of mbuf gso segments attach to
2020-10-26 6:47 3% [dpdk-dev] [PATCH v3] gso: fix free issue of mbuf gso segments attach to yang_y_yi
2020-10-27 19:55 0% ` Ananyev, Konstantin
@ 2020-10-28 0:51 0% ` Hu, Jiayu
1 sibling, 0 replies; 200+ results
From: Hu, Jiayu @ 2020-10-28 0:51 UTC (permalink / raw)
To: yang_y_yi, dev; +Cc: Ananyev, Konstantin, techboard, thomas, yangyi01
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
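For readers skimming the thread, the new caller contract spelled out in the
quoted commit message below reduces to the sketch here; the wrapper name and
the SEGS_MAX bound are illustrative, not part of the patch.

#include <rte_gso.h>
#include <rte_mbuf.h>

#define SEGS_MAX 64	/* illustrative output-array bound */

static uint16_t
gso_or_passthrough(struct rte_mbuf *pkt, const struct rte_gso_ctx *gso_ctx,
		   struct rte_mbuf *segs[SEGS_MAX])
{
	int ret = rte_gso_segment(pkt, gso_ctx, segs, SEGS_MAX);

	if (ret > 0) {
		/* Segmented: transmit segs[0..ret-1]; the caller, not the
		 * library, now frees the input packet.
		 */
		rte_pktmbuf_free(pkt);
		return ret;
	}
	if (ret == 0) {
		/* Not GSOed: transmit pkt as a normal packet, do not free. */
		segs[0] = pkt;
		return 1;
	}
	/* ret < 0: error, free and drop the input packet. */
	rte_pktmbuf_free(pkt);
	return 0;
}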
> -----Original Message-----
> From: yang_y_yi@163.com <yang_y_yi@163.com>
> Sent: Monday, October 26, 2020 2:47 PM
> To: dev@dpdk.org
> Cc: Hu, Jiayu <jiayu.hu@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; techboard@dpdk.org;
> thomas@monjalon.net; yangyi01@inspur.com; yang_y_yi@163.com
> Subject: [PATCH v3] gso: fix free issue of mbuf gso segments attach to
>
> From: Yi Yang <yangyi01@inspur.com>
>
> rte_gso_segment decreased the refcnt of pkt by one, but
> this is wrong if pkt is an external mbuf: pkt won't be
> freed because of the incorrect refcnt, and the result is that
> the application can't allocate mbufs from the mempool because
> the mbufs in the mempool run out.
>
> One correct way is that the application should call
> rte_pktmbuf_free after calling rte_gso_segment to free
> pkt explicitly. rte_gso_segment mustn't handle it; this
> should be the responsibility of the application.
>
> This commit changed rte_gso_segment in functional behavior
> and return value, so the application must take appropriate
> actions according to return values, "ret < 0" means it
> should free and drop 'pkt', "ret == 0" means 'pkt' isn't
> GSOed but 'pkt' can be transmitted as a normal packet,
> "ret > 0" means 'pkt' has been GSOed into two or multiple
> segments, it should use "pkts_out" to transmit these
> segments. The application must free 'pkt' after calling
> rte_gso_segment when the return value isn't equal to 0.
>
> Fixes: 119583797b6a ("gso: support TCP/IPv4 GSO")
> Signed-off-by: Yi Yang <yangyi01@inspur.com>
> ---
> Changelog:
>
> v2->v3:
> - add release notes to emphasize behavior and return
> value changes of rte_gso_segment().
> - update return value description of rte_gso_segment().
> - modify related code to adapt to the changes.
>
> v1->v2:
> - update description of rte_gso_segment().
> - change code which calls rte_gso_segment() to
> fix free issue.
>
> ---
> app/test-pmd/csumonly.c | 12 ++++++++++--
> .../prog_guide/generic_segmentation_offload_lib.rst | 7 +++++--
> doc/guides/rel_notes/release_20_11.rst | 7 +++++++
> drivers/net/tap/rte_eth_tap.c | 12 ++++++++++--
> lib/librte_gso/gso_tcp4.c | 6 ++----
> lib/librte_gso/gso_tunnel_tcp4.c | 14 +++++---------
> lib/librte_gso/gso_udp4.c | 6 ++----
> lib/librte_gso/rte_gso.c | 15 +++------------
> lib/librte_gso/rte_gso.h | 8 ++++++--
> 9 files changed, 50 insertions(+), 37 deletions(-)
>
> diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
> index 3d7d244..d813d4f 100644
> --- a/app/test-pmd/csumonly.c
> +++ b/app/test-pmd/csumonly.c
> @@ -1080,9 +1080,17 @@ struct simple_gre_hdr {
> ret = rte_gso_segment(pkts_burst[i], gso_ctx,
> &gso_segments[nb_segments],
> GSO_MAX_PKT_BURST -
> nb_segments);
> - if (ret >= 0)
> + if (ret >= 1) {
> + /* pkts_burst[i] can be freed safely here. */
> + rte_pktmbuf_free(pkts_burst[i]);
> nb_segments += ret;
> - else {
> + } else if (ret == 0) {
> + /* 0 means it can be transmitted directly
> + * without gso.
> + */
> + gso_segments[nb_segments] = pkts_burst[i];
> + nb_segments += 1;
> + } else {
> TESTPMD_LOG(DEBUG, "Unable to segment
> packet");
> rte_pktmbuf_free(pkts_burst[i]);
> }
> diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
> b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
> index 205cb8a..8577572 100644
> --- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
> +++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
> @@ -25,8 +25,9 @@ Bearing that in mind, the GSO library enables DPDK
> applications to segment
> packets in software. Note however, that GSO is implemented as a
> standalone
> library, and not via a 'fallback' mechanism (i.e. for when TSO is unsupported
> in the underlying hardware); that is, applications must explicitly invoke the
> -GSO library to segment packets. The size of GSO segments ``(segsz)`` is
> -configurable by the application.
> +GSO library to segment packets, they also must call ``rte_pktmbuf_free()`` to
> +free mbuf GSO segments attach to after calling ``rte_gso_segment()``. The
> size
> +of GSO segments ``(segsz)`` is configurable by the application.
>
> Limitations
> -----------
> @@ -233,6 +234,8 @@ To segment an outgoing packet, an application must:
>
> #. Invoke the GSO segmentation API, ``rte_gso_segment()``.
>
> +#. Call ``rte_pktmbuf_free()`` to free mbuf ``rte_gso_segment()`` segments.
> +
> #. If required, update the L3 and L4 checksums of the newly-created
> segments.
> For tunneled packets, the outer IPv4 headers' checksums should also be
> updated. Alternatively, the application may offload checksum calculation
> diff --git a/doc/guides/rel_notes/release_20_11.rst
> b/doc/guides/rel_notes/release_20_11.rst
> index d8ac359..da77396 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -543,6 +543,13 @@ API Changes
> * sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
> from ``struct rte_sched_subport_params``.
>
> +* **Changed ``rte_gso_segment`` in functional behavior and return value.**
> +
> + * Don't save pkt to pkts_out[0] if it isn't GSOed in case of ret == 1.
> + * Return 0 instead of 1 for the above case.
> + * ``rte_gso_segment`` won't free pkt no matter whether it is GSOed, the
> + application has responsibility to free it after call ``rte_gso_segment``.
> +
>
> ABI Changes
> -----------
> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> index 81c6884..2f8abb1 100644
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -751,8 +751,16 @@ struct ipc_queues {
> if (num_tso_mbufs < 0)
> break;
>
> - mbuf = gso_mbufs;
> - num_mbufs = num_tso_mbufs;
> + if (num_tso_mbufs >= 1) {
> + mbuf = gso_mbufs;
> + num_mbufs = num_tso_mbufs;
> + } else {
> + /* 0 means it can be transmitted directly
> + * without gso.
> + */
> + mbuf = &mbuf_in;
> + num_mbufs = 1;
> + }
> } else {
> /* stats.errs will be incremented */
> if (rte_pktmbuf_pkt_len(mbuf_in) > max_size)
> diff --git a/lib/librte_gso/gso_tcp4.c b/lib/librte_gso/gso_tcp4.c
> index ade172a..d31feaf 100644
> --- a/lib/librte_gso/gso_tcp4.c
> +++ b/lib/librte_gso/gso_tcp4.c
> @@ -50,15 +50,13 @@
> pkt->l2_len);
> frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
> if (unlikely(IS_FRAGMENTED(frag_off))) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> /* Don't process the packet without data */
> hdr_offset = pkt->l2_len + pkt->l3_len + pkt->l4_len;
> if (unlikely(hdr_offset >= pkt->pkt_len)) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> pyld_unit_size = gso_size - hdr_offset;
> diff --git a/lib/librte_gso/gso_tunnel_tcp4.c
> b/lib/librte_gso/gso_tunnel_tcp4.c
> index e0384c2..166aace 100644
> --- a/lib/librte_gso/gso_tunnel_tcp4.c
> +++ b/lib/librte_gso/gso_tunnel_tcp4.c
> @@ -62,7 +62,7 @@
> {
> struct rte_ipv4_hdr *inner_ipv4_hdr;
> uint16_t pyld_unit_size, hdr_offset, frag_off;
> - int ret = 1;
> + int ret;
>
> hdr_offset = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len;
> inner_ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
> char *) +
> @@ -73,25 +73,21 @@
> */
> frag_off = rte_be_to_cpu_16(inner_ipv4_hdr->fragment_offset);
> if (unlikely(IS_FRAGMENTED(frag_off))) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> hdr_offset += pkt->l3_len + pkt->l4_len;
> /* Don't process the packet without data */
> if (hdr_offset >= pkt->pkt_len) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
> pyld_unit_size = gso_size - hdr_offset;
>
> /* Segment the payload */
> ret = gso_do_segment(pkt, hdr_offset, pyld_unit_size, direct_pool,
> indirect_pool, pkts_out, nb_pkts_out);
> - if (ret <= 1)
> - return ret;
> -
> - update_tunnel_ipv4_tcp_headers(pkt, ipid_delta, pkts_out, ret);
> + if (ret > 1)
> + update_tunnel_ipv4_tcp_headers(pkt, ipid_delta, pkts_out,
> ret);
>
> return ret;
> }
> diff --git a/lib/librte_gso/gso_udp4.c b/lib/librte_gso/gso_udp4.c
> index 6fa68f2..5d0186a 100644
> --- a/lib/librte_gso/gso_udp4.c
> +++ b/lib/librte_gso/gso_udp4.c
> @@ -52,8 +52,7 @@
> pkt->l2_len);
> frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
> if (unlikely(IS_FRAGMENTED(frag_off))) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> /*
> @@ -65,8 +64,7 @@
>
> /* Don't process the packet without data. */
> if (unlikely(hdr_offset + pkt->l4_len >= pkt->pkt_len)) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> /* pyld_unit_size must be a multiple of 8 because frag_off
> diff --git a/lib/librte_gso/rte_gso.c b/lib/librte_gso/rte_gso.c
> index 751b5b6..896350e 100644
> --- a/lib/librte_gso/rte_gso.c
> +++ b/lib/librte_gso/rte_gso.c
> @@ -30,7 +30,6 @@
> uint16_t nb_pkts_out)
> {
> struct rte_mempool *direct_pool, *indirect_pool;
> - struct rte_mbuf *pkt_seg;
> uint64_t ol_flags;
> uint16_t gso_size;
> uint8_t ipid_delta;
> @@ -44,8 +43,7 @@
>
> if (gso_ctx->gso_size >= pkt->pkt_len) {
> pkt->ol_flags &= (~(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> direct_pool = gso_ctx->direct_pool;
> @@ -75,18 +73,11 @@
> indirect_pool, pkts_out, nb_pkts_out);
> } else {
> /* unsupported packet, skip */
> - pkts_out[0] = pkt;
> RTE_LOG(DEBUG, GSO, "Unsupported packet type\n");
> - return 1;
> + ret = 0;
> }
>
> - if (ret > 1) {
> - pkt_seg = pkt;
> - while (pkt_seg) {
> - rte_mbuf_refcnt_update(pkt_seg, -1);
> - pkt_seg = pkt_seg->next;
> - }
> - } else if (ret < 0) {
> + if (ret < 0) {
> /* Revert the ol_flags in the event of failure. */
> pkt->ol_flags = ol_flags;
> }
> diff --git a/lib/librte_gso/rte_gso.h b/lib/librte_gso/rte_gso.h
> index 3aab297..af480ee 100644
> --- a/lib/librte_gso/rte_gso.h
> +++ b/lib/librte_gso/rte_gso.h
> @@ -89,8 +89,11 @@ struct rte_gso_ctx {
> * the GSO segments are sent to should support transmission of multi-
> segment
> * packets.
> *
> - * If the input packet is GSO'd, its mbuf refcnt reduces by 1. Therefore,
> - * when all GSO segments are freed, the input packet is freed automatically.
> + * If the input packet is GSO'd, all the indirect segments are attached to the
> + * input packet.
> + *
> + * rte_gso_segment() will not free the input packet no matter whether it is
> + * GSO'd or not, the application should free it after call rte_gso_segment().
> *
> * If the memory space in pkts_out or MBUF pools is insufficient, this
> * function fails, and it returns (-1) * errno. Otherwise, GSO succeeds,
> @@ -109,6 +112,7 @@ struct rte_gso_ctx {
> *
> * @return
> * - The number of GSO segments filled in pkts_out on success.
> + * - Return 0 if the packet doesn't need to be GSO'd.
> * - Return -ENOMEM if run out of memory in MBUF pools.
> * - Return -EINVAL for invalid parameters.
> */
> --
> 1.8.3.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3] gso: fix free issue of mbuf gso segments attach to
2020-10-26 6:47 3% [dpdk-dev] [PATCH v3] gso: fix free issue of mbuf gso segments attach to yang_y_yi
@ 2020-10-27 19:55 0% ` Ananyev, Konstantin
2020-10-28 0:51 0% ` Hu, Jiayu
1 sibling, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2020-10-27 19:55 UTC (permalink / raw)
To: yang_y_yi, dev; +Cc: Hu, Jiayu, techboard, thomas, yangyi01
> -----Original Message-----
> From: yang_y_yi@163.com <yang_y_yi@163.com>
> Sent: Monday, October 26, 2020 6:47 AM
> To: dev@dpdk.org
> Cc: Hu, Jiayu <jiayu.hu@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; techboard@dpdk.org; thomas@monjalon.net;
> yangyi01@inspur.com; yang_y_yi@163.com
> Subject: [PATCH v3] gso: fix free issue of mbuf gso segments attach to
>
> From: Yi Yang <yangyi01@inspur.com>
>
> rte_gso_segment decreased the refcnt of pkt by one, but
> this is wrong if pkt is an external mbuf: pkt won't be
> freed because of the incorrect refcnt, and the result is
> that the application can't allocate mbufs from the
> mempool because the mempool runs out of them.
>
> The correct way is for the application to call
> rte_pktmbuf_free after calling rte_gso_segment to free
> pkt explicitly. rte_gso_segment mustn't handle it; this
> should be the responsibility of the application.
>
> This commit changes rte_gso_segment in functional behavior
> and return value, so the application must take appropriate
> actions according to the return value: "ret < 0" means it
> should free and drop 'pkt', "ret == 0" means 'pkt' isn't
> GSOed but can be transmitted as a normal packet, and
> "ret > 0" means 'pkt' has been GSOed into two or more
> segments and "pkts_out" should be used to transmit these
> segments. The application must free 'pkt' after calling
> rte_gso_segment whenever the return value isn't 0.
Tech-board members: this is not a formal API breakage,
but it is a functional change (i.e. all code that uses that API will need to be changed).
There was no deprecation note in advance.
So please provide your input: are you OK with such a change or not?
I am ok with the proposed changes.
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
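For reference, a minimal sketch of the caller-side pattern the new return
values imply (gso_ctx setup and TX error handling omitted; GSO_MAX_SEGS is
just a placeholder for an application-chosen array size):

    struct rte_mbuf *segs[GSO_MAX_SEGS];
    int ret = rte_gso_segment(pkt, &gso_ctx, segs, RTE_DIM(segs));

    if (ret > 0) {
            /* pkt was segmented; free the input and send the segments */
            rte_pktmbuf_free(pkt);
            rte_eth_tx_burst(port_id, queue_id, segs, ret);
    } else if (ret == 0) {
            /* no segmentation needed; send the input packet as-is */
            rte_eth_tx_burst(port_id, queue_id, &pkt, 1);
    } else {
            /* -ENOMEM or -EINVAL; drop the input */
            rte_pktmbuf_free(pkt);
    }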
>
> Fixes: 119583797b6a ("gso: support TCP/IPv4 GSO")
> Signed-off-by: Yi Yang <yangyi01@inspur.com>
> ---
> Changelog:
>
> v2->v3:
> - add release notes to emphasize behavior and return
> value changes of rte_gso_segment().
> - update return value description of rte_gso_segment().
> - modify related code to adapt to the changes.
>
> v1->v2:
> - update description of rte_gso_segment().
> - change code which calls rte_gso_segment() to
> fix free issue.
>
> ---
> app/test-pmd/csumonly.c | 12 ++++++++++--
> .../prog_guide/generic_segmentation_offload_lib.rst | 7 +++++--
> doc/guides/rel_notes/release_20_11.rst | 7 +++++++
> drivers/net/tap/rte_eth_tap.c | 12 ++++++++++--
> lib/librte_gso/gso_tcp4.c | 6 ++----
> lib/librte_gso/gso_tunnel_tcp4.c | 14 +++++---------
> lib/librte_gso/gso_udp4.c | 6 ++----
> lib/librte_gso/rte_gso.c | 15 +++------------
> lib/librte_gso/rte_gso.h | 8 ++++++--
> 9 files changed, 50 insertions(+), 37 deletions(-)
>
> diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
> index 3d7d244..d813d4f 100644
> --- a/app/test-pmd/csumonly.c
> +++ b/app/test-pmd/csumonly.c
> @@ -1080,9 +1080,17 @@ struct simple_gre_hdr {
> ret = rte_gso_segment(pkts_burst[i], gso_ctx,
> &gso_segments[nb_segments],
> GSO_MAX_PKT_BURST - nb_segments);
> - if (ret >= 0)
> + if (ret >= 1) {
> + /* pkts_burst[i] can be freed safely here. */
> + rte_pktmbuf_free(pkts_burst[i]);
> nb_segments += ret;
> - else {
> + } else if (ret == 0) {
> + /* 0 means it can be transmitted directly
> + * without gso.
> + */
> + gso_segments[nb_segments] = pkts_burst[i];
> + nb_segments += 1;
> + } else {
> TESTPMD_LOG(DEBUG, "Unable to segment packet");
> rte_pktmbuf_free(pkts_burst[i]);
> }
> diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
> index 205cb8a..8577572 100644
> --- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
> +++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
> @@ -25,8 +25,9 @@ Bearing that in mind, the GSO library enables DPDK applications to segment
> packets in software. Note however, that GSO is implemented as a standalone
> library, and not via a 'fallback' mechanism (i.e. for when TSO is unsupported
> in the underlying hardware); that is, applications must explicitly invoke the
> -GSO library to segment packets. The size of GSO segments ``(segsz)`` is
> -configurable by the application.
> +GSO library to segment packets, they also must call ``rte_pktmbuf_free()`` to
> +free mbuf GSO segments attach to after calling ``rte_gso_segment()``. The size
> +of GSO segments ``(segsz)`` is configurable by the application.
>
> Limitations
> -----------
> @@ -233,6 +234,8 @@ To segment an outgoing packet, an application must:
>
> #. Invoke the GSO segmentation API, ``rte_gso_segment()``.
>
> +#. Call ``rte_pktmbuf_free()`` to free mbuf ``rte_gso_segment()`` segments.
> +
> #. If required, update the L3 and L4 checksums of the newly-created segments.
> For tunneled packets, the outer IPv4 headers' checksums should also be
> updated. Alternatively, the application may offload checksum calculation
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index d8ac359..da77396 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -543,6 +543,13 @@ API Changes
> * sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
> from ``struct rte_sched_subport_params``.
>
> +* **Changed ``rte_gso_segment`` in functional behavior and return value.**
> +
> + * Don't save pkt to pkts_out[0] if it isn't GSOed in case of ret == 1.
> + * Return 0 instead of 1 for the above case.
> + * ``rte_gso_segment`` won't free pkt no matter whether it is GSOed, the
> + application has responsibility to free it after call ``rte_gso_segment``.
> +
>
> ABI Changes
> -----------
> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> index 81c6884..2f8abb1 100644
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -751,8 +751,16 @@ struct ipc_queues {
> if (num_tso_mbufs < 0)
> break;
>
> - mbuf = gso_mbufs;
> - num_mbufs = num_tso_mbufs;
> + if (num_tso_mbufs >= 1) {
> + mbuf = gso_mbufs;
> + num_mbufs = num_tso_mbufs;
> + } else {
> + /* 0 means it can be transmitted directly
> + * without gso.
> + */
> + mbuf = &mbuf_in;
> + num_mbufs = 1;
> + }
> } else {
> /* stats.errs will be incremented */
> if (rte_pktmbuf_pkt_len(mbuf_in) > max_size)
> diff --git a/lib/librte_gso/gso_tcp4.c b/lib/librte_gso/gso_tcp4.c
> index ade172a..d31feaf 100644
> --- a/lib/librte_gso/gso_tcp4.c
> +++ b/lib/librte_gso/gso_tcp4.c
> @@ -50,15 +50,13 @@
> pkt->l2_len);
> frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
> if (unlikely(IS_FRAGMENTED(frag_off))) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> /* Don't process the packet without data */
> hdr_offset = pkt->l2_len + pkt->l3_len + pkt->l4_len;
> if (unlikely(hdr_offset >= pkt->pkt_len)) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> pyld_unit_size = gso_size - hdr_offset;
> diff --git a/lib/librte_gso/gso_tunnel_tcp4.c b/lib/librte_gso/gso_tunnel_tcp4.c
> index e0384c2..166aace 100644
> --- a/lib/librte_gso/gso_tunnel_tcp4.c
> +++ b/lib/librte_gso/gso_tunnel_tcp4.c
> @@ -62,7 +62,7 @@
> {
> struct rte_ipv4_hdr *inner_ipv4_hdr;
> uint16_t pyld_unit_size, hdr_offset, frag_off;
> - int ret = 1;
> + int ret;
>
> hdr_offset = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len;
> inner_ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
> @@ -73,25 +73,21 @@
> */
> frag_off = rte_be_to_cpu_16(inner_ipv4_hdr->fragment_offset);
> if (unlikely(IS_FRAGMENTED(frag_off))) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> hdr_offset += pkt->l3_len + pkt->l4_len;
> /* Don't process the packet without data */
> if (hdr_offset >= pkt->pkt_len) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
> pyld_unit_size = gso_size - hdr_offset;
>
> /* Segment the payload */
> ret = gso_do_segment(pkt, hdr_offset, pyld_unit_size, direct_pool,
> indirect_pool, pkts_out, nb_pkts_out);
> - if (ret <= 1)
> - return ret;
> -
> - update_tunnel_ipv4_tcp_headers(pkt, ipid_delta, pkts_out, ret);
> + if (ret > 1)
> + update_tunnel_ipv4_tcp_headers(pkt, ipid_delta, pkts_out, ret);
>
> return ret;
> }
> diff --git a/lib/librte_gso/gso_udp4.c b/lib/librte_gso/gso_udp4.c
> index 6fa68f2..5d0186a 100644
> --- a/lib/librte_gso/gso_udp4.c
> +++ b/lib/librte_gso/gso_udp4.c
> @@ -52,8 +52,7 @@
> pkt->l2_len);
> frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
> if (unlikely(IS_FRAGMENTED(frag_off))) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> /*
> @@ -65,8 +64,7 @@
>
> /* Don't process the packet without data. */
> if (unlikely(hdr_offset + pkt->l4_len >= pkt->pkt_len)) {
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> /* pyld_unit_size must be a multiple of 8 because frag_off
> diff --git a/lib/librte_gso/rte_gso.c b/lib/librte_gso/rte_gso.c
> index 751b5b6..896350e 100644
> --- a/lib/librte_gso/rte_gso.c
> +++ b/lib/librte_gso/rte_gso.c
> @@ -30,7 +30,6 @@
> uint16_t nb_pkts_out)
> {
> struct rte_mempool *direct_pool, *indirect_pool;
> - struct rte_mbuf *pkt_seg;
> uint64_t ol_flags;
> uint16_t gso_size;
> uint8_t ipid_delta;
> @@ -44,8 +43,7 @@
>
> if (gso_ctx->gso_size >= pkt->pkt_len) {
> pkt->ol_flags &= (~(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
> - pkts_out[0] = pkt;
> - return 1;
> + return 0;
> }
>
> direct_pool = gso_ctx->direct_pool;
> @@ -75,18 +73,11 @@
> indirect_pool, pkts_out, nb_pkts_out);
> } else {
> /* unsupported packet, skip */
> - pkts_out[0] = pkt;
> RTE_LOG(DEBUG, GSO, "Unsupported packet type\n");
> - return 1;
> + ret = 0;
> }
>
> - if (ret > 1) {
> - pkt_seg = pkt;
> - while (pkt_seg) {
> - rte_mbuf_refcnt_update(pkt_seg, -1);
> - pkt_seg = pkt_seg->next;
> - }
> - } else if (ret < 0) {
> + if (ret < 0) {
> /* Revert the ol_flags in the event of failure. */
> pkt->ol_flags = ol_flags;
> }
> diff --git a/lib/librte_gso/rte_gso.h b/lib/librte_gso/rte_gso.h
> index 3aab297..af480ee 100644
> --- a/lib/librte_gso/rte_gso.h
> +++ b/lib/librte_gso/rte_gso.h
> @@ -89,8 +89,11 @@ struct rte_gso_ctx {
> * the GSO segments are sent to should support transmission of multi-segment
> * packets.
> *
> - * If the input packet is GSO'd, its mbuf refcnt reduces by 1. Therefore,
> - * when all GSO segments are freed, the input packet is freed automatically.
> + * If the input packet is GSO'd, all the indirect segments are attached to the
> + * input packet.
> + *
> + * rte_gso_segment() will not free the input packet no matter whether it is
> + * GSO'd or not, the application should free it after call rte_gso_segment().
> *
> * If the memory space in pkts_out or MBUF pools is insufficient, this
> * function fails, and it returns (-1) * errno. Otherwise, GSO succeeds,
> @@ -109,6 +112,7 @@ struct rte_gso_ctx {
> *
> * @return
> * - The number of GSO segments filled in pkts_out on success.
> + * - Return 0 if the packet doesn't need to be GSO'd.
> * - Return -ENOMEM if run out of memory in MBUF pools.
> * - Return -EINVAL for invalid parameters.
> */
> --
> 1.8.3.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-27 19:26 0% ` Akhil Goyal
@ 2020-10-27 19:41 0% ` Gujjar, Abhinandan S
0 siblings, 0 replies; 200+ results
From: Gujjar, Abhinandan S @ 2020-10-27 19:41 UTC (permalink / raw)
To: Akhil Goyal, dev, Doherty, Declan, Honnappa.Nagarahalli, Ananyev,
Konstantin
Cc: Vangati, Narender, jerinj
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Wednesday, October 28, 2020 12:56 AM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; dev@dpdk.org;
> Doherty, Declan <declan.doherty@intel.com>;
> Honnappa.Nagarahalli@arm.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: Vangati, Narender <narender.vangati@intel.com>; jerinj@marvell.com
> Subject: RE: [v4 1/3] cryptodev: support enqueue callback functions
>
> Hi Abhinandan,
>
> > > > +static int
> > > > +cryptodev_cb_init(struct rte_cryptodev *dev) {
> > > > + struct rte_cryptodev_enq_cb_rcu *list;
> > > > + struct rte_rcu_qsbr *qsbr;
> > > > + uint16_t qp_id;
> > > > + size_t size;
> > > > +
> > > > + /* Max thread set to 1, as one DP thread accessing a queue-pair */
> > > > + const uint32_t max_threads = 1;
> > > > +
> > > > + dev->enq_cbs = rte_zmalloc(NULL,
> > > > + sizeof(struct rte_cryptodev_enq_cb_rcu) *
> > > > + dev->data->nb_queue_pairs, 0);
> > > > + if (dev->enq_cbs == NULL) {
> > > > + CDEV_LOG_ERR("Failed to allocate memory for callbacks");
> > > > + rte_errno = ENOMEM;
> > > > + return -1;
> > > > + }
> > >
> > > Why not return ENOMEM here? You are not using rte_errno while
> > > returning from this function, so setting it does not have any meaning.
> > This is an internal function. The caller is returning ENOMEM.
>
> The caller can return the returned value from cryptodev_cb_init, instead of
> explicitly returning ENOMEM.
> There is no point of setting rte_errno here.
Ok. I will update the patch.
>
>
> > > > /** The data structure associated with each crypto device. */
> > > > struct rte_cryptodev {
> > > > dequeue_pkt_burst_t dequeue_burst; @@ -867,6 +922,10 @@ struct
> > > > rte_cryptodev {
> > > > __extension__
> > > > uint8_t attached : 1;
> > > > /**< Flag indicating the device is attached */
> > > > +
> > > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > > + /**< User application callback for pre enqueue processing */
> > > > +
> > > Extra line
> > ok
> > >
> > > We should add support for post dequeue callbacks also. Since this is
> > > an LTS release And we wont be very flexible in future quarterly
> > > release, we should do all the changes In one go.
> > This patch set is driven by requirement. Recently, we have a
> > requirement to have callback for dequeue as well. Looking at code
> > freeze date, I am not sure we can target that as well. Let this patch
> > go in and I will send a separate patch for dequeue callback.
> >
>
> We may not be able to change the rte_cryptodev structure so frequently.
> It may be allowed to change it 21.11 release. Which is too far.
> I think atleast the cryptodev changes can go in RC2 and test app for deq cbs
> Can go in RC3 if not RC2.
" cryptodev changes " -> Is it rte_cryptodev structure changes alone or supporting
dequeue callback as well in RC2? And then have test app changes in RC3?
If it is about adding dequeue callback support in RC2, I will try.
If it does not work, hope we can still the get the enqueue callback + rte_cryptodev structure
changes to support dequeue callbacks in the next patch set.
>
> > > I believe we should also double check with techboard if this is an ABI
> breakage.
> > > IMO, it is ABI breakage, rte_cryprodevs is part of stable APIs, but not sure.
> > >
> > > > } __rte_cache_aligned;
> > > >
>
>
>
> > > >
> > > > +#ifdef RTE_CRYPTO_CALLBACKS
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > > + *
> > > > + * Add a user callback for a given crypto device and queue pair
> > > > +which will be
> > > > + * called on crypto ops enqueue.
> > > > + *
> > > > + * This API configures a function to be called for each burst of
> > > > +crypto ops
> > > > + * received on a given crypto device queue pair. The return value
> > > > +is a pointer
> > > > + * that can be used later to remove the callback using
> > > > + * rte_cryptodev_remove_enq_callback().
> > > > + *
> > > > + * Multiple functions are called in the order that they are added.
> > >
> > > Is there a limit for the number of cbs that can be added? Better to
> > > add a comment here.
>
> I think you missed this comment.
There is no limitation as of now. I will add a comment on the same.
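Something along these lines could be added to the doxygen text (the wording is
only a suggestion):

    * Multiple functions are called in the order that they are added.
    * There is no limit on the number of callbacks that can be added per
    * queue pair; each registered callback is invoked on every enqueue burst.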
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-27 19:16 0% ` Gujjar, Abhinandan S
@ 2020-10-27 19:26 0% ` Akhil Goyal
2020-10-27 19:41 0% ` Gujjar, Abhinandan S
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2020-10-27 19:26 UTC (permalink / raw)
To: Gujjar, Abhinandan S, dev, Doherty, Declan, Honnappa.Nagarahalli,
Ananyev, Konstantin
Cc: Vangati, Narender, jerinj
Hi Abhinandan,
> > > +static int
> > > +cryptodev_cb_init(struct rte_cryptodev *dev) {
> > > + struct rte_cryptodev_enq_cb_rcu *list;
> > > + struct rte_rcu_qsbr *qsbr;
> > > + uint16_t qp_id;
> > > + size_t size;
> > > +
> > > + /* Max thread set to 1, as one DP thread accessing a queue-pair */
> > > + const uint32_t max_threads = 1;
> > > +
> > > + dev->enq_cbs = rte_zmalloc(NULL,
> > > + sizeof(struct rte_cryptodev_enq_cb_rcu) *
> > > + dev->data->nb_queue_pairs, 0);
> > > + if (dev->enq_cbs == NULL) {
> > > + CDEV_LOG_ERR("Failed to allocate memory for callbacks");
> > > + rte_errno = ENOMEM;
> > > + return -1;
> > > + }
> >
> > Why not return ENOMEM here? You are not using rte_errno while returning
> > from this function, so setting it does not have any meaning.
> This is an internal function. The caller is returning ENOMEM.
The caller can return the returned value from cryptodev_cb_init, instead of explicitly
Returning ENOMEM.
There is no point of setting rte_errno here.
> > > /** The data structure associated with each crypto device. */ struct
> > > rte_cryptodev {
> > > dequeue_pkt_burst_t dequeue_burst;
> > > @@ -867,6 +922,10 @@ struct rte_cryptodev {
> > > __extension__
> > > uint8_t attached : 1;
> > > /**< Flag indicating the device is attached */
> > > +
> > > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > > + /**< User application callback for pre enqueue processing */
> > > +
> > Extra line
> ok
> >
> > We should add support for post dequeue callbacks also. Since this is an LTS
> > release and we won't be very flexible in future quarterly releases, we should do
> > all the changes in one go.
> This patch set is driven by a requirement. Recently, we got a requirement to have
> a callback for dequeue as well. Looking at the code freeze date, I am not sure we can
> target that as well. Let this patch go in and I will send a separate patch for
> the dequeue callback.
>
We may not be able to change the rte_cryptodev structure so frequently.
It may be allowed to change in the 21.11 release, which is too far away.
I think at least the cryptodev changes can go in RC2 and the test app for deq cbs
can go in RC3 if not RC2.
> > I believe we should also double check with techboard if this is an ABI breakage.
> > IMO, it is an ABI breakage; rte_cryptodevs is part of the stable APIs, but I'm not sure.
> >
> > > } __rte_cache_aligned;
> > >
> > >
> > > +#ifdef RTE_CRYPTO_CALLBACKS
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Add a user callback for a given crypto device and queue pair which
> > > +will be
> > > + * called on crypto ops enqueue.
> > > + *
> > > + * This API configures a function to be called for each burst of
> > > +crypto ops
> > > + * received on a given crypto device queue pair. The return value is
> > > +a pointer
> > > + * that can be used later to remove the callback using
> > > + * rte_cryptodev_remove_enq_callback().
> > > + *
> > > + * Multiple functions are called in the order that they are added.
> >
> > Is there a limit for the number of cbs that can be added? Better to add a
> > comment here.
I think you missed this comment.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-27 18:19 4% ` Akhil Goyal
@ 2020-10-27 19:16 0% ` Gujjar, Abhinandan S
2020-10-27 19:26 0% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Gujjar, Abhinandan S @ 2020-10-27 19:16 UTC (permalink / raw)
To: Akhil Goyal, dev, Doherty, Declan, Honnappa.Nagarahalli, Ananyev,
Konstantin
Cc: Vangati, Narender, jerinj
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Tuesday, October 27, 2020 11:49 PM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; dev@dpdk.org;
> Doherty, Declan <declan.doherty@intel.com>;
> Honnappa.Nagarahalli@arm.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: Vangati, Narender <narender.vangati@intel.com>; jerinj@marvell.com
> Subject: RE: [v4 1/3] cryptodev: support enqueue callback functions
>
> Hi Abhinandan,
> > Subject: [v4 1/3] cryptodev: support enqueue callback functions
> >
> > This patch adds APIs to add/remove callback functions. The callback
> > function will be called for each burst of crypto ops received on a
> > given crypto device queue pair.
> >
> > Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> > ---
> > config/rte_config.h | 1 +
> > lib/librte_cryptodev/meson.build | 2 +-
> > lib/librte_cryptodev/rte_cryptodev.c | 230
> +++++++++++++++++++++++++
> > lib/librte_cryptodev/rte_cryptodev.h | 158 ++++++++++++++++-
> > lib/librte_cryptodev/rte_cryptodev_version.map | 2 +
> > 5 files changed, 391 insertions(+), 2 deletions(-)
> >
> > diff --git a/config/rte_config.h b/config/rte_config.h index
> > 03d90d7..e999d93 100644
> > --- a/config/rte_config.h
> > +++ b/config/rte_config.h
> > @@ -61,6 +61,7 @@
> > /* cryptodev defines */
> > #define RTE_CRYPTO_MAX_DEVS 64
> > #define RTE_CRYPTODEV_NAME_LEN 64
> > +#define RTE_CRYPTO_CALLBACKS 1
> >
> > /* compressdev defines */
> > #define RTE_COMPRESS_MAX_DEVS 64
> > diff --git a/lib/librte_cryptodev/meson.build
> > b/lib/librte_cryptodev/meson.build
> > index c4c6b3b..8c5493f 100644
> > --- a/lib/librte_cryptodev/meson.build
> > +++ b/lib/librte_cryptodev/meson.build
> > @@ -9,4 +9,4 @@ headers = files('rte_cryptodev.h',
> > 'rte_crypto.h',
> > 'rte_crypto_sym.h',
> > 'rte_crypto_asym.h')
> > -deps += ['kvargs', 'mbuf']
> > +deps += ['kvargs', 'mbuf', 'rcu']
> > diff --git a/lib/librte_cryptodev/rte_cryptodev.c
> > b/lib/librte_cryptodev/rte_cryptodev.c
> > index 3d95ac6..0880d9b 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev.c
> > +++ b/lib/librte_cryptodev/rte_cryptodev.c
> > @@ -448,6 +448,91 @@ struct
> > rte_cryptodev_sym_session_pool_private_data
> > {
> > return 0;
> > }
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +/* spinlock for crypto device enq callbacks */ static rte_spinlock_t
> > +rte_cryptodev_callback_lock =
> > RTE_SPINLOCK_INITIALIZER;
> > +
> > +static void
> > +cryptodev_cb_cleanup(struct rte_cryptodev *dev) {
> > + struct rte_cryptodev_cb **prev_cb, *curr_cb;
> > + struct rte_cryptodev_enq_cb_rcu *list;
> > + uint16_t qp_id;
> > +
> > + if (dev->enq_cbs == NULL)
> > + return;
> > +
> > + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> > + list = &dev->enq_cbs[qp_id];
> > + prev_cb = &list->next;
> > +
> > + while (*prev_cb != NULL) {
> > + curr_cb = *prev_cb;
> > + /* Remove the user cb from the callback list. */
> > + __atomic_store_n(prev_cb, curr_cb->next,
> > + __ATOMIC_RELAXED);
> > + rte_rcu_qsbr_synchronize(list->qsbr,
> > + RTE_QSBR_THRID_INVALID);
> > + rte_free(curr_cb);
> > + }
> > +
> > + rte_free(list->qsbr);
> > + }
> > +
> > + rte_free(dev->enq_cbs);
> > + dev->enq_cbs = NULL;
> > +}
> > +
> > +static int
> > +cryptodev_cb_init(struct rte_cryptodev *dev) {
> > + struct rte_cryptodev_enq_cb_rcu *list;
> > + struct rte_rcu_qsbr *qsbr;
> > + uint16_t qp_id;
> > + size_t size;
> > +
> > + /* Max thread set to 1, as one DP thread accessing a queue-pair */
> > + const uint32_t max_threads = 1;
> > +
> > + dev->enq_cbs = rte_zmalloc(NULL,
> > + sizeof(struct rte_cryptodev_enq_cb_rcu) *
> > + dev->data->nb_queue_pairs, 0);
> > + if (dev->enq_cbs == NULL) {
> > + CDEV_LOG_ERR("Failed to allocate memory for callbacks");
> > + rte_errno = ENOMEM;
> > + return -1;
> > + }
>
> Why not return ENOMEM here? You are not using rte_errno while returning
> from this function, so setting it does not have any meaning.
This is an internal function. The caller is returning ENOMEM.
>
> > +
> > + /* Create RCU QSBR variable */
> > + size = rte_rcu_qsbr_get_memsize(max_threads);
> > +
> > + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> > + list = &dev->enq_cbs[qp_id];
> > + qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
> > + if (qsbr == NULL) {
> > + CDEV_LOG_ERR("Failed to allocate memory for RCU
> on
> > "
> > + "queue_pair_id=%d", qp_id);
> > + goto cb_init_err;
> > + }
> > +
> > + if (rte_rcu_qsbr_init(qsbr, max_threads)) {
> > + CDEV_LOG_ERR("Failed to initialize for RCU on "
> > + "queue_pair_id=%d", qp_id);
> > + goto cb_init_err;
> > + }
> > +
> > + list->qsbr = qsbr;
> > + }
> > +
> > + return 0;
> > +
> > +cb_init_err:
> > + rte_errno = ENOMEM;
> > + cryptodev_cb_cleanup(dev);
> > + return -1;
> Same here, return -ENOMEM
Same as above
>
> > +
> Extra line
ok
>
> > +}
> > +#endif
> >
> > const char *
> > rte_cryptodev_get_feature_name(uint64_t flag) @@ -927,6 +1012,11 @@
> > struct rte_cryptodev *
> >
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -
> ENOTSUP);
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > + rte_spinlock_lock(&rte_cryptodev_callback_lock);
> > + cryptodev_cb_cleanup(dev);
> > + rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> > +#endif
> > /* Setup new number of queue pairs and reconfigure device. */
> > diag = rte_cryptodev_queue_pairs_config(dev, config-
> >nb_queue_pairs,
> > config->socket_id);
> > @@ -936,6 +1026,15 @@ struct rte_cryptodev *
> > return diag;
> > }
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > + rte_spinlock_lock(&rte_cryptodev_callback_lock);
> > + diag = cryptodev_cb_init(dev);
> > + rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> > + if (diag) {
> > + CDEV_LOG_ERR("Callback init failed for dev_id=%d", dev_id);
> > + return -ENOMEM;
> > + }
> > +#endif
> > rte_cryptodev_trace_configure(dev_id, config);
> > return (*dev->dev_ops->dev_configure)(dev, config); } @@ -1136,6
> > +1235,137 @@ struct rte_cryptodev *
> > socket_id);
> > }
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +struct rte_cryptodev_cb *
> > +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> > + uint16_t qp_id,
> > + rte_cryptodev_callback_fn cb_fn,
> > + void *cb_arg)
> > +{
> > + struct rte_cryptodev *dev;
> > + struct rte_cryptodev_enq_cb_rcu *list;
> > + struct rte_cryptodev_cb *cb, *tail;
> > +
> > + if (!cb_fn)
> > + return NULL;
> > +
> > + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> > + CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> > + return NULL;
> > + }
> > +
> > + dev = &rte_crypto_devices[dev_id];
> > + if (qp_id >= dev->data->nb_queue_pairs) {
> > + CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> > + return NULL;
> > + }
>
> rte_errno is not set before the above three returns.
I will update it in the next version of the patch.
>
> > +
> > + cb = rte_zmalloc(NULL, sizeof(*cb), 0);
> > + if (cb == NULL) {
> > + CDEV_LOG_ERR("Failed to allocate memory for callback on "
> > + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + rte_spinlock_lock(&rte_cryptodev_callback_lock);
> > +
> > + cb->fn = cb_fn;
> > + cb->arg = cb_arg;
> > +
> > + /* Add the callbacks in fifo order. */
> > + list = &dev->enq_cbs[qp_id];
> > + tail = list->next;
> > +
> > + if (tail) {
> > + while (tail->next)
> > + tail = tail->next;
> > + /* Stores to cb->fn and cb->param should complete before
> > + * cb is visible to data plane.
> > + */
> > + __atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
> > + } else {
> > + /* Stores to cb->fn and cb->param should complete before
> > + * cb is visible to data plane.
> > + */
> > + __atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
> > + }
> > +
> > + rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> > +
> > + return cb;
> > +}
> > +
> > +int
> > +rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> > + uint16_t qp_id,
> > + struct rte_cryptodev_cb *cb)
> > +{
> > + struct rte_cryptodev *dev;
> > + struct rte_cryptodev_cb **prev_cb, *curr_cb;
> > + struct rte_cryptodev_enq_cb_rcu *list;
> > + int ret;
> > +
> > + ret = -EINVAL;
> No need to set EINVAL here. You are returning the same value everywhere.
> The error numbers can be different at each exit.
Sure. I will take care of returning different error numbers.
The initialization is required because ret is checked just before calling
the RCU sync below.
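To spell out what I mean, this is just the flow from the patch, condensed:

    int ret = -EINVAL; /* stays -EINVAL unless the cb is found */

    prev_cb = &list->next;
    for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
            curr_cb = *prev_cb;
            if (curr_cb == cb) {
                    /* unlink the user cb from the callback list */
                    __atomic_store_n(prev_cb, curr_cb->next, __ATOMIC_RELAXED);
                    ret = 0;
                    break;
            }
    }

    if (!ret) { /* only synchronize and free when the unlink happened */
            rte_rcu_qsbr_synchronize(list->qsbr, RTE_QSBR_THRID_INVALID);
            rte_free(cb);
    }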
>
> > +
> > + if (!cb) {
> > + CDEV_LOG_ERR("cb is NULL");
> > + return ret;
> You should directly return -EINVAL here and in below cases as well.
>
> > + }
> > +
> > + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> > + CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> > + return ret;
> Here return value should be -ENODEV
>
>
> > + }
> > +
> > + dev = &rte_crypto_devices[dev_id];
> > + if (qp_id >= dev->data->nb_queue_pairs) {
> > + CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> > + return ret;
> > + }
> > +
> > + rte_spinlock_lock(&rte_cryptodev_callback_lock);
> > + if (dev->enq_cbs == NULL) {
> > + CDEV_LOG_ERR("Callback not initialized");
> > + goto cb_err;
> > + }
> > +
> > + list = &dev->enq_cbs[qp_id];
> > + if (list == NULL) {
> > + CDEV_LOG_ERR("Callback list is NULL");
> > + goto cb_err;
> > + }
> > +
> > + if (list->qsbr == NULL) {
> > + CDEV_LOG_ERR("Rcu qsbr is NULL");
> > + goto cb_err;
> > + }
> > +
> > + prev_cb = &list->next;
> > + for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
> > + curr_cb = *prev_cb;
> > + if (curr_cb == cb) {
> > + /* Remove the user cb from the callback list. */
> > + __atomic_store_n(prev_cb, curr_cb->next,
> > + __ATOMIC_RELAXED);
> > + ret = 0;
> > + break;
> > + }
> > + }
> > +
> > + if (!ret) {
> > + /* Call sync with invalid thread id as this is part of
> > + * control plane API
> > + */
> > + rte_rcu_qsbr_synchronize(list->qsbr,
> > RTE_QSBR_THRID_INVALID);
> > + rte_free(cb);
> > + }
> > +
> > +cb_err:
> > + rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> > + return ret;
> > +}
> > +#endif
> >
> > int
> > rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats
> > *stats) diff --git a/lib/librte_cryptodev/rte_cryptodev.h
> > b/lib/librte_cryptodev/rte_cryptodev.h
> > index 0935fd5..1b7d7ef 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev.h
> > +++ b/lib/librte_cryptodev/rte_cryptodev.h
> > @@ -23,6 +23,7 @@
> > #include "rte_dev.h"
> > #include <rte_common.h>
> > #include <rte_config.h>
> > +#include <rte_rcu_qsbr.h>
> >
> > #include "rte_cryptodev_trace_fp.h"
> >
> > @@ -522,6 +523,34 @@ struct rte_cryptodev_qp_conf {
> > /**< The mempool for creating sess private data in sessionless mode
> > */ };
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +/**
> > + * Function type used for pre processing crypto ops when enqueue
> > +burst is
> > + * called.
> > + *
> > + * The callback function is called on enqueue burst immediately
> > + * before the crypto ops are put onto the hardware queue for processing.
> > + *
> > + * @param dev_id The identifier of the device.
> > + * @param qp_id The index of the queue pair in which ops are
> > + * to be enqueued for processing. The value
> > + * must be in the range [0, nb_queue_pairs - 1]
> > + * previously supplied to
> > + * *rte_cryptodev_configure*.
> > + * @param ops The address of an array of *nb_ops* pointers
> > + * to *rte_crypto_op* structures which contain
> > + * the crypto operations to be processed.
> > + * @param nb_ops The number of operations to process.
> > + * @param user_param The arbitrary user parameter passed in by the
> > + * application when the callback was originally
> > + * registered.
> > + * @return The number of ops to be enqueued to the
> > + * crypto device.
> > + */
> > +typedef uint16_t (*rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t
> qp_id,
> > + struct rte_crypto_op **ops, uint16_t nb_ops, void
> > *user_param);
> > +#endif
> > +
> > /**
> > * Typedef for application callback function to be registered by application
> > * software for notification of device events @@ -822,7 +851,6 @@
> > struct rte_cryptodev_config {
> > enum rte_cryptodev_event_type event,
> > rte_cryptodev_cb_fn cb_fn, void *cb_arg);
> >
> > -
> > typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
> > struct rte_crypto_op **ops, uint16_t nb_ops);
> > /**< Dequeue processed packets from queue pair of a device. */ @@
> > -839,6 +867,33 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
> > /** Structure to keep track of registered callbacks */
> > TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +/**
> > + * @internal
> > + * Structure used to hold information about the callbacks to be
> > +called for a
> > + * queue pair on enqueue.
> > + */
> > +struct rte_cryptodev_cb {
> > + struct rte_cryptodev_cb *next;
> > + /** < Pointer to next callback */
> > + rte_cryptodev_callback_fn fn;
> > + /** < Pointer to callback function */
> > + void *arg;
> > + /** < Pointer to argument */
> > +};
> > +
> > +/**
> > + * @internal
> > + * Structure used to hold information about the RCU for a queue pair.
> > + */
> > +struct rte_cryptodev_enq_cb_rcu {
> > + struct rte_cryptodev_cb *next;
> > + /** < Pointer to next callback */
> > + struct rte_rcu_qsbr *qsbr;
> > + /** < RCU QSBR variable per queue pair */ }; #endif
> > +
> > /** The data structure associated with each crypto device. */ struct
> > rte_cryptodev {
> > dequeue_pkt_burst_t dequeue_burst;
> > @@ -867,6 +922,10 @@ struct rte_cryptodev {
> > __extension__
> > uint8_t attached : 1;
> > /**< Flag indicating the device is attached */
> > +
> > + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> > + /**< User application callback for pre enqueue processing */
> > +
> Extra line
ok
>
> We should add support for post dequeue callbacks also. Since this is an LTS
> release and we won't be very flexible in future quarterly releases, we should do
> all the changes in one go.
This patch set is driven by a requirement. Recently, we got a requirement to have
a callback for dequeue as well. Looking at the code freeze date, I am not sure we can
target that as well. Let this patch go in and I will send a separate patch for
the dequeue callback.
> I believe we should also double check with techboard if this is an ABI breakage.
> IMO, it is an ABI breakage; rte_cryptodevs is part of the stable APIs, but I'm not sure.
>
> > } __rte_cache_aligned;
> >
> > void *
> > @@ -989,6 +1048,31 @@ struct rte_cryptodev_data { {
> > struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > + if (unlikely(dev->enq_cbs != NULL)) {
> > + struct rte_cryptodev_enq_cb_rcu *list;
> > + struct rte_cryptodev_cb *cb;
> > +
> > + /* __ATOMIC_RELEASE memory order was used when the
> > + * call back was inserted into the list.
> > + * Since there is a clear dependency between loading
> > + * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order
> > is
> > + * not required.
> > + */
> > + list = &dev->enq_cbs[qp_id];
> > + rte_rcu_qsbr_thread_online(list->qsbr, 0);
> > + cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
> > +
> > + while (cb != NULL) {
> > + nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
> > + cb->arg);
> > + cb = cb->next;
> > + };
> > +
> > + rte_rcu_qsbr_thread_offline(list->qsbr, 0);
> > + }
> > +#endif
> > +
> > rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops,
> > nb_ops);
> > return (*dev->enqueue_burst)(
> > dev->data->queue_pairs[qp_id], ops, nb_ops); @@ -
> 1730,6 +1814,78
> > @@ struct rte_crypto_raw_dp_ctx {
> > rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
> > uint32_t n);
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a user callback for a given crypto device and queue pair which
> > +will be
> > + * called on crypto ops enqueue.
> > + *
> > + * This API configures a function to be called for each burst of
> > +crypto ops
> > + * received on a given crypto device queue pair. The return value is
> > +a pointer
> > + * that can be used later to remove the callback using
> > + * rte_cryptodev_remove_enq_callback().
> > + *
> > + * Multiple functions are called in the order that they are added.
>
> Is there a limit for the number of cbs that can be added? Better to add a
> comment here.
>
> > + *
> > + * @param dev_id The identifier of the device.
> > + * @param qp_id The index of the queue pair in which ops are
> > + * to be enqueued for processing. The value
> > + * must be in the range [0, nb_queue_pairs - 1]
> > + * previously supplied to
> > + * *rte_cryptodev_configure*.
> > + * @param cb_fn The callback function
> > + * @param cb_arg A generic pointer parameter which will be
> > passed
> > + * to each invocation of the callback function on
> > + * this crypto device and queue pair.
> > + *
> > + * @return
> > + * NULL on error.
> > + * On success, a pointer value which can later be used to remove the
> callback.
> > + */
> > +
> > +__rte_experimental
> > +struct rte_cryptodev_cb *
> > +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> > + uint16_t qp_id,
> > + rte_cryptodev_callback_fn cb_fn,
> > + void *cb_arg);
> > +
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a user callback function for given crypto device and queue pair.
> > + *
> > + * This function is used to removed callbacks that were added to a
> > +crypto
> > + * device queue pair using rte_cryptodev_add_enq_callback().
> > + *
> > + *
> > + *
> > + * @param dev_id The identifier of the device.
> > + * @param qp_id The index of the queue pair in which ops are
> > + * to be enqueued for processing. The value
> > + * must be in the range [0, nb_queue_pairs - 1]
> > + * previously supplied to
> > + * *rte_cryptodev_configure*.
> > + * @param cb Pointer to user supplied callback created via
> > + * rte_cryptodev_add_enq_callback().
> > + *
> > + * @return
> > + * - 0: Success. Callback was removed.
> > + * - -EINVAL: The dev_id or the qp_id is out of range, or the callback
> > + * is NULL or not found for the crypto device queue pair.
> > + */
> > +
> > +__rte_experimental
> > +int rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> > + uint16_t qp_id,
> > + struct rte_cryptodev_cb *cb);
> > +
> > +#endif
> > +
> > #ifdef __cplusplus
> > }
> > #endif
> > diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map
> > b/lib/librte_cryptodev/rte_cryptodev_version.map
> > index 7e4360f..5d8d6b0 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> > +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> > @@ -101,6 +101,7 @@ EXPERIMENTAL {
> > rte_cryptodev_get_qp_status;
> >
> > # added in 20.11
> > + rte_cryptodev_add_enq_callback;
> > rte_cryptodev_configure_raw_dp_ctx;
> > rte_cryptodev_get_raw_dp_ctx_size;
> > rte_cryptodev_raw_dequeue;
> > @@ -109,4 +110,5 @@ EXPERIMENTAL {
> > rte_cryptodev_raw_enqueue;
> > rte_cryptodev_raw_enqueue_burst;
> > rte_cryptodev_raw_enqueue_done;
> > + rte_cryptodev_remove_enq_callback;
> > };
> > --
> > 1.9.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions
2020-10-27 18:19 4% ` Akhil Goyal
@ 2020-10-27 18:28 4% ` Akhil Goyal
2020-10-28 8:20 0% ` Gujjar, Abhinandan S
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2020-10-27 18:28 UTC (permalink / raw)
To: Abhinandan Gujjar, dev, declan.doherty, Honnappa.Nagarahalli,
konstantin.ananyev, techboard
Cc: narender.vangati, jerinj
Hi Tech board members,
I have a doubt about the ABI breakage in the below addition of a field.
Could you please comment.
> /** The data structure associated with each crypto device. */
> struct rte_cryptodev {
> dequeue_pkt_burst_t dequeue_burst;
> @@ -867,6 +922,10 @@ struct rte_cryptodev {
> __extension__
> uint8_t attached : 1;
> /**< Flag indicating the device is attached */
> +
> + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> + /**< User application callback for pre enqueue processing */
> +
> } __rte_cache_aligned;
Here rte_cryptodevs is defined in the stable API list in the map file; it is a pointer
to all rte_cryptodev instances, and the above change changes the size of the structure.
IMO, it seems to be an ABI breakage, but I am not sure, so I wanted to double check.
Now if it is an ABI breakage, then can we allow it? There was no deprecation notice
prior to this release.
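To illustrate the concern with a sketch (not from the patch): rte_cryptodevs is
exported, so an application built against the old layout computes element
offsets using the old struct size:

    /* old binary: offset of element i = i * sizeof(struct rte_cryptodev) */
    struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];

    /* appending enq_cbs grows sizeof(struct rte_cryptodev), so the same
     * indexing against a rebuilt library no longer lands on the same
     * addresses (illustrative only).
     */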
Also, I think if we are allowing the above change, then we should also add another
field for deq_cbs for post-crypto processing in this patch only.
Regards,
Akhil
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions
@ 2020-10-27 18:19 4% ` Akhil Goyal
2020-10-27 19:16 0% ` Gujjar, Abhinandan S
2020-10-27 18:28 4% ` Akhil Goyal
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2020-10-27 18:19 UTC (permalink / raw)
To: Abhinandan Gujjar, dev, declan.doherty, Honnappa.Nagarahalli,
konstantin.ananyev
Cc: narender.vangati, jerinj
Hi Abhinandan,
> Subject: [v4 1/3] cryptodev: support enqueue callback functions
>
> This patch adds APIs to add/remove callback functions. The callback
> function will be called for each burst of crypto ops received on a
> given crypto device queue pair.
>
> Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> ---
> config/rte_config.h | 1 +
> lib/librte_cryptodev/meson.build | 2 +-
> lib/librte_cryptodev/rte_cryptodev.c | 230 +++++++++++++++++++++++++
> lib/librte_cryptodev/rte_cryptodev.h | 158 ++++++++++++++++-
> lib/librte_cryptodev/rte_cryptodev_version.map | 2 +
> 5 files changed, 391 insertions(+), 2 deletions(-)
>
> diff --git a/config/rte_config.h b/config/rte_config.h
> index 03d90d7..e999d93 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -61,6 +61,7 @@
> /* cryptodev defines */
> #define RTE_CRYPTO_MAX_DEVS 64
> #define RTE_CRYPTODEV_NAME_LEN 64
> +#define RTE_CRYPTO_CALLBACKS 1
>
> /* compressdev defines */
> #define RTE_COMPRESS_MAX_DEVS 64
> diff --git a/lib/librte_cryptodev/meson.build b/lib/librte_cryptodev/meson.build
> index c4c6b3b..8c5493f 100644
> --- a/lib/librte_cryptodev/meson.build
> +++ b/lib/librte_cryptodev/meson.build
> @@ -9,4 +9,4 @@ headers = files('rte_cryptodev.h',
> 'rte_crypto.h',
> 'rte_crypto_sym.h',
> 'rte_crypto_asym.h')
> -deps += ['kvargs', 'mbuf']
> +deps += ['kvargs', 'mbuf', 'rcu']
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c
> b/lib/librte_cryptodev/rte_cryptodev.c
> index 3d95ac6..0880d9b 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -448,6 +448,91 @@ struct rte_cryptodev_sym_session_pool_private_data
> {
> return 0;
> }
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +/* spinlock for crypto device enq callbacks */
> +static rte_spinlock_t rte_cryptodev_callback_lock =
> RTE_SPINLOCK_INITIALIZER;
> +
> +static void
> +cryptodev_cb_cleanup(struct rte_cryptodev *dev)
> +{
> + struct rte_cryptodev_cb **prev_cb, *curr_cb;
> + struct rte_cryptodev_enq_cb_rcu *list;
> + uint16_t qp_id;
> +
> + if (dev->enq_cbs == NULL)
> + return;
> +
> + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> + list = &dev->enq_cbs[qp_id];
> + prev_cb = &list->next;
> +
> + while (*prev_cb != NULL) {
> + curr_cb = *prev_cb;
> + /* Remove the user cb from the callback list. */
> + __atomic_store_n(prev_cb, curr_cb->next,
> + __ATOMIC_RELAXED);
> + rte_rcu_qsbr_synchronize(list->qsbr,
> + RTE_QSBR_THRID_INVALID);
> + rte_free(curr_cb);
> + }
> +
> + rte_free(list->qsbr);
> + }
> +
> + rte_free(dev->enq_cbs);
> + dev->enq_cbs = NULL;
> +}
> +
> +static int
> +cryptodev_cb_init(struct rte_cryptodev *dev)
> +{
> + struct rte_cryptodev_enq_cb_rcu *list;
> + struct rte_rcu_qsbr *qsbr;
> + uint16_t qp_id;
> + size_t size;
> +
> + /* Max thread set to 1, as one DP thread accessing a queue-pair */
> + const uint32_t max_threads = 1;
> +
> + dev->enq_cbs = rte_zmalloc(NULL,
> + sizeof(struct rte_cryptodev_enq_cb_rcu) *
> + dev->data->nb_queue_pairs, 0);
> + if (dev->enq_cbs == NULL) {
> + CDEV_LOG_ERR("Failed to allocate memory for callbacks");
> + rte_errno = ENOMEM;
> + return -1;
> + }
Why not return ENOMEM here? You are not using rte_errno while returning
from this function, so setting it does not have any meaning.
> +
> + /* Create RCU QSBR variable */
> + size = rte_rcu_qsbr_get_memsize(max_threads);
> +
> + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
> + list = &dev->enq_cbs[qp_id];
> + qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
> + if (qsbr == NULL) {
> + CDEV_LOG_ERR("Failed to allocate memory for RCU on
> "
> + "queue_pair_id=%d", qp_id);
> + goto cb_init_err;
> + }
> +
> + if (rte_rcu_qsbr_init(qsbr, max_threads)) {
> + CDEV_LOG_ERR("Failed to initialize for RCU on "
> + "queue_pair_id=%d", qp_id);
> + goto cb_init_err;
> + }
> +
> + list->qsbr = qsbr;
> + }
> +
> + return 0;
> +
> +cb_init_err:
> + rte_errno = ENOMEM;
> + cryptodev_cb_cleanup(dev);
> + return -1;
Same here, return -ENOMEM
> +
Extra line
> +}
> +#endif
>
> const char *
> rte_cryptodev_get_feature_name(uint64_t flag)
> @@ -927,6 +1012,11 @@ struct rte_cryptodev *
>
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -
> ENOTSUP);
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> + rte_spinlock_lock(&rte_cryptodev_callback_lock);
> + cryptodev_cb_cleanup(dev);
> + rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> +#endif
> /* Setup new number of queue pairs and reconfigure device. */
> diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
> config->socket_id);
> @@ -936,6 +1026,15 @@ struct rte_cryptodev *
> return diag;
> }
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> + rte_spinlock_lock(&rte_cryptodev_callback_lock);
> + diag = cryptodev_cb_init(dev);
> + rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> + if (diag) {
> + CDEV_LOG_ERR("Callback init failed for dev_id=%d", dev_id);
> + return -ENOMEM;
> + }
> +#endif
> rte_cryptodev_trace_configure(dev_id, config);
> return (*dev->dev_ops->dev_configure)(dev, config);
> }
> @@ -1136,6 +1235,137 @@ struct rte_cryptodev *
> socket_id);
> }
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +struct rte_cryptodev_cb *
> +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> + uint16_t qp_id,
> + rte_cryptodev_callback_fn cb_fn,
> + void *cb_arg)
> +{
> + struct rte_cryptodev *dev;
> + struct rte_cryptodev_enq_cb_rcu *list;
> + struct rte_cryptodev_cb *cb, *tail;
> +
> + if (!cb_fn)
> + return NULL;
> +
> + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> + return NULL;
> + }
> +
> + dev = &rte_crypto_devices[dev_id];
> + if (qp_id >= dev->data->nb_queue_pairs) {
> + CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> + return NULL;
> + }
rte_errno is not set before the above three returns.
> +
> + cb = rte_zmalloc(NULL, sizeof(*cb), 0);
> + if (cb == NULL) {
> + CDEV_LOG_ERR("Failed to allocate memory for callback on "
> + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + rte_spinlock_lock(&rte_cryptodev_callback_lock);
> +
> + cb->fn = cb_fn;
> + cb->arg = cb_arg;
> +
> + /* Add the callbacks in fifo order. */
> + list = &dev->enq_cbs[qp_id];
> + tail = list->next;
> +
> + if (tail) {
> + while (tail->next)
> + tail = tail->next;
> + /* Stores to cb->fn and cb->param should complete before
> + * cb is visible to data plane.
> + */
> + __atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
> + } else {
> + /* Stores to cb->fn and cb->param should complete before
> + * cb is visible to data plane.
> + */
> + __atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
> + }
> +
> + rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> +
> + return cb;
> +}
> +
> +int
> +rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> + uint16_t qp_id,
> + struct rte_cryptodev_cb *cb)
> +{
> + struct rte_cryptodev *dev;
> + struct rte_cryptodev_cb **prev_cb, *curr_cb;
> + struct rte_cryptodev_enq_cb_rcu *list;
> + int ret;
> +
> + ret = -EINVAL;
No need to set EINVAL here and then return the same value everywhere;
the error numbers can be different at each exit.
> +
> + if (!cb) {
> + CDEV_LOG_ERR("cb is NULL");
> + return ret;
You should directly return -EINVAL here and in the cases below as well.
> + }
> +
> + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> + return ret;
The return value here should be -ENODEV.
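Taken together, a sketch of the suggested entry checks with distinct
return codes (per the comments above):

	if (cb == NULL) {
		CDEV_LOG_ERR("cb is NULL");
		return -EINVAL;
	}

	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
		return -ENODEV;
	}

	if (qp_id >= dev->data->nb_queue_pairs) {
		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
		return -EINVAL;
	}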
> + }
> +
> + dev = &rte_crypto_devices[dev_id];
> + if (qp_id >= dev->data->nb_queue_pairs) {
> + CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> + return ret;
> + }
> +
> + rte_spinlock_lock(&rte_cryptodev_callback_lock);
> + if (dev->enq_cbs == NULL) {
> + CDEV_LOG_ERR("Callback not initialized");
> + goto cb_err;
> + }
> +
> + list = &dev->enq_cbs[qp_id];
> + if (list == NULL) {
> + CDEV_LOG_ERR("Callback list is NULL");
> + goto cb_err;
> + }
> +
> + if (list->qsbr == NULL) {
> + CDEV_LOG_ERR("Rcu qsbr is NULL");
> + goto cb_err;
> + }
> +
> + prev_cb = &list->next;
> + for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
> + curr_cb = *prev_cb;
> + if (curr_cb == cb) {
> + /* Remove the user cb from the callback list. */
> + __atomic_store_n(prev_cb, curr_cb->next,
> + __ATOMIC_RELAXED);
> + ret = 0;
> + break;
> + }
> + }
> +
> + if (!ret) {
> + /* Call sync with invalid thread id as this is part of
> + * control plane API
> + */
> + rte_rcu_qsbr_synchronize(list->qsbr,
> + RTE_QSBR_THRID_INVALID);
> + rte_free(cb);
> + }
> +
> +cb_err:
> + rte_spinlock_unlock(&rte_cryptodev_callback_lock);
> + return ret;
> +}
> +#endif
>
> int
> rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h
> b/lib/librte_cryptodev/rte_cryptodev.h
> index 0935fd5..1b7d7ef 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -23,6 +23,7 @@
> #include "rte_dev.h"
> #include <rte_common.h>
> #include <rte_config.h>
> +#include <rte_rcu_qsbr.h>
>
> #include "rte_cryptodev_trace_fp.h"
>
> @@ -522,6 +523,34 @@ struct rte_cryptodev_qp_conf {
> /**< The mempool for creating sess private data in sessionless mode */
> };
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +/**
> + * Function type used for pre processing crypto ops when enqueue burst is
> + * called.
> + *
> + * The callback function is called on enqueue burst immediately
> + * before the crypto ops are put onto the hardware queue for processing.
> + *
> + * @param dev_id The identifier of the device.
> + * @param qp_id The index of the queue pair in which ops are
> + * to be enqueued for processing. The value
> + * must be in the range [0, nb_queue_pairs - 1]
> + * previously supplied to
> + * *rte_cryptodev_configure*.
> + * @param ops The address of an array of *nb_ops* pointers
> + * to *rte_crypto_op* structures which contain
> + * the crypto operations to be processed.
> + * @param nb_ops The number of operations to process.
> + * @param user_param The arbitrary user parameter passed in by the
> + * application when the callback was originally
> + * registered.
> + * @return The number of ops to be enqueued to the
> + * crypto device.
> + */
> +typedef uint16_t (*rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t qp_id,
> + struct rte_crypto_op **ops, uint16_t nb_ops, void *user_param);
> +#endif
> +
> /**
> * Typedef for application callback function to be registered by application
> * software for notification of device events
> @@ -822,7 +851,6 @@ struct rte_cryptodev_config {
> enum rte_cryptodev_event_type event,
> rte_cryptodev_cb_fn cb_fn, void *cb_arg);
>
> -
> typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
> struct rte_crypto_op **ops, uint16_t nb_ops);
> /**< Dequeue processed packets from queue pair of a device. */
> @@ -839,6 +867,33 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
> /** Structure to keep track of registered callbacks */
> TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +/**
> + * @internal
> + * Structure used to hold information about the callbacks to be called for a
> + * queue pair on enqueue.
> + */
> +struct rte_cryptodev_cb {
> + struct rte_cryptodev_cb *next;
> + /** < Pointer to next callback */
> + rte_cryptodev_callback_fn fn;
> + /** < Pointer to callback function */
> + void *arg;
> + /** < Pointer to argument */
> +};
> +
> +/**
> + * @internal
> + * Structure used to hold information about the RCU for a queue pair.
> + */
> +struct rte_cryptodev_enq_cb_rcu {
> + struct rte_cryptodev_cb *next;
> + /** < Pointer to next callback */
> + struct rte_rcu_qsbr *qsbr;
> + /** < RCU QSBR variable per queue pair */
> +};
> +#endif
> +
> /** The data structure associated with each crypto device. */
> struct rte_cryptodev {
> dequeue_pkt_burst_t dequeue_burst;
> @@ -867,6 +922,10 @@ struct rte_cryptodev {
> __extension__
> uint8_t attached : 1;
> /**< Flag indicating the device is attached */
> +
> + struct rte_cryptodev_enq_cb_rcu *enq_cbs;
> + /**< User application callback for pre enqueue processing */
> +
Extra line
We should add support for post-dequeue callbacks also. Since this is an LTS release
and we won't be very flexible in future quarterly releases, we should do all the
changes in one go.
I believe we should also double-check with the techboard whether this is an ABI breakage.
IMO it is an ABI breakage, since rte_cryptodevs is part of the stable API, but I am not sure.
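To make the dequeue-side suggestion concrete, a registration prototype
could simply mirror the enqueue API; the name and signature below are
hypothetical, not part of this patch:

/* Hypothetical mirror of rte_cryptodev_add_enq_callback(); name and
 * signature are assumptions, for illustration only. */
__rte_experimental
struct rte_cryptodev_cb *
rte_cryptodev_add_deq_callback(uint8_t dev_id, uint16_t qp_id,
		rte_cryptodev_callback_fn cb_fn, void *cb_arg);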
> } __rte_cache_aligned;
>
> void *
> @@ -989,6 +1048,31 @@ struct rte_cryptodev_data {
> {
> struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> + if (unlikely(dev->enq_cbs != NULL)) {
> + struct rte_cryptodev_enq_cb_rcu *list;
> + struct rte_cryptodev_cb *cb;
> +
> + /* __ATOMIC_RELEASE memory order was used when the
> + * call back was inserted into the list.
> + * Since there is a clear dependency between loading
> + * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
> + * not required.
> + */
> + list = &dev->enq_cbs[qp_id];
> + rte_rcu_qsbr_thread_online(list->qsbr, 0);
> + cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
> +
> + while (cb != NULL) {
> + nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
> + cb->arg);
> + cb = cb->next;
> + };
> +
> + rte_rcu_qsbr_thread_offline(list->qsbr, 0);
> + }
> +#endif
> +
> rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops,
> nb_ops);
> return (*dev->enqueue_burst)(
> dev->data->queue_pairs[qp_id], ops, nb_ops);
> @@ -1730,6 +1814,78 @@ struct rte_crypto_raw_dp_ctx {
> rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
> uint32_t n);
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a user callback for a given crypto device and queue pair which will be
> + * called on crypto ops enqueue.
> + *
> + * This API configures a function to be called for each burst of crypto ops
> + * received on a given crypto device queue pair. The return value is a pointer
> + * that can be used later to remove the callback using
> + * rte_cryptodev_remove_enq_callback().
> + *
> + * Multiple functions are called in the order that they are added.
Is there a limit on the number of callbacks that can be added? Better to add a
comment here.
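For instance, a hedged wording for such a note (assuming the list is
unbounded by design, as the linked-list implementation suggests):

 * Multiple functions are called in the order that they are added.
 * There is no hard limit on the number of callbacks that can be
 * registered; each registration allocates a new list entry.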
> + *
> + * @param dev_id The identifier of the device.
> + * @param qp_id The index of the queue pair in which ops are
> + * to be enqueued for processing. The value
> + * must be in the range [0, nb_queue_pairs - 1]
> + * previously supplied to
> + * *rte_cryptodev_configure*.
> + * @param cb_fn The callback function
> + * @param cb_arg A generic pointer parameter which will be passed
> + * to each invocation of the callback function on
> + * this crypto device and queue pair.
> + *
> + * @return
> + * NULL on error.
> + * On success, a pointer value which can later be used to remove the callback.
> + */
> +
> +__rte_experimental
> +struct rte_cryptodev_cb *
> +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> + uint16_t qp_id,
> + rte_cryptodev_callback_fn cb_fn,
> + void *cb_arg);
> +
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a user callback function for given crypto device and queue pair.
> + *
> + * This function is used to removed callbacks that were added to a crypto
> + * device queue pair using rte_cryptodev_add_enq_callback().
> + *
> + *
> + *
> + * @param dev_id The identifier of the device.
> + * @param qp_id The index of the queue pair in which ops are
> + * to be enqueued for processing. The value
> + * must be in the range [0, nb_queue_pairs - 1]
> + * previously supplied to
> + * *rte_cryptodev_configure*.
> + * @param cb Pointer to user supplied callback created via
> + * rte_cryptodev_add_enq_callback().
> + *
> + * @return
> + * - 0: Success. Callback was removed.
> + * - -EINVAL: The dev_id or the qp_id is out of range, or the callback
> + * is NULL or not found for the crypto device queue pair.
> + */
> +
> +__rte_experimental
> +int rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> + uint16_t qp_id,
> + struct rte_cryptodev_cb *cb);
> +
> +#endif
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map
> b/lib/librte_cryptodev/rte_cryptodev_version.map
> index 7e4360f..5d8d6b0 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> @@ -101,6 +101,7 @@ EXPERIMENTAL {
> rte_cryptodev_get_qp_status;
>
> # added in 20.11
> + rte_cryptodev_add_enq_callback;
> rte_cryptodev_configure_raw_dp_ctx;
> rte_cryptodev_get_raw_dp_ctx_size;
> rte_cryptodev_raw_dequeue;
> @@ -109,4 +110,5 @@ EXPERIMENTAL {
> rte_cryptodev_raw_enqueue;
> rte_cryptodev_raw_enqueue_burst;
> rte_cryptodev_raw_enqueue_done;
> + rte_cryptodev_remove_enq_callback;
> };
> --
> 1.9.1
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate ABI version
2020-10-19 9:41 9% ` [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate " David Marchand
@ 2020-10-27 12:13 4% ` David Marchand
2020-11-01 14:48 4% ` Thomas Monjalon
2020-11-01 15:09 4% ` Raslan Darawsheh
2 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-10-27 12:13 UTC (permalink / raw)
To: Thomas Monjalon, Matan Azrad, Shahaf Shuler, Raslan Darawsheh
Cc: Viacheslav Ovsiienko, dev
On Mon, Oct 19, 2020 at 11:42 AM David Marchand
<david.marchand@redhat.com> wrote:
>
> The glue libraries are tightly bound to the mlx drivers of a dpdk version
> and are packaged with them.
>
> Keeping a separate ABI version prevents us from installing two versions of
> dpdk.
> Maintaining this separate version just adds confusion.
> Align the glue library ABI version to the global ABI version.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
Review?
--
David Marchand
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3] doc: update abi version references
2020-10-26 19:31 33% ` [dpdk-dev] [PATCH v3] " Ray Kinsella
@ 2020-10-27 11:40 4% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-10-27 11:40 UTC (permalink / raw)
To: Ray Kinsella; +Cc: Neil Horman, Thomas Monjalon, Mcnamara, John, dev
On Mon, Oct 26, 2020 at 8:35 PM Ray Kinsella <mdr@ashroe.eu> wrote:
>
> Updated references to abi versions in the contributors guide.
> Fixed an inaccurate reference to a symbol in the policy.
>
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> Reviewed-by: David Marchand <david.marchand@redhat.com>
Applied, thanks Ray.
--
David Marchand
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3] doc: update abi version references
2020-10-23 16:07 33% [dpdk-dev] [PATCH v1] doc: update abi version references Ray Kinsella
2020-10-23 16:51 7% ` David Marchand
2020-10-26 19:23 33% ` [dpdk-dev] [PATCH] " Ray Kinsella
@ 2020-10-26 19:31 33% ` Ray Kinsella
2020-10-27 11:40 4% ` David Marchand
2 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2020-10-26 19:31 UTC (permalink / raw)
To: Ray Kinsella, Neil Horman; +Cc: thomas, david.marchand, john.mcnamara, dev
Updated references to abi versions in the contributors guide.
Fixed an inaccurate reference to a symbol in the policy.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/contributing/abi_policy.rst | 56 +++++-----
doc/guides/contributing/abi_versioning.rst | 120 ++++++++++-----------
2 files changed, 88 insertions(+), 88 deletions(-)
---
* v2 -> v3: Missed the version prepend on the v2.
* v1 -> v2: Fixed references to 19.11, and a typo in the policy
diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
index e17758a107..4ad87dbfed 100644
--- a/doc/guides/contributing/abi_policy.rst
+++ b/doc/guides/contributing/abi_policy.rst
@@ -78,15 +78,15 @@ The DPDK ABI policy
-------------------
A new major ABI version is declared no more frequently than yearly, with
-declarations usually aligning with a LTS release, e.g. ABI 20 for DPDK 19.11.
+declarations usually aligning with a LTS release, e.g. ABI 21 for DPDK 20.11.
Compatibility with the major ABI version is then mandatory in subsequent
-releases until the next major ABI version is declared, e.g. ABI 21 for DPDK
-20.11.
+releases until the next major ABI version is declared, e.g. ABI 22 for DPDK
+21.11.
At the declaration of a major ABI version, major version numbers encoded in
libraries' sonames are bumped to indicate the new version, with the minor
-version reset to ``0``. An example would be ``librte_eal.so.20.3`` would become
-``librte_eal.so.21.0``.
+version reset to ``0``. An example would be ``librte_eal.so.21.3`` would become
+``librte_eal.so.22.0``.
The ABI may then change multiple times, without warning, between the last major
ABI version increment and the HEAD label of the git tree, with the condition
@@ -95,8 +95,8 @@ sonames do not change.
Minor versions are incremented to indicate the release of a new ABI compatible
DPDK release, typically the DPDK quarterly releases. An example of this, might
-be that ``librte_eal.so.20.1`` would indicate the first ABI compatible DPDK
-release, following the declaration of the new major ABI version ``20``.
+be that ``librte_eal.so.21.1`` would indicate the first ABI compatible DPDK
+release, following the declaration of the new major ABI version ``21``.
An ABI version is supported in all new releases until the next major ABI version
is declared. When changing the major ABI version, the release notes will detail
@@ -222,11 +222,11 @@ Examples of ABI Changes
The following are examples of allowable ABI changes occurring between
declarations of major ABI versions.
-* DPDK 19.11 release defines the function ``rte_foo()`` ; ``rte_foo()``
- is part of the major ABI version ``20``.
+* DPDK 20.11 release defines the function ``rte_foo()`` ; ``rte_foo()``
+ is part of the major ABI version ``21``.
-* DPDK 20.02 release defines a new function ``rte_foo(uint8_t bar)``.
- This is not a problem as long as the symbol ``rte_foo@DPDK20`` is
+* DPDK 21.02 release defines a new function ``rte_foo(uint8_t bar)``.
+ This is not a problem as long as the symbol ``rte_foo@DPDK_21`` is
preserved through :ref:`abi_versioning`.
- The new function may be marked with the ``__rte_experimental`` tag for a
@@ -235,21 +235,21 @@ declarations of major ABI versions.
- Once ``rte_foo(uint8_t bar)`` becomes non-experimental, ``rte_foo()`` is
declared as ``__rte_deprecated`` and an deprecation notice is provided.
-* DPDK 19.11 is not re-released to include ``rte_foo(uint8_t bar)``, the new
- version of ``rte_foo`` only exists from DPDK 20.02 onwards as described in the
+* DPDK 20.11 is not re-released to include ``rte_foo(uint8_t bar)``, the new
+ version of ``rte_foo`` only exists from DPDK 21.02 onwards as described in the
:ref:`note on forward-only compatibility<forward-only>`.
-* DPDK 20.02 release defines the experimental function ``__rte_experimental
- rte_baz()``. This function may or may not exist in the DPDK 20.05 release.
+* DPDK 21.02 release defines the experimental function ``__rte_experimental
+ rte_baz()``. This function may or may not exist in the DPDK 21.05 release.
* An application ``dPacket`` wishes to use ``rte_foo(uint8_t bar)``, before the
- declaration of the DPDK ``21`` major ABI version. The application can only
- ensure its runtime dependencies are met by specifying ``DPDK (>= 20.2)`` as
+ declaration of the DPDK ``22`` major ABI version. The application can only
+ ensure its runtime dependencies are met by specifying ``DPDK (>= 21.2)`` as
an explicit package dependency, as the soname can only indicate the
supported major ABI version.
-* At the release of DPDK 20.11, the function ``rte_foo(uint8_t bar)`` becomes
- formally part of then new major ABI version DPDK ``21`` and ``rte_foo()`` may be
+* At the release of DPDK 21.11, the function ``rte_foo(uint8_t bar)`` becomes
+ formally part of then new major ABI version DPDK ``22`` and ``rte_foo()`` may be
removed.
.. _deprecation_notices:
@@ -261,25 +261,25 @@ The following are some examples of ABI deprecation notices which would be
added to the Release Notes:
* The Macro ``#RTE_FOO`` is deprecated and will be removed with ABI version
- 21, to be replaced with the inline function ``rte_foo()``.
+ 22, to be replaced with the inline function ``rte_foo()``.
* The function ``rte_mbuf_grok()`` has been updated to include a new parameter
- in version 20.2. Backwards compatibility will be maintained for this function
- until the release of the new DPDK major ABI version 21, in DPDK version
- 20.11.
+ in version 21.2. Backwards compatibility will be maintained for this function
+ until the release of the new DPDK major ABI version 22, in DPDK version
+ 21.11.
-* The members of ``struct rte_foo`` have been reorganized in DPDK 20.02 for
+* The members of ``struct rte_foo`` have been reorganized in DPDK 21.02 for
performance reasons. Existing binary applications will have backwards
- compatibility in release 20.02, while newly built binaries will need to
+ compatibility in release 21.02, while newly built binaries will need to
reference the new structure variant ``struct rte_foo2``. Compatibility will be
- removed in release 20.11, and all applications will require updating and
+ removed in release 21.11, and all applications will require updating and
rebuilding to the new structure at that time, which will be renamed to the
original ``struct rte_foo``.
* Significant ABI changes are planned for the ``librte_dostuff`` library. The
- upcoming release 20.02 will not contain these changes, but release 20.11 will,
+ upcoming release 21.02 will not contain these changes, but release 21.11 will,
and no backwards compatibility is planned due to the extensive nature of
- these changes. Binaries using this library built prior to ABI version 21 will
+ these changes. Binaries using this library built prior to ABI version 22 will
require updating and recompilation.
diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index b8b35761e2..91ada18dd7 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -14,22 +14,22 @@ What is a library's soname?
---------------------------
System libraries usually adopt the familiar major and minor version naming
-convention, where major versions (e.g. ``librte_eal 20.x, 21.x``) are presumed
+convention, where major versions (e.g. ``librte_eal 21.x, 22.x``) are presumed
to be ABI incompatible with each other and minor versions (e.g. ``librte_eal
-20.1, 20.2``) are presumed to be ABI compatible. A library's `soname
+21.1, 21.2``) are presumed to be ABI compatible. A library's `soname
<https://en.wikipedia.org/wiki/Soname>`_. is typically used to provide backward
compatibility information about a given library, describing the lowest common
denominator ABI supported by the library. The soname or logical name for the
library, is typically comprised of the library's name and major version e.g.
-``librte_eal.so.20``.
+``librte_eal.so.21``.
During an application's build process, a library's soname is noted as a runtime
dependency of the application. This information is then used by the `dynamic
linker <https://en.wikipedia.org/wiki/Dynamic_linker>`_ when resolving the
applications dependencies at runtime, to load a library supporting the correct
ABI version. The library loaded at runtime therefore, may be a minor revision
-supporting the same major ABI version (e.g. ``librte_eal.20.2``), as the library
-used to link the application (e.g ``librte_eal.20.0``).
+supporting the same major ABI version (e.g. ``librte_eal.21.2``), as the library
+used to link the application (e.g ``librte_eal.21.0``).
.. _major_abi_versions:
@@ -59,41 +59,41 @@ persists over multiple releases.
.. code-block:: none
$ head ./lib/librte_acl/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
$ head ./lib/librte_eal/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
When an ABI change is made between major ABI versions to a given library, a new
section is added to that library's version map describing the impending new ABI
version, as described in the section :ref:`example_abi_macro_usage`. The
-library's soname and filename however do not change, e.g. ``libacl.so.20``, as
+library's soname and filename however do not change, e.g. ``libacl.so.21``, as
ABI compatibility with the last major ABI version continues to be preserved for
that library.
.. code-block:: none
$ head ./lib/librte_acl/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
- DPDK_21 {
+ DPDK_22 {
global:
- } DPDK_20;
+ } DPDK_21;
...
$ head ./lib/librte_eal/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
-However when a new ABI version is declared, for example DPDK ``21``, old
+However when a new ABI version is declared, for example DPDK ``22``, old
depreciated functions may be safely removed at this point and the entire old
major ABI version removed, see the section :ref:`deprecating_entire_abi` on
how this may be done.
@@ -101,12 +101,12 @@ how this may be done.
.. code-block:: none
$ head ./lib/librte_acl/version.map
- DPDK_21 {
+ DPDK_22 {
global:
...
$ head ./lib/librte_eal/version.map
- DPDK_21 {
+ DPDK_22 {
global:
...
@@ -216,7 +216,7 @@ library looks like this
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -242,7 +242,7 @@ This file needs to be modified as follows
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -264,15 +264,15 @@ This file needs to be modified as follows
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
The addition of the new block tells the linker that a new version node
-``DPDK_21`` is available, which contains the symbol rte_acl_create, and inherits
-the symbols from the DPDK_20 node. This list is directly translated into a
+``DPDK_22`` is available, which contains the symbol rte_acl_create, and inherits
+the symbols from the DPDK_21 node. This list is directly translated into a
list of exported symbols when DPDK is compiled as a shared library.
Next, we need to specify in the code which function maps to the rte_acl_create
@@ -285,7 +285,7 @@ with the public symbol name
-struct rte_acl_ctx *
-rte_acl_create(const struct rte_acl_param *param)
+struct rte_acl_ctx * __vsym
- +rte_acl_create_v20(const struct rte_acl_param *param)
+ +rte_acl_create_v21(const struct rte_acl_param *param)
{
size_t sz;
struct rte_acl_ctx *ctx;
@@ -294,7 +294,7 @@ with the public symbol name
Note that the base name of the symbol was kept intact, as this is conducive to
the macros used for versioning symbols and we have annotated the function as
``__vsym``, an implementation of a versioned symbol . That is our next step,
-mapping this new symbol name to the initial symbol name at version node 20.
+mapping this new symbol name to the initial symbol name at version node 21.
Immediately after the function, we add the VERSION_SYMBOL macro.
.. code-block:: c
@@ -302,26 +302,26 @@ Immediately after the function, we add the VERSION_SYMBOL macro.
#include <rte_function_versioning.h>
...
- VERSION_SYMBOL(rte_acl_create, _v20, 20);
+ VERSION_SYMBOL(rte_acl_create, _v21, 21);
Remembering to also add the rte_function_versioning.h header to the requisite c
file where these changes are being made. The macro instructs the linker to
-create a new symbol ``rte_acl_create@DPDK_20``, which matches the symbol created
+create a new symbol ``rte_acl_create@DPDK_21``, which matches the symbol created
in older builds, but now points to the above newly named function. We have now
mapped the original rte_acl_create symbol to the original function (but with a
new name).
Please see the section :ref:`Enabling versioning macros
<enabling_versioning_macros>` to enable this macro in the meson/ninja build.
-Next, we need to create the new ``v21`` version of the symbol. We create a new
-function name, with the ``v21`` suffix, and implement it appropriately.
+Next, we need to create the new ``v22`` version of the symbol. We create a new
+function name, with the ``v22`` suffix, and implement it appropriately.
.. code-block:: c
struct rte_acl_ctx * __vsym
- rte_acl_create_v21(const struct rte_acl_param *param, int debug);
+ rte_acl_create_v22(const struct rte_acl_param *param, int debug);
{
- struct rte_acl_ctx *ctx = rte_acl_create_v20(param);
+ struct rte_acl_ctx *ctx = rte_acl_create_v21(param);
ctx->debug = debug;
@@ -330,7 +330,7 @@ function name, with the ``v21`` suffix, and implement it appropriately.
This code serves as our new API call. Its the same as our old call, but adds the
new parameter in place. Next we need to map this function to the new default
-symbol ``rte_acl_create@DPDK_21``. To do this, immediately after the function,
+symbol ``rte_acl_create@DPDK_22``. To do this, immediately after the function,
we add the BIND_DEFAULT_SYMBOL macro.
.. code-block:: c
@@ -338,10 +338,10 @@ we add the BIND_DEFAULT_SYMBOL macro.
#include <rte_function_versioning.h>
...
- BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ BIND_DEFAULT_SYMBOL(rte_acl_create, _v22, 22);
The macro instructs the linker to create the new default symbol
-``rte_acl_create@DPDK_21``, which points to the above newly named function.
+``rte_acl_create@DPDK_22``, which points to the above newly named function.
We finally modify the prototype of the call in the public header file,
such that it contains both versions of the symbol and the public API.
@@ -352,15 +352,15 @@ such that it contains both versions of the symbol and the public API.
rte_acl_create(const struct rte_acl_param *param);
struct rte_acl_ctx * __vsym
- rte_acl_create_v20(const struct rte_acl_param *param);
+ rte_acl_create_v21(const struct rte_acl_param *param);
struct rte_acl_ctx * __vsym
- rte_acl_create_v21(const struct rte_acl_param *param, int debug);
+ rte_acl_create_v22(const struct rte_acl_param *param, int debug);
And that's it, on the next shared library rebuild, there will be two versions of
-rte_acl_create, an old DPDK_20 version, used by previously built applications,
-and a new DPDK_21 version, used by future built applications.
+rte_acl_create, an old DPDK_21 version, used by previously built applications,
+and a new DPDK_22 version, used by future built applications.
.. note::
@@ -385,21 +385,21 @@ this code in a position of no longer having a symbol simply named
To correct this, we can simply map a function of our choosing back to the public
symbol in the static build with the ``MAP_STATIC_SYMBOL`` macro. Generally the
assumption is that the most recent version of the symbol is the one you want to
-map. So, back in the C file where, immediately after ``rte_acl_create_v21`` is
+map. So, back in the C file where, immediately after ``rte_acl_create_v22`` is
defined, we add this
.. code-block:: c
struct rte_acl_ctx * __vsym
- rte_acl_create_v21(const struct rte_acl_param *param, int debug)
+ rte_acl_create_v22(const struct rte_acl_param *param, int debug)
{
...
}
- MAP_STATIC_SYMBOL(struct rte_acl_ctx *rte_acl_create(const struct rte_acl_param *param, int debug), rte_acl_create_v21);
+ MAP_STATIC_SYMBOL(struct rte_acl_ctx *rte_acl_create(const struct rte_acl_param *param, int debug), rte_acl_create_v22);
That tells the compiler that, when building a static library, any calls to the
-symbol ``rte_acl_create`` should be linked to ``rte_acl_create_v21``
+symbol ``rte_acl_create`` should be linked to ``rte_acl_create_v22``
.. _enabling_versioning_macros:
@@ -456,7 +456,7 @@ version node.
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
...
@@ -486,22 +486,22 @@ When we promote the symbol to the stable ABI, we simply strip the
}
We then update the map file, adding the symbol ``rte_acl_create``
-to the ``DPDK_21`` version node.
+to the ``DPDK_22`` version node.
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
...
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
Although there are strictly no guarantees or commitments associated with
@@ -509,7 +509,7 @@ Although there are strictly no guarantees or commitments associated with
an alias to experimental. The process to add an alias to experimental,
is similar to the symbol versioning process. Assuming we have an experimental
symbol as before, we now add the symbol to both the ``EXPERIMENTAL``
-and ``DPDK_21`` version nodes.
+and ``DPDK_22`` version nodes.
.. code-block:: c
@@ -535,29 +535,29 @@ and ``DPDK_21`` version nodes.
VERSION_SYMBOL_EXPERIMENTAL(rte_acl_create, _e);
struct rte_acl_ctx *
- rte_acl_create_v21(const struct rte_acl_param *param)
+ rte_acl_create_v22(const struct rte_acl_param *param)
{
return rte_acl_create(param);
}
- BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ BIND_DEFAULT_SYMBOL(rte_acl_create, _v22, 22);
In the map file, we map the symbol to both the ``EXPERIMENTAL``
-and ``DPDK_21`` version nodes.
+and ``DPDK_22`` version nodes.
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
...
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
EXPERIMENTAL {
global:
@@ -585,7 +585,7 @@ file:
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -607,21 +607,21 @@ file:
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
Next remove the corresponding versioned export.
.. code-block:: c
- -VERSION_SYMBOL(rte_acl_create, _v20, 20);
+ -VERSION_SYMBOL(rte_acl_create, _v21, 21);
Note that the internal function definition could also be removed, but its used
-in our example by the newer version ``v21``, so we leave it in place and declare
+in our example by the newer version ``v22``, so we leave it in place and declare
it as static. This is a coding style choice.
.. _deprecating_entire_abi:
@@ -642,7 +642,7 @@ In the case of our map above, it would transform to look as follows
.. code-block:: none
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_add_rules;
@@ -670,8 +670,8 @@ symbols.
.. code-block:: c
- -BIND_DEFAULT_SYMBOL(rte_acl_create, _v20, 20);
- +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ -BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ +BIND_DEFAULT_SYMBOL(rte_acl_create, _v22, 22);
Lastly, any VERSION_SYMBOL macros that point to the old version nodes
should be removed, taking care to preserve any code that is shared
--
2.23.0
^ permalink raw reply [relevance 33%]
* [dpdk-dev] [PATCH] doc: update abi version references
2020-10-23 16:07 33% [dpdk-dev] [PATCH v1] doc: update abi version references Ray Kinsella
2020-10-23 16:51 7% ` David Marchand
@ 2020-10-26 19:23 33% ` Ray Kinsella
2020-10-26 19:31 33% ` [dpdk-dev] [PATCH v3] " Ray Kinsella
2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-10-26 19:23 UTC (permalink / raw)
To: Ray Kinsella, Neil Horman; +Cc: thomas, david.marchand, john.mcnamara, dev
Updated references to abi versions in the contributors guide.
Fixed an inaccurate reference to a symbol in the policy.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/contributing/abi_policy.rst | 56 +++++-----
doc/guides/contributing/abi_versioning.rst | 120 ++++++++++-----------
2 files changed, 88 insertions(+), 88 deletions(-)
diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
index e17758a107..4ad87dbfed 100644
--- a/doc/guides/contributing/abi_policy.rst
+++ b/doc/guides/contributing/abi_policy.rst
@@ -78,15 +78,15 @@ The DPDK ABI policy
-------------------
A new major ABI version is declared no more frequently than yearly, with
-declarations usually aligning with a LTS release, e.g. ABI 20 for DPDK 19.11.
+declarations usually aligning with a LTS release, e.g. ABI 21 for DPDK 20.11.
Compatibility with the major ABI version is then mandatory in subsequent
-releases until the next major ABI version is declared, e.g. ABI 21 for DPDK
-20.11.
+releases until the next major ABI version is declared, e.g. ABI 22 for DPDK
+21.11.
At the declaration of a major ABI version, major version numbers encoded in
libraries' sonames are bumped to indicate the new version, with the minor
-version reset to ``0``. An example would be ``librte_eal.so.20.3`` would become
-``librte_eal.so.21.0``.
+version reset to ``0``. An example would be ``librte_eal.so.21.3`` would become
+``librte_eal.so.22.0``.
The ABI may then change multiple times, without warning, between the last major
ABI version increment and the HEAD label of the git tree, with the condition
@@ -95,8 +95,8 @@ sonames do not change.
Minor versions are incremented to indicate the release of a new ABI compatible
DPDK release, typically the DPDK quarterly releases. An example of this, might
-be that ``librte_eal.so.20.1`` would indicate the first ABI compatible DPDK
-release, following the declaration of the new major ABI version ``20``.
+be that ``librte_eal.so.21.1`` would indicate the first ABI compatible DPDK
+release, following the declaration of the new major ABI version ``21``.
An ABI version is supported in all new releases until the next major ABI version
is declared. When changing the major ABI version, the release notes will detail
@@ -222,11 +222,11 @@ Examples of ABI Changes
The following are examples of allowable ABI changes occurring between
declarations of major ABI versions.
-* DPDK 19.11 release defines the function ``rte_foo()`` ; ``rte_foo()``
- is part of the major ABI version ``20``.
+* DPDK 20.11 release defines the function ``rte_foo()`` ; ``rte_foo()``
+ is part of the major ABI version ``21``.
-* DPDK 20.02 release defines a new function ``rte_foo(uint8_t bar)``.
- This is not a problem as long as the symbol ``rte_foo@DPDK20`` is
+* DPDK 21.02 release defines a new function ``rte_foo(uint8_t bar)``.
+ This is not a problem as long as the symbol ``rte_foo@DPDK_21`` is
preserved through :ref:`abi_versioning`.
- The new function may be marked with the ``__rte_experimental`` tag for a
@@ -235,21 +235,21 @@ declarations of major ABI versions.
- Once ``rte_foo(uint8_t bar)`` becomes non-experimental, ``rte_foo()`` is
declared as ``__rte_deprecated`` and an deprecation notice is provided.
-* DPDK 19.11 is not re-released to include ``rte_foo(uint8_t bar)``, the new
- version of ``rte_foo`` only exists from DPDK 20.02 onwards as described in the
+* DPDK 20.11 is not re-released to include ``rte_foo(uint8_t bar)``, the new
+ version of ``rte_foo`` only exists from DPDK 21.02 onwards as described in the
:ref:`note on forward-only compatibility<forward-only>`.
-* DPDK 20.02 release defines the experimental function ``__rte_experimental
- rte_baz()``. This function may or may not exist in the DPDK 20.05 release.
+* DPDK 21.02 release defines the experimental function ``__rte_experimental
+ rte_baz()``. This function may or may not exist in the DPDK 21.05 release.
* An application ``dPacket`` wishes to use ``rte_foo(uint8_t bar)``, before the
- declaration of the DPDK ``21`` major ABI version. The application can only
- ensure its runtime dependencies are met by specifying ``DPDK (>= 20.2)`` as
+ declaration of the DPDK ``22`` major ABI version. The application can only
+ ensure its runtime dependencies are met by specifying ``DPDK (>= 21.2)`` as
an explicit package dependency, as the soname can only indicate the
supported major ABI version.
-* At the release of DPDK 20.11, the function ``rte_foo(uint8_t bar)`` becomes
- formally part of then new major ABI version DPDK ``21`` and ``rte_foo()`` may be
+* At the release of DPDK 21.11, the function ``rte_foo(uint8_t bar)`` becomes
+ formally part of then new major ABI version DPDK ``22`` and ``rte_foo()`` may be
removed.
.. _deprecation_notices:
@@ -261,25 +261,25 @@ The following are some examples of ABI deprecation notices which would be
added to the Release Notes:
* The Macro ``#RTE_FOO`` is deprecated and will be removed with ABI version
- 21, to be replaced with the inline function ``rte_foo()``.
+ 22, to be replaced with the inline function ``rte_foo()``.
* The function ``rte_mbuf_grok()`` has been updated to include a new parameter
- in version 20.2. Backwards compatibility will be maintained for this function
- until the release of the new DPDK major ABI version 21, in DPDK version
- 20.11.
+ in version 21.2. Backwards compatibility will be maintained for this function
+ until the release of the new DPDK major ABI version 22, in DPDK version
+ 21.11.
-* The members of ``struct rte_foo`` have been reorganized in DPDK 20.02 for
+* The members of ``struct rte_foo`` have been reorganized in DPDK 21.02 for
performance reasons. Existing binary applications will have backwards
- compatibility in release 20.02, while newly built binaries will need to
+ compatibility in release 21.02, while newly built binaries will need to
reference the new structure variant ``struct rte_foo2``. Compatibility will be
- removed in release 20.11, and all applications will require updating and
+ removed in release 21.11, and all applications will require updating and
rebuilding to the new structure at that time, which will be renamed to the
original ``struct rte_foo``.
* Significant ABI changes are planned for the ``librte_dostuff`` library. The
- upcoming release 20.02 will not contain these changes, but release 20.11 will,
+ upcoming release 21.02 will not contain these changes, but release 21.11 will,
and no backwards compatibility is planned due to the extensive nature of
- these changes. Binaries using this library built prior to ABI version 21 will
+ these changes. Binaries using this library built prior to ABI version 22 will
require updating and recompilation.
diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index b8b35761e2..91ada18dd7 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -14,22 +14,22 @@ What is a library's soname?
---------------------------
System libraries usually adopt the familiar major and minor version naming
-convention, where major versions (e.g. ``librte_eal 20.x, 21.x``) are presumed
+convention, where major versions (e.g. ``librte_eal 21.x, 22.x``) are presumed
to be ABI incompatible with each other and minor versions (e.g. ``librte_eal
-20.1, 20.2``) are presumed to be ABI compatible. A library's `soname
+21.1, 21.2``) are presumed to be ABI compatible. A library's `soname
<https://en.wikipedia.org/wiki/Soname>`_. is typically used to provide backward
compatibility information about a given library, describing the lowest common
denominator ABI supported by the library. The soname or logical name for the
library, is typically comprised of the library's name and major version e.g.
-``librte_eal.so.20``.
+``librte_eal.so.21``.
During an application's build process, a library's soname is noted as a runtime
dependency of the application. This information is then used by the `dynamic
linker <https://en.wikipedia.org/wiki/Dynamic_linker>`_ when resolving the
applications dependencies at runtime, to load a library supporting the correct
ABI version. The library loaded at runtime therefore, may be a minor revision
-supporting the same major ABI version (e.g. ``librte_eal.20.2``), as the library
-used to link the application (e.g ``librte_eal.20.0``).
+supporting the same major ABI version (e.g. ``librte_eal.21.2``), as the library
+used to link the application (e.g ``librte_eal.21.0``).
.. _major_abi_versions:
@@ -59,41 +59,41 @@ persists over multiple releases.
.. code-block:: none
$ head ./lib/librte_acl/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
$ head ./lib/librte_eal/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
When an ABI change is made between major ABI versions to a given library, a new
section is added to that library's version map describing the impending new ABI
version, as described in the section :ref:`example_abi_macro_usage`. The
-library's soname and filename however do not change, e.g. ``libacl.so.20``, as
+library's soname and filename however do not change, e.g. ``libacl.so.21``, as
ABI compatibility with the last major ABI version continues to be preserved for
that library.
.. code-block:: none
$ head ./lib/librte_acl/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
- DPDK_21 {
+ DPDK_22 {
global:
- } DPDK_20;
+ } DPDK_21;
...
$ head ./lib/librte_eal/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
-However when a new ABI version is declared, for example DPDK ``21``, old
+However when a new ABI version is declared, for example DPDK ``22``, old
depreciated functions may be safely removed at this point and the entire old
major ABI version removed, see the section :ref:`deprecating_entire_abi` on
how this may be done.
@@ -101,12 +101,12 @@ how this may be done.
.. code-block:: none
$ head ./lib/librte_acl/version.map
- DPDK_21 {
+ DPDK_22 {
global:
...
$ head ./lib/librte_eal/version.map
- DPDK_21 {
+ DPDK_22 {
global:
...
@@ -216,7 +216,7 @@ library looks like this
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -242,7 +242,7 @@ This file needs to be modified as follows
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -264,15 +264,15 @@ This file needs to be modified as follows
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
The addition of the new block tells the linker that a new version node
-``DPDK_21`` is available, which contains the symbol rte_acl_create, and inherits
-the symbols from the DPDK_20 node. This list is directly translated into a
+``DPDK_22`` is available, which contains the symbol rte_acl_create, and inherits
+the symbols from the DPDK_21 node. This list is directly translated into a
list of exported symbols when DPDK is compiled as a shared library.
Next, we need to specify in the code which function maps to the rte_acl_create
@@ -285,7 +285,7 @@ with the public symbol name
-struct rte_acl_ctx *
-rte_acl_create(const struct rte_acl_param *param)
+struct rte_acl_ctx * __vsym
- +rte_acl_create_v20(const struct rte_acl_param *param)
+ +rte_acl_create_v21(const struct rte_acl_param *param)
{
size_t sz;
struct rte_acl_ctx *ctx;
@@ -294,7 +294,7 @@ with the public symbol name
Note that the base name of the symbol was kept intact, as this is conducive to
the macros used for versioning symbols and we have annotated the function as
``__vsym``, an implementation of a versioned symbol . That is our next step,
-mapping this new symbol name to the initial symbol name at version node 20.
+mapping this new symbol name to the initial symbol name at version node 21.
Immediately after the function, we add the VERSION_SYMBOL macro.
.. code-block:: c
@@ -302,26 +302,26 @@ Immediately after the function, we add the VERSION_SYMBOL macro.
#include <rte_function_versioning.h>
...
- VERSION_SYMBOL(rte_acl_create, _v20, 20);
+ VERSION_SYMBOL(rte_acl_create, _v21, 21);
Remembering to also add the rte_function_versioning.h header to the requisite c
file where these changes are being made. The macro instructs the linker to
-create a new symbol ``rte_acl_create@DPDK_20``, which matches the symbol created
+create a new symbol ``rte_acl_create@DPDK_21``, which matches the symbol created
in older builds, but now points to the above newly named function. We have now
mapped the original rte_acl_create symbol to the original function (but with a
new name).
Please see the section :ref:`Enabling versioning macros
<enabling_versioning_macros>` to enable this macro in the meson/ninja build.
-Next, we need to create the new ``v21`` version of the symbol. We create a new
-function name, with the ``v21`` suffix, and implement it appropriately.
+Next, we need to create the new ``v22`` version of the symbol. We create a new
+function name, with the ``v22`` suffix, and implement it appropriately.
.. code-block:: c
struct rte_acl_ctx * __vsym
- rte_acl_create_v21(const struct rte_acl_param *param, int debug);
+ rte_acl_create_v22(const struct rte_acl_param *param, int debug);
{
- struct rte_acl_ctx *ctx = rte_acl_create_v20(param);
+ struct rte_acl_ctx *ctx = rte_acl_create_v21(param);
ctx->debug = debug;
@@ -330,7 +330,7 @@ function name, with the ``v21`` suffix, and implement it appropriately.
This code serves as our new API call. Its the same as our old call, but adds the
new parameter in place. Next we need to map this function to the new default
-symbol ``rte_acl_create@DPDK_21``. To do this, immediately after the function,
+symbol ``rte_acl_create@DPDK_22``. To do this, immediately after the function,
we add the BIND_DEFAULT_SYMBOL macro.
.. code-block:: c
@@ -338,10 +338,10 @@ we add the BIND_DEFAULT_SYMBOL macro.
#include <rte_function_versioning.h>
...
- BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ BIND_DEFAULT_SYMBOL(rte_acl_create, _v22, 22);
The macro instructs the linker to create the new default symbol
-``rte_acl_create@DPDK_21``, which points to the above newly named function.
+``rte_acl_create@DPDK_22``, which points to the above newly named function.
We finally modify the prototype of the call in the public header file,
such that it contains both versions of the symbol and the public API.
@@ -352,15 +352,15 @@ such that it contains both versions of the symbol and the public API.
rte_acl_create(const struct rte_acl_param *param);
struct rte_acl_ctx * __vsym
- rte_acl_create_v20(const struct rte_acl_param *param);
+ rte_acl_create_v21(const struct rte_acl_param *param);
struct rte_acl_ctx * __vsym
- rte_acl_create_v21(const struct rte_acl_param *param, int debug);
+ rte_acl_create_v22(const struct rte_acl_param *param, int debug);
And that's it, on the next shared library rebuild, there will be two versions of
-rte_acl_create, an old DPDK_20 version, used by previously built applications,
-and a new DPDK_21 version, used by future built applications.
+rte_acl_create, an old DPDK_21 version, used by previously built applications,
+and a new DPDK_22 version, used by future built applications.
.. note::
@@ -385,21 +385,21 @@ this code in a position of no longer having a symbol simply named
To correct this, we can simply map a function of our choosing back to the public
symbol in the static build with the ``MAP_STATIC_SYMBOL`` macro. Generally the
assumption is that the most recent version of the symbol is the one you want to
-map. So, back in the C file where, immediately after ``rte_acl_create_v21`` is
+map. So, back in the C file where, immediately after ``rte_acl_create_v22`` is
defined, we add this
.. code-block:: c
struct rte_acl_ctx * __vsym
- rte_acl_create_v21(const struct rte_acl_param *param, int debug)
+ rte_acl_create_v22(const struct rte_acl_param *param, int debug)
{
...
}
- MAP_STATIC_SYMBOL(struct rte_acl_ctx *rte_acl_create(const struct rte_acl_param *param, int debug), rte_acl_create_v21);
+ MAP_STATIC_SYMBOL(struct rte_acl_ctx *rte_acl_create(const struct rte_acl_param *param, int debug), rte_acl_create_v22);
That tells the compiler that, when building a static library, any calls to the
-symbol ``rte_acl_create`` should be linked to ``rte_acl_create_v21``
+symbol ``rte_acl_create`` should be linked to ``rte_acl_create_v22``
.. _enabling_versioning_macros:
@@ -456,7 +456,7 @@ version node.
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
...
@@ -486,22 +486,22 @@ When we promote the symbol to the stable ABI, we simply strip the
}
We then update the map file, adding the symbol ``rte_acl_create``
-to the ``DPDK_21`` version node.
+to the ``DPDK_22`` version node.
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
...
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
Although there are strictly no guarantees or commitments associated with
@@ -509,7 +509,7 @@ Although there are strictly no guarantees or commitments associated with
an alias to experimental. The process to add an alias to experimental,
is similar to the symbol versioning process. Assuming we have an experimental
symbol as before, we now add the symbol to both the ``EXPERIMENTAL``
-and ``DPDK_21`` version nodes.
+and ``DPDK_22`` version nodes.
.. code-block:: c
@@ -535,29 +535,29 @@ and ``DPDK_21`` version nodes.
VERSION_SYMBOL_EXPERIMENTAL(rte_acl_create, _e);
struct rte_acl_ctx *
- rte_acl_create_v21(const struct rte_acl_param *param)
+ rte_acl_create_v22(const struct rte_acl_param *param)
{
return rte_acl_create(param);
}
- BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ BIND_DEFAULT_SYMBOL(rte_acl_create, _v22, 22);
In the map file, we map the symbol to both the ``EXPERIMENTAL``
-and ``DPDK_21`` version nodes.
+and ``DPDK_22`` version nodes.
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
...
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
EXPERIMENTAL {
global:
@@ -585,7 +585,7 @@ file:
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -607,21 +607,21 @@ file:
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
Next remove the corresponding versioned export.
.. code-block:: c
- -VERSION_SYMBOL(rte_acl_create, _v20, 20);
+ -VERSION_SYMBOL(rte_acl_create, _v21, 21);
Note that the internal function definition could also be removed, but its used
-in our example by the newer version ``v21``, so we leave it in place and declare
+in our example by the newer version ``v22``, so we leave it in place and declare
it as static. This is a coding style choice.
.. _deprecating_entire_abi:
@@ -642,7 +642,7 @@ In the case of our map above, it would transform to look as follows
.. code-block:: none
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_add_rules;
@@ -670,8 +670,8 @@ symbols.
.. code-block:: c
- -BIND_DEFAULT_SYMBOL(rte_acl_create, _v20, 20);
- +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ -BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ +BIND_DEFAULT_SYMBOL(rte_acl_create, _v22, 22);
Lastly, any VERSION_SYMBOL macros that point to the old version nodes
should be removed, taking care to preserve any code that is shared
--
2.23.0
^ permalink raw reply [relevance 33%]
* Re: [dpdk-dev] [PATCH v1] doc: update abi version references
2020-10-23 16:51 7% ` David Marchand
@ 2020-10-26 18:27 4% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-26 18:27 UTC (permalink / raw)
To: David Marchand; +Cc: Neil Horman, Mcnamara, John, Thomas Monjalon, dev
Good catch :-).
On 23/10/2020 17:51, David Marchand wrote:
> On Fri, Oct 23, 2020 at 6:11 PM Ray Kinsella <mdr@ashroe.eu> wrote:
>>
>> Updated references to abi versions in the contributors guide.
>
> Thanks for looking at it.
>
> I would keep the dpdk release version aligned with updated ABI ver.
> Caught 3 references in the first file.
>
> %s/19.11/20.11/g can fix this.
>
> Then:
> Reviewed-by: David Marchand <david.marchand@redhat.com>
>
>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3] gso: fix free issue of mbuf gso segments attach to
@ 2020-10-26 6:47 3% yang_y_yi
2020-10-27 19:55 0% ` Ananyev, Konstantin
2020-10-28 0:51 0% ` Hu, Jiayu
0 siblings, 2 replies; 200+ results
From: yang_y_yi @ 2020-10-26 6:47 UTC (permalink / raw)
To: dev; +Cc: jiayu.hu, konstantin.ananyev, techboard, thomas, yangyi01, yang_y_yi
From: Yi Yang <yangyi01@inspur.com>
rte_gso_segment decreased the refcnt of pkt by one, but
this is wrong if pkt is an external mbuf: pkt won't be
freed because of the incorrect refcnt, and the result is
that the application can't allocate mbufs from the mempool
because the mbufs in the mempool run out.

The correct way is that the application should call
rte_pktmbuf_free after calling rte_gso_segment to free
pkt explicitly. rte_gso_segment mustn't handle it; this
should be the responsibility of the application.

This commit changes rte_gso_segment in functional behavior
and return value, so the application must take appropriate
action according to the return value: "ret < 0" means it
should free and drop 'pkt', "ret == 0" means 'pkt' isn't
GSOed but can be transmitted as a normal packet, and
"ret > 0" means 'pkt' has been GSOed into two or more
segments, so "pkts_out" should be used to transmit them.
The application must free 'pkt' after calling
rte_gso_segment whenever the return value isn't 0.
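A minimal caller sketch of the new contract (GSO_MAX_PKT_BURST and the
surrounding variables are assumed from the csumonly.c hunk below):

	struct rte_mbuf *segs[GSO_MAX_PKT_BURST];
	int ret;

	ret = rte_gso_segment(pkt, gso_ctx, segs, RTE_DIM(segs));
	if (ret > 0) {
		/* GSOed into 'ret' segments: transmit segs[] and
		 * free the original packet explicitly. */
		rte_pktmbuf_free(pkt);
	} else if (ret == 0) {
		/* Not GSOed: transmit 'pkt' as a normal packet. */
	} else {
		/* ret < 0: error, free and drop 'pkt'. */
		rte_pktmbuf_free(pkt);
	}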
Fixes: 119583797b6a ("gso: support TCP/IPv4 GSO")
Signed-off-by: Yi Yang <yangyi01@inspur.com>
---
Changelog:
v2->v3:
- add release notes to emphasize behavior and return
value changes of rte_gso_segment().
- update return value description of rte_gso_segment().
- modify related code to adapt to the changes.
v1->v2:
- update description of rte_gso_segment().
- change code which calls rte_gso_segment() to
fix free issue.
---
app/test-pmd/csumonly.c | 12 ++++++++++--
.../prog_guide/generic_segmentation_offload_lib.rst | 7 +++++--
doc/guides/rel_notes/release_20_11.rst | 7 +++++++
drivers/net/tap/rte_eth_tap.c | 12 ++++++++++--
lib/librte_gso/gso_tcp4.c | 6 ++----
lib/librte_gso/gso_tunnel_tcp4.c | 14 +++++---------
lib/librte_gso/gso_udp4.c | 6 ++----
lib/librte_gso/rte_gso.c | 15 +++------------
lib/librte_gso/rte_gso.h | 8 ++++++--
9 files changed, 50 insertions(+), 37 deletions(-)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 3d7d244..d813d4f 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -1080,9 +1080,17 @@ struct simple_gre_hdr {
ret = rte_gso_segment(pkts_burst[i], gso_ctx,
&gso_segments[nb_segments],
GSO_MAX_PKT_BURST - nb_segments);
- if (ret >= 0)
+ if (ret >= 1) {
+ /* pkts_burst[i] can be freed safely here. */
+ rte_pktmbuf_free(pkts_burst[i]);
nb_segments += ret;
- else {
+ } else if (ret == 0) {
+ /* 0 means it can be transmitted directly
+ * without gso.
+ */
+ gso_segments[nb_segments] = pkts_burst[i];
+ nb_segments += 1;
+ } else {
TESTPMD_LOG(DEBUG, "Unable to segment packet");
rte_pktmbuf_free(pkts_burst[i]);
}
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 205cb8a..8577572 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -25,8 +25,9 @@ Bearing that in mind, the GSO library enables DPDK applications to segment
packets in software. Note however, that GSO is implemented as a standalone
library, and not via a 'fallback' mechanism (i.e. for when TSO is unsupported
in the underlying hardware); that is, applications must explicitly invoke the
-GSO library to segment packets. The size of GSO segments ``(segsz)`` is
-configurable by the application.
+GSO library to segment packets, they also must call ``rte_pktmbuf_free()`` to
+free mbuf GSO segments attach to after calling ``rte_gso_segment()``. The size
+of GSO segments ``(segsz)`` is configurable by the application.
Limitations
-----------
@@ -233,6 +234,8 @@ To segment an outgoing packet, an application must:
#. Invoke the GSO segmentation API, ``rte_gso_segment()``.
+#. Call ``rte_pktmbuf_free()`` to free mbuf ``rte_gso_segment()`` segments.
+
#. If required, update the L3 and L4 checksums of the newly-created segments.
For tunneled packets, the outer IPv4 headers' checksums should also be
updated. Alternatively, the application may offload checksum calculation
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d8ac359..da77396 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -543,6 +543,13 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* **Changed ``rte_gso_segment`` in functional behavior and return value.**
+
+ * Don't save pkt to pkts_out[0] if it isn't GSO'd in the case of ret == 1.
+ * Return 0 instead of 1 for the above case.
+ * ``rte_gso_segment`` won't free pkt, whether or not it is GSO'd; the
+ application is responsible for freeing it after calling ``rte_gso_segment``.
+
ABI Changes
-----------
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 81c6884..2f8abb1 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -751,8 +751,16 @@ struct ipc_queues {
if (num_tso_mbufs < 0)
break;
- mbuf = gso_mbufs;
- num_mbufs = num_tso_mbufs;
+ if (num_tso_mbufs >= 1) {
+ mbuf = gso_mbufs;
+ num_mbufs = num_tso_mbufs;
+ } else {
+ /* 0 means it can be transmitted directly
+ * without gso.
+ */
+ mbuf = &mbuf_in;
+ num_mbufs = 1;
+ }
} else {
/* stats.errs will be incremented */
if (rte_pktmbuf_pkt_len(mbuf_in) > max_size)
diff --git a/lib/librte_gso/gso_tcp4.c b/lib/librte_gso/gso_tcp4.c
index ade172a..d31feaf 100644
--- a/lib/librte_gso/gso_tcp4.c
+++ b/lib/librte_gso/gso_tcp4.c
@@ -50,15 +50,13 @@
pkt->l2_len);
frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
if (unlikely(IS_FRAGMENTED(frag_off))) {
- pkts_out[0] = pkt;
- return 1;
+ return 0;
}
/* Don't process the packet without data */
hdr_offset = pkt->l2_len + pkt->l3_len + pkt->l4_len;
if (unlikely(hdr_offset >= pkt->pkt_len)) {
- pkts_out[0] = pkt;
- return 1;
+ return 0;
}
pyld_unit_size = gso_size - hdr_offset;
diff --git a/lib/librte_gso/gso_tunnel_tcp4.c b/lib/librte_gso/gso_tunnel_tcp4.c
index e0384c2..166aace 100644
--- a/lib/librte_gso/gso_tunnel_tcp4.c
+++ b/lib/librte_gso/gso_tunnel_tcp4.c
@@ -62,7 +62,7 @@
{
struct rte_ipv4_hdr *inner_ipv4_hdr;
uint16_t pyld_unit_size, hdr_offset, frag_off;
- int ret = 1;
+ int ret;
hdr_offset = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len;
inner_ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
@@ -73,25 +73,21 @@
*/
frag_off = rte_be_to_cpu_16(inner_ipv4_hdr->fragment_offset);
if (unlikely(IS_FRAGMENTED(frag_off))) {
- pkts_out[0] = pkt;
- return 1;
+ return 0;
}
hdr_offset += pkt->l3_len + pkt->l4_len;
/* Don't process the packet without data */
if (hdr_offset >= pkt->pkt_len) {
- pkts_out[0] = pkt;
- return 1;
+ return 0;
}
pyld_unit_size = gso_size - hdr_offset;
/* Segment the payload */
ret = gso_do_segment(pkt, hdr_offset, pyld_unit_size, direct_pool,
indirect_pool, pkts_out, nb_pkts_out);
- if (ret <= 1)
- return ret;
-
- update_tunnel_ipv4_tcp_headers(pkt, ipid_delta, pkts_out, ret);
+ if (ret > 1)
+ update_tunnel_ipv4_tcp_headers(pkt, ipid_delta, pkts_out, ret);
return ret;
}
diff --git a/lib/librte_gso/gso_udp4.c b/lib/librte_gso/gso_udp4.c
index 6fa68f2..5d0186a 100644
--- a/lib/librte_gso/gso_udp4.c
+++ b/lib/librte_gso/gso_udp4.c
@@ -52,8 +52,7 @@
pkt->l2_len);
frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
if (unlikely(IS_FRAGMENTED(frag_off))) {
- pkts_out[0] = pkt;
- return 1;
+ return 0;
}
/*
@@ -65,8 +64,7 @@
/* Don't process the packet without data. */
if (unlikely(hdr_offset + pkt->l4_len >= pkt->pkt_len)) {
- pkts_out[0] = pkt;
- return 1;
+ return 0;
}
/* pyld_unit_size must be a multiple of 8 because frag_off
diff --git a/lib/librte_gso/rte_gso.c b/lib/librte_gso/rte_gso.c
index 751b5b6..896350e 100644
--- a/lib/librte_gso/rte_gso.c
+++ b/lib/librte_gso/rte_gso.c
@@ -30,7 +30,6 @@
uint16_t nb_pkts_out)
{
struct rte_mempool *direct_pool, *indirect_pool;
- struct rte_mbuf *pkt_seg;
uint64_t ol_flags;
uint16_t gso_size;
uint8_t ipid_delta;
@@ -44,8 +43,7 @@
if (gso_ctx->gso_size >= pkt->pkt_len) {
pkt->ol_flags &= (~(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
- pkts_out[0] = pkt;
- return 1;
+ return 0;
}
direct_pool = gso_ctx->direct_pool;
@@ -75,18 +73,11 @@
indirect_pool, pkts_out, nb_pkts_out);
} else {
/* unsupported packet, skip */
- pkts_out[0] = pkt;
RTE_LOG(DEBUG, GSO, "Unsupported packet type\n");
- return 1;
+ ret = 0;
}
- if (ret > 1) {
- pkt_seg = pkt;
- while (pkt_seg) {
- rte_mbuf_refcnt_update(pkt_seg, -1);
- pkt_seg = pkt_seg->next;
- }
- } else if (ret < 0) {
+ if (ret < 0) {
/* Revert the ol_flags in the event of failure. */
pkt->ol_flags = ol_flags;
}
diff --git a/lib/librte_gso/rte_gso.h b/lib/librte_gso/rte_gso.h
index 3aab297..af480ee 100644
--- a/lib/librte_gso/rte_gso.h
+++ b/lib/librte_gso/rte_gso.h
@@ -89,8 +89,11 @@ struct rte_gso_ctx {
* the GSO segments are sent to should support transmission of multi-segment
* packets.
*
- * If the input packet is GSO'd, its mbuf refcnt reduces by 1. Therefore,
- * when all GSO segments are freed, the input packet is freed automatically.
+ * If the input packet is GSO'd, all the indirect segments are attached to the
+ * input packet.
+ *
+ * rte_gso_segment() will not free the input packet, whether or not it is
+ * GSO'd; the application should free it after calling rte_gso_segment().
*
* If the memory space in pkts_out or MBUF pools is insufficient, this
* function fails, and it returns (-1) * errno. Otherwise, GSO succeeds,
@@ -109,6 +112,7 @@ struct rte_gso_ctx {
*
* @return
* - The number of GSO segments filled in pkts_out on success.
+ * - Return 0 if the packet does not need to be GSO'd.
* - Return -ENOMEM if run out of memory in MBUF pools.
* - Return -EINVAL for invalid parameters.
*/
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v8 4/5] doc: change references to blacklist and whitelist
@ 2020-10-25 21:15 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-25 21:15 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Luca Boccassi
There are two areas where the documentation needed updating.
The first was the use of whitelist when describing address
filtering.
The other is the legacy -w whitelist option for PCI,
which is used in many examples, shown below.
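As a before/after sketch of the option rename (the PCI addresses
are illustrative only):

    # old, deprecated spelling
    dpdk-testpmd -l 0-3 -n 4 -w 0000:03:00.0 -b 0000:03:00.1 -- -i

    # new spelling (-w becomes -a/--allow; -b gains --block)
    dpdk-testpmd -l 0-3 -n 4 -a 0000:03:00.0 -b 0000:03:00.1 -- -i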
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
---
doc/guides/cryptodevs/dpaa2_sec.rst | 6 ++--
doc/guides/cryptodevs/dpaa_sec.rst | 6 ++--
doc/guides/cryptodevs/qat.rst | 12 ++++----
doc/guides/eventdevs/octeontx2.rst | 20 ++++++-------
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/eal_args.include.rst | 14 +++++-----
doc/guides/linux_gsg/linux_drivers.rst | 4 +--
doc/guides/mempool/octeontx2.rst | 4 +--
doc/guides/nics/bnxt.rst | 18 ++++++------
doc/guides/nics/cxgbe.rst | 12 ++++----
doc/guides/nics/dpaa.rst | 6 ++--
doc/guides/nics/dpaa2.rst | 6 ++--
doc/guides/nics/enic.rst | 6 ++--
doc/guides/nics/fail_safe.rst | 16 +++++------
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/i40e.rst | 16 +++++------
doc/guides/nics/ice.rst | 28 +++++++++++++------
doc/guides/nics/ixgbe.rst | 4 +--
doc/guides/nics/mlx4.rst | 18 ++++++------
doc/guides/nics/mlx5.rst | 14 +++++-----
doc/guides/nics/nfb.rst | 2 +-
doc/guides/nics/octeontx2.rst | 23 +++++++--------
doc/guides/nics/sfc_efx.rst | 2 +-
doc/guides/nics/tap.rst | 2 +-
doc/guides/nics/thunderx.rst | 4 +--
.../prog_guide/env_abstraction_layer.rst | 6 ++--
doc/guides/prog_guide/multi_proc_support.rst | 4 +--
doc/guides/prog_guide/poll_mode_drv.rst | 6 ++--
.../prog_guide/switch_representation.rst | 6 ++--
doc/guides/rel_notes/release_20_11.rst | 5 ++++
doc/guides/sample_app_ug/bbdev_app.rst | 14 +++++-----
.../sample_app_ug/eventdev_pipeline.rst | 4 +--
doc/guides/sample_app_ug/ipsec_secgw.rst | 12 ++++----
doc/guides/sample_app_ug/l3_forward.rst | 7 +++--
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
.../sample_app_ug/l3_forward_power_man.rst | 3 +-
doc/guides/sample_app_ug/vdpa.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 6 ++--
doc/guides/tools/flow-perf.rst | 2 +-
doc/guides/tools/testregex.rst | 2 +-
41 files changed, 175 insertions(+), 155 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
index 080768a2e766..83565d71752d 100644
--- a/doc/guides/cryptodevs/dpaa2_sec.rst
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -134,10 +134,10 @@ Supported DPAA2 SoCs
* LS2088A/LS2048A
* LS1088A/LS1048A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 SEC device, following commands can be used.
+The DPAA2 SEC device can be blocked with the following:
.. code-block:: console
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index da14a68d9cff..bac82421bca2 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -82,10 +82,10 @@ Supported DPAA SoCs
* LS1046A/LS1026A
* LS1043A/LS1023A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index f77ce91f76ee..f8d3d77474ff 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -127,7 +127,7 @@ Limitations
optimisations in the GEN3 device. And if a GCM session is initialised on a
GEN3 device, then attached to an op sent to a GEN1/GEN2 device, it will not be
enqueued to the device and will be marked as failed. The simplest way to
- mitigate this is to use the bdf whitelist to avoid mixing devices of different
+ mitigate this is to use the PCI allowlist to avoid mixing devices of different
generations in the same process if planning to use for GCM.
* The mixed algo feature on GEN2 is not supported by all kernel drivers. Check
the notes under the Available Kernel Drivers table below for specific details.
@@ -237,7 +237,7 @@ adjusted to the number of VFs which the QAT common code will need to handle.
QAT VF may expose two crypto devices, sym and asym, it may happen that the
number of devices will be bigger than MAX_DEVS and the process will show an error
during PMD initialisation. To avoid this problem RTE_CRYPTO_MAX_DEVS may be
- increased or -w, pci-whitelist domain:bus:devid:func option may be used.
+ increased or the -a, --allow domain:bus:devid:func option may be used.
QAT compression PMD needs intermediate buffers to support Deflate compression
@@ -275,7 +275,7 @@ return 0 (thereby avoiding an MMIO) if the device is congested and number of pac
possible to enqueue is smaller.
To use this feature the user must set the parameter on process start as a device additional parameter::
- -w 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
+ -a 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
All parameters can be used with the same device regardless of order. Parameters are separated
by comma. When the same parameter is used more than once first occurrence of the parameter
@@ -632,19 +632,19 @@ Testing
QAT SYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_autotest
QAT ASYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_asym_autotest
QAT compression PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>compressdev_autotest
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 4f06e069847a..496b7199c8c9 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -55,7 +55,7 @@ Runtime Config Options
upper limit for in-flight events.
For example::
- -w 0002:0e:00.0,xae_cnt=16384
+ -a 0002:0e:00.0,xae_cnt=16384
- ``Force legacy mode``
@@ -63,7 +63,7 @@ Runtime Config Options
single workslot mode in SSO and disable the default dual workslot mode.
For example::
- -w 0002:0e:00.0,single_ws=1
+ -a 0002:0e:00.0,single_ws=1
- ``Event Group QoS support``
@@ -78,7 +78,7 @@ Runtime Config Options
default.
For example::
- -w 0002:0e:00.0,qos=[1-50-50-50]
+ -a 0002:0e:00.0,qos=[1-50-50-50]
- ``Selftest``
@@ -87,7 +87,7 @@ Runtime Config Options
The tests are run once the vdev creation is successfully complete.
For example::
- -w 0002:0e:00.0,selftest=1
+ -a 0002:0e:00.0,selftest=1
- ``TIM disable NPA``
@@ -96,7 +96,7 @@ Runtime Config Options
parameter disables NPA and uses software mempool to manage chunks
For example::
- -w 0002:0e:00.0,tim_disable_npa=1
+ -a 0002:0e:00.0,tim_disable_npa=1
- ``TIM modify chunk slots``
@@ -107,7 +107,7 @@ Runtime Config Options
to SSO. The default value is 255 and the max value is 4095.
For example::
- -w 0002:0e:00.0,tim_chnk_slots=1023
+ -a 0002:0e:00.0,tim_chnk_slots=1023
- ``TIM enable arm/cancel statistics``
@@ -115,7 +115,7 @@ Runtime Config Options
event timer adapter.
For example::
- -w 0002:0e:00.0,tim_stats_ena=1
+ -a 0002:0e:00.0,tim_stats_ena=1
- ``TIM limit max rings reserved``
@@ -125,7 +125,7 @@ Runtime Config Options
rings.
For example::
- -w 0002:0e:00.0,tim_rings_lmt=5
+ -a 0002:0e:00.0,tim_rings_lmt=5
- ``TIM ring control internal parameters``
@@ -135,7 +135,7 @@ Runtime Config Options
default values.
For Example::
- -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
- ``Lock NPA contexts in NDC``
@@ -145,7 +145,7 @@ Runtime Config Options
For example::
- -w 0002:0e:00.0,npa_lock_mask=0xf
+ -a 0002:0e:00.0,npa_lock_mask=0xf
Debugging Options
-----------------
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 2a68f5fc3820..4fba671e4f5b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -67,7 +67,7 @@ DPDK application. Some of the EAL options for FreeBSD are as follows:
is a list of cores to use instead of a core mask.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 542246df686a..043a1dcee109 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -53,7 +53,7 @@ The EAL options are as follows:
Number of memory channels per processor socket.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 01afa1b42f94..dbd48ab4fafa 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -44,20 +44,20 @@ Lcore-related options
Device-related options
~~~~~~~~~~~~~~~~~~~~~~
-* ``-b, --pci-blacklist <[domain:]bus:devid.func>``
+* ``-b, --block <[domain:]bus:devid.func>``
- Blacklist a PCI device to prevent EAL from using it. Multiple -b options are
- allowed.
+ Skip probing a PCI device to prevent EAL from using it.
+ Multiple -b options are allowed.
.. Note::
- PCI blacklist cannot be used with ``-w`` option.
+ PCI skip probe cannot be used with the allow list ``-a`` option.
-* ``-w, --pci-whitelist <[domain:]bus:devid.func>``
+* ``-a, --allow <[domain:]bus:devid.func>``
- Add a PCI device in white list.
+ Add a PCI device to the list of probed devices.
.. Note::
- PCI whitelist cannot be used with ``-b`` option.
+ PCI allow list cannot be used with the skip probe ``-b`` option.
* ``--vdev <device arguments>``
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 080b44955a11..ef8798569a80 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -93,11 +93,11 @@ parameter ``--vfio-vf-token``.
3. echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
4. Start the PF:
- <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -w 86:00.0 \
+ <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -a 86:00.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=pf -- -i
5. Start the VF:
- <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -w 86:02.0 \
+ <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -a 86:02.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=vf0 -- -i
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 53f09a52dbb5..1272c1e72b7b 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -42,7 +42,7 @@ Runtime Config Options
for the application.
For example::
- -w 0002:02:00.0,max_pools=512
+ -a 0002:02:00.0,max_pools=512
With the above configuration, the driver will set up only 512 mempools for
the given application to save HW resources.
@@ -61,7 +61,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 2540ddd5c2f5..97033958b758 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -258,8 +258,8 @@ The BNXT PMD supports hardware-based packet filtering:
Unicast MAC Filter
^^^^^^^^^^^^^^^^^^
-The application adds (or removes) MAC addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) MAC addresses to enable (or disable)
+filtering on MAC address used to accept packets.
.. code-block:: console
@@ -269,8 +269,8 @@ whitelist filtering to accept packets.
Multicast MAC Filter
^^^^^^^^^^^^^^^^^^^^
-Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) Multicast addresses that enable (or disable)
+filtering on multicast MAC address used to accept packets.
.. code-block:: console
@@ -278,7 +278,7 @@ whitelist filtering to accept packets.
testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+allowlist filtering to accept packets.
Note that the BNXT PMD supports up to 16 MC MAC filters. if the user adds more
than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
@@ -683,7 +683,7 @@ The feature uses a newly implemented control-plane firmware interface which
optimizes flow insertions and deletions.
This is a tech preview feature, and is disabled by default. It can be enabled
-using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
+using bnxt devargs. For ex: "-a 0000:0d:00.0,host-based-truflow=1”.
Notes
-----
@@ -725,7 +725,7 @@ when the PMD is initialized on a PF or trusted-VF. The user can specify the list
of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Note that currently hot-plugging of representor ports is not supported so all
the required representors must be specified on the creation of the PF or the
@@ -750,12 +750,12 @@ same host domain, additional dev args have been added to the PMD.
The sample command line with the new ``devargs`` looks like this::
- -w 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
+ -a 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
.. code-block:: console
- testpmd -l1-4 -n2 -w 0008:01:00.0,host-based-truflow=1,\
+ testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
representor=[0], rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 442ab1511c64..8c2985cad04a 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -40,8 +40,8 @@ expose a single PCI bus address, thus, librte_pmd_cxgbe registers
itself as a PCI driver that allocates one Ethernet device per detected
port.
-For this reason, one cannot whitelist/blacklist a single port without
-whitelisting/blacklisting the other ports on the same device.
+For this reason, one cannot allow/block a single port without
+allowing/blocking the other ports on the same device.
.. _t5-nics:
@@ -96,7 +96,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- dpdk-testpmd -w 02:00.4,keep_ovlan=1 -- -i
+ dpdk-testpmd -a 02:00.4,keep_ovlan=1 -- -i
Common Runtime Options
~~~~~~~~~~~~~~~~~~~~~~
@@ -301,7 +301,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88 -- -i
- ``filtermask`` (default **0**)
@@ -328,7 +328,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
.. _driver-compilation:
@@ -760,7 +760,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0000:02:00.4 -- -i
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:02:00.4 -- -i
Example output:
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 1deb7faaa50c..9ae5109234eb 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -163,10 +163,10 @@ Manager.
this pool.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 01e37d462102..b79780abc1a5 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -503,10 +503,10 @@ which are lower than logging ``level``.
Using ``pmd.net.dpaa2`` as log matching criteria, all PMD logs can be enabled
which are lower than logging ``level``.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 device, following commands can be used.
+For blocking a DPAA2 device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index c62448768376..163ae3f47b11 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -305,7 +305,7 @@ enables overlay offload, it prints the following message on the console.
By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::
- -w 12:00.0,disable-overlay=1
+ -a 12:00.0,disable-overlay=1
By default, the NIC uses 4789 as the VXLAN port. The user may change
it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
@@ -371,7 +371,7 @@ vectorized handler, take the following steps.
PMD consider the vectorized handler when selecting the receive handler.
For example::
- -w 12:00.0,enable-avx2-rx=1
+ -a 12:00.0,enable-avx2-rx=1
As the current implementation is intended for field trials, by default, the
vectorized handler is not considered (``enable-avx2-rx=0``).
@@ -420,7 +420,7 @@ DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
- -w 12:00.0,ig-vlan-rewrite=untag
+ -a 12:00.0,ig-vlan-rewrite=untag
- **SR-IOV**
diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst
index e1b5c80d6c91..9a9cf5bfbc3d 100644
--- a/doc/guides/nics/fail_safe.rst
+++ b/doc/guides/nics/fail_safe.rst
@@ -48,7 +48,7 @@ Fail-safe command line parameters
This parameter allows the user to define a sub-device. The ``<iface>`` part of
this parameter must be a valid device definition. It follows the same format
- provided to any ``-w`` or ``--vdev`` options.
+ provided to any ``-a`` or ``--vdev`` options.
Enclosing the device definition within parentheses here allows using
additional sub-device parameters if need be. They will be passed on to the
@@ -56,11 +56,11 @@ Fail-safe command line parameters
.. note::
- In case where the sub-device is also used as a whitelist device, using ``-w``
+ In case where the sub-device is also used as an allowed device, using ``-a``
on the EAL command line, the fail-safe PMD will use the device with the
options provided to the EAL instead of its own parameters.
- When trying to use a PCI device automatically probed by the blacklist mode,
+ When trying to use a PCI device automatically probed by the command line,
the name for the fail-safe sub-device must be the full PCI id:
Domain:Bus:Device.Function, *i.e.* ``00:00:00.0`` instead of ``00:00.0``,
as the second form is historically accepted by the DPDK.
@@ -111,8 +111,8 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
#. To build a PMD and configure DPDK, refer to the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.
-#. Start testpmd. The sub-device ``84:00.0`` should be blacklisted from normal EAL
- operations to avoid probing it twice, as the PCI bus is in blacklist mode.
+#. Start testpmd. The sub-device ``84:00.0`` should be blocked from normal EAL
+ operations to avoid probing it twice, as the PCI bus is in blocklist mode.
.. code-block:: console
@@ -120,13 +120,13 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
--vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
-b 84:00.0 -b 00:04.0 -- -i
- If the sub-device ``84:00.0`` is not blacklisted, it will be probed by the
+ If the sub-device ``84:00.0`` is not blocked, it will be probed by the
EAL first. When the fail-safe then tries to initialize it the probe operation
fails.
- Note that PCI blacklist mode is the default PCI operating mode.
+ Note that PCI blocklist mode is the default PCI operating mode.
-#. Alternatively, it can be used alongside any other device in whitelist mode.
+#. Alternatively, it can be used alongside any other device in allow mode.
.. code-block:: console
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 234bf066b9f6..6458bfc42e1f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -261,7 +261,7 @@ Supports enabling/disabling receiving multicast frames.
Unicast MAC filter
------------------
-Supports adding MAC addresses to enable whitelist filtering to accept packets.
+Supports adding MAC addresses to enable filtering of incoming packets.
* **[implements] eth_dev_ops**: ``mac_addr_set``, ``mac_addr_add``, ``mac_addr_remove``.
* **[implements] rte_eth_dev_data**: ``mac_addrs``.
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index 5cf85d94cc34..488a9ec22450 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -172,7 +172,7 @@ Runtime Config Options
The number of reserved queue per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
- VF can be configured with EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
+ VF can be configured with EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If VF request more than
reserved queues per VF, PF will able to allocate max to 16 queues after a VF
@@ -185,7 +185,7 @@ Runtime Config Options
Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
parameter ``support-multi-driver`` is introduced, for example::
- -w 84:00.0,support-multi-driver=1
+ -a 84:00.0,support-multi-driver=1
With the above configuration, DPDK PMD will not change global registers, and
will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
@@ -200,7 +200,7 @@ Runtime Config Options
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
@@ -212,7 +212,7 @@ Runtime Config Options
since it can get better perf in some real work loading cases. So ``devargs`` param
``use-latest-supported-vec`` is introduced, for example::
- -w 84:00.0,use-latest-supported-vec=1
+ -a 84:00.0,use-latest-supported-vec=1
- ``Enable validation for VF message`` (default ``not enabled``)
@@ -222,7 +222,7 @@ Runtime Config Options
Format -- "maximal-message@period-seconds:ignore-seconds"
For example::
- -w 84:00.0,vf_msg_cfg=80@120:180
+ -a 84:00.0,vf_msg_cfg=80@120:180
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -458,7 +458,7 @@ no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::
- -w 84:00.0,enable_floating_veb=1
+ -a 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.
@@ -466,7 +466,7 @@ VFs created by this PF device.
Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::
- -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
+ -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
@@ -802,7 +802,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
7. The command line of running l3fwd would be something like the following::
- ./dpdk-l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+ ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index a2aea1233376..6e4d53968f75 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -30,7 +30,7 @@ Runtime Config Options
But if user intend to use the device without OS package, user can take ``devargs``
parameter ``safe-mode-support``, for example::
- -w 80:00.0,safe-mode-support=1
+ -a 80:00.0,safe-mode-support=1
Then the driver will be initialized successfully and the device will enter Safe Mode.
NOTE: In Safe mode, only very limited features are available, features like RSS,
@@ -41,7 +41,7 @@ Runtime Config Options
In pipeline mode, a flow can be set at one specific stage by setting parameter
``priority``. Currently, we support two stages: priority = 0 or !0. Flows with
priority 0 located at the first pipeline stage which typically be used as a firewall
- to drop the packet on a blacklist(we called it permission stage). At this stage,
+ to drop the packet on a blocklist(we called it permission stage). At this stage,
flow rules are created for the device's exact match engine: switch. Flows with priority
!0 located at the second stage, typically packets are classified here and be steered to
specific queue or queue group (we called it distribution stage), At this stage, flow
@@ -53,7 +53,19 @@ Runtime Config Options
use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``,
for example::
- -w 80:00.0,pipeline-mode-support=1
+ -a 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+ This is a hint to the driver to select the data path that supports flow mark extraction
+ by default.
+ NOTE: This is an experimental devarg, it will be removed when any of below conditions
+ is ready.
+ 1) all data paths support flow mark (currently vPMD does not)
+ 2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK be introduced as a standard way to hint.
+ Example::
+
+ -a 80:00.0,flow-mark-support=1
- ``Protocol extraction for per queue``
@@ -62,8 +74,8 @@ Runtime Config Options
The argument format is::
- -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
- -w 18:00.0,proto_xtr=<protocol>
+ -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+ -a 18:00.0,proto_xtr=<protocol>
Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
is used as a range separator and ``,`` is used as a single number separator.
@@ -74,14 +86,14 @@ Runtime Config Options
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+ dpdk-testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
VLAN extraction, other queues run with no protocol extraction.
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+ dpdk-testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
IPv6 extraction, other queues use the default VLAN extraction.
@@ -233,7 +245,7 @@ responses for the same from PF.
#. Bind the VF0, and run testpmd with 'cap=dcf' devarg::
- dpdk-testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i
+ dpdk-testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i
#. Monitor the VF2 interface network traffic::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 1f424b38ac3d..c801dbae8146 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,7 +89,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w af:10.0,pflink_fullchk=1 -- -i
+ testpmd -a af:10.0,pflink_fullchk=1 -- -i
- ``pflink_fullchk`` (default **0**)
@@ -277,7 +277,7 @@ option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index ed920e91ad51..cea7e8c2c4e3 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -24,8 +24,8 @@ Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.
-For this reason, one cannot white/blacklist a single port without also
-white/blacklisting the others on the same device.
+For this reason, one cannot block (or allow) a single port without also
+blocking (or allowing) the others on the same device.
Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
@@ -381,7 +381,7 @@ devices managed by librte_pmd_mlx4.
eth4
eth5
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow argument::
{
for intf in eth2 eth3 eth4 eth5;
@@ -389,14 +389,14 @@ devices managed by librte_pmd_mlx4.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:83:00.0
- -w 0000:83:00.0
- -w 0000:84:00.0
- -w 0000:84:00.0
+ -a 0000:83:00.0
+ -a 0000:83:00.0
+ -a 0000:84:00.0
+ -a 0000:84:00.0
.. note::
@@ -409,7 +409,7 @@ devices managed by librte_pmd_mlx4.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 0000:83:00.0 -a 0000:84:00.0 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index afa65a1379a5..5077e06a98a2 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1488,7 +1488,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
eth32
eth33
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow list::
{
for intf in eth2 eth3 eth4 eth5;
@@ -1496,14 +1496,14 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:05:00.1
- -w 0000:06:00.0
- -w 0000:06:00.1
- -w 0000:05:00.0
+ -a 0000:05:00.1
+ -a 0000:06:00.0
+ -a 0000:06:00.1
+ -a 0000:05:00.0
#. Request huge pages::
@@ -1511,7 +1511,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/nfb.rst b/doc/guides/nics/nfb.rst
index ecea3ecff074..e987f331048c 100644
--- a/doc/guides/nics/nfb.rst
+++ b/doc/guides/nics/nfb.rst
@@ -63,7 +63,7 @@ products) and the device argument `timestamp=1` must be used.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -w b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
+ ./<build_dir>/app/dpdk-testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the MBUFs
containing received frames and timestamp is inserted into the `rte_mbuf` struct.
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 7c04b5e60040..3c42e8585835 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -63,7 +63,8 @@ for details.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 \
+ -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
@@ -116,7 +117,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,reta_size=256
+ -a 0002:02:00.0,reta_size=256
With the above configuration, reta table of size 256 is populated.
@@ -127,7 +128,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_max_priority=10
+ -a 0002:02:00.0,flow_max_priority=10
With the above configuration, priority level was set to 10 (0-9). Max
priority level supported is 32.
@@ -139,7 +140,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_prealloc_size=4
+ -a 0002:02:00.0,flow_prealloc_size=4
With the above configuration, pre alloc size was set to 4. Max pre alloc
size supported is 32.
@@ -151,7 +152,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,max_sqb_count=64
+ -a 0002:02:00.0,max_sqb_count=64
With the above configuration, each send queue's decscriptor buffer count is
limited to a maximum of 64 buffers.
@@ -163,7 +164,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,switch_header="higig2"
+ -a 0002:02:00.0,switch_header="higig2"
With the above configuration, higig2 will be enabled on that port and the
traffic on this port should be higig2 traffic only. Supported switch header
@@ -185,7 +186,7 @@ Runtime Config Options
For example to select the legacy mode(RSS tag adder as XOR)::
- -w 0002:02:00.0,tag_as_xor=1
+ -a 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -194,7 +195,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,ipsec_in_max_spi=128
+ -a 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
@@ -205,7 +206,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_rx_ctx=1
+ -a 0002:02:00.0,lock_rx_ctx=1
- ``Lock Tx contexts in NDC cache``
@@ -213,7 +214,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_tx_ctx=1
+ -a 0002:02:00.0,lock_tx_ctx=1
.. note::
@@ -229,7 +230,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
.. _otx2_tmapi:
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 959b52c1c333..64322442a003 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -295,7 +295,7 @@ Per-Device Parameters
~~~~~~~~~~~~~~~~~~~~~
The following per-device parameters can be passed via EAL PCI device
-whitelist option like "-w 02:00.0,arg1=value1,...".
+allow option like "-a 02:00.0,arg1=value1,...".
Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
boolean parameters value.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 7e44f846206c..3ce696b605d1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -191,7 +191,7 @@ following::
.. Note:
- Change the ``-b`` options to blacklist all of your physical ports. The
+ Change the ``-b`` options to exclude all of your physical ports. The
following command line is all one line.
Also, ``-f themes/black-yellow.theme`` is optional if the default colors
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index a928a790e389..9da4281c8bd3 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -157,7 +157,7 @@ This section provides instructions to configure SR-IOV with Linux OS.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0002:01:00.2 \
-- -i --no-flush-rx \
--port-topology=loop
@@ -377,7 +377,7 @@ This scheme is useful when application would like to insert vlan header without
Example:
.. code-block:: console
- -w 0002:01:00.2,skip_data_bytes=8
+ -a 0002:01:00.2,skip_data_bytes=8
Limitations
-----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index a470fd7f29bb..9af4d6192fd4 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -407,12 +407,12 @@ device having emitted a Device Removal Event. In such case, calling
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such closing operation.
-Blacklisting
+Blocklisting
~~~~~~~~~~~~
-The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
+The EAL PCI device blocklist functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
-The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
+The ports to be blocklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
Misc Functions
~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a84083b96c8a..2d083b8a4f68 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -30,7 +30,7 @@ after a primary process has already configured the hugepage shared memory for th
Secondary processes should run alongside primary process with same DPDK version.
Secondary processes which requires access to physical devices in Primary process, must
- be passed with the same whitelist and blacklist options.
+ be passed with the same allow and block options.
To support these two process types, and other multi-process setups described later,
two additional command-line parameters are available to the EAL:
@@ -131,7 +131,7 @@ can use).
.. note::
Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
- Any network ports being used by one process should be blacklisted in every other process.
+ Any network ports being used by one process should be blocklisted in every other process.
Running Multiple Independent Groups of DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 86e0a141e6c7..239ec820eaf5 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
this argument allows user to specify which switch ports to enable port
representors for.::
- -w DBDF,representor=0
- -w DBDF,representor=[0,4,6,9]
- -w DBDF,representor=[0-31]
+ -a DBDF,representor=0
+ -a DBDF,representor=[0,4,6,9]
+ -a DBDF,representor=[0-31]
Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see support devargs.
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index cc1d0d7569cb..07ba12bea67e 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -59,9 +59,9 @@ which can be thought as a software "patch panel" front-end for applications.
::
- -w pci:dbdf,representor=0
- -w pci:dbdf,representor=[0-3]
- -w pci:dbdf,representor=[0,5-11]
+ -a pci:dbdf,representor=0
+ -a pci:dbdf,representor=[0-3]
+ -a pci:dbdf,representor=[0,5-11]
- As virtual devices, they may be more limited than their physical
counterparts, for instance by exposing only a subset of device
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d8ac359e51d4..57069ae4db4c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -543,6 +543,11 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The definitions related to including and excluding devices
+ have been changed from blacklist/whitelist to block/allow.
+ There are compatibility macros and command-line mappings to accept
+ the old values, but applications and scripts are strongly encouraged
+ to migrate to the new names.
ABI Changes
-----------
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 7c5a45b72afb..b2af9a0755d6 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -61,19 +61,19 @@ This means that HW baseband device/s must be bound to a DPDK driver or
a SW baseband device/s (virtual BBdev) must be created (using --vdev).
To run the application in linux environment with the turbo_sw baseband device
-using the whitelisted port running on 1 encoding lcore and 1 decoding lcore
+using the allow option for the PCI device, running on 1 encoding lcore and 1 decoding lcore,
issue the command:
.. code-block:: console
- $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -w <NIC0PCIADDR> \
+ $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -a <NIC0PCIADDR> \
-c 0x38 --socket-mem=2,2 --file-prefix=bbdev -- -e 0x10 -d 0x20
where, NIC0PCIADDR is the PCI address of the Rx port
This command creates one virtual bbdev devices ``baseband_turbo_sw`` where the
-device gets linked to a corresponding ethernet port as whitelisted by
-the parameter -w.
+device gets linked to a corresponding ethernet port as allowed by
+the parameter -a.
3 cores are allocated to the application, and assigned as:
- core 3 is the main and used to print the stats live on screen,
@@ -93,20 +93,20 @@ Using Packet Generator with baseband device sample application
To allow the bbdev sample app to do the loopback, an influx of traffic is required.
This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
it will print the transmitted along with the looped-back traffic on Rx ports.
-Executing the command below will generate traffic on the two whitelisted ethernet
+Executing the command below will generate traffic on the two allowed ethernet
ports.
.. code-block:: console
$ ./pktgen-3.4.0/app/x86_64-native-linux-gcc/pktgen -c 0x3 \
- --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+ --socket-mem=1,1 --file-prefix=pg -a <NIC1PCIADDR> -- -m 1.0 -P
where:
* ``-c COREMASK``: A hexadecimal bitmask of cores to run on
* ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
* ``--file-prefix``: Prefix for hugepage filenames
-* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-a <NIC1PCIADDR>``: Add a PCI device to the allow list. The argument format is <[domain:]bus:devid.func>.
* ``-m <string>``: Matrix for mapping ports to logical cores.
* ``-P``: PROMISCUOUS mode
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index b4fc587a09e2..41ee8b7ee3f4 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -46,8 +46,8 @@ these settings is shown below:
.. code-block:: console
- ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 /
- -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+ ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 \
+ -e4 -a FF00 -s4 -n0 -c32 -W1000 -D
The application has some sanity checking built-in, so if there is a function
(e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 1f37dccf8bb7..cb637abdfaf4 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -323,15 +323,15 @@ This means that if the application is using a single core and both hardware
and software crypto devices are detected, hardware devices will be used.
A way to achieve the case where you want to force the use of virtual crypto
-devices is to whitelist the Ethernet devices needed and therefore implicitly
-blacklisting all hardware crypto devices.
+devices is to allow only the Ethernet devices needed, thereby implicitly
+blocking all hardware crypto devices.
For example, something like the following command line:
.. code-block:: console
./<build_dir>/examples/dpdk-ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
- -w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
+ -a 81:00.0 -a 81:00.1 -a 81:00.2 -a 81:00.3 \
--vdev "crypto_aesni_mb" --vdev "crypto_null" \
-- \
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
@@ -929,13 +929,13 @@ The user must setup the following environment variables:
* ``REMOTE_IFACE``: interface name for the test-port on the DUT.
-* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-w <pci-id>')
+* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-a <pci-id>')
Also the user can optionally setup:
* ``SGW_LCORE``: lcore to run ipsec-secgw on (default value is 0)
-* ``CRYPTO_DEV``: crypto device to be used ('-w <pci-id>'). If none specified
+* ``CRYPTO_DEV``: crypto device to be used ('-a <pci-id>'). If none specified
appropriate vdevs will be created by the script
Scripts can be used for multiple test scenarios. To check all available
@@ -1023,4 +1023,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
\ No newline at end of file
+list of available modes please refer to run_test.sh.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 7acbd7404e3b..5d53bf633db7 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -138,17 +138,18 @@ Following is the sample command:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x3 --eventq-sched=ordered
or
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> \
+ -- -p 0x03 --mode=eventdev --eventq-sched=ordered
In this command:
-* -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform.
+* The -a option adds the event device supported by the platform. The way to pass this device may vary based on the platform.
* The --mode option defines PMD to be used for packet I/O.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 4a96800ec648..eee5d8185061 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -18,7 +18,7 @@ The application loads two types of rules at initialization:
* Route information rules, which are used for L3 forwarding
-* Access Control List (ACL) rules that blacklist (or block) packets with a specific characteristic
+* Access Control List (ACL) rules that block packets with a specific characteristic
When packets are received from a port,
the application extracts the necessary information from the TCP/IP header of the received packet and
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index d7e1dc581328..831f2bf58f99 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -378,7 +378,8 @@ See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Program
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+ ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -a 0000:xx:00.0 -a 0000:xx:00.1 \
+ -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
Where,
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index a8bedbab5321..9a7743146b82 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -52,7 +52,7 @@ Take IFCVF driver for example:
.. code-block:: console
./dpdk-vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
- -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+ -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
-- --interactive
.. note::
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 29340d94e801..73cabf0098d3 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -394,7 +394,7 @@ Call application for performance throughput test of single Aesni MB PMD
for cipher encryption aes-cbc and auth generation sha1-hmac,
one million operations, burst size 32, packet size 64::
- dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -a 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 12
@@ -404,7 +404,7 @@ Call application for performance latency test of two Aesni MB PMD executed
on two cores for cipher encryption aes-cbc, ten operations in silent mode::
dpdk-test-crypto-perf -l 4-7 --vdev crypto_aesni_mb1
- --vdev crypto_aesni_mb2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
+ --vdev crypto_aesni_mb2 -a 0000:00:00.0 -- --devtype crypto_aesni_mb
--cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
--cipher-op encrypt --optype cipher-only --silent
--ptest latency --total-ops 10
@@ -414,7 +414,7 @@ for cipher encryption aes-gcm and auth generation aes-gcm,ten operations
in silent mode, test vector provide in file "test_aes_gcm.data"
with packet verification::
- dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -a 0000:00:00.0 --
--devtype crypto_openssl --aead-algo aes-gcm --aead-key-sz 16
--aead-iv-sz 16 --aead-op encrypt --aead-aad-sz 16 --digest-sz 16
--optype aead --silent --ptest verify --total-ops 10
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 7e5dc0c54b1a..4771e8ecf04d 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -59,7 +59,7 @@ with a ``--`` separator:
.. code-block:: console
- sudo ./dpdk-test-flow_perf -n 4 -w 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
+ sudo ./dpdk-test-flow_perf -n 4 -a 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
The command line options are:
diff --git a/doc/guides/tools/testregex.rst b/doc/guides/tools/testregex.rst
index 4317aab533e2..112b2bb773e7 100644
--- a/doc/guides/tools/testregex.rst
+++ b/doc/guides/tools/testregex.rst
@@ -70,4 +70,4 @@ The data file, will be used as a source data for the RegEx to work on.
The tool has a number of command line options. Here is the sample command line::
- ./dpdk-test-regex -w 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
+ ./dpdk-test-regex -a 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
--
2.27.0
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v7 4/5] doc: change references to blacklist and whitelist
@ 2020-10-25 20:57 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-25 20:57 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Luca Boccassi
There are two areas where the documentation needed updates.
The first was the use of whitelist when describing address
filtering.
The other is the legacy -w whitelist option for PCI,
which is used in many examples.
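For example, the typical conversion in the guides looks like this
(a sketch; the PCI address and core list are placeholders)::

    # old, deprecated syntax
    dpdk-testpmd -l 0-3 -n 4 -w 0000:02:00.0 -- -i
    # new syntax
    dpdk-testpmd -l 0-3 -n 4 -a 0000:02:00.0 -- -i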
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
---
doc/guides/cryptodevs/dpaa2_sec.rst | 6 ++--
doc/guides/cryptodevs/dpaa_sec.rst | 6 ++--
doc/guides/cryptodevs/qat.rst | 12 ++++----
doc/guides/eventdevs/octeontx2.rst | 20 ++++++-------
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/eal_args.include.rst | 14 +++++-----
doc/guides/linux_gsg/linux_drivers.rst | 4 +--
doc/guides/mempool/octeontx2.rst | 4 +--
doc/guides/nics/bnxt.rst | 18 ++++++------
doc/guides/nics/cxgbe.rst | 12 ++++----
doc/guides/nics/dpaa.rst | 6 ++--
doc/guides/nics/dpaa2.rst | 6 ++--
doc/guides/nics/enic.rst | 6 ++--
doc/guides/nics/fail_safe.rst | 16 +++++------
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/i40e.rst | 16 +++++------
doc/guides/nics/ice.rst | 28 +++++++++++++------
doc/guides/nics/ixgbe.rst | 4 +--
doc/guides/nics/mlx4.rst | 18 ++++++------
doc/guides/nics/mlx5.rst | 14 +++++-----
doc/guides/nics/nfb.rst | 2 +-
doc/guides/nics/octeontx2.rst | 23 +++++++--------
doc/guides/nics/sfc_efx.rst | 2 +-
doc/guides/nics/tap.rst | 2 +-
doc/guides/nics/thunderx.rst | 4 +--
.../prog_guide/env_abstraction_layer.rst | 6 ++--
doc/guides/prog_guide/multi_proc_support.rst | 4 +--
doc/guides/prog_guide/poll_mode_drv.rst | 6 ++--
.../prog_guide/switch_representation.rst | 6 ++--
doc/guides/rel_notes/release_20_11.rst | 5 ++++
doc/guides/sample_app_ug/bbdev_app.rst | 14 +++++-----
.../sample_app_ug/eventdev_pipeline.rst | 4 +--
doc/guides/sample_app_ug/ipsec_secgw.rst | 12 ++++----
doc/guides/sample_app_ug/l3_forward.rst | 7 +++--
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
.../sample_app_ug/l3_forward_power_man.rst | 3 +-
doc/guides/sample_app_ug/vdpa.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 6 ++--
doc/guides/tools/flow-perf.rst | 2 +-
doc/guides/tools/testregex.rst | 2 +-
41 files changed, 175 insertions(+), 155 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
index 080768a2e766..83565d71752d 100644
--- a/doc/guides/cryptodevs/dpaa2_sec.rst
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -134,10 +134,10 @@ Supported DPAA2 SoCs
* LS2088A/LS2048A
* LS1088A/LS1048A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 SEC device, following commands can be used.
+The DPAA2 SEC device can be blocked with the following:
.. code-block:: console
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index da14a68d9cff..bac82421bca2 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -82,10 +82,10 @@ Supported DPAA SoCs
* LS1046A/LS1026A
* LS1043A/LS1023A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index f77ce91f76ee..f8d3d77474ff 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -127,7 +127,7 @@ Limitations
optimisations in the GEN3 device. And if a GCM session is initialised on a
GEN3 device, then attached to an op sent to a GEN1/GEN2 device, it will not be
enqueued to the device and will be marked as failed. The simplest way to
- mitigate this is to use the bdf whitelist to avoid mixing devices of different
+ mitigate this is to use the PCI allowlist to avoid mixing devices of different
generations in the same process if planning to use for GCM.
* The mixed algo feature on GEN2 is not supported by all kernel drivers. Check
the notes under the Available Kernel Drivers table below for specific details.
@@ -237,7 +237,7 @@ adjusted to the number of VFs which the QAT common code will need to handle.
QAT VF may expose two crypto devices, sym and asym, it may happen that the
number of devices will be bigger than MAX_DEVS and the process will show an error
during PMD initialisation. To avoid this problem RTE_CRYPTO_MAX_DEVS may be
- increased or -w, pci-whitelist domain:bus:devid:func option may be used.
+ increased, or the -a, --allow domain:bus:devid:func option may be used.
QAT compression PMD needs intermediate buffers to support Deflate compression
@@ -275,7 +275,7 @@ return 0 (thereby avoiding an MMIO) if the device is congested and number of pac
possible to enqueue is smaller.
To use this feature the user must set the parameter on process start as a device additional parameter::
- -w 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
+ -a 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
All parameters can be used with the same device regardless of order. Parameters are separated
by comma. When the same parameter is used more than once first occurrence of the parameter
@@ -632,19 +632,19 @@ Testing
QAT SYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_autotest
QAT ASYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_asym_autotest
QAT compression PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>compressdev_autotest
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 4f06e069847a..496b7199c8c9 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -55,7 +55,7 @@ Runtime Config Options
upper limit for in-flight events.
For example::
- -w 0002:0e:00.0,xae_cnt=16384
+ -a 0002:0e:00.0,xae_cnt=16384
- ``Force legacy mode``
@@ -63,7 +63,7 @@ Runtime Config Options
single workslot mode in SSO and disable the default dual workslot mode.
For example::
- -w 0002:0e:00.0,single_ws=1
+ -a 0002:0e:00.0,single_ws=1
- ``Event Group QoS support``
@@ -78,7 +78,7 @@ Runtime Config Options
default.
For example::
- -w 0002:0e:00.0,qos=[1-50-50-50]
+ -a 0002:0e:00.0,qos=[1-50-50-50]
- ``Selftest``
@@ -87,7 +87,7 @@ Runtime Config Options
The tests are run once the vdev creation is successfully complete.
For example::
- -w 0002:0e:00.0,selftest=1
+ -a 0002:0e:00.0,selftest=1
- ``TIM disable NPA``
@@ -96,7 +96,7 @@ Runtime Config Options
parameter disables NPA and uses software mempool to manage chunks
For example::
- -w 0002:0e:00.0,tim_disable_npa=1
+ -a 0002:0e:00.0,tim_disable_npa=1
- ``TIM modify chunk slots``
@@ -107,7 +107,7 @@ Runtime Config Options
to SSO. The default value is 255 and the max value is 4095.
For example::
- -w 0002:0e:00.0,tim_chnk_slots=1023
+ -a 0002:0e:00.0,tim_chnk_slots=1023
- ``TIM enable arm/cancel statistics``
@@ -115,7 +115,7 @@ Runtime Config Options
event timer adapter.
For example::
- -w 0002:0e:00.0,tim_stats_ena=1
+ -a 0002:0e:00.0,tim_stats_ena=1
- ``TIM limit max rings reserved``
@@ -125,7 +125,7 @@ Runtime Config Options
rings.
For example::
- -w 0002:0e:00.0,tim_rings_lmt=5
+ -a 0002:0e:00.0,tim_rings_lmt=5
- ``TIM ring control internal parameters``
@@ -135,7 +135,7 @@ Runtime Config Options
default values.
For Example::
- -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
- ``Lock NPA contexts in NDC``
@@ -145,7 +145,7 @@ Runtime Config Options
For example::
- -w 0002:0e:00.0,npa_lock_mask=0xf
+ -a 0002:0e:00.0,npa_lock_mask=0xf
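Note that several of the runtime options above can be stacked on one device
as comma-separated key=value pairs (a sketch; the values are illustrative
assumptions, not recommendations)::

    -a 0002:0e:00.0,xae_cnt=16384,single_ws=1,tim_stats_ena=1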
Debugging Options
-----------------
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 2a68f5fc3820..4fba671e4f5b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -67,7 +67,7 @@ DPDK application. Some of the EAL options for FreeBSD are as follows:
is a list of cores to use instead of a core mask.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 542246df686a..043a1dcee109 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -53,7 +53,7 @@ The EAL options are as follows:
Number of memory channels per processor socket.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 01afa1b42f94..dbd48ab4fafa 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -44,20 +44,20 @@ Lcore-related options
Device-related options
~~~~~~~~~~~~~~~~~~~~~~
-* ``-b, --pci-blacklist <[domain:]bus:devid.func>``
+* ``-b, --block <[domain:]bus:devid.func>``
- Blacklist a PCI device to prevent EAL from using it. Multiple -b options are
- allowed.
+ Skip probing a PCI device to prevent EAL from using it.
+ Multiple -b options are allowed.
.. Note::
- PCI blacklist cannot be used with ``-w`` option.
+ PCI skip probe (``-b``) cannot be used together with the allow ``-a`` option.
-* ``-w, --pci-whitelist <[domain:]bus:devid.func>``
+* ``-a, --allow <[domain:]bus:devid.func>``
- Add a PCI device in white list.
+ Add a PCI device to the list of probed devices.
.. Note::
- PCI whitelist cannot be used with ``-b`` option.
+ PCI allow list (``-a``) cannot be used together with the skip probe ``-b`` option.
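Taken together, the two notes above mean an invocation may use ``-a`` or
``-b`` but never both (a sketch; addresses are placeholders)::

    dpdk-testpmd -a 0000:02:00.0 -- -i                  # allow list only: ok
    dpdk-testpmd -b 0000:02:00.0 -- -i                  # block list only: ok
    dpdk-testpmd -a 0000:02:00.0 -b 0000:03:00.0 -- -i  # mixing: rejected by EAL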
* ``--vdev <device arguments>``
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 080b44955a11..ef8798569a80 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -93,11 +93,11 @@ parameter ``--vfio-vf-token``.
3. echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
4. Start the PF:
- <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -w 86:00.0 \
+ <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -a 86:00.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=pf -- -i
5. Start the VF:
- <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -w 86:02.0 \
+ <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -a 86:02.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=vf0 -- -i
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 53f09a52dbb5..1272c1e72b7b 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -42,7 +42,7 @@ Runtime Config Options
for the application.
For example::
- -w 0002:02:00.0,max_pools=512
+ -a 0002:02:00.0,max_pools=512
With the above configuration, the driver will set up only 512 mempools for
the given application to save HW resources.
@@ -61,7 +61,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 2540ddd5c2f5..97033958b758 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -258,8 +258,8 @@ The BNXT PMD supports hardware-based packet filtering:
Unicast MAC Filter
^^^^^^^^^^^^^^^^^^
-The application adds (or removes) MAC addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) MAC addresses to enable (or disable)
+filtering on the MAC addresses used to accept packets.
.. code-block:: console
@@ -269,8 +269,8 @@ whitelist filtering to accept packets.
Multicast MAC Filter
^^^^^^^^^^^^^^^^^^^^
-Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) multicast addresses to enable (or disable)
+filtering on the multicast MAC addresses used to accept packets.
.. code-block:: console
@@ -278,7 +278,7 @@ whitelist filtering to accept packets.
testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+allowlist filtering to accept packets.
Note that the BNXT PMD supports up to 16 MC MAC filters. if the user adds more
than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
@@ -683,7 +683,7 @@ The feature uses a newly implemented control-plane firmware interface which
optimizes flow insertions and deletions.
This is a tech preview feature, and is disabled by default. It can be enabled
-using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
+using bnxt devargs. For example: "-a 0000:0d:00.0,host-based-truflow=1".
Notes
-----
@@ -725,7 +725,7 @@ when the PMD is initialized on a PF or trusted-VF. The user can specify the list
of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Note that currently hot-plugging of representor ports is not supported so all
the required representors must be specified on the creation of the PF or the
@@ -750,12 +750,12 @@ same host domain, additional dev args have been added to the PMD.
The sample command line with the new ``devargs`` looks like this::
- -w 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
+ -a 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
.. code-block:: console
- testpmd -l1-4 -n2 -w 0008:01:00.0,host-based-truflow=1,\
+ testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
representor=[0], rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 442ab1511c64..8c2985cad04a 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -40,8 +40,8 @@ expose a single PCI bus address, thus, librte_pmd_cxgbe registers
itself as a PCI driver that allocates one Ethernet device per detected
port.
-For this reason, one cannot whitelist/blacklist a single port without
-whitelisting/blacklisting the other ports on the same device.
+For this reason, one cannot allow/block a single port without
+allowing/blocking the other ports on the same device.
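Concretely, a minimal sketch (the bus address is a placeholder): allowing
the one address exposes every port on that adapter::

    dpdk-testpmd -a 0000:02:00.4 -- -i   # all ports behind 02:00.4 are probed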
.. _t5-nics:
@@ -96,7 +96,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- dpdk-testpmd -w 02:00.4,keep_ovlan=1 -- -i
+ dpdk-testpmd -a 02:00.4,keep_ovlan=1 -- -i
Common Runtime Options
~~~~~~~~~~~~~~~~~~~~~~
@@ -301,7 +301,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88 -- -i
- ``filtermask`` (default **0**)
@@ -328,7 +328,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
.. _driver-compilation:
@@ -760,7 +760,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0000:02:00.4 -- -i
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:02:00.4 -- -i
Example output:
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 1deb7faaa50c..9ae5109234eb 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -163,10 +163,10 @@ Manager.
this pool.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 01e37d462102..b79780abc1a5 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -503,10 +503,10 @@ which are lower than logging ``level``.
Using ``pmd.net.dpaa2`` as log matching criteria, all PMD logs can be enabled
which are lower than logging ``level``.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 device, following commands can be used.
+For blocking a DPAA2 device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index c62448768376..163ae3f47b11 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -305,7 +305,7 @@ enables overlay offload, it prints the following message on the console.
By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::
- -w 12:00.0,disable-overlay=1
+ -a 12:00.0,disable-overlay=1
By default, the NIC uses 4789 as the VXLAN port. The user may change
it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
@@ -371,7 +371,7 @@ vectorized handler, take the following steps.
PMD consider the vectorized handler when selecting the receive handler.
For example::
- -w 12:00.0,enable-avx2-rx=1
+ -a 12:00.0,enable-avx2-rx=1
As the current implementation is intended for field trials, by default, the
vectorized handler is not considered (``enable-avx2-rx=0``).
@@ -420,7 +420,7 @@ DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
- -w 12:00.0,ig-vlan-rewrite=untag
+ -a 12:00.0,ig-vlan-rewrite=untag
- **SR-IOV**
diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst
index e1b5c80d6c91..9a9cf5bfbc3d 100644
--- a/doc/guides/nics/fail_safe.rst
+++ b/doc/guides/nics/fail_safe.rst
@@ -48,7 +48,7 @@ Fail-safe command line parameters
This parameter allows the user to define a sub-device. The ``<iface>`` part of
this parameter must be a valid device definition. It follows the same format
- provided to any ``-w`` or ``--vdev`` options.
+ provided to any ``-a`` or ``--vdev`` options.
Enclosing the device definition within parentheses here allows using
additional sub-device parameters if need be. They will be passed on to the
@@ -56,11 +56,11 @@ Fail-safe command line parameters
.. note::
- In case where the sub-device is also used as a whitelist device, using ``-w``
+ In the case where the sub-device is also used as an allowed device, using ``-a``
on the EAL command line, the fail-safe PMD will use the device with the
options provided to the EAL instead of its own parameters.
- When trying to use a PCI device automatically probed by the blacklist mode,
+ When trying to use a PCI device automatically probed in blocklist mode,
the name for the fail-safe sub-device must be the full PCI id:
Domain:Bus:Device.Function, *i.e.* ``00:00:00.0`` instead of ``00:00.0``,
as the second form is historically accepted by the DPDK.
@@ -111,8 +111,8 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
#. To build a PMD and configure DPDK, refer to the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.
-#. Start testpmd. The sub-device ``84:00.0`` should be blacklisted from normal EAL
- operations to avoid probing it twice, as the PCI bus is in blacklist mode.
+#. Start testpmd. The sub-device ``84:00.0`` should be blocked from normal EAL
+ operations to avoid probing it twice, as the PCI bus is in blocklist mode.
.. code-block:: console
@@ -120,13 +120,13 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
--vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
-b 84:00.0 -b 00:04.0 -- -i
- If the sub-device ``84:00.0`` is not blacklisted, it will be probed by the
+ If the sub-device ``84:00.0`` is not blocked, it will be probed by the
EAL first. When the fail-safe then tries to initialize it the probe operation
fails.
- Note that PCI blacklist mode is the default PCI operating mode.
+ Note that PCI blocklist mode is the default PCI operating mode.
-#. Alternatively, it can be used alongside any other device in whitelist mode.
+#. Alternatively, it can be used alongside any other device in allow mode.
.. code-block:: console
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 234bf066b9f6..6458bfc42e1f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -261,7 +261,7 @@ Supports enabling/disabling receiving multicast frames.
Unicast MAC filter
------------------
-Supports adding MAC addresses to enable whitelist filtering to accept packets.
+Supports adding MAC addresses to enable incoming filtering of packets.
* **[implements] eth_dev_ops**: ``mac_addr_set``, ``mac_addr_add``, ``mac_addr_remove``.
* **[implements] rte_eth_dev_data**: ``mac_addrs``.
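For instance, such filters are typically exercised from testpmd (a sketch;
the port id and address are placeholders)::

    testpmd> mac_addr add 0 00:11:22:33:44:55
    testpmd> mac_addr remove 0 00:11:22:33:44:55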
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index 5cf85d94cc34..488a9ec22450 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -172,7 +172,7 @@ Runtime Config Options
The number of reserved queue per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
- VF can be configured with EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
+ VF can be configured with EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If VF request more than
reserved queues per VF, PF will able to allocate max to 16 queues after a VF
@@ -185,7 +185,7 @@ Runtime Config Options
Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
parameter ``support-multi-driver`` is introduced, for example::
- -w 84:00.0,support-multi-driver=1
+ -a 84:00.0,support-multi-driver=1
With the above configuration, DPDK PMD will not change global registers, and
will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
@@ -200,7 +200,7 @@ Runtime Config Options
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
@@ -212,7 +212,7 @@ Runtime Config Options
since it can get better perf in some real work loading cases. So ``devargs`` param
``use-latest-supported-vec`` is introduced, for example::
- -w 84:00.0,use-latest-supported-vec=1
+ -a 84:00.0,use-latest-supported-vec=1
- ``Enable validation for VF message`` (default ``not enabled``)
@@ -222,7 +222,7 @@ Runtime Config Options
Format -- "maximal-message@period-seconds:ignore-seconds"
For example::
- -w 84:00.0,vf_msg_cfg=80@120:180
+ -a 84:00.0,vf_msg_cfg=80@120:180
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -458,7 +458,7 @@ no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::
- -w 84:00.0,enable_floating_veb=1
+ -a 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.
@@ -466,7 +466,7 @@ VFs created by this PF device.
Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::
- -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
+ -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
@@ -802,7 +802,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
7. The command line of running l3fwd would be something like the following::
- ./dpdk-l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+ ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index a2aea1233376..6e4d53968f75 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -30,7 +30,7 @@ Runtime Config Options
But if user intend to use the device without OS package, user can take ``devargs``
parameter ``safe-mode-support``, for example::
- -w 80:00.0,safe-mode-support=1
+ -a 80:00.0,safe-mode-support=1
Then the driver will be initialized successfully and the device will enter Safe Mode.
NOTE: In Safe mode, only very limited features are available, features like RSS,
@@ -41,7 +41,7 @@ Runtime Config Options
In pipeline mode, a flow can be set at one specific stage by setting parameter
``priority``. Currently, we support two stages: priority = 0 or !0. Flows with
priority 0 located at the first pipeline stage which typically be used as a firewall
- to drop the packet on a blacklist(we called it permission stage). At this stage,
+ to drop the packet on a blocklist (we call it the permission stage). At this stage,
flow rules are created for the device's exact match engine: switch. Flows with priority
!0 located at the second stage, typically packets are classified here and be steered to
specific queue or queue group (we called it distribution stage), At this stage, flow
@@ -53,7 +53,19 @@ Runtime Config Options
use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``,
for example::
- -w 80:00.0,pipeline-mode-support=1
+ -a 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+ This is a hint to the driver to select the data path that supports flow mark extraction
+ by default.
+ NOTE: This is an experimental devarg; it will be removed when any of the below
+ conditions is met.
+ 1) all data paths support flow mark (currently vPMD does not)
+ 2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.
+ Example::
+
+ -a 80:00.0,flow-mark-support=1
- ``Protocol extraction for per queue``
@@ -62,8 +74,8 @@ Runtime Config Options
The argument format is::
- -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
- -w 18:00.0,proto_xtr=<protocol>
+ -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+ -a 18:00.0,proto_xtr=<protocol>
Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
is used as a range separator and ``,`` is used as a single number separator.
@@ -74,14 +86,14 @@ Runtime Config Options
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+ dpdk-testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
VLAN extraction, other queues run with no protocol extraction.
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+ dpdk-testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
IPv6 extraction, other queues use the default VLAN extraction.
@@ -233,7 +245,7 @@ responses for the same from PF.
#. Bind the VF0, and run testpmd with 'cap=dcf' devarg::
- dpdk-testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i
+ dpdk-testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i
#. Monitor the VF2 interface network traffic::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 1f424b38ac3d..c801dbae8146 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,7 +89,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w af:10.0,pflink_fullchk=1 -- -i
+ testpmd -a af:10.0,pflink_fullchk=1 -- -i
- ``pflink_fullchk`` (default **0**)
@@ -277,7 +277,7 @@ option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index ed920e91ad51..cea7e8c2c4e3 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -24,8 +24,8 @@ Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.
-For this reason, one cannot white/blacklist a single port without also
-white/blacklisting the others on the same device.
+For this reason, one cannot block (or allow) a single port without also
+blocking (or allowing) the others on the same device.
Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
@@ -381,7 +381,7 @@ devices managed by librte_pmd_mlx4.
eth4
eth5
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow argument::
{
for intf in eth2 eth3 eth4 eth5;
@@ -389,14 +389,14 @@ devices managed by librte_pmd_mlx4.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:83:00.0
- -w 0000:83:00.0
- -w 0000:84:00.0
- -w 0000:84:00.0
+ -a 0000:83:00.0
+ -a 0000:83:00.0
+ -a 0000:84:00.0
+ -a 0000:84:00.0
.. note::
@@ -409,7 +409,7 @@ devices managed by librte_pmd_mlx4.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 0000:83:00.0 -a 0000:84:00.0 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index afa65a1379a5..5077e06a98a2 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1488,7 +1488,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
eth32
eth33
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow list::
{
for intf in eth2 eth3 eth4 eth5;
@@ -1496,14 +1496,14 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:05:00.1
- -w 0000:06:00.0
- -w 0000:06:00.1
- -w 0000:05:00.0
+ -a 0000:05:00.1
+ -a 0000:06:00.0
+ -a 0000:06:00.1
+ -a 0000:05:00.0
#. Request huge pages::
@@ -1511,7 +1511,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/nfb.rst b/doc/guides/nics/nfb.rst
index ecea3ecff074..e987f331048c 100644
--- a/doc/guides/nics/nfb.rst
+++ b/doc/guides/nics/nfb.rst
@@ -63,7 +63,7 @@ products) and the device argument `timestamp=1` must be used.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -w b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
+ ./<build_dir>/app/dpdk-testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the MBUFs
containing received frames and timestamp is inserted into the `rte_mbuf` struct.
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 7c04b5e60040..3c42e8585835 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -63,7 +63,8 @@ for details.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 \
+ -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
@@ -116,7 +117,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,reta_size=256
+ -a 0002:02:00.0,reta_size=256
With the above configuration, reta table of size 256 is populated.
@@ -127,7 +128,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_max_priority=10
+ -a 0002:02:00.0,flow_max_priority=10
With the above configuration, priority level was set to 10 (0-9). Max
priority level supported is 32.
@@ -139,7 +140,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_prealloc_size=4
+ -a 0002:02:00.0,flow_prealloc_size=4
With the above configuration, pre alloc size was set to 4. Max pre alloc
size supported is 32.
@@ -151,7 +152,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,max_sqb_count=64
+ -a 0002:02:00.0,max_sqb_count=64
With the above configuration, each send queue's descriptor buffer count is
limited to a maximum of 64 buffers.
@@ -163,7 +164,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,switch_header="higig2"
+ -a 0002:02:00.0,switch_header="higig2"
With the above configuration, higig2 will be enabled on that port and the
traffic on this port should be higig2 traffic only. Supported switch header
@@ -185,7 +186,7 @@ Runtime Config Options
For example to select the legacy mode(RSS tag adder as XOR)::
- -w 0002:02:00.0,tag_as_xor=1
+ -a 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -194,7 +195,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,ipsec_in_max_spi=128
+ -a 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
@@ -205,7 +206,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_rx_ctx=1
+ -a 0002:02:00.0,lock_rx_ctx=1
- ``Lock Tx contexts in NDC cache``
@@ -213,7 +214,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_tx_ctx=1
+ -a 0002:02:00.0,lock_tx_ctx=1
.. note::
@@ -229,7 +230,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
.. _otx2_tmapi:
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 959b52c1c333..64322442a003 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -295,7 +295,7 @@ Per-Device Parameters
~~~~~~~~~~~~~~~~~~~~~
The following per-device parameters can be passed via EAL PCI device
-whitelist option like "-w 02:00.0,arg1=value1,...".
+allow option like "-a 02:00.0,arg1=value1,...".
Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
boolean parameters value.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 7e44f846206c..3ce696b605d1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -191,7 +191,7 @@ following::
.. Note:
- Change the ``-b`` options to blacklist all of your physical ports. The
+ Change the ``-b`` options to exclude all of your physical ports. The
following command line is all one line.
Also, ``-f themes/black-yellow.theme`` is optional if the default colors
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index a928a790e389..9da4281c8bd3 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -157,7 +157,7 @@ This section provides instructions to configure SR-IOV with Linux OS.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0002:01:00.2 \
-- -i --no-flush-rx \
--port-topology=loop
@@ -377,7 +377,7 @@ This scheme is useful when application would like to insert vlan header without
Example:
.. code-block:: console
- -w 0002:01:00.2,skip_data_bytes=8
+ -a 0002:01:00.2,skip_data_bytes=8
Limitations
-----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index a470fd7f29bb..9af4d6192fd4 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -407,12 +407,12 @@ device having emitted a Device Removal Event. In such case, calling
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such closing operation.
-Blacklisting
+Blocklisting
~~~~~~~~~~~~
-The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
+The EAL PCI device blocklist functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
-The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
+The ports to be blocklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
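For example, a minimal sketch (placeholder address)::

    dpdk-testpmd -b 0000:01:00.0 -- -i   # 0000:01:00.0 is ignored by EAL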
Misc Functions
~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a84083b96c8a..2d083b8a4f68 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -30,7 +30,7 @@ after a primary process has already configured the hugepage shared memory for th
Secondary processes should run alongside primary process with same DPDK version.
Secondary processes which requires access to physical devices in Primary process, must
- be passed with the same whitelist and blacklist options.
+ be passed with the same allow and block options.
To support these two process types, and other multi-process setups described later,
two additional command-line parameters are available to the EAL:
@@ -131,7 +131,7 @@ can use).
.. note::
Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
- Any network ports being used by one process should be blacklisted in every other process.
+ Any network ports being used by one process should be blocklisted in every other process.
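A minimal sketch of this rule (the address and arguments are placeholders;
both processes pass the identical allow list)::

    dpdk-testpmd --proc-type=primary   -a 0000:02:00.0 -- -i
    dpdk-testpmd --proc-type=secondary -a 0000:02:00.0 -- -i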
Running Multiple Independent Groups of DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 86e0a141e6c7..239ec820eaf5 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
this argument allows user to specify which switch ports to enable port
representors for.::
- -w DBDF,representor=0
- -w DBDF,representor=[0,4,6,9]
- -w DBDF,representor=[0-31]
+ -a DBDF,representor=0
+ -a DBDF,representor=[0,4,6,9]
+ -a DBDF,representor=[0-31]
Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see support devargs.
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index cc1d0d7569cb..07ba12bea67e 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -59,9 +59,9 @@ which can be thought as a software "patch panel" front-end for applications.
::
- -w pci:dbdf,representor=0
- -w pci:dbdf,representor=[0-3]
- -w pci:dbdf,representor=[0,5-11]
+ -a pci:dbdf,representor=0
+ -a pci:dbdf,representor=[0-3]
+ -a pci:dbdf,representor=[0,5-11]
- As virtual devices, they may be more limited than their physical
counterparts, for instance by exposing only a subset of device
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d8ac359e51d4..57069ae4db4c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -543,6 +543,11 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The definitions related to including and excluding devices
+ have been changed from blacklist/whitelist to allow/block.
+ There are compatibility macros and a command-line mapping to accept
+ the old values, but applications and scripts are strongly encouraged
+ to migrate to the new names.
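For quick reference, the option mapping is (a summary of the EAL changes
earlier in this patch, not additional release-note text)::

    -b, --pci-blacklist  ->  -b, --block
    -w, --pci-whitelist  ->  -a, --allow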
ABI Changes
-----------
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 7c5a45b72afb..b2af9a0755d6 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -61,19 +61,19 @@ This means that HW baseband device/s must be bound to a DPDK driver or
a SW baseband device/s (virtual BBdev) must be created (using --vdev).
To run the application in linux environment with the turbo_sw baseband device
-using the whitelisted port running on 1 encoding lcore and 1 decoding lcore
+using the allow option for the PCI device, running on 1 encoding lcore and 1 decoding lcore,
issue the command:
.. code-block:: console
- $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -w <NIC0PCIADDR> \
+ $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -a <NIC0PCIADDR> \
-c 0x38 --socket-mem=2,2 --file-prefix=bbdev -- -e 0x10 -d 0x20
where, NIC0PCIADDR is the PCI address of the Rx port
This command creates one virtual bbdev devices ``baseband_turbo_sw`` where the
-device gets linked to a corresponding ethernet port as whitelisted by
-the parameter -w.
+device gets linked to a corresponding ethernet port as allowed by
+the parameter -a.
3 cores are allocated to the application, and assigned as:
- core 3 is the main and used to print the stats live on screen,
@@ -93,20 +93,20 @@ Using Packet Generator with baseband device sample application
To allow the bbdev sample app to do the loopback, an influx of traffic is required.
This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
it will print the transmitted along with the looped-back traffic on Rx ports.
-Executing the command below will generate traffic on the two whitelisted ethernet
+Executing the command below will generate traffic on the two allowed ethernet
ports.
.. code-block:: console
$ ./pktgen-3.4.0/app/x86_64-native-linux-gcc/pktgen -c 0x3 \
- --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+ --socket-mem=1,1 --file-prefix=pg -a <NIC1PCIADDR> -- -m 1.0 -P
where:
* ``-c COREMASK``: A hexadecimal bitmask of cores to run on
* ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
* ``--file-prefix``: Prefix for hugepage filenames
-* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-a <NIC1PCIADDR>``: Add a PCI device to the allow list. The argument format is <[domain:]bus:devid.func>.
* ``-m <string>``: Matrix for mapping ports to logical cores.
* ``-P``: PROMISCUOUS mode
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index b4fc587a09e2..41ee8b7ee3f4 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -46,8 +46,8 @@ these settings is shown below:
.. code-block:: console
- ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 /
- -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+ ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 \
+ -e4 -a FF00 -s4 -n0 -c32 -W1000 -D
The application has some sanity checking built-in, so if there is a function
(e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 1f37dccf8bb7..cb637abdfaf4 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -323,15 +323,15 @@ This means that if the application is using a single core and both hardware
and software crypto devices are detected, hardware devices will be used.
A way to achieve the case where you want to force the use of virtual crypto
-devices is to whitelist the Ethernet devices needed and therefore implicitly
-blacklisting all hardware crypto devices.
+devices is to allow the Ethernet devices needed, thereby implicitly
+blocking all hardware crypto devices.
For example, something like the following command line:
.. code-block:: console
./<build_dir>/examples/dpdk-ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
- -w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
+ -a 81:00.0 -a 81:00.1 -a 81:00.2 -a 81:00.3 \
--vdev "crypto_aesni_mb" --vdev "crypto_null" \
-- \
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
@@ -929,13 +929,13 @@ The user must setup the following environment variables:
* ``REMOTE_IFACE``: interface name for the test-port on the DUT.
-* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-w <pci-id>')
+* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-a <pci-id>')
Also the user can optionally setup:
* ``SGW_LCORE``: lcore to run ipsec-secgw on (default value is 0)
-* ``CRYPTO_DEV``: crypto device to be used ('-w <pci-id>'). If none specified
+* ``CRYPTO_DEV``: crypto device to be used ('-a <pci-id>'). If none specified
appropriate vdevs will be created by the script
Scripts can be used for multiple test scenarios. To check all available
@@ -1023,4 +1023,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
\ No newline at end of file
+list of available modes please refer to run_test.sh.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 7acbd7404e3b..5d53bf633db7 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -138,17 +138,18 @@ Following is the sample command:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x3 --eventq-sched=ordered
or
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> \
+ -- -p 0x03 --mode=eventdev --eventq-sched=ordered
In this command:
-* -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform.
+* -a option adds the event device supported by the platform. The way to pass this device may vary based on the platform.
* The --mode option defines PMD to be used for packet I/O.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 4a96800ec648..eee5d8185061 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -18,7 +18,7 @@ The application loads two types of rules at initialization:
* Route information rules, which are used for L3 forwarding
-* Access Control List (ACL) rules that blacklist (or block) packets with a specific characteristic
+* Access Control List (ACL) rules that block packets with a specific characteristic
When packets are received from a port,
the application extracts the necessary information from the TCP/IP header of the received packet and
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index d7e1dc581328..831f2bf58f99 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -378,7 +378,8 @@ See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Program
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+ ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -a 0000:xx:00.0 -a 0000:xx:00.1 \
+ -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
Where,
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index a8bedbab5321..9a7743146b82 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -52,7 +52,7 @@ Take IFCVF driver for example:
.. code-block:: console
./dpdk-vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
- -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+ -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
-- --interactive
.. note::
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 29340d94e801..73cabf0098d3 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -394,7 +394,7 @@ Call application for performance throughput test of single Aesni MB PMD
for cipher encryption aes-cbc and auth generation sha1-hmac,
one million operations, burst size 32, packet size 64::
- dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -a 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 12
@@ -404,7 +404,7 @@ Call application for performance latency test of two Aesni MB PMD executed
on two cores for cipher encryption aes-cbc, ten operations in silent mode::
dpdk-test-crypto-perf -l 4-7 --vdev crypto_aesni_mb1
- --vdev crypto_aesni_mb2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
+ --vdev crypto_aesni_mb2 -a 0000:00:00.0 -- --devtype crypto_aesni_mb
--cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
--cipher-op encrypt --optype cipher-only --silent
--ptest latency --total-ops 10
@@ -414,7 +414,7 @@ for cipher encryption aes-gcm and auth generation aes-gcm,ten operations
in silent mode, test vector provided in file "test_aes_gcm.data"
with packet verification::
- dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -a 0000:00:00.0 --
--devtype crypto_openssl --aead-algo aes-gcm --aead-key-sz 16
--aead-iv-sz 16 --aead-op encrypt --aead-aad-sz 16 --digest-sz 16
--optype aead --silent --ptest verify --total-ops 10
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 7e5dc0c54b1a..4771e8ecf04d 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -59,7 +59,7 @@ with a ``--`` separator:
.. code-block:: console
- sudo ./dpdk-test-flow_perf -n 4 -w 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
+ sudo ./dpdk-test-flow_perf -n 4 -a 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
The command line options are:
diff --git a/doc/guides/tools/testregex.rst b/doc/guides/tools/testregex.rst
index 4317aab533e2..112b2bb773e7 100644
--- a/doc/guides/tools/testregex.rst
+++ b/doc/guides/tools/testregex.rst
@@ -70,4 +70,4 @@ The data file, will be used as a source data for the RegEx to work on.
The tool has a number of command line options. Here is the sample command line::
- ./dpdk-test-regex -w 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
+ ./dpdk-test-regex -a 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
--
2.27.0
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v6 5/5] doc: change references to blacklist and whitelist
@ 2020-10-25 16:57 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-25 16:57 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Luca Boccassi
There are two areas where the documentation needed updates.
The first was the use of whitelist when describing address
filtering.
The other is the legacy -w whitelist option for PCI,
which is used in many examples.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
---
doc/guides/cryptodevs/dpaa2_sec.rst | 6 ++--
doc/guides/cryptodevs/dpaa_sec.rst | 6 ++--
doc/guides/cryptodevs/qat.rst | 12 ++++----
doc/guides/eventdevs/octeontx2.rst | 20 ++++++-------
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/eal_args.include.rst | 14 +++++-----
doc/guides/linux_gsg/linux_drivers.rst | 4 +--
doc/guides/mempool/octeontx2.rst | 4 +--
doc/guides/nics/bnxt.rst | 18 ++++++------
doc/guides/nics/cxgbe.rst | 12 ++++----
doc/guides/nics/dpaa.rst | 6 ++--
doc/guides/nics/dpaa2.rst | 6 ++--
doc/guides/nics/enic.rst | 6 ++--
doc/guides/nics/fail_safe.rst | 16 +++++------
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/i40e.rst | 16 +++++------
doc/guides/nics/ice.rst | 28 +++++++++++++------
doc/guides/nics/ixgbe.rst | 4 +--
doc/guides/nics/mlx4.rst | 18 ++++++------
doc/guides/nics/mlx5.rst | 14 +++++-----
doc/guides/nics/nfb.rst | 2 +-
doc/guides/nics/octeontx2.rst | 23 +++++++--------
doc/guides/nics/sfc_efx.rst | 2 +-
doc/guides/nics/tap.rst | 2 +-
doc/guides/nics/thunderx.rst | 4 +--
.../prog_guide/env_abstraction_layer.rst | 6 ++--
doc/guides/prog_guide/multi_proc_support.rst | 4 +--
doc/guides/prog_guide/poll_mode_drv.rst | 6 ++--
.../prog_guide/switch_representation.rst | 6 ++--
doc/guides/rel_notes/release_20_11.rst | 5 ++++
doc/guides/sample_app_ug/bbdev_app.rst | 14 +++++-----
.../sample_app_ug/eventdev_pipeline.rst | 4 +--
doc/guides/sample_app_ug/ipsec_secgw.rst | 12 ++++----
doc/guides/sample_app_ug/l3_forward.rst | 7 +++--
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
.../sample_app_ug/l3_forward_power_man.rst | 3 +-
doc/guides/sample_app_ug/vdpa.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 6 ++--
doc/guides/tools/flow-perf.rst | 2 +-
doc/guides/tools/testregex.rst | 2 +-
41 files changed, 175 insertions(+), 155 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
index 080768a2e766..83565d71752d 100644
--- a/doc/guides/cryptodevs/dpaa2_sec.rst
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -134,10 +134,10 @@ Supported DPAA2 SoCs
* LS2088A/LS2048A
* LS1088A/LS1048A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 SEC device, following commands can be used.
+The DPAA2 SEC device can be blocked with the following:
.. code-block:: console
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index da14a68d9cff..bac82421bca2 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -82,10 +82,10 @@ Supported DPAA SoCs
* LS1046A/LS1026A
* LS1043A/LS1023A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index f77ce91f76ee..f8d3d77474ff 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -127,7 +127,7 @@ Limitations
optimisations in the GEN3 device. And if a GCM session is initialised on a
GEN3 device, then attached to an op sent to a GEN1/GEN2 device, it will not be
enqueued to the device and will be marked as failed. The simplest way to
- mitigate this is to use the bdf whitelist to avoid mixing devices of different
+ mitigate this is to use the PCI allowlist to avoid mixing devices of different
generations in the same process if planning to use for GCM.
* The mixed algo feature on GEN2 is not supported by all kernel drivers. Check
the notes under the Available Kernel Drivers table below for specific details.
@@ -237,7 +237,7 @@ adjusted to the number of VFs which the QAT common code will need to handle.
QAT VF may expose two crypto devices, sym and asym, it may happen that the
number of devices will be bigger than MAX_DEVS and the process will show an error
during PMD initialisation. To avoid this problem RTE_CRYPTO_MAX_DEVS may be
- increased or -w, pci-whitelist domain:bus:devid:func option may be used.
+ increased, or the -a (allow) domain:bus:devid:func option may be used.
QAT compression PMD needs intermediate buffers to support Deflate compression
@@ -275,7 +275,7 @@ return 0 (thereby avoiding an MMIO) if the device is congested and number of pac
possible to enqueue is smaller.
To use this feature the user must set the parameter on process start as a device additional parameter::
- -w 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
+ -a 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
All parameters can be used with the same device regardless of order. Parameters are separated
by comma. When the same parameter is used more than once first occurrence of the parameter
@@ -632,19 +632,19 @@ Testing
QAT SYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_autotest
QAT ASYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_asym_autotest
QAT compression PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>compressdev_autotest
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 4f06e069847a..496b7199c8c9 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -55,7 +55,7 @@ Runtime Config Options
upper limit for in-flight events.
For example::
- -w 0002:0e:00.0,xae_cnt=16384
+ -a 0002:0e:00.0,xae_cnt=16384
- ``Force legacy mode``
@@ -63,7 +63,7 @@ Runtime Config Options
single workslot mode in SSO and disable the default dual workslot mode.
For example::
- -w 0002:0e:00.0,single_ws=1
+ -a 0002:0e:00.0,single_ws=1
- ``Event Group QoS support``
@@ -78,7 +78,7 @@ Runtime Config Options
default.
For example::
- -w 0002:0e:00.0,qos=[1-50-50-50]
+ -a 0002:0e:00.0,qos=[1-50-50-50]
- ``Selftest``
@@ -87,7 +87,7 @@ Runtime Config Options
The tests are run once the vdev creation is successfully complete.
For example::
- -w 0002:0e:00.0,selftest=1
+ -a 0002:0e:00.0,selftest=1
- ``TIM disable NPA``
@@ -96,7 +96,7 @@ Runtime Config Options
parameter disables NPA and uses software mempool to manage chunks
For example::
- -w 0002:0e:00.0,tim_disable_npa=1
+ -a 0002:0e:00.0,tim_disable_npa=1
- ``TIM modify chunk slots``
@@ -107,7 +107,7 @@ Runtime Config Options
to SSO. The default value is 255 and the max value is 4095.
For example::
- -w 0002:0e:00.0,tim_chnk_slots=1023
+ -a 0002:0e:00.0,tim_chnk_slots=1023
- ``TIM enable arm/cancel statistics``
@@ -115,7 +115,7 @@ Runtime Config Options
event timer adapter.
For example::
- -w 0002:0e:00.0,tim_stats_ena=1
+ -a 0002:0e:00.0,tim_stats_ena=1
- ``TIM limit max rings reserved``
@@ -125,7 +125,7 @@ Runtime Config Options
rings.
For example::
- -w 0002:0e:00.0,tim_rings_lmt=5
+ -a 0002:0e:00.0,tim_rings_lmt=5
- ``TIM ring control internal parameters``
@@ -135,7 +135,7 @@ Runtime Config Options
default values.
For Example::
- -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
- ``Lock NPA contexts in NDC``
@@ -145,7 +145,7 @@ Runtime Config Options
For example::
- -w 0002:0e:00.0,npa_lock_mask=0xf
+ -a 0002:0e:00.0,npa_lock_mask=0xf
Debugging Options
-----------------
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 2a68f5fc3820..4fba671e4f5b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -67,7 +67,7 @@ DPDK application. Some of the EAL options for FreeBSD are as follows:
is a list of cores to use instead of a core mask.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 542246df686a..043a1dcee109 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -53,7 +53,7 @@ The EAL options are as follows:
Number of memory channels per processor socket.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 01afa1b42f94..dbd48ab4fafa 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -44,20 +44,20 @@ Lcore-related options
Device-related options
~~~~~~~~~~~~~~~~~~~~~~
-* ``-b, --pci-blacklist <[domain:]bus:devid.func>``
+* ``-b, --block <[domain:]bus:devid.func>``
- Blacklist a PCI device to prevent EAL from using it. Multiple -b options are
- allowed.
+ Skip probing a PCI device to prevent EAL from using it.
+ Multiple -b options are allowed.
.. Note::
- PCI blacklist cannot be used with ``-w`` option.
+ The ``-b`` block option cannot be used together with the ``-a`` allow option.
-* ``-w, --pci-whitelist <[domain:]bus:devid.func>``
+* ``-a, --allow <[domain:]bus:devid.func>``
- Add a PCI device in white list.
+ Add a PCI device to the list of devices to probe.
.. Note::
- PCI whitelist cannot be used with ``-b`` option.
+ The ``-a`` allow option cannot be used together with the ``-b`` block option.
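As a sketch, with placeholder PCI addresses, the two options could be
used as follows (they are mutually exclusive per invocation)::

    # probe only the listed device
    dpdk-testpmd -l 0-3 -n 4 -a 0000:03:00.0 -- -i

    # probe everything except the listed device
    dpdk-testpmd -l 0-3 -n 4 -b 0000:03:00.0 -- -i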
* ``--vdev <device arguments>``
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 080b44955a11..ef8798569a80 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -93,11 +93,11 @@ parameter ``--vfio-vf-token``.
3. echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
4. Start the PF:
- <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -w 86:00.0 \
+ <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -a 86:00.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=pf -- -i
5. Start the VF:
- <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -w 86:02.0 \
+ <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -a 86:02.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=vf0 -- -i
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 53f09a52dbb5..1272c1e72b7b 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -42,7 +42,7 @@ Runtime Config Options
for the application.
For example::
- -w 0002:02:00.0,max_pools=512
+ -a 0002:02:00.0,max_pools=512
With the above configuration, the driver will set up only 512 mempools for
the given application to save HW resources.
@@ -61,7 +61,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 2540ddd5c2f5..97033958b758 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -258,8 +258,8 @@ The BNXT PMD supports hardware-based packet filtering:
Unicast MAC Filter
^^^^^^^^^^^^^^^^^^
-The application adds (or removes) MAC addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) MAC addresses to enable (or disable)
+filtering on MAC addresses used to accept packets.
.. code-block:: console
@@ -269,8 +269,8 @@ whitelist filtering to accept packets.
Multicast MAC Filter
^^^^^^^^^^^^^^^^^^^^
-Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) multicast addresses to enable (or disable)
+filtering on multicast MAC addresses used to accept packets.
.. code-block:: console
@@ -278,7 +278,7 @@ whitelist filtering to accept packets.
testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+allowlist filtering to accept packets.
Note that the BNXT PMD supports up to 16 MC MAC filters. if the user adds more
than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
@@ -683,7 +683,7 @@ The feature uses a newly implemented control-plane firmware interface which
optimizes flow insertions and deletions.
This is a tech preview feature, and is disabled by default. It can be enabled
-using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
+using bnxt devargs. For ex: "-a 0000:0d:00.0,host-based-truflow=1”.
Notes
-----
@@ -725,7 +725,7 @@ when the PMD is initialized on a PF or trusted-VF. The user can specify the list
of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Note that currently hot-plugging of representor ports is not supported so all
the required representors must be specified on the creation of the PF or the
@@ -750,12 +750,12 @@ same host domain, additional dev args have been added to the PMD.
The sample command line with the new ``devargs`` looks like this::
- -w 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
+ -a 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
.. code-block:: console
- testpmd -l1-4 -n2 -w 0008:01:00.0,host-based-truflow=1,\
+ testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
representor=[0], rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 442ab1511c64..8c2985cad04a 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -40,8 +40,8 @@ expose a single PCI bus address, thus, librte_pmd_cxgbe registers
itself as a PCI driver that allocates one Ethernet device per detected
port.
-For this reason, one cannot whitelist/blacklist a single port without
-whitelisting/blacklisting the other ports on the same device.
+For this reason, one cannot allow/block a single port without
+allowing/blocking the other ports on the same device.
.. _t5-nics:
@@ -96,7 +96,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- dpdk-testpmd -w 02:00.4,keep_ovlan=1 -- -i
+ dpdk-testpmd -a 02:00.4,keep_ovlan=1 -- -i
Common Runtime Options
~~~~~~~~~~~~~~~~~~~~~~
@@ -301,7 +301,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88 -- -i
- ``filtermask`` (default **0**)
@@ -328,7 +328,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
.. _driver-compilation:
@@ -760,7 +760,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0000:02:00.4 -- -i
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:02:00.4 -- -i
Example output:
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 1deb7faaa50c..9ae5109234eb 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -163,10 +163,10 @@ Manager.
this pool.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 01e37d462102..b79780abc1a5 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -503,10 +503,10 @@ which are lower than logging ``level``.
Using ``pmd.net.dpaa2`` as log matching criteria, all PMD logs can be enabled
which are lower than logging ``level``.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 device, following commands can be used.
+For blocking a DPAA2 device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index c62448768376..163ae3f47b11 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -305,7 +305,7 @@ enables overlay offload, it prints the following message on the console.
By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::
- -w 12:00.0,disable-overlay=1
+ -a 12:00.0,disable-overlay=1
By default, the NIC uses 4789 as the VXLAN port. The user may change
it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
@@ -371,7 +371,7 @@ vectorized handler, take the following steps.
PMD consider the vectorized handler when selecting the receive handler.
For example::
- -w 12:00.0,enable-avx2-rx=1
+ -a 12:00.0,enable-avx2-rx=1
As the current implementation is intended for field trials, by default, the
vectorized handler is not considered (``enable-avx2-rx=0``).
@@ -420,7 +420,7 @@ DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
- -w 12:00.0,ig-vlan-rewrite=untag
+ -a 12:00.0,ig-vlan-rewrite=untag
- **SR-IOV**
diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst
index e1b5c80d6c91..9a9cf5bfbc3d 100644
--- a/doc/guides/nics/fail_safe.rst
+++ b/doc/guides/nics/fail_safe.rst
@@ -48,7 +48,7 @@ Fail-safe command line parameters
This parameter allows the user to define a sub-device. The ``<iface>`` part of
this parameter must be a valid device definition. It follows the same format
- provided to any ``-w`` or ``--vdev`` options.
+ provided to any ``-a`` or ``--vdev`` options.
Enclosing the device definition within parentheses here allows using
additional sub-device parameters if need be. They will be passed on to the
@@ -56,11 +56,11 @@ Fail-safe command line parameters
.. note::
- In case where the sub-device is also used as a whitelist device, using ``-w``
+ In the case where the sub-device is also used as an allowed device, using ``-a``
on the EAL command line, the fail-safe PMD will use the device with the
options provided to the EAL instead of its own parameters.
- When trying to use a PCI device automatically probed by the blacklist mode,
+ When trying to use a PCI device automatically probed in blocklist mode,
the name for the fail-safe sub-device must be the full PCI id:
Domain:Bus:Device.Function, *i.e.* ``00:00:00.0`` instead of ``00:00.0``,
as the second form is historically accepted by the DPDK.
@@ -111,8 +111,8 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
#. To build a PMD and configure DPDK, refer to the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.
-#. Start testpmd. The sub-device ``84:00.0`` should be blacklisted from normal EAL
- operations to avoid probing it twice, as the PCI bus is in blacklist mode.
+#. Start testpmd. The sub-device ``84:00.0`` should be blocked from normal EAL
+ operations to avoid probing it twice, as the PCI bus is in blocklist mode.
.. code-block:: console
@@ -120,13 +120,13 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
--vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
-b 84:00.0 -b 00:04.0 -- -i
- If the sub-device ``84:00.0`` is not blacklisted, it will be probed by the
+ If the sub-device ``84:00.0`` is not blocked, it will be probed by the
EAL first. When the fail-safe then tries to initialize it the probe operation
fails.
- Note that PCI blacklist mode is the default PCI operating mode.
+ Note that PCI blocklist mode is the default PCI operating mode.
-#. Alternatively, it can be used alongside any other device in whitelist mode.
+#. Alternatively, it can be used alongside any other device in allow mode.
.. code-block:: console
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 234bf066b9f6..6458bfc42e1f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -261,7 +261,7 @@ Supports enabling/disabling receiving multicast frames.
Unicast MAC filter
------------------
-Supports adding MAC addresses to enable whitelist filtering to accept packets.
+Supports adding MAC addresses to enable filtering of incoming packets.
* **[implements] eth_dev_ops**: ``mac_addr_set``, ``mac_addr_add``, ``mac_addr_remove``.
* **[implements] rte_eth_dev_data**: ``mac_addrs``.
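As a sketch of how this feature is driven at runtime, testpmd exposes
matching commands (the port id and MAC address below are placeholders)::

    testpmd> mac_addr add 0 00:AA:BB:CC:DD:01
    testpmd> mac_addr remove 0 00:AA:BB:CC:DD:01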
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index 5cf85d94cc34..488a9ec22450 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -172,7 +172,7 @@ Runtime Config Options
The number of reserved queue per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
- VF can be configured with EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
+ VF can be configured with EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If VF request more than
reserved queues per VF, PF will able to allocate max to 16 queues after a VF
@@ -185,7 +185,7 @@ Runtime Config Options
Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
parameter ``support-multi-driver`` is introduced, for example::
- -w 84:00.0,support-multi-driver=1
+ -a 84:00.0,support-multi-driver=1
With the above configuration, DPDK PMD will not change global registers, and
will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
@@ -200,7 +200,7 @@ Runtime Config Options
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
@@ -212,7 +212,7 @@ Runtime Config Options
since it can get better perf in some real work loading cases. So ``devargs`` param
``use-latest-supported-vec`` is introduced, for example::
- -w 84:00.0,use-latest-supported-vec=1
+ -a 84:00.0,use-latest-supported-vec=1
- ``Enable validation for VF message`` (default ``not enabled``)
@@ -222,7 +222,7 @@ Runtime Config Options
Format -- "maximal-message@period-seconds:ignore-seconds"
For example::
- -w 84:00.0,vf_msg_cfg=80@120:180
+ -a 84:00.0,vf_msg_cfg=80@120:180
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -458,7 +458,7 @@ no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::
- -w 84:00.0,enable_floating_veb=1
+ -a 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.
@@ -466,7 +466,7 @@ VFs created by this PF device.
Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::
- -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
+ -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
@@ -802,7 +802,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
7. The command line of running l3fwd would be something like the following::
- ./dpdk-l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+ ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index a2aea1233376..6e4d53968f75 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -30,7 +30,7 @@ Runtime Config Options
But if user intend to use the device without OS package, user can take ``devargs``
parameter ``safe-mode-support``, for example::
- -w 80:00.0,safe-mode-support=1
+ -a 80:00.0,safe-mode-support=1
Then the driver will be initialized successfully and the device will enter Safe Mode.
NOTE: In Safe mode, only very limited features are available, features like RSS,
@@ -41,7 +41,7 @@ Runtime Config Options
In pipeline mode, a flow can be set at one specific stage by setting parameter
``priority``. Currently, we support two stages: priority = 0 or !0. Flows with
priority 0 located at the first pipeline stage which typically be used as a firewall
- to drop the packet on a blacklist(we called it permission stage). At this stage,
+ to drop the packet on a blocklist (we call it the permission stage). At this stage,
flow rules are created for the device's exact match engine: switch. Flows with priority
!0 located at the second stage, typically packets are classified here and be steered to
specific queue or queue group (we called it distribution stage), At this stage, flow
@@ -53,7 +53,19 @@ Runtime Config Options
use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``,
for example::
- -w 80:00.0,pipeline-mode-support=1
+ -a 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+ This is a hint to the driver to select the data path that supports flow mark extraction
+ by default.
+ NOTE: This is an experimental devarg; it will be removed when either of the
+ conditions below is met.
+ 1) all data paths support flow mark (currently vPMD does not)
+ 2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.
+ Example::
+
+ -a 80:00.0,flow-mark-support=1
- ``Protocol extraction for per queue``
@@ -62,8 +74,8 @@ Runtime Config Options
The argument format is::
- -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
- -w 18:00.0,proto_xtr=<protocol>
+ -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+ -a 18:00.0,proto_xtr=<protocol>
Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
is used as a range separator and ``,`` is used as a single number separator.
@@ -74,14 +86,14 @@ Runtime Config Options
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+ dpdk-testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
VLAN extraction, other queues run with no protocol extraction.
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+ dpdk-testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
IPv6 extraction, other queues use the default VLAN extraction.
@@ -233,7 +245,7 @@ responses for the same from PF.
#. Bind the VF0, and run testpmd with 'cap=dcf' devarg::
- dpdk-testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i
+ dpdk-testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i
#. Monitor the VF2 interface network traffic::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 1f424b38ac3d..c801dbae8146 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,7 +89,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w af:10.0,pflink_fullchk=1 -- -i
+ testpmd -a af:10.0,pflink_fullchk=1 -- -i
- ``pflink_fullchk`` (default **0**)
@@ -277,7 +277,7 @@ option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index ed920e91ad51..cea7e8c2c4e3 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -24,8 +24,8 @@ Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.
-For this reason, one cannot white/blacklist a single port without also
-white/blacklisting the others on the same device.
+For this reason, one cannot block (or allow) a single port without also
+blocking (or allowing) the others on the same device.
Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
@@ -381,7 +381,7 @@ devices managed by librte_pmd_mlx4.
eth4
eth5
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow argument::
{
for intf in eth2 eth3 eth4 eth5;
@@ -389,14 +389,14 @@ devices managed by librte_pmd_mlx4.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:83:00.0
- -w 0000:83:00.0
- -w 0000:84:00.0
- -w 0000:84:00.0
+ -a 0000:83:00.0
+ -a 0000:83:00.0
+ -a 0000:84:00.0
+ -a 0000:84:00.0
.. note::
@@ -409,7 +409,7 @@ devices managed by librte_pmd_mlx4.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 0000:83:00.0 -a 0000:84:00.0 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index afa65a1379a5..5077e06a98a2 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1488,7 +1488,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
eth32
eth33
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow list::
{
for intf in eth2 eth3 eth4 eth5;
@@ -1496,14 +1496,14 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:05:00.1
- -w 0000:06:00.0
- -w 0000:06:00.1
- -w 0000:05:00.0
+ -a 0000:05:00.1
+ -a 0000:06:00.0
+ -a 0000:06:00.1
+ -a 0000:05:00.0
#. Request huge pages::
@@ -1511,7 +1511,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/nfb.rst b/doc/guides/nics/nfb.rst
index ecea3ecff074..e987f331048c 100644
--- a/doc/guides/nics/nfb.rst
+++ b/doc/guides/nics/nfb.rst
@@ -63,7 +63,7 @@ products) and the device argument `timestamp=1` must be used.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -w b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
+ ./<build_dir>/app/dpdk-testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the MBUFs
containing received frames and timestamp is inserted into the `rte_mbuf` struct.
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 7c04b5e60040..3c42e8585835 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -63,7 +63,8 @@ for details.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 \
+ -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
@@ -116,7 +117,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,reta_size=256
+ -a 0002:02:00.0,reta_size=256
With the above configuration, reta table of size 256 is populated.
@@ -127,7 +128,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_max_priority=10
+ -a 0002:02:00.0,flow_max_priority=10
With the above configuration, priority level was set to 10 (0-9). Max
priority level supported is 32.
@@ -139,7 +140,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_prealloc_size=4
+ -a 0002:02:00.0,flow_prealloc_size=4
With the above configuration, pre alloc size was set to 4. Max pre alloc
size supported is 32.
@@ -151,7 +152,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,max_sqb_count=64
+ -a 0002:02:00.0,max_sqb_count=64
With the above configuration, each send queue's descriptor buffer count is
limited to a maximum of 64 buffers.
@@ -163,7 +164,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,switch_header="higig2"
+ -a 0002:02:00.0,switch_header="higig2"
With the above configuration, higig2 will be enabled on that port and the
traffic on this port should be higig2 traffic only. Supported switch header
@@ -185,7 +186,7 @@ Runtime Config Options
For example to select the legacy mode(RSS tag adder as XOR)::
- -w 0002:02:00.0,tag_as_xor=1
+ -a 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -194,7 +195,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,ipsec_in_max_spi=128
+ -a 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
@@ -205,7 +206,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_rx_ctx=1
+ -a 0002:02:00.0,lock_rx_ctx=1
- ``Lock Tx contexts in NDC cache``
@@ -213,7 +214,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_tx_ctx=1
+ -a 0002:02:00.0,lock_tx_ctx=1
.. note::
@@ -229,7 +230,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
.. _otx2_tmapi:
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 959b52c1c333..64322442a003 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -295,7 +295,7 @@ Per-Device Parameters
~~~~~~~~~~~~~~~~~~~~~
The following per-device parameters can be passed via EAL PCI device
-whitelist option like "-w 02:00.0,arg1=value1,...".
+allow option like "-a 02:00.0,arg1=value1,...".
Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
boolean parameters value.
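As a sketch, using the placeholder parameter names from the text above,
the boolean spellings can be combined in one argument string::

    dpdk-testpmd -a 02:00.0,arg1=yes,arg2=0 -- -i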
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 7e44f846206c..3ce696b605d1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -191,7 +191,7 @@ following::
.. Note:
- Change the ``-b`` options to blacklist all of your physical ports. The
+ Change the ``-b`` options to exclude all of your physical ports. The
following command line is all one line.
Also, ``-f themes/black-yellow.theme`` is optional if the default colors
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index a928a790e389..9da4281c8bd3 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -157,7 +157,7 @@ This section provides instructions to configure SR-IOV with Linux OS.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0002:01:00.2 \
-- -i --no-flush-rx \
--port-topology=loop
@@ -377,7 +377,7 @@ This scheme is useful when application would like to insert vlan header without
Example:
.. code-block:: console
- -w 0002:01:00.2,skip_data_bytes=8
+ -a 0002:01:00.2,skip_data_bytes=8
Limitations
-----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index a470fd7f29bb..9af4d6192fd4 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -407,12 +407,12 @@ device having emitted a Device Removal Event. In such case, calling
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such closing operation.
-Blacklisting
+Blocklisting
~~~~~~~~~~~~
-The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
+The EAL PCI device blocklist functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
-The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
+The ports to be blocklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
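For example, a sketch with a placeholder address, ignoring one port
while all others are probed as usual::

    dpdk-testpmd -b 0000:01:00.0 -- -i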
Misc Functions
~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a84083b96c8a..2d083b8a4f68 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -30,7 +30,7 @@ after a primary process has already configured the hugepage shared memory for th
Secondary processes should run alongside primary process with same DPDK version.
Secondary processes which requires access to physical devices in Primary process, must
- be passed with the same whitelist and blacklist options.
+ be passed with the same allow and block options.
To support these two process types, and other multi-process setups described later,
two additional command-line parameters are available to the EAL:
@@ -131,7 +131,7 @@ can use).
.. note::
Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
- Any network ports being used by one process should be blacklisted in every other process.
+ Any network ports being used by one process should be blocklisted in every other process.
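A sketch of two independent instances, each blocking the other's port
(addresses and file prefixes are placeholders)::

    dpdk-testpmd --file-prefix=app1 -b 0000:03:00.1 -- -i
    dpdk-testpmd --file-prefix=app2 -b 0000:03:00.0 -- -i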
Running Multiple Independent Groups of DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 86e0a141e6c7..239ec820eaf5 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
this argument allows user to specify which switch ports to enable port
representors for.::
- -w DBDF,representor=0
- -w DBDF,representor=[0,4,6,9]
- -w DBDF,representor=[0-31]
+ -a DBDF,representor=0
+ -a DBDF,representor=[0,4,6,9]
+ -a DBDF,representor=[0-31]
Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see support devargs.
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index cc1d0d7569cb..07ba12bea67e 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -59,9 +59,9 @@ which can be thought as a software "patch panel" front-end for applications.
::
- -w pci:dbdf,representor=0
- -w pci:dbdf,representor=[0-3]
- -w pci:dbdf,representor=[0,5-11]
+ -a pci:dbdf,representor=0
+ -a pci:dbdf,representor=[0-3]
+ -a pci:dbdf,representor=[0,5-11]
- As virtual devices, they may be more limited than their physical
counterparts, for instance by exposing only a subset of device
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d8ac359e51d4..57069ae4db4c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -543,6 +543,11 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The definitions related to including and excluding devices
+ have been changed from blacklist/whitelist to allow/block.
+ There are compatibility macros and a command-line mapping to accept
+ the old values, but applications and scripts are strongly encouraged
+ to migrate to the new names.
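As a sketch, the legacy spellings map to the new ones as follows::

    -w / --pci-whitelist  ->  -a / --allow
    -b / --pci-blacklist  ->  -b / --block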
ABI Changes
-----------
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 7c5a45b72afb..b2af9a0755d6 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -61,19 +61,19 @@ This means that HW baseband device/s must be bound to a DPDK driver or
a SW baseband device/s (virtual BBdev) must be created (using --vdev).
To run the application in linux environment with the turbo_sw baseband device
-using the whitelisted port running on 1 encoding lcore and 1 decoding lcore
+using the allow option for the PCI device, running on 1 encoding lcore and 1 decoding lcore,
issue the command:
.. code-block:: console
- $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -w <NIC0PCIADDR> \
+ $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -a <NIC0PCIADDR> \
-c 0x38 --socket-mem=2,2 --file-prefix=bbdev -- -e 0x10 -d 0x20
where, NIC0PCIADDR is the PCI address of the Rx port
This command creates one virtual bbdev devices ``baseband_turbo_sw`` where the
-device gets linked to a corresponding ethernet port as whitelisted by
-the parameter -w.
+device gets linked to a corresponding ethernet port as allowed by
+the -a parameter.
3 cores are allocated to the application, and assigned as:
- core 3 is the main and used to print the stats live on screen,
@@ -93,20 +93,20 @@ Using Packet Generator with baseband device sample application
To allow the bbdev sample app to do the loopback, an influx of traffic is required.
This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
it will print the transmitted along with the looped-back traffic on Rx ports.
-Executing the command below will generate traffic on the two whitelisted ethernet
+Executing the command below will generate traffic on the two allowed ethernet
ports.
.. code-block:: console
$ ./pktgen-3.4.0/app/x86_64-native-linux-gcc/pktgen -c 0x3 \
- --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+ --socket-mem=1,1 --file-prefix=pg -a <NIC1PCIADDR> -- -m 1.0 -P
where:
* ``-c COREMASK``: A hexadecimal bitmask of cores to run on
* ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
* ``--file-prefix``: Prefix for hugepage filenames
-* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-a <NIC1PCIADDR>``: Add a PCI device to the allow list. The argument format is <[domain:]bus:devid.func>.
* ``-m <string>``: Matrix for mapping ports to logical cores.
* ``-P``: PROMISCUOUS mode
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index b4fc587a09e2..41ee8b7ee3f4 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -46,8 +46,8 @@ these settings is shown below:
.. code-block:: console
- ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 /
- -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+ ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 \
+ -e4 -a FF00 -s4 -n0 -c32 -W1000 -D
The application has some sanity checking built-in, so if there is a function
(e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 1f37dccf8bb7..cb637abdfaf4 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -323,15 +323,15 @@ This means that if the application is using a single core and both hardware
and software crypto devices are detected, hardware devices will be used.
A way to achieve the case where you want to force the use of virtual crypto
-devices is to whitelist the Ethernet devices needed and therefore implicitly
-blacklisting all hardware crypto devices.
+devices is to allow only the Ethernet devices needed, and therefore implicitly
+block all hardware crypto devices.
For example, something like the following command line:
.. code-block:: console
./<build_dir>/examples/dpdk-ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
- -w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
+ -a 81:00.0 -a 81:00.1 -a 81:00.2 -a 81:00.3 \
--vdev "crypto_aesni_mb" --vdev "crypto_null" \
-- \
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
@@ -929,13 +929,13 @@ The user must setup the following environment variables:
* ``REMOTE_IFACE``: interface name for the test-port on the DUT.
-* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-w <pci-id>')
+* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-a <pci-id>')
Also the user can optionally setup:
* ``SGW_LCORE``: lcore to run ipsec-secgw on (default value is 0)
-* ``CRYPTO_DEV``: crypto device to be used ('-w <pci-id>'). If none specified
+* ``CRYPTO_DEV``: crypto device to be used ('-a <pci-id>'). If none specified
appropriate vdevs will be created by the script
Scripts can be used for multiple test scenarios. To check all available
@@ -1023,4 +1023,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
\ No newline at end of file
+list of available modes please refer to run_test.sh.
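A sketch of the environment setup with placeholder values::

    export REMOTE_IFACE=eth1
    export ETH_DEV="-a 0000:81:00.0"
    export SGW_LCORE=2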
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 7acbd7404e3b..5d53bf633db7 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -138,17 +138,18 @@ Following is the sample command:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x3 --eventq-sched=ordered
or
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> \
+ -- -p 0x03 --mode=eventdev --eventq-sched=ordered
In this command:
-* -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform.
+* The -a option allows the event device supported by the platform. The way to pass this device may vary based on the platform.
* The --mode option defines PMD to be used for packet I/O.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 4a96800ec648..eee5d8185061 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -18,7 +18,7 @@ The application loads two types of rules at initialization:
* Route information rules, which are used for L3 forwarding
-* Access Control List (ACL) rules that blacklist (or block) packets with a specific characteristic
+* Access Control List (ACL) rules that block packets with a specific characteristic
When packets are received from a port,
the application extracts the necessary information from the TCP/IP header of the received packet and
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index d7e1dc581328..831f2bf58f99 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -378,7 +378,8 @@ See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Program
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+ ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -a 0000:xx:00.0 -a 0000:xx:00.1 \
+ -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
Where,
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index a8bedbab5321..9a7743146b82 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -52,7 +52,7 @@ Take IFCVF driver for example:
.. code-block:: console
./dpdk-vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
- -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+ -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
-- --interactive
.. note::
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 29340d94e801..73cabf0098d3 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -394,7 +394,7 @@ Call application for performance throughput test of single Aesni MB PMD
for cipher encryption aes-cbc and auth generation sha1-hmac,
one million operations, burst size 32, packet size 64::
- dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -a 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 12
@@ -404,7 +404,7 @@ Call application for performance latency test of two Aesni MB PMD executed
on two cores for cipher encryption aes-cbc, ten operations in silent mode::
dpdk-test-crypto-perf -l 4-7 --vdev crypto_aesni_mb1
- --vdev crypto_aesni_mb2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
+ --vdev crypto_aesni_mb2 -a 0000:00:00.0 -- --devtype crypto_aesni_mb
--cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
--cipher-op encrypt --optype cipher-only --silent
--ptest latency --total-ops 10
@@ -414,7 +414,7 @@ for cipher encryption aes-gcm and auth generation aes-gcm,ten operations
in silent mode, test vector provide in file "test_aes_gcm.data"
with packet verification::
- dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -a 0000:00:00.0 --
--devtype crypto_openssl --aead-algo aes-gcm --aead-key-sz 16
--aead-iv-sz 16 --aead-op encrypt --aead-aad-sz 16 --digest-sz 16
--optype aead --silent --ptest verify --total-ops 10
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 7e5dc0c54b1a..4771e8ecf04d 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -59,7 +59,7 @@ with a ``--`` separator:
.. code-block:: console
- sudo ./dpdk-test-flow_perf -n 4 -w 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
+ sudo ./dpdk-test-flow_perf -n 4 -a 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
The command line options are:
diff --git a/doc/guides/tools/testregex.rst b/doc/guides/tools/testregex.rst
index 4317aab533e2..112b2bb773e7 100644
--- a/doc/guides/tools/testregex.rst
+++ b/doc/guides/tools/testregex.rst
@@ -70,4 +70,4 @@ The data file, will be used as a source data for the RegEx to work on.
The tool has a number of command line options. Here is the sample command line::
- ./dpdk-test-regex -w 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
+ ./dpdk-test-regex -a 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
--
2.27.0
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v5 5/5] doc: change references to blacklist and whitelist
@ 2020-10-24 1:01 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-24 1:01 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Luca Boccassi
There are two areas where the documentation needed updates.
The first was the use of whitelist when describing address
filtering.
The other is the legacy -w whitelist option for PCI,
which is used in many examples.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
---
doc/guides/cryptodevs/dpaa2_sec.rst | 6 ++--
doc/guides/cryptodevs/dpaa_sec.rst | 6 ++--
doc/guides/cryptodevs/qat.rst | 12 ++++----
doc/guides/eventdevs/octeontx2.rst | 20 ++++++-------
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/eal_args.include.rst | 14 +++++-----
doc/guides/linux_gsg/linux_drivers.rst | 4 +--
doc/guides/mempool/octeontx2.rst | 4 +--
doc/guides/nics/bnxt.rst | 18 ++++++------
doc/guides/nics/cxgbe.rst | 12 ++++----
doc/guides/nics/dpaa.rst | 6 ++--
doc/guides/nics/dpaa2.rst | 6 ++--
doc/guides/nics/enic.rst | 6 ++--
doc/guides/nics/fail_safe.rst | 16 +++++------
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/i40e.rst | 16 +++++------
doc/guides/nics/ice.rst | 28 +++++++++++++------
doc/guides/nics/ixgbe.rst | 4 +--
doc/guides/nics/mlx4.rst | 18 ++++++------
doc/guides/nics/mlx5.rst | 14 +++++-----
doc/guides/nics/nfb.rst | 2 +-
doc/guides/nics/octeontx2.rst | 23 +++++++--------
doc/guides/nics/sfc_efx.rst | 2 +-
doc/guides/nics/tap.rst | 2 +-
doc/guides/nics/thunderx.rst | 4 +--
.../prog_guide/env_abstraction_layer.rst | 6 ++--
doc/guides/prog_guide/multi_proc_support.rst | 4 +--
doc/guides/prog_guide/poll_mode_drv.rst | 6 ++--
.../prog_guide/switch_representation.rst | 6 ++--
doc/guides/rel_notes/release_20_11.rst | 5 ++++
doc/guides/sample_app_ug/bbdev_app.rst | 14 +++++-----
.../sample_app_ug/eventdev_pipeline.rst | 4 +--
doc/guides/sample_app_ug/ipsec_secgw.rst | 12 ++++----
doc/guides/sample_app_ug/l3_forward.rst | 7 +++--
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
.../sample_app_ug/l3_forward_power_man.rst | 3 +-
doc/guides/sample_app_ug/vdpa.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 6 ++--
doc/guides/tools/flow-perf.rst | 2 +-
doc/guides/tools/testregex.rst | 2 +-
41 files changed, 175 insertions(+), 155 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
index 080768a2e766..83565d71752d 100644
--- a/doc/guides/cryptodevs/dpaa2_sec.rst
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -134,10 +134,10 @@ Supported DPAA2 SoCs
* LS2088A/LS2048A
* LS1088A/LS1048A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 SEC device, following commands can be used.
+The DPAA2 SEC device can be blocked with the following:
.. code-block:: console
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index da14a68d9cff..bac82421bca2 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -82,10 +82,10 @@ Supported DPAA SoCs
* LS1046A/LS1026A
* LS1043A/LS1023A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index f77ce91f76ee..f8d3d77474ff 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -127,7 +127,7 @@ Limitations
optimisations in the GEN3 device. And if a GCM session is initialised on a
GEN3 device, then attached to an op sent to a GEN1/GEN2 device, it will not be
enqueued to the device and will be marked as failed. The simplest way to
- mitigate this is to use the bdf whitelist to avoid mixing devices of different
+ mitigate this is to use the PCI allowlist to avoid mixing devices of different
generations in the same process if planning to use for GCM.
* The mixed algo feature on GEN2 is not supported by all kernel drivers. Check
the notes under the Available Kernel Drivers table below for specific details.
@@ -237,7 +237,7 @@ adjusted to the number of VFs which the QAT common code will need to handle.
QAT VF may expose two crypto devices, sym and asym, it may happen that the
number of devices will be bigger than MAX_DEVS and the process will show an error
during PMD initialisation. To avoid this problem RTE_CRYPTO_MAX_DEVS may be
- increased or -w, pci-whitelist domain:bus:devid:func option may be used.
+ increased or -a, allow domain:bus:devid:func option may be used.
QAT compression PMD needs intermediate buffers to support Deflate compression
@@ -275,7 +275,7 @@ return 0 (thereby avoiding an MMIO) if the device is congested and number of pac
possible to enqueue is smaller.
To use this feature the user must set the parameter on process start as a device additional parameter::
- -w 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
+ -a 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
All parameters can be used with the same device regardless of order. Parameters are separated
by comma. When the same parameter is used more than once first occurrence of the parameter
@@ -632,19 +632,19 @@ Testing
QAT SYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_autotest
QAT ASYM crypto PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_asym_autotest
QAT compression PMD can be tested by running the test application::
cd ./<build_dir>/app/test
- ./dpdk-test -l1 -n1 -w <your qat bdf>
+ ./dpdk-test -l1 -n1 -a <your qat bdf>
RTE>>compressdev_autotest
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 4f06e069847a..496b7199c8c9 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -55,7 +55,7 @@ Runtime Config Options
upper limit for in-flight events.
For example::
- -w 0002:0e:00.0,xae_cnt=16384
+ -a 0002:0e:00.0,xae_cnt=16384
- ``Force legacy mode``
@@ -63,7 +63,7 @@ Runtime Config Options
single workslot mode in SSO and disable the default dual workslot mode.
For example::
- -w 0002:0e:00.0,single_ws=1
+ -a 0002:0e:00.0,single_ws=1
- ``Event Group QoS support``
@@ -78,7 +78,7 @@ Runtime Config Options
default.
For example::
- -w 0002:0e:00.0,qos=[1-50-50-50]
+ -a 0002:0e:00.0,qos=[1-50-50-50]
- ``Selftest``
@@ -87,7 +87,7 @@ Runtime Config Options
The tests are run once the vdev creation is successfully complete.
For example::
- -w 0002:0e:00.0,selftest=1
+ -a 0002:0e:00.0,selftest=1
- ``TIM disable NPA``
@@ -96,7 +96,7 @@ Runtime Config Options
parameter disables NPA and uses software mempool to manage chunks
For example::
- -w 0002:0e:00.0,tim_disable_npa=1
+ -a 0002:0e:00.0,tim_disable_npa=1
- ``TIM modify chunk slots``
@@ -107,7 +107,7 @@ Runtime Config Options
to SSO. The default value is 255 and the max value is 4095.
For example::
- -w 0002:0e:00.0,tim_chnk_slots=1023
+ -a 0002:0e:00.0,tim_chnk_slots=1023
- ``TIM enable arm/cancel statistics``
@@ -115,7 +115,7 @@ Runtime Config Options
event timer adapter.
For example::
- -w 0002:0e:00.0,tim_stats_ena=1
+ -a 0002:0e:00.0,tim_stats_ena=1
- ``TIM limit max rings reserved``
@@ -125,7 +125,7 @@ Runtime Config Options
rings.
For example::
- -w 0002:0e:00.0,tim_rings_lmt=5
+ -a 0002:0e:00.0,tim_rings_lmt=5
- ``TIM ring control internal parameters``
@@ -135,7 +135,7 @@ Runtime Config Options
default values.
For Example::
- -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
- ``Lock NPA contexts in NDC``
@@ -145,7 +145,7 @@ Runtime Config Options
For example::
- -w 0002:0e:00.0,npa_lock_mask=0xf
+ -a 0002:0e:00.0,npa_lock_mask=0xf
Debugging Options
-----------------
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 2a68f5fc3820..4fba671e4f5b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -67,7 +67,7 @@ DPDK application. Some of the EAL options for FreeBSD are as follows:
is a list of cores to use instead of a core mask.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 542246df686a..043a1dcee109 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -53,7 +53,7 @@ The EAL options are as follows:
Number of memory channels per processor socket.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 01afa1b42f94..dbd48ab4fafa 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -44,20 +44,20 @@ Lcore-related options
Device-related options
~~~~~~~~~~~~~~~~~~~~~~
-* ``-b, --pci-blacklist <[domain:]bus:devid.func>``
+* ``-b, --block <[domain:]bus:devid.func>``
- Blacklist a PCI device to prevent EAL from using it. Multiple -b options are
- allowed.
+ Skip probing a PCI device to prevent EAL from using it.
+ Multiple -b options are allowed.
.. Note::
- PCI blacklist cannot be used with ``-w`` option.
+ The block option ``-b`` cannot be used with the allow option ``-a``.
-* ``-w, --pci-whitelist <[domain:]bus:devid.func>``
+* ``-a, --allow <[domain:]bus:devid.func>``
- Add a PCI device in white list.
+ Add a PCI device to the list of devices to be probed.
.. Note::
- PCI whitelist cannot be used with ``-b`` option.
+ The allow option ``-a`` cannot be used with the block option ``-b``.
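
 As a hedged illustration of the two options (the PCI addresses and the
 testpmd invocation below are made up for this sketch)::

     # probe only the two listed devices
     dpdk-testpmd -a 0000:02:00.0 -a 0000:02:00.1 -- -i

     # probe every device except the blocked one
     dpdk-testpmd -b 0000:02:00.0 -- -i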
* ``--vdev <device arguments>``
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 080b44955a11..ef8798569a80 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -93,11 +93,11 @@ parameter ``--vfio-vf-token``.
3. echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
4. Start the PF:
- <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -w 86:00.0 \
+ <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -a 86:00.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=pf -- -i
5. Start the VF:
- <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -w 86:02.0 \
+ <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -a 86:02.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=vf0 -- -i
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 53f09a52dbb5..1272c1e72b7b 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -42,7 +42,7 @@ Runtime Config Options
for the application.
For example::
- -w 0002:02:00.0,max_pools=512
+ -a 0002:02:00.0,max_pools=512
With the above configuration, the driver will set up only 512 mempools for
the given application to save HW resources.
@@ -61,7 +61,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 2540ddd5c2f5..97033958b758 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -258,8 +258,8 @@ The BNXT PMD supports hardware-based packet filtering:
Unicast MAC Filter
^^^^^^^^^^^^^^^^^^
-The application adds (or removes) MAC addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) MAC addresses to enable (or disable)
+filtering on the MAC addresses used to accept packets.
.. code-block:: console
@@ -269,8 +269,8 @@ whitelist filtering to accept packets.
Multicast MAC Filter
^^^^^^^^^^^^^^^^^^^^
-Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) multicast addresses to enable (or disable)
+filtering on the multicast MAC addresses used to accept packets.
.. code-block:: console
@@ -278,7 +278,7 @@ whitelist filtering to accept packets.
testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+allowlist filtering to accept packets.
Note that the BNXT PMD supports up to 16 MC MAC filters. If the user adds more
than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
@@ -683,7 +683,7 @@ The feature uses a newly implemented control-plane firmware interface which
optimizes flow insertions and deletions.
This is a tech preview feature, and is disabled by default. It can be enabled
-using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
+using bnxt devargs. For example: "-a 0000:0d:00.0,host-based-truflow=1".
Notes
-----
@@ -725,7 +725,7 @@ when the PMD is initialized on a PF or trusted-VF. The user can specify the list
of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Note that currently hot-plugging of representor ports is not supported so all
the required representors must be specified on the creation of the PF or the
@@ -750,12 +750,12 @@ same host domain, additional dev args have been added to the PMD.
The sample command line with the new ``devargs`` looks like this::
- -w 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
+ -a 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
.. code-block:: console
- testpmd -l1-4 -n2 -w 0008:01:00.0,host-based-truflow=1,\
+ testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
representor=[0], rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 442ab1511c64..8c2985cad04a 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -40,8 +40,8 @@ expose a single PCI bus address, thus, librte_pmd_cxgbe registers
itself as a PCI driver that allocates one Ethernet device per detected
port.
-For this reason, one cannot whitelist/blacklist a single port without
-whitelisting/blacklisting the other ports on the same device.
+For this reason, one cannot allow/block a single port without
+allowing/blocking the other ports on the same device.
.. _t5-nics:
@@ -96,7 +96,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- dpdk-testpmd -w 02:00.4,keep_ovlan=1 -- -i
+ dpdk-testpmd -a 02:00.4,keep_ovlan=1 -- -i
Common Runtime Options
~~~~~~~~~~~~~~~~~~~~~~
@@ -301,7 +301,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88 -- -i
- ``filtermask`` (default **0**)
@@ -328,7 +328,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- dpdk-testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+ dpdk-testpmd -a 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
.. _driver-compilation:
@@ -760,7 +760,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0000:02:00.4 -- -i
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:02:00.4 -- -i
Example output:
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 1deb7faaa50c..9ae5109234eb 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -163,10 +163,10 @@ Manager.
this pool.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 01e37d462102..b79780abc1a5 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -503,10 +503,10 @@ which are lower than logging ``level``.
Using ``pmd.net.dpaa2`` as log matching criteria, all PMD logs can be enabled
which are lower than logging ``level``.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 device, following commands can be used.
+For blocking a DPAA2 device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index c62448768376..163ae3f47b11 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -305,7 +305,7 @@ enables overlay offload, it prints the following message on the console.
By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::
- -w 12:00.0,disable-overlay=1
+ -a 12:00.0,disable-overlay=1
By default, the NIC uses 4789 as the VXLAN port. The user may change
it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
@@ -371,7 +371,7 @@ vectorized handler, take the following steps.
PMD consider the vectorized handler when selecting the receive handler.
For example::
- -w 12:00.0,enable-avx2-rx=1
+ -a 12:00.0,enable-avx2-rx=1
As the current implementation is intended for field trials, by default, the
vectorized handler is not considered (``enable-avx2-rx=0``).
@@ -420,7 +420,7 @@ DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
- -w 12:00.0,ig-vlan-rewrite=untag
+ -a 12:00.0,ig-vlan-rewrite=untag
- **SR-IOV**
diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst
index e1b5c80d6c91..9a9cf5bfbc3d 100644
--- a/doc/guides/nics/fail_safe.rst
+++ b/doc/guides/nics/fail_safe.rst
@@ -48,7 +48,7 @@ Fail-safe command line parameters
This parameter allows the user to define a sub-device. The ``<iface>`` part of
this parameter must be a valid device definition. It follows the same format
- provided to any ``-w`` or ``--vdev`` options.
+ provided to any ``-a`` or ``--vdev`` options.
Enclosing the device definition within parentheses here allows using
additional sub-device parameters if need be. They will be passed on to the
@@ -56,11 +56,11 @@ Fail-safe command line parameters
.. note::
- In case where the sub-device is also used as a whitelist device, using ``-w``
+ In case where the sub-device is also used as an allowed device, using ``-a``
on the EAL command line, the fail-safe PMD will use the device with the
options provided to the EAL instead of its own parameters.
- When trying to use a PCI device automatically probed by the blacklist mode,
+ When trying to use a PCI device automatically probed in blocklist mode,
the name for the fail-safe sub-device must be the full PCI id:
Domain:Bus:Device.Function, *i.e.* ``00:00:00.0`` instead of ``00:00.0``,
as the second form is historically accepted by the DPDK.
@@ -111,8 +111,8 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
#. To build a PMD and configure DPDK, refer to the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.
-#. Start testpmd. The sub-device ``84:00.0`` should be blacklisted from normal EAL
- operations to avoid probing it twice, as the PCI bus is in blacklist mode.
+#. Start testpmd. The sub-device ``84:00.0`` should be blocked from normal EAL
+ operations to avoid probing it twice, as the PCI bus is in blocklist mode.
.. code-block:: console
@@ -120,13 +120,13 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
--vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
-b 84:00.0 -b 00:04.0 -- -i
- If the sub-device ``84:00.0`` is not blacklisted, it will be probed by the
+ If the sub-device ``84:00.0`` is not blocked, it will be probed by the
EAL first. When the fail-safe then tries to initialize it the probe operation
fails.
- Note that PCI blacklist mode is the default PCI operating mode.
+ Note that PCI blocklist mode is the default PCI operating mode.
-#. Alternatively, it can be used alongside any other device in whitelist mode.
+#. Alternatively, it can be used alongside any other device in allow mode.
.. code-block:: console
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 234bf066b9f6..6458bfc42e1f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -261,7 +261,7 @@ Supports enabling/disabling receiving multicast frames.
Unicast MAC filter
------------------
-Supports adding MAC addresses to enable whitelist filtering to accept packets.
+Supports adding MAC addresses to enable filtering of incoming packets.
* **[implements] eth_dev_ops**: ``mac_addr_set``, ``mac_addr_add``, ``mac_addr_remove``.
* **[implements] rte_eth_dev_data**: ``mac_addrs``.
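
A hedged sketch of exercising this filter from testpmd (the port id and MAC
address are illustrative):

.. code-block:: console

   testpmd> mac_addr add 0 02:00:00:00:00:01
   testpmd> mac_addr remove 0 02:00:00:00:00:01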
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index 5cf85d94cc34..488a9ec22450 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -172,7 +172,7 @@ Runtime Config Options
The number of reserved queue per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
- VF can be configured with EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
+ VF can be configured with EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If VF request more than
reserved queues per VF, PF will able to allocate max to 16 queues after a VF
@@ -185,7 +185,7 @@ Runtime Config Options
Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
parameter ``support-multi-driver`` is introduced, for example::
- -w 84:00.0,support-multi-driver=1
+ -a 84:00.0,support-multi-driver=1
With the above configuration, DPDK PMD will not change global registers, and
will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
@@ -200,7 +200,7 @@ Runtime Config Options
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
@@ -212,7 +212,7 @@ Runtime Config Options
since it can get better perf in some real work loading cases. So ``devargs`` param
``use-latest-supported-vec`` is introduced, for example::
- -w 84:00.0,use-latest-supported-vec=1
+ -a 84:00.0,use-latest-supported-vec=1
- ``Enable validation for VF message`` (default ``not enabled``)
@@ -222,7 +222,7 @@ Runtime Config Options
Format -- "maximal-message@period-seconds:ignore-seconds"
For example::
- -w 84:00.0,vf_msg_cfg=80@120:180
+ -a 84:00.0,vf_msg_cfg=80@120:180
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -458,7 +458,7 @@ no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::
- -w 84:00.0,enable_floating_veb=1
+ -a 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.
@@ -466,7 +466,7 @@ VFs created by this PF device.
Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::
- -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
+ -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
@@ -802,7 +802,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
7. The command line of running l3fwd would be something like the following::
- ./dpdk-l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+ ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index a2aea1233376..6e4d53968f75 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -30,7 +30,7 @@ Runtime Config Options
But if user intend to use the device without OS package, user can take ``devargs``
parameter ``safe-mode-support``, for example::
- -w 80:00.0,safe-mode-support=1
+ -a 80:00.0,safe-mode-support=1
Then the driver will be initialized successfully and the device will enter Safe Mode.
NOTE: In Safe mode, only very limited features are available, features like RSS,
@@ -41,7 +41,7 @@ Runtime Config Options
In pipeline mode, a flow can be set at one specific stage by setting parameter
``priority``. Currently, we support two stages: priority = 0 or !0. Flows with
priority 0 located at the first pipeline stage which typically be used as a firewall
- to drop the packet on a blacklist(we called it permission stage). At this stage,
+ to drop the packet on a blocklist (we call it the permission stage). At this stage,
flow rules are created for the device's exact match engine: switch. Flows with priority
!0 located at the second stage, typically packets are classified here and be steered to
specific queue or queue group (we called it distribution stage), At this stage, flow
@@ -53,7 +53,19 @@ Runtime Config Options
use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``,
for example::
- -w 80:00.0,pipeline-mode-support=1
+ -a 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+ This is a hint to the driver to select the data path that supports flow mark extraction
+ by default.
+ NOTE: This is an experimental devarg; it will be removed when either of the
+ conditions below is met.
+ 1) all data paths support flow mark (currently vPMD does not)
+ 2) a new offload such as RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.
+ Example::
+
+ -a 80:00.0,flow-mark-support=1
- ``Protocol extraction for per queue``
@@ -62,8 +74,8 @@ Runtime Config Options
The argument format is::
- -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
- -w 18:00.0,proto_xtr=<protocol>
+ -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+ -a 18:00.0,proto_xtr=<protocol>
Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
is used as a range separator and ``,`` is used as a single number separator.
@@ -74,14 +86,14 @@ Runtime Config Options
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+ dpdk-testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
VLAN extraction, other queues run with no protocol extraction.
.. code-block:: console
- dpdk-testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+ dpdk-testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
IPv6 extraction, other queues use the default VLAN extraction.
@@ -233,7 +245,7 @@ responses for the same from PF.
#. Bind the VF0, and run testpmd with 'cap=dcf' devarg::
- dpdk-testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i
+ dpdk-testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i
#. Monitor the VF2 interface network traffic::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 1f424b38ac3d..c801dbae8146 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,7 +89,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w af:10.0,pflink_fullchk=1 -- -i
+ testpmd -a af:10.0,pflink_fullchk=1 -- -i
- ``pflink_fullchk`` (default **0**)
@@ -277,7 +277,7 @@ option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index ed920e91ad51..cea7e8c2c4e3 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -24,8 +24,8 @@ Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.
-For this reason, one cannot white/blacklist a single port without also
-white/blacklisting the others on the same device.
+For this reason, one cannot block (or allow) a single port without also
+blocking (or allowing) the others on the same device.
Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
@@ -381,7 +381,7 @@ devices managed by librte_pmd_mlx4.
eth4
eth5
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow argument::
{
for intf in eth2 eth3 eth4 eth5;
@@ -389,14 +389,14 @@ devices managed by librte_pmd_mlx4.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:83:00.0
- -w 0000:83:00.0
- -w 0000:84:00.0
- -w 0000:84:00.0
+ -a 0000:83:00.0
+ -a 0000:83:00.0
+ -a 0000:84:00.0
+ -a 0000:84:00.0
.. note::
@@ -409,7 +409,7 @@ devices managed by librte_pmd_mlx4.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 0000:83:00.0 -a 0000:84:00.0 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index afa65a1379a5..5077e06a98a2 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1488,7 +1488,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
eth32
eth33
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow list::
{
for intf in eth2 eth3 eth4 eth5;
@@ -1496,14 +1496,14 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:05:00.1
- -w 0000:06:00.0
- -w 0000:06:00.1
- -w 0000:05:00.0
+ -a 0000:05:00.1
+ -a 0000:06:00.0
+ -a 0000:06:00.1
+ -a 0000:05:00.0
#. Request huge pages::
@@ -1511,7 +1511,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/nfb.rst b/doc/guides/nics/nfb.rst
index ecea3ecff074..e987f331048c 100644
--- a/doc/guides/nics/nfb.rst
+++ b/doc/guides/nics/nfb.rst
@@ -63,7 +63,7 @@ products) and the device argument `timestamp=1` must be used.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -w b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
+ ./<build_dir>/app/dpdk-testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the MBUFs
containing received frames and timestamp is inserted into the `rte_mbuf` struct.
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 7c04b5e60040..3c42e8585835 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -63,7 +63,8 @@ for details.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 \
+ -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
@@ -116,7 +117,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,reta_size=256
+ -a 0002:02:00.0,reta_size=256
With the above configuration, reta table of size 256 is populated.
@@ -127,7 +128,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_max_priority=10
+ -a 0002:02:00.0,flow_max_priority=10
With the above configuration, priority level was set to 10 (0-9). Max
priority level supported is 32.
@@ -139,7 +140,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_prealloc_size=4
+ -a 0002:02:00.0,flow_prealloc_size=4
With the above configuration, pre alloc size was set to 4. Max pre alloc
size supported is 32.
@@ -151,7 +152,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,max_sqb_count=64
+ -a 0002:02:00.0,max_sqb_count=64
With the above configuration, each send queue's descriptor buffer count is
limited to a maximum of 64 buffers.
@@ -163,7 +164,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,switch_header="higig2"
+ -a 0002:02:00.0,switch_header="higig2"
With the above configuration, higig2 will be enabled on that port and the
traffic on this port should be higig2 traffic only. Supported switch header
@@ -185,7 +186,7 @@ Runtime Config Options
For example to select the legacy mode(RSS tag adder as XOR)::
- -w 0002:02:00.0,tag_as_xor=1
+ -a 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -194,7 +195,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,ipsec_in_max_spi=128
+ -a 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
@@ -205,7 +206,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_rx_ctx=1
+ -a 0002:02:00.0,lock_rx_ctx=1
- ``Lock Tx contexts in NDC cache``
@@ -213,7 +214,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_tx_ctx=1
+ -a 0002:02:00.0,lock_tx_ctx=1
.. note::
@@ -229,7 +230,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
.. _otx2_tmapi:
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 959b52c1c333..64322442a003 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -295,7 +295,7 @@ Per-Device Parameters
~~~~~~~~~~~~~~~~~~~~~
The following per-device parameters can be passed via EAL PCI device
-whitelist option like "-w 02:00.0,arg1=value1,...".
+allow option like "-a 02:00.0,arg1=value1,...".
Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
boolean parameters value.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 7e44f846206c..3ce696b605d1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -191,7 +191,7 @@ following::
.. Note:
- Change the ``-b`` options to blacklist all of your physical ports. The
+ Change the ``-b`` options to exclude all of your physical ports. The
following command line is all one line.
Also, ``-f themes/black-yellow.theme`` is optional if the default colors
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index a928a790e389..9da4281c8bd3 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -157,7 +157,7 @@ This section provides instructions to configure SR-IOV with Linux OS.
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -a 0002:01:00.2 \
-- -i --no-flush-rx \
--port-topology=loop
@@ -377,7 +377,7 @@ This scheme is useful when application would like to insert vlan header without
Example:
.. code-block:: console
- -w 0002:01:00.2,skip_data_bytes=8
+ -a 0002:01:00.2,skip_data_bytes=8
Limitations
-----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index a470fd7f29bb..9af4d6192fd4 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -407,12 +407,12 @@ device having emitted a Device Removal Event. In such case, calling
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such closing operation.
-Blacklisting
+Blocklisting
~~~~~~~~~~~~
-The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
+The EAL PCI device blocklist functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
-The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
+The ports to be blocklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
Misc Functions
~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a84083b96c8a..2d083b8a4f68 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -30,7 +30,7 @@ after a primary process has already configured the hugepage shared memory for th
Secondary processes should run alongside primary process with same DPDK version.
Secondary processes which requires access to physical devices in Primary process, must
- be passed with the same whitelist and blacklist options.
+ be passed with the same allow and block options.
To support these two process types, and other multi-process setups described later,
two additional command-line parameters are available to the EAL:
@@ -131,7 +131,7 @@ can use).
.. note::
Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
- Any network ports being used by one process should be blacklisted in every other process.
+ Any network ports being used by one process should be blocklisted in every other process.
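
For example, a minimal sketch of two independent instances sharing a machine
(the PCI addresses, core lists and file prefixes are illustrative):

.. code-block:: console

   # instance 1 owns 0000:01:00.0
   dpdk-testpmd -l 0-1 --file-prefix=p1 -a 0000:01:00.0 -- -i

   # instance 2 allows only its own port, so 0000:01:00.0 is ignored
   dpdk-testpmd -l 2-3 --file-prefix=p2 -a 0000:01:00.1 -- -i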
Running Multiple Independent Groups of DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 86e0a141e6c7..239ec820eaf5 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
this argument allows user to specify which switch ports to enable port
representors for.::
- -w DBDF,representor=0
- -w DBDF,representor=[0,4,6,9]
- -w DBDF,representor=[0-31]
+ -a DBDF,representor=0
+ -a DBDF,representor=[0,4,6,9]
+ -a DBDF,representor=[0-31]
Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see support devargs.
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index cc1d0d7569cb..07ba12bea67e 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -59,9 +59,9 @@ which can be thought as a software "patch panel" front-end for applications.
::
- -w pci:dbdf,representor=0
- -w pci:dbdf,representor=[0-3]
- -w pci:dbdf,representor=[0,5-11]
+ -a pci:dbdf,representor=0
+ -a pci:dbdf,representor=[0-3]
+ -a pci:dbdf,representor=[0,5-11]
- As virtual devices, they may be more limited than their physical
counterparts, for instance by exposing only a subset of device
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d8ac359e51d4..57069ae4db4c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -543,6 +543,11 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The definitions related to including and excluding devices
+ have been changed from blacklist/whitelist to include/exclude.
+ There are compatibility macros and command-line mappings to accept
+ the old values, but applications and scripts are strongly encouraged
+ to migrate to the new names.
ABI Changes
-----------
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 7c5a45b72afb..b2af9a0755d6 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -61,19 +61,19 @@ This means that HW baseband device/s must be bound to a DPDK driver or
a SW baseband device/s (virtual BBdev) must be created (using --vdev).
To run the application in linux environment with the turbo_sw baseband device
-using the whitelisted port running on 1 encoding lcore and 1 decoding lcore
+using the allowed Ethernet port, running on 1 encoding lcore and 1 decoding lcore,
issue the command:
.. code-block:: console
- $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -w <NIC0PCIADDR> \
+ $ ./<build_dir>/examples/dpdk-bbdev --vdev='baseband_turbo_sw' -a <NIC0PCIADDR> \
-c 0x38 --socket-mem=2,2 --file-prefix=bbdev -- -e 0x10 -d 0x20
where, NIC0PCIADDR is the PCI address of the Rx port
This command creates one virtual bbdev devices ``baseband_turbo_sw`` where the
-device gets linked to a corresponding ethernet port as whitelisted by
-the parameter -w.
+device gets linked to the corresponding Ethernet port allowed by
+the -a parameter.
3 cores are allocated to the application, and assigned as:
- core 3 is the main and used to print the stats live on screen,
@@ -93,20 +93,20 @@ Using Packet Generator with baseband device sample application
To allow the bbdev sample app to do the loopback, an influx of traffic is required.
This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
it will print the transmitted along with the looped-back traffic on Rx ports.
-Executing the command below will generate traffic on the two whitelisted ethernet
+Executing the command below will generate traffic on the two allowed ethernet
ports.
.. code-block:: console
$ ./pktgen-3.4.0/app/x86_64-native-linux-gcc/pktgen -c 0x3 \
- --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+ --socket-mem=1,1 --file-prefix=pg -a <NIC1PCIADDR> -- -m 1.0 -P
where:
* ``-c COREMASK``: A hexadecimal bitmask of cores to run on
* ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
* ``--file-prefix``: Prefix for hugepage filenames
-* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-a <NIC1PCIADDR>``: Add a PCI device to the allow list. The argument format is <[domain:]bus:devid.func>.
* ``-m <string>``: Matrix for mapping ports to logical cores.
* ``-P``: PROMISCUOUS mode
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index b4fc587a09e2..41ee8b7ee3f4 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -46,8 +46,8 @@ these settings is shown below:
.. code-block:: console
- ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 /
- -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+ ./<build_dir>/examples/dpdk-eventdev_pipeline --vdev event_sw0 -- -r1 -t1 \
+ -e4 -a FF00 -s4 -n0 -c32 -W1000 -D
The application has some sanity checking built-in, so if there is a function
(e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 1f37dccf8bb7..cb637abdfaf4 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -323,15 +323,15 @@ This means that if the application is using a single core and both hardware
and software crypto devices are detected, hardware devices will be used.
A way to achieve the case where you want to force the use of virtual crypto
-devices is to whitelist the Ethernet devices needed and therefore implicitly
-blacklisting all hardware crypto devices.
+devices is to allow only the Ethernet devices needed, thereby implicitly
+blocking all hardware crypto devices.
For example, something like the following command line:
.. code-block:: console
./<build_dir>/examples/dpdk-ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
- -w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
+ -a 81:00.0 -a 81:00.1 -a 81:00.2 -a 81:00.3 \
--vdev "crypto_aesni_mb" --vdev "crypto_null" \
-- \
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
@@ -929,13 +929,13 @@ The user must setup the following environment variables:
* ``REMOTE_IFACE``: interface name for the test-port on the DUT.
-* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-w <pci-id>')
+* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-a <pci-id>')
Also the user can optionally setup:
* ``SGW_LCORE``: lcore to run ipsec-secgw on (default value is 0)
-* ``CRYPTO_DEV``: crypto device to be used ('-w <pci-id>'). If none specified
+* ``CRYPTO_DEV``: crypto device to be used ('-a <pci-id>'). If none specified
appropriate vdevs will be created by the script
Scripts can be used for multiple test scenarios. To check all available
@@ -1023,4 +1023,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
\ No newline at end of file
+list of available modes please refer to run_test.sh.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 7acbd7404e3b..5d53bf633db7 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -138,17 +138,18 @@ Following is the sample command:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x3 --eventq-sched=ordered
or
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+ ./<build_dir>/examples/dpdk-l3fwd -l 0-3 -n 4 -a <event device> \
+ -- -p 0x03 --mode=eventdev --eventq-sched=ordered
In this command:
-* -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform.
+* The -a option allows the event device supported by the platform. The way to pass this device may vary based on the platform.
* The --mode option defines PMD to be used for packet I/O.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 4a96800ec648..eee5d8185061 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -18,7 +18,7 @@ The application loads two types of rules at initialization:
* Route information rules, which are used for L3 forwarding
-* Access Control List (ACL) rules that blacklist (or block) packets with a specific characteristic
+* Access Control List (ACL) rules that block packets with a specific characteristic
When packets are received from a port,
the application extracts the necessary information from the TCP/IP header of the received packet and
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index d7e1dc581328..831f2bf58f99 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -378,7 +378,8 @@ See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Program
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+ ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -a 0000:xx:00.0 -a 0000:xx:00.1 \
+ -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
Where,
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index a8bedbab5321..9a7743146b82 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -52,7 +52,7 @@ Take IFCVF driver for example:
.. code-block:: console
./dpdk-vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
- -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+ -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
-- --interactive
.. note::
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 29340d94e801..73cabf0098d3 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -394,7 +394,7 @@ Call application for performance throughput test of single Aesni MB PMD
for cipher encryption aes-cbc and auth generation sha1-hmac,
one million operations, burst size 32, packet size 64::
- dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -a 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 12
@@ -404,7 +404,7 @@ Call application for performance latency test of two Aesni MB PMD executed
on two cores for cipher encryption aes-cbc, ten operations in silent mode::
dpdk-test-crypto-perf -l 4-7 --vdev crypto_aesni_mb1
- --vdev crypto_aesni_mb2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
+ --vdev crypto_aesni_mb2 -a 0000:00:00.0 -- --devtype crypto_aesni_mb
--cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
--cipher-op encrypt --optype cipher-only --silent
--ptest latency --total-ops 10
@@ -414,7 +414,7 @@ for cipher encryption aes-gcm and auth generation aes-gcm,ten operations
in silent mode, test vector provide in file "test_aes_gcm.data"
with packet verification::
- dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -a 0000:00:00.0 --
--devtype crypto_openssl --aead-algo aes-gcm --aead-key-sz 16
--aead-iv-sz 16 --aead-op encrypt --aead-aad-sz 16 --digest-sz 16
--optype aead --silent --ptest verify --total-ops 10
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 7e5dc0c54b1a..4771e8ecf04d 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -59,7 +59,7 @@ with a ``--`` separator:
.. code-block:: console
- sudo ./dpdk-test-flow_perf -n 4 -w 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
+ sudo ./dpdk-test-flow_perf -n 4 -a 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
The command line options are:
diff --git a/doc/guides/tools/testregex.rst b/doc/guides/tools/testregex.rst
index 4317aab533e2..112b2bb773e7 100644
--- a/doc/guides/tools/testregex.rst
+++ b/doc/guides/tools/testregex.rst
The data file will be used as source data for the RegEx to work on.
The tool has a number of command line options. Here is the sample command line::
- ./dpdk-test-regex -w 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
+ ./dpdk-test-regex -a 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
--
2.27.0
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v6 00/23] Add DLB PMD
2020-10-17 19:03 3% ` [dpdk-dev] [PATCH v5 00/22] Add DLB PMD Timothy McDaniel
@ 2020-10-23 18:32 3% ` Timothy McDaniel
1 sibling, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-23 18:32 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
Major changes in v6 after dpdk reviews:
=====================
- fixed meson conditional build. Moved test into driver’s meson.build
file instead of event/meson.build
- documentation is populated as associated code is introduced
- add log_register in add dynamic logging patch
- rename RTE_xxx symbol(s) as DLB2_xxx
- replaced function ptr enqueue_four with direct call to movdir64b
- remove unused port_pages
- broke up probe patch into 3 smaller patches for easier review
- changed param order of movdir64b/movntdq to match intrinsics
- added self to MAINTAINERS files
- squashed announcement of availability into last patch in series
- correct spelling errors and delete repeated words
- DPDK_21.0 -> DPDK 21 in map file
- add experimental banner to public structs and APIs
- implemented other suggestions from code reviews of DLB2 PMD. The
software is very similar in form so some DLB2 reviews comments
were applicable to DLB as well
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. Hardware requires minimum allocation to be
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-79539 ("eal: add new x86 cpuid support for WAITPKG")
Timothy McDaniel (23):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add eventdev probe
event/dlb: add flexible interface
event/dlb: add probe-time hardware init
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
MAINTAINERS | 5 +
app/test/test_eventdev.c | 7 +
config/rte_config.h | 8 +-
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4129 ++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 59 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1551 +++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 ++++
drivers/event/dlb/meson.build | 21 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 310 +
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 ++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6904 +++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 +++
drivers/event/dlb/pf/dlb_main.c | 586 ++
drivers/event/dlb/pf/dlb_main.h | 47 +
drivers/event/dlb/pf/dlb_pf.c | 750 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 77 +
drivers/event/dlb/rte_pmd_dlb_event_version.map | 9 +
drivers/event/meson.build | 2 +-
32 files changed, 21765 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/rte_pmd_dlb_event_version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1] doc: update abi version references
2020-10-23 16:07 33% [dpdk-dev] [PATCH v1] doc: update abi version references Ray Kinsella
@ 2020-10-23 16:51 7% ` David Marchand
2020-10-26 18:27 4% ` Kinsella, Ray
2020-10-26 19:23 33% ` [dpdk-dev] [PATCH] " Ray Kinsella
2020-10-26 19:31 33% ` [dpdk-dev] [PATCH v3] " Ray Kinsella
2 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-10-23 16:51 UTC (permalink / raw)
To: Ray Kinsella; +Cc: Neil Horman, Mcnamara, John, Thomas Monjalon, dev
On Fri, Oct 23, 2020 at 6:11 PM Ray Kinsella <mdr@ashroe.eu> wrote:
>
> Updated references to abi versions in the contributors guide.
Thanks for looking at it.
I would keep the dpdk release version aligned with updated ABI ver.
Caught 3 references in the first file.
%s/19.11/20.11/g can fix this.
Then:
Reviewed-by: David Marchand <david.marchand@redhat.com>
--
David Marchand
^ permalink raw reply [relevance 7%]
* [dpdk-dev] [PATCH v1] doc: update abi version references
@ 2020-10-23 16:07 33% Ray Kinsella
2020-10-23 16:51 7% ` David Marchand
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Ray Kinsella @ 2020-10-23 16:07 UTC (permalink / raw)
To: Ray Kinsella, Neil Horman; +Cc: john.mcnamara, thomas, david.marchand, dev
Updated references to abi versions in the contributors guide.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
doc/guides/contributing/abi_policy.rst | 52 ++++-----
doc/guides/contributing/abi_versioning.rst | 120 ++++++++++-----------
2 files changed, 86 insertions(+), 86 deletions(-)
diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
index e17758a107..bc564f0cf6 100644
--- a/doc/guides/contributing/abi_policy.rst
+++ b/doc/guides/contributing/abi_policy.rst
@@ -78,15 +78,15 @@ The DPDK ABI policy
-------------------
A new major ABI version is declared no more frequently than yearly, with
-declarations usually aligning with a LTS release, e.g. ABI 20 for DPDK 19.11.
+declarations usually aligning with an LTS release, e.g. ABI 21 for DPDK 20.11.
Compatibility with the major ABI version is then mandatory in subsequent
-releases until the next major ABI version is declared, e.g. ABI 21 for DPDK
-20.11.
+releases until the next major ABI version is declared, e.g. ABI 22 for DPDK
+21.11.
At the declaration of a major ABI version, major version numbers encoded in
libraries' sonames are bumped to indicate the new version, with the minor
-version reset to ``0``. An example would be ``librte_eal.so.20.3`` would become
-``librte_eal.so.21.0``.
+version reset to ``0``. For example, ``librte_eal.so.21.3`` would become
+``librte_eal.so.22.0``.
The ABI may then change multiple times, without warning, between the last major
ABI version increment and the HEAD label of the git tree, with the condition
@@ -95,8 +95,8 @@ sonames do not change.
Minor versions are incremented to indicate the release of a new ABI compatible
DPDK release, typically the DPDK quarterly releases. An example of this, might
-be that ``librte_eal.so.20.1`` would indicate the first ABI compatible DPDK
-release, following the declaration of the new major ABI version ``20``.
+be that ``librte_eal.so.21.1`` would indicate the first ABI compatible DPDK
+release, following the declaration of the new major ABI version ``21``.
An ABI version is supported in all new releases until the next major ABI version
is declared. When changing the major ABI version, the release notes will detail
@@ -223,10 +223,10 @@ The following are examples of allowable ABI changes occurring between
declarations of major ABI versions.
* DPDK 19.11 release defines the function ``rte_foo()`` ; ``rte_foo()``
- is part of the major ABI version ``20``.
+ is part of the major ABI version ``21``.
-* DPDK 20.02 release defines a new function ``rte_foo(uint8_t bar)``.
- This is not a problem as long as the symbol ``rte_foo@DPDK20`` is
+* DPDK 21.02 release defines a new function ``rte_foo(uint8_t bar)``.
+ This is not a problem as long as the symbol ``rte_foo@DPDK21`` is
preserved through :ref:`abi_versioning`.
- The new function may be marked with the ``__rte_experimental`` tag for a
@@ -236,20 +236,20 @@ declarations of major ABI versions.
declared as ``__rte_deprecated`` and a deprecation notice is provided.
* DPDK 19.11 is not re-released to include ``rte_foo(uint8_t bar)``, the new
- version of ``rte_foo`` only exists from DPDK 20.02 onwards as described in the
+ version of ``rte_foo`` only exists from DPDK 21.02 onwards as described in the
:ref:`note on forward-only compatibility<forward-only>`.
-* DPDK 20.02 release defines the experimental function ``__rte_experimental
- rte_baz()``. This function may or may not exist in the DPDK 20.05 release.
+* DPDK 21.02 release defines the experimental function ``__rte_experimental
+ rte_baz()``. This function may or may not exist in the DPDK 21.05 release.
* An application ``dPacket`` wishes to use ``rte_foo(uint8_t bar)``, before the
- declaration of the DPDK ``21`` major ABI version. The application can only
- ensure its runtime dependencies are met by specifying ``DPDK (>= 20.2)`` as
+ declaration of the DPDK ``22`` major ABI version. The application can only
+ ensure its runtime dependencies are met by specifying ``DPDK (>= 21.2)`` as
an explicit package dependency, as the soname can only indicate the
supported major ABI version.
-* At the release of DPDK 20.11, the function ``rte_foo(uint8_t bar)`` becomes
- formally part of then new major ABI version DPDK ``21`` and ``rte_foo()`` may be
+* At the release of DPDK 21.11, the function ``rte_foo(uint8_t bar)`` becomes
+ formally part of the new major ABI version DPDK ``22`` and ``rte_foo()`` may be
removed.
.. _deprecation_notices:
@@ -261,25 +261,25 @@ The following are some examples of ABI deprecation notices which would be
added to the Release Notes:
* The Macro ``#RTE_FOO`` is deprecated and will be removed with ABI version
- 21, to be replaced with the inline function ``rte_foo()``.
+ 22, to be replaced with the inline function ``rte_foo()``.
* The function ``rte_mbuf_grok()`` has been updated to include a new parameter
- in version 20.2. Backwards compatibility will be maintained for this function
- until the release of the new DPDK major ABI version 21, in DPDK version
- 20.11.
+ in version 21.2. Backwards compatibility will be maintained for this function
+ until the release of the new DPDK major ABI version 22, in DPDK version
+ 21.11.
-* The members of ``struct rte_foo`` have been reorganized in DPDK 20.02 for
+* The members of ``struct rte_foo`` have been reorganized in DPDK 21.02 for
performance reasons. Existing binary applications will have backwards
- compatibility in release 20.02, while newly built binaries will need to
+ compatibility in release 21.02, while newly built binaries will need to
reference the new structure variant ``struct rte_foo2``. Compatibility will be
- removed in release 20.11, and all applications will require updating and
+ removed in release 21.11, and all applications will require updating and
rebuilding to the new structure at that time, which will be renamed to the
original ``struct rte_foo``.
* Significant ABI changes are planned for the ``librte_dostuff`` library. The
- upcoming release 20.02 will not contain these changes, but release 20.11 will,
+ upcoming release 21.02 will not contain these changes, but release 21.11 will,
and no backwards compatibility is planned due to the extensive nature of
- these changes. Binaries using this library built prior to ABI version 21 will
+ these changes. Binaries using this library built prior to ABI version 22 will
require updating and recompilation.
diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index b8b35761e2..91ada18dd7 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -14,22 +14,22 @@ What is a library's soname?
---------------------------
System libraries usually adopt the familiar major and minor version naming
-convention, where major versions (e.g. ``librte_eal 20.x, 21.x``) are presumed
+convention, where major versions (e.g. ``librte_eal 21.x, 22.x``) are presumed
to be ABI incompatible with each other and minor versions (e.g. ``librte_eal
-20.1, 20.2``) are presumed to be ABI compatible. A library's `soname
+21.1, 21.2``) are presumed to be ABI compatible. A library's `soname
<https://en.wikipedia.org/wiki/Soname>`_. is typically used to provide backward
compatibility information about a given library, describing the lowest common
denominator ABI supported by the library. The soname or logical name for the
library, is typically comprised of the library's name and major version e.g.
-``librte_eal.so.20``.
+``librte_eal.so.21``.
During an application's build process, a library's soname is noted as a runtime
dependency of the application. This information is then used by the `dynamic
linker <https://en.wikipedia.org/wiki/Dynamic_linker>`_ when resolving the
applications dependencies at runtime, to load a library supporting the correct
ABI version. The library loaded at runtime therefore, may be a minor revision
-supporting the same major ABI version (e.g. ``librte_eal.20.2``), as the library
-used to link the application (e.g ``librte_eal.20.0``).
+supporting the same major ABI version (e.g. ``librte_eal.21.2``), as the library
+used to link the application (e.g ``librte_eal.21.0``).
.. _major_abi_versions:
@@ -59,41 +59,41 @@ persists over multiple releases.
.. code-block:: none
$ head ./lib/librte_acl/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
$ head ./lib/librte_eal/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
When an ABI change is made between major ABI versions to a given library, a new
section is added to that library's version map describing the impending new ABI
version, as described in the section :ref:`example_abi_macro_usage`. The
-library's soname and filename however do not change, e.g. ``libacl.so.20``, as
+library's soname and filename however do not change, e.g. ``libacl.so.21``, as
ABI compatibility with the last major ABI version continues to be preserved for
that library.
.. code-block:: none
$ head ./lib/librte_acl/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
- DPDK_21 {
+ DPDK_22 {
global:
- } DPDK_20;
+ } DPDK_21;
...
$ head ./lib/librte_eal/version.map
- DPDK_20 {
+ DPDK_21 {
global:
...
-However when a new ABI version is declared, for example DPDK ``21``, old
+However when a new ABI version is declared, for example DPDK ``22``, old
depreciated functions may be safely removed at this point and the entire old
major ABI version removed, see the section :ref:`deprecating_entire_abi` on
how this may be done.
@@ -101,12 +101,12 @@ how this may be done.
.. code-block:: none
$ head ./lib/librte_acl/version.map
- DPDK_21 {
+ DPDK_22 {
global:
...
$ head ./lib/librte_eal/version.map
- DPDK_21 {
+ DPDK_22 {
global:
...
@@ -216,7 +216,7 @@ library looks like this
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -242,7 +242,7 @@ This file needs to be modified as follows
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -264,15 +264,15 @@ This file needs to be modified as follows
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
The addition of the new block tells the linker that a new version node
-``DPDK_21`` is available, which contains the symbol rte_acl_create, and inherits
-the symbols from the DPDK_20 node. This list is directly translated into a
+``DPDK_22`` is available, which contains the symbol rte_acl_create, and inherits
+the symbols from the DPDK_21 node. This list is directly translated into a
list of exported symbols when DPDK is compiled as a shared library.
Next, we need to specify in the code which function maps to the rte_acl_create
@@ -285,7 +285,7 @@ with the public symbol name
-struct rte_acl_ctx *
-rte_acl_create(const struct rte_acl_param *param)
+struct rte_acl_ctx * __vsym
- +rte_acl_create_v20(const struct rte_acl_param *param)
+ +rte_acl_create_v21(const struct rte_acl_param *param)
{
size_t sz;
struct rte_acl_ctx *ctx;
@@ -294,7 +294,7 @@ with the public symbol name
Note that the base name of the symbol was kept intact, as this is conducive to
the macros used for versioning symbols and we have annotated the function as
``__vsym``, an implementation of a versioned symbol. That is our next step,
-mapping this new symbol name to the initial symbol name at version node 20.
+mapping this new symbol name to the initial symbol name at version node 21.
Immediately after the function, we add the VERSION_SYMBOL macro.
.. code-block:: c
@@ -302,26 +302,26 @@ Immediately after the function, we add the VERSION_SYMBOL macro.
#include <rte_function_versioning.h>
...
- VERSION_SYMBOL(rte_acl_create, _v20, 20);
+ VERSION_SYMBOL(rte_acl_create, _v21, 21);
Remembering to also add the rte_function_versioning.h header to the requisite c
file where these changes are being made. The macro instructs the linker to
-create a new symbol ``rte_acl_create@DPDK_20``, which matches the symbol created
+create a new symbol ``rte_acl_create@DPDK_21``, which matches the symbol created
in older builds, but now points to the above newly named function. We have now
mapped the original rte_acl_create symbol to the original function (but with a
new name).
Please see the section :ref:`Enabling versioning macros
<enabling_versioning_macros>` to enable this macro in the meson/ninja build.
-Next, we need to create the new ``v21`` version of the symbol. We create a new
-function name, with the ``v21`` suffix, and implement it appropriately.
+Next, we need to create the new ``v22`` version of the symbol. We create a new
+function name, with the ``v22`` suffix, and implement it appropriately.
.. code-block:: c
struct rte_acl_ctx * __vsym
- rte_acl_create_v21(const struct rte_acl_param *param, int debug);
+ rte_acl_create_v22(const struct rte_acl_param *param, int debug);
{
- struct rte_acl_ctx *ctx = rte_acl_create_v20(param);
+ struct rte_acl_ctx *ctx = rte_acl_create_v21(param);
ctx->debug = debug;
@@ -330,7 +330,7 @@ function name, with the ``v21`` suffix, and implement it appropriately.
This code serves as our new API call. It's the same as our old call, but adds the
new parameter in place. Next we need to map this function to the new default
-symbol ``rte_acl_create@DPDK_21``. To do this, immediately after the function,
+symbol ``rte_acl_create@DPDK_22``. To do this, immediately after the function,
we add the BIND_DEFAULT_SYMBOL macro.
.. code-block:: c
@@ -338,10 +338,10 @@ we add the BIND_DEFAULT_SYMBOL macro.
#include <rte_function_versioning.h>
...
- BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ BIND_DEFAULT_SYMBOL(rte_acl_create, _v22, 22);
The macro instructs the linker to create the new default symbol
-``rte_acl_create@DPDK_21``, which points to the above newly named function.
+``rte_acl_create@DPDK_22``, which points to the above newly named function.
We finally modify the prototype of the call in the public header file,
such that it contains both versions of the symbol and the public API.
@@ -352,15 +352,15 @@ such that it contains both versions of the symbol and the public API.
rte_acl_create(const struct rte_acl_param *param);
struct rte_acl_ctx * __vsym
- rte_acl_create_v20(const struct rte_acl_param *param);
+ rte_acl_create_v21(const struct rte_acl_param *param);
struct rte_acl_ctx * __vsym
- rte_acl_create_v21(const struct rte_acl_param *param, int debug);
+ rte_acl_create_v22(const struct rte_acl_param *param, int debug);
And that's it, on the next shared library rebuild, there will be two versions of
-rte_acl_create, an old DPDK_20 version, used by previously built applications,
-and a new DPDK_21 version, used by future built applications.
+rte_acl_create, an old DPDK_21 version, used by previously built applications,
+and a new DPDK_22 version, used by future built applications.
.. note::
@@ -385,21 +385,21 @@ this code in a position of no longer having a symbol simply named
To correct this, we can simply map a function of our choosing back to the public
symbol in the static build with the ``MAP_STATIC_SYMBOL`` macro. Generally the
assumption is that the most recent version of the symbol is the one you want to
-map. So, back in the C file where, immediately after ``rte_acl_create_v21`` is
+map. So, back in the C file where, immediately after ``rte_acl_create_v22`` is
defined, we add this
.. code-block:: c
struct rte_acl_ctx * __vsym
- rte_acl_create_v21(const struct rte_acl_param *param, int debug)
+ rte_acl_create_v22(const struct rte_acl_param *param, int debug)
{
...
}
- MAP_STATIC_SYMBOL(struct rte_acl_ctx *rte_acl_create(const struct rte_acl_param *param, int debug), rte_acl_create_v21);
+ MAP_STATIC_SYMBOL(struct rte_acl_ctx *rte_acl_create(const struct rte_acl_param *param, int debug), rte_acl_create_v22);
That tells the compiler that, when building a static library, any calls to the
-symbol ``rte_acl_create`` should be linked to ``rte_acl_create_v21``
+symbol ``rte_acl_create`` should be linked to ``rte_acl_create_v22``
.. _enabling_versioning_macros:
@@ -456,7 +456,7 @@ version node.
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
...
@@ -486,22 +486,22 @@ When we promote the symbol to the stable ABI, we simply strip the
}
We then update the map file, adding the symbol ``rte_acl_create``
-to the ``DPDK_21`` version node.
+to the ``DPDK_22`` version node.
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
...
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
Although there are strictly no guarantees or commitments associated with
@@ -509,7 +509,7 @@ Although there are strictly no guarantees or commitments associated with
an alias to experimental. The process to add an alias to experimental,
is similar to the symbol versioning process. Assuming we have an experimental
symbol as before, we now add the symbol to both the ``EXPERIMENTAL``
-and ``DPDK_21`` version nodes.
+and ``DPDK_22`` version nodes.
.. code-block:: c
@@ -535,29 +535,29 @@ and ``DPDK_21`` version nodes.
VERSION_SYMBOL_EXPERIMENTAL(rte_acl_create, _e);
struct rte_acl_ctx *
- rte_acl_create_v21(const struct rte_acl_param *param)
+ rte_acl_create_v22(const struct rte_acl_param *param)
{
return rte_acl_create(param);
}
- BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ BIND_DEFAULT_SYMBOL(rte_acl_create, _v22, 22);
In the map file, we map the symbol to both the ``EXPERIMENTAL``
-and ``DPDK_21`` version nodes.
+and ``DPDK_22`` version nodes.
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
...
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
EXPERIMENTAL {
global:
@@ -585,7 +585,7 @@ file:
.. code-block:: none
- DPDK_20 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -607,21 +607,21 @@ file:
local: *;
};
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_create;
- } DPDK_20;
+ } DPDK_21;
Next remove the corresponding versioned export.
.. code-block:: c
- -VERSION_SYMBOL(rte_acl_create, _v20, 20);
+ -VERSION_SYMBOL(rte_acl_create, _v21, 21);
Note that the internal function definition could also be removed, but its used
-in our example by the newer version ``v21``, so we leave it in place and declare
+in our example by the newer version ``v22``, so we leave it in place and declare
it as static. This is a coding style choice.
.. _deprecating_entire_abi:
@@ -642,7 +642,7 @@ In the case of our map above, it would transform to look as follows
.. code-block:: none
- DPDK_21 {
+ DPDK_22 {
global:
rte_acl_add_rules;
@@ -670,8 +670,8 @@ symbols.
.. code-block:: c
- -BIND_DEFAULT_SYMBOL(rte_acl_create, _v20, 20);
- +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ -BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+ +BIND_DEFAULT_SYMBOL(rte_acl_create, _v22, 22);
Lastly, any VERSION_SYMBOL macros that point to the old version nodes
should be removed, taking care to preserve any code that is shared
--
2.23.0
^ permalink raw reply [relevance 33%]
* Re: [dpdk-dev] [v3 1/2] cryptodev: support enqueue callback functions
2020-10-21 19:33 3% ` Ananyev, Konstantin
@ 2020-10-23 12:36 0% ` Gujjar, Abhinandan S
0 siblings, 0 replies; 200+ results
From: Gujjar, Abhinandan S @ 2020-10-23 12:36 UTC (permalink / raw)
To: Ananyev, Konstantin, dev, Doherty, Declan, akhil.goyal,
Honnappa.Nagarahalli
Cc: Vangati, Narender, jerinj
Hi Konstantin,
Thanks. I will generate a new patch with the suggested changes.
> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Thursday, October 22, 2020 1:04 AM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; dev@dpdk.org;
> Doherty, Declan <declan.doherty@intel.com>; akhil.goyal@nxp.com;
> Honnappa.Nagarahalli@arm.com
> Cc: Vangati, Narender <narender.vangati@intel.com>; jerinj@marvell.com
> Subject: RE: [v3 1/2] cryptodev: support enqueue callback functions
>
>
> Hi Abhinandan,
>
> Thanks for the effort, good progress.
> Though few more comments, see below.
>
> > This patch adds APIs to add/remove callback functions. The callback
> > function will be called for each burst of crypto ops received on a
> > given crypto device queue pair.
> >
> > Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> > ---
> > config/rte_config.h | 1 +
> > lib/librte_cryptodev/meson.build | 2 +-
> > lib/librte_cryptodev/rte_cryptodev.c | 201
> +++++++++++++++++++++++++
> > lib/librte_cryptodev/rte_cryptodev.h | 153 ++++++++++++++++++-
> > lib/librte_cryptodev/rte_cryptodev_version.map | 2 +
> > 5 files changed, 357 insertions(+), 2 deletions(-)
>
> Don't forget to update Release Notes and probably Prog Guide too.
>
> >
> > diff --git a/config/rte_config.h b/config/rte_config.h
> > index 03d90d7..e999d93 100644
> > --- a/config/rte_config.h
> > +++ b/config/rte_config.h
> > @@ -61,6 +61,7 @@
> > /* cryptodev defines */
> > #define RTE_CRYPTO_MAX_DEVS 64
> > #define RTE_CRYPTODEV_NAME_LEN 64
> > +#define RTE_CRYPTO_CALLBACKS 1
> >
> > /* compressdev defines */
> > #define RTE_COMPRESS_MAX_DEVS 64
> > diff --git a/lib/librte_cryptodev/meson.build
> > b/lib/librte_cryptodev/meson.build
> > index c4c6b3b..8c5493f 100644
> > --- a/lib/librte_cryptodev/meson.build
> > +++ b/lib/librte_cryptodev/meson.build
> > @@ -9,4 +9,4 @@ headers = files('rte_cryptodev.h',
> > 'rte_crypto.h',
> > 'rte_crypto_sym.h',
> > 'rte_crypto_asym.h')
> > -deps += ['kvargs', 'mbuf']
> > +deps += ['kvargs', 'mbuf', 'rcu']
> > diff --git a/lib/librte_cryptodev/rte_cryptodev.c
> > b/lib/librte_cryptodev/rte_cryptodev.c
> > index 3d95ac6..5ba774a 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev.c
> > +++ b/lib/librte_cryptodev/rte_cryptodev.c
> > @@ -448,6 +448,10 @@ struct
> rte_cryptodev_sym_session_pool_private_data {
> > return 0;
> > }
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +/* spinlock for crypto device enq callbacks */
> > +static rte_spinlock_t rte_cryptodev_enq_cb_lock = RTE_SPINLOCK_INITIALIZER;
> > +#endif
> >
> > const char *
> > rte_cryptodev_get_feature_name(uint64_t flag)
> > @@ -1136,6 +1140,203 @@ struct rte_cryptodev *
> > socket_id);
> > }
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +
> > +struct rte_cryptodev_cb *
> > +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> > + uint16_t qp_id,
> > + rte_cryptodev_callback_fn cb_fn,
> > + void *cb_arg)
> > +{
> > + struct rte_cryptodev *dev;
> > + struct rte_cryptodev_cb *cb, *tail;
> > + struct rte_cryptodev_enq_cb_rcu *list;
> > + struct rte_rcu_qsbr *qsbr;
> > + size_t size;
> > +
> > + /* Max thread set to 1, as one DP thread accessing a queue-pair */
> > + const uint32_t max_threads = 1;
> > +
> > + if (!cb_fn)
> > + return NULL;
> > +
> > + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> > + CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> > + return NULL;
> > + }
> > +
> > + dev = &rte_crypto_devices[dev_id];
> > + if (qp_id >= dev->data->nb_queue_pairs) {
> > + CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> > + return NULL;
> > + }
> > +
> > + rte_spinlock_lock(&rte_cryptodev_enq_cb_lock);
> > + if (dev->enq_cbs == NULL) {
> > + dev->enq_cbs = rte_zmalloc(NULL, sizeof(cb) *
> > + dev->data->nb_queue_pairs, 0);
> > + if (dev->enq_cbs == NULL) {
> > + CDEV_LOG_ERR("Failed to allocate memory for
> callbacks");
> > + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
>
> It is a bit clumsy to do unlock() for every return with an error.
> Probably an easier way - create an internal function that would do the actual
> job, and then lock(); ret = actual_job_internal_function(...); unlock(); ...
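>
> Rough sketch of the idea (untested, the _locked name is made up):
>
>  /* does the actual job; caller must hold rte_cryptodev_enq_cb_lock */
>  static struct rte_cryptodev_cb *
>  cryptodev_add_enq_callback_locked(struct rte_cryptodev *dev, uint16_t qp_id,
>  		rte_cryptodev_callback_fn cb_fn, void *cb_arg);
>
>  struct rte_cryptodev_cb *
>  rte_cryptodev_add_enq_callback(uint8_t dev_id, uint16_t qp_id,
>  		rte_cryptodev_callback_fn cb_fn, void *cb_arg)
>  {
>  	struct rte_cryptodev_cb *cb;
>
>  	/* dev_id/qp_id/cb_fn validation as in the patch ... */
>
>  	rte_spinlock_lock(&rte_cryptodev_enq_cb_lock);
>  	cb = cryptodev_add_enq_callback_locked(&rte_crypto_devices[dev_id],
>  			qp_id, cb_fn, cb_arg);
>  	rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
>
>  	return cb;
>  }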
>
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + list = rte_zmalloc(NULL, sizeof(*list), 0);
>
> As I understand, list is per queue, while enq_cbs[] is per port.
> So if enq_cbs is not null, it doesn't mean that list for that particular queue is
> already properly initialized.
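>
> I.e. the NULL check probably needs to be per-queue, something like (sketch):
>
>  if (dev->enq_cbs == NULL) {
>  	/* allocate the per-port array, as above */
>  }
>  if (dev->enq_cbs[qp_id] == NULL) {
>  	/* allocate and init this queue's list and qsbr */
>  }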
>
> Another thing - is there any point for dev->enq_cbs[] to be an array of
> pointers to rte_cryptodev_enq_cb_rcu? Considering that
> rte_cryptodev_enq_cb_rcu itself contains just two pointers inside, I think
> enq_cbs can point just to an array of rte_cryptodev_enq_cb_rcu:
>
> struct rte_cryptodev {
> ...
> struct rte_cryptodev_enq_cb_rcu *enq_cbs;
>
> And you can remove one level of indirection here and in other places.
>
> > + if (list == NULL) {
> > + CDEV_LOG_ERR("Failed to allocate memory for list on
> "
> > + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> > + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> > + rte_errno = ENOMEM;
> > + rte_free(dev->enq_cbs);
>
> Here and in other places: you free dev->enq_cbs, but do not set it to NULL.
> In fact - probably a good idea to have one cleanup() function that would free
> all the necessary stuff and set it to NULL, and then use it in all such places.
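>
> For instance (sketch only, the name is made up):
>
>  static void
>  cryptodev_enq_cbs_cleanup(struct rte_cryptodev *dev, uint16_t qp_id)
>  {
>  	struct rte_cryptodev_enq_cb_rcu *list = dev->enq_cbs[qp_id];
>
>  	if (list != NULL) {
>  		rte_free(list->qsbr);
>  		rte_free(list);
>  		dev->enq_cbs[qp_id] = NULL;
>  	}
>  }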
>
> > + return NULL;
> > + }
> > +
> > + /* Create RCU QSBR variable */
> > + size = rte_rcu_qsbr_get_memsize(max_threads);
> > + qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
> > + if (qsbr == NULL) {
> > + CDEV_LOG_ERR("Failed to allocate memory for RCU
> on "
> > + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> > + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> > + rte_errno = ENOMEM;
> > + rte_free(list);
> > + rte_free(dev->enq_cbs);
> > + dev->enq_cbs[qp_id] = NULL;
> > + return NULL;
> > + }
> > +
> > + if (rte_rcu_qsbr_init(qsbr, max_threads)) {
> > + CDEV_LOG_ERR("Failed to initialize for RCU on "
> > + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> > + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> > + rte_free(qsbr);
> > + rte_free(list);
> > + rte_free(dev->enq_cbs);
> > + dev->enq_cbs[qp_id] = NULL;
> > + return NULL;
> > + }
> > +
> > + dev->enq_cbs[qp_id] = list;
> > + list->qsbr = qsbr;
> > + }
> > +
> > + cb = rte_zmalloc(NULL, sizeof(*cb), 0);
> > + if (cb == NULL) {
> > + CDEV_LOG_ERR("Failed to allocate memory for callback on "
> > + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> > + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + cb->fn = cb_fn;
> > + cb->arg = cb_arg;
> > +
> > + /* Add the callbacks in fifo order. */
> > + list = dev->enq_cbs[qp_id];
> > + tail = list->next;
> > + if (tail) {
> > + while (tail->next)
> > + tail = tail->next;
> > + tail->next = cb;
> > + } else
> > + list->next = cb;
> > +
> > + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> > +
> > + return cb;
> > +}
> > +
> > +int
> > +rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> > + uint16_t qp_id,
> > + struct rte_cryptodev_cb *cb)
> > +{
> > + struct rte_cryptodev *dev;
> > + struct rte_cryptodev_cb **prev_cb, *curr_cb;
> > + struct rte_cryptodev_enq_cb_rcu *list;
> > + uint16_t qp;
> > + int free_mem;
> > + int ret;
> > +
> > + free_mem = 1;
> > + ret = -EINVAL;
> > +
> > + if (!cb) {
> > + CDEV_LOG_ERR("cb is NULL");
> > + return ret;
> > + }
> > +
> > + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> > + CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> > + return ret;
> > + }
> > +
> > + dev = &rte_crypto_devices[dev_id];
> > + if (qp_id >= dev->data->nb_queue_pairs) {
> > + CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> > + return ret;
> > + }
> > +
> > + list = dev->enq_cbs[qp_id];
> > + if (list == NULL) {
> > + CDEV_LOG_ERR("Callback list is NULL");
> > + return ret;
> > + }
> > +
> > + if (list->qsbr == NULL) {
> > + CDEV_LOG_ERR("Rcu qsbr is NULL");
> > + return ret;
> > + }
> > +
> > + rte_spinlock_lock(&rte_cryptodev_enq_cb_lock);
> > + if (dev->enq_cbs == NULL) {
> > + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> > + return ret;
> > + }
> > +
> > + prev_cb = &list->next;
> > + for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
> > + curr_cb = *prev_cb;
> > + if (curr_cb == cb) {
> > + /* Remove the user cb from the callback list. */
> > + *prev_cb = curr_cb->next;
> > + ret = 0;
> > + break;
> > + }
> > + }
> > +
> > + if (!ret) {
> > + /* Call sync with invalid thread id as this is part of
> > + * control plane API
> > + */
> > + rte_rcu_qsbr_synchronize(list->qsbr,
> RTE_QSBR_THRID_INVALID);
> > + rte_free(cb);
> > + }
> > +
> > + if (list->next == NULL) {
> > + rte_free(list->qsbr);
>
> We can't destroy our sync variable while the device is not stopped or
> destroyed. It can still be used by the DP (data path).
> Probably the easiest way to deal with it - allocate and initialize enq_cbs[]
> and all related qsbrs at the first add_callback and free all that memory only
> on dev_destroy().
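>
> Then remove_enq_callback() would only unlink and free the cb itself (sketch):
>
>  /* wait for quiescent state change before freeing the unlinked cb */
>  rte_rcu_qsbr_synchronize(list->qsbr, RTE_QSBR_THRID_INVALID);
>  rte_free(cb);
>  /* list, qsbr and dev->enq_cbs stay allocated until device destroy */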
>
> > + rte_free(list);
> > + dev->enq_cbs[qp_id] = NULL;
> > + }
> > +
> > + for (qp = 0; qp < dev->data->nb_queue_pairs; qp++)
> > + if (dev->enq_cbs[qp] != NULL) {
> > + free_mem = 0;
> > + break;
> > + }
> > +
> > + if (free_mem) {
> > + rte_free(dev->enq_cbs);
>
> Again, not safe to do here, see above.
>
> > + dev->enq_cbs = NULL;
> > + }
> > +
> > + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> > +
> > + return ret;
> > +}
> > +#endif
> >
> > int
> > rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats
> > *stats)
> > diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> > index 0935fd5..669746d 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev.h
> > +++ b/lib/librte_cryptodev/rte_cryptodev.h
> > @@ -23,6 +23,7 @@
> > #include "rte_dev.h"
> > #include <rte_common.h>
> > #include <rte_config.h>
> > +#include <rte_rcu_qsbr.h>
> >
> > #include "rte_cryptodev_trace_fp.h"
> >
> > @@ -522,6 +523,34 @@ struct rte_cryptodev_qp_conf {
> > /**< The mempool for creating sess private data in sessionless mode */
> > };
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +/**
> > + * Function type used for pre processing crypto ops when enqueue
> > +burst is
> > + * called.
> > + *
> > + * The callback function is called on enqueue burst immediately
> > + * before the crypto ops are put onto the hardware queue for processing.
> > + *
> > + * @param dev_id The identifier of the device.
> > + * @param qp_id The index of the queue pair in which ops are
> > + * to be enqueued for processing. The value
> > + * must be in the range [0, nb_queue_pairs - 1]
> > + * previously supplied to
> > + * *rte_cryptodev_configure*.
> > + * @param ops The address of an array of *nb_ops* pointers
> > + * to *rte_crypto_op* structures which contain
> > + * the crypto operations to be processed.
> > + * @param nb_ops The number of operations to process.
> > + * @param user_param The arbitrary user parameter passed in by the
> > + * application when the callback was originally
> > + * registered.
> > + * @return The number of ops to be enqueued to the
> > + * crypto device.
> > + */
> > +typedef uint16_t (*rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t
> qp_id,
> > + struct rte_crypto_op **ops, uint16_t nb_ops, void
> *user_param);
> > +#endif
> > +
> > /**
> > * Typedef for application callback function to be registered by application
> > * software for notification of device events
> > @@ -822,7 +851,6 @@ struct rte_cryptodev_config {
> > enum rte_cryptodev_event_type event,
> > rte_cryptodev_cb_fn cb_fn, void *cb_arg);
> >
> > -
> > typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
> > struct rte_crypto_op **ops, uint16_t nb_ops);
> > /**< Dequeue processed packets from queue pair of a device. */
> > @@ -839,6 +867,33 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
> > /** Structure to keep track of registered callbacks */
> > TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +/**
> > + * @internal
> > + * Structure used to hold information about the callbacks to be
> > +called for a
> > + * queue pair on enqueue.
> > + */
> > +struct rte_cryptodev_cb {
> > + struct rte_cryptodev_cb *next;
> > + /** < Pointer to next callback */
> > + rte_cryptodev_callback_fn fn;
> > + /** < Pointer to callback function */
> > + void *arg;
> > + /** < Pointer to argument */
> > +};
> > +
> > +/**
> > + * @internal
> > + * Structure used to hold information about the RCU for a queue pair.
> > + */
> > +struct rte_cryptodev_enq_cb_rcu {
> > + struct rte_cryptodev_cb *next;
> > + /** < Pointer to next callback */
> > + struct rte_rcu_qsbr *qsbr;
> > + /** < RCU QSBR variable per queue pair */
> > +};
> > +#endif
> > +
> > /** The data structure associated with each crypto device. */ struct
> > rte_cryptodev {
> > dequeue_pkt_burst_t dequeue_burst;
> > @@ -867,6 +922,11 @@ struct rte_cryptodev {
> > __extension__
> > uint8_t attached : 1;
> > /**< Flag indicating the device is attached */
> > +
> > +#ifdef RTE_CRYPTO_CALLBACKS
>
> I'd *always* reserve space for it,
> no matter whether RTE_CRYPTO_CALLBACKS is defined or not,
> to avoid a difference in the public structure layout.
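>
> I.e. keep the member in all builds and compile only the code paths
> conditionally, e.g. (sketch):
>
>  struct rte_cryptodev {
>  	...
>  	/* always present, used only when RTE_CRYPTO_CALLBACKS is defined */
>  	struct rte_cryptodev_enq_cb_rcu *enq_cbs;
>  } __rte_cache_aligned;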
>
> > + struct rte_cryptodev_enq_cb_rcu **enq_cbs;
>
> As I said above, no need for extra level of indirection.
>
> > + /**< User application callback for pre enqueue processing */
> > +#endif
>
> As I understand, it is not an ABI breakage - as there is some free space right
> now at the end of struct rte_cryptodev (due to its alignment), but we
> definitely need to update the RN (release notes).
>
>
> > } __rte_cache_aligned;
> >
> > void *
> > @@ -989,6 +1049,25 @@ struct rte_cryptodev_data {
> > {
> > struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > + if (unlikely(dev->enq_cbs != NULL && dev->enq_cbs[qp_id] != NULL)) {
>
> Agree with Honnappa's comment for that piece of code.
> Probably needs to be something like:
>
> if (unlikely(dev->enq_cbs != NULL && dev->enq_cbs[qp_id].next != NULL)) {
> 	list = &dev->enq_cbs[qp_id];
> 	rte_rcu_qsbr_thread_online(list->qsbr, 0);
> 	for (cb = list->next; cb != NULL; cb = cb->next)
> 		....
> 	rte_rcu_qsbr_thread_offline(list->qsbr, 0);
> }
>
>
> > + struct rte_cryptodev_enq_cb_rcu *list;
> > + struct rte_cryptodev_cb *cb;
> > +
> > + list = dev->enq_cbs[qp_id];
> > + cb = list->next;
> > + rte_rcu_qsbr_thread_online(list->qsbr, 0);
> > +
> > + do {
> > + nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
> > + cb->arg);
> > + cb = cb->next;
> > + } while (cb != NULL);
> > +
> > + rte_rcu_qsbr_thread_offline(list->qsbr, 0);
> > + }
> > +#endif
> > +
> > rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
> > return (*dev->enqueue_burst)(
> > dev->data->queue_pairs[qp_id], ops, nb_ops);
> > @@ -1730,6 +1809,78 @@ struct rte_crypto_raw_dp_ctx {
> > rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
> > uint32_t n);
> >
> > +#ifdef RTE_CRYPTO_CALLBACKS
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a user callback for a given crypto device and queue pair which
> > +will be
> > + * called on crypto ops enqueue.
> > + *
> > + * This API configures a function to be called for each burst of
> > +crypto ops
> > + * received on a given crypto device queue pair. The return value is
> > +a pointer
> > + * that can be used later to remove the callback using
> > + * rte_cryptodev_remove_enq_callback().
> > + *
> > + * Multiple functions are called in the order that they are added.
> > + *
> > + * @param dev_id The identifier of the device.
> > + * @param qp_id The index of the queue pair in which ops are
> > + * to be enqueued for processing. The value
> > + * must be in the range [0, nb_queue_pairs - 1]
> > + * previously supplied to
> > + * *rte_cryptodev_configure*.
> > + * @param cb_fn The callback function
> > + * @param cb_arg A generic pointer parameter which will be
> passed
> > + * to each invocation of the callback function on
> > + * this crypto device and queue pair.
> > + *
> > + * @return
> > + * NULL on error.
> > + * On success, a pointer value which can later be used to remove the
> callback.
> > + */
> > +
> > +__rte_experimental
> > +struct rte_cryptodev_cb *
> > +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> > + uint16_t qp_id,
> > + rte_cryptodev_callback_fn cb_fn,
> > + void *cb_arg);
> > +
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a user callback function for given crypto device and queue pair.
> > + *
> > + * This function is used to removed callbacks that were added to a
> > +crypto
> > + * device queue pair using rte_cryptodev_add_enq_callback().
> > + *
> > + *
> > + *
> > + * @param dev_id The identifier of the device.
> > + * @param qp_id The index of the queue pair in which ops are
> > + * to be enqueued for processing. The value
> > + * must be in the range [0, nb_queue_pairs - 1]
> > + * previously supplied to
> > + * *rte_cryptodev_configure*.
> > + * @param cb Pointer to user supplied callback created via
> > + * rte_cryptodev_add_enq_callback().
> > + *
> > + * @return
> > + * - 0: Success. Callback was removed.
> > + * - -EINVAL: The dev_id or the qp_id is out of range, or the callback
> > + * is NULL or not found for the crypto device queue pair.
> > + */
> > +
> > +__rte_experimental
> > +int rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> > + uint16_t qp_id,
> > + struct rte_cryptodev_cb *cb);
> > +
> > +#endif
> > +
> > #ifdef __cplusplus
> > }
> > #endif
> > diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map
> > b/lib/librte_cryptodev/rte_cryptodev_version.map
> > index 7e4360f..5d8d6b0 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> > +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> > @@ -101,6 +101,7 @@ EXPERIMENTAL {
> > rte_cryptodev_get_qp_status;
> >
> > # added in 20.11
> > + rte_cryptodev_add_enq_callback;
> > rte_cryptodev_configure_raw_dp_ctx;
> > rte_cryptodev_get_raw_dp_ctx_size;
> > rte_cryptodev_raw_dequeue;
> > @@ -109,4 +110,5 @@ EXPERIMENTAL {
> > rte_cryptodev_raw_enqueue;
> > rte_cryptodev_raw_enqueue_burst;
> > rte_cryptodev_raw_enqueue_done;
> > + rte_cryptodev_remove_enq_callback;
> > };
> > --
> > 1.9.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3 2/2] lpm: hide internal data
@ 2020-10-23 9:38 3% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-10-23 9:38 UTC (permalink / raw)
To: dev
Cc: honnappa.nagarahalli, ruifeng.wang, nd, Kevin Traynor,
Thomas Monjalon, Vladimir Medvedkin, Bruce Richardson
From: Ruifeng Wang <ruifeng.wang@arm.com>
Fields other than tbl24 and tbl8 in the rte_lpm structure do not
need to be exposed to the user.
Hide these internal fields for better
ABI maintainability.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v2:
- hid rte_lpm_rule and rte_lpm_rule_info,
- used i_lpm as the preferred variable name,
- moved lpm <-> i_lpm at public API boundaries, all internal functions
deal with the __rte_lpm object.
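For reference, the conversion between the public and internal objects at those
boundaries is a plain container_of()/member-access pair, as used throughout the
diff below:

	/* public -> internal, at each API entry point */
	struct __rte_lpm *i_lpm = container_of(lpm, struct __rte_lpm, lpm);

	/* internal -> public, when handing the object back to the caller */
	return &i_lpm->lpm;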
---
doc/guides/rel_notes/release_20_11.rst | 3 +
lib/librte_lpm/rte_lpm.c | 388 +++++++++++++------------
lib/librte_lpm/rte_lpm.h | 19 --
3 files changed, 208 insertions(+), 202 deletions(-)
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d8ac359e51..dca8d41eb6 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -606,6 +606,9 @@ ABI Changes
* sched: Added new fields to ``struct rte_sched_subport_port_params``.
+* lpm: Removed fields other than ``tbl24`` and ``tbl8`` from the struct
+ ``rte_lpm``. The removed fields were made internal.
+
Known Issues
------------
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 51a0ae5780..002811f4de 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -40,11 +40,31 @@ enum valid_flag {
VALID
};
+/** @internal Rule structure. */
+struct rte_lpm_rule {
+ uint32_t ip; /**< Rule IP address. */
+ uint32_t next_hop; /**< Rule next hop. */
+};
+
+/** @internal Contains metadata about the rules table. */
+struct rte_lpm_rule_info {
+ uint32_t used_rules; /**< Used rules so far. */
+ uint32_t first_rule; /**< Indexes the first rule of a given depth. */
+};
+
/** @internal LPM structure. */
struct __rte_lpm {
- /* LPM metadata. */
+ /* Exposed LPM data. */
struct rte_lpm lpm;
+ /* LPM metadata. */
+ char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
+ uint32_t max_rules; /**< Max. balanced rules per lpm. */
+ uint32_t number_tbl8s; /**< Number of tbl8s. */
+ /**< Rule info table. */
+ struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH];
+ struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
+
/* RCU config. */
struct rte_rcu_qsbr *v; /* RCU QSBR variable. */
enum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. */
@@ -104,7 +124,7 @@ depth_to_range(uint8_t depth)
struct rte_lpm *
rte_lpm_find_existing(const char *name)
{
- struct rte_lpm *l = NULL;
+ struct __rte_lpm *i_lpm = NULL;
struct rte_tailq_entry *te;
struct rte_lpm_list *lpm_list;
@@ -112,8 +132,8 @@ rte_lpm_find_existing(const char *name)
rte_mcfg_tailq_read_lock();
TAILQ_FOREACH(te, lpm_list, next) {
- l = te->data;
- if (strncmp(name, l->name, RTE_LPM_NAMESIZE) == 0)
+ i_lpm = te->data;
+ if (strncmp(name, i_lpm->name, RTE_LPM_NAMESIZE) == 0)
break;
}
rte_mcfg_tailq_read_unlock();
@@ -123,7 +143,7 @@ rte_lpm_find_existing(const char *name)
return NULL;
}
- return l;
+ return &i_lpm->lpm;
}
/*
@@ -134,7 +154,7 @@ rte_lpm_create(const char *name, int socket_id,
const struct rte_lpm_config *config)
{
char mem_name[RTE_LPM_NAMESIZE];
- struct __rte_lpm *internal_lpm;
+ struct __rte_lpm *i_lpm;
struct rte_lpm *lpm = NULL;
struct rte_tailq_entry *te;
uint32_t mem_size, rules_size, tbl8s_size;
@@ -157,19 +177,18 @@ rte_lpm_create(const char *name, int socket_id,
/* guarantee there's no existing */
TAILQ_FOREACH(te, lpm_list, next) {
- lpm = te->data;
- if (strncmp(name, lpm->name, RTE_LPM_NAMESIZE) == 0)
+ i_lpm = te->data;
+ if (strncmp(name, i_lpm->name, RTE_LPM_NAMESIZE) == 0)
break;
}
if (te != NULL) {
- lpm = NULL;
rte_errno = EEXIST;
goto exit;
}
/* Determine the amount of memory to allocate. */
- mem_size = sizeof(*internal_lpm);
+ mem_size = sizeof(*i_lpm);
rules_size = sizeof(struct rte_lpm_rule) * config->max_rules;
tbl8s_size = sizeof(struct rte_lpm_tbl_entry) *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES * config->number_tbl8s;
@@ -183,49 +202,47 @@ rte_lpm_create(const char *name, int socket_id,
}
/* Allocate memory to store the LPM data structures. */
- internal_lpm = rte_zmalloc_socket(mem_name, mem_size,
+ i_lpm = rte_zmalloc_socket(mem_name, mem_size,
RTE_CACHE_LINE_SIZE, socket_id);
- if (internal_lpm == NULL) {
+ if (i_lpm == NULL) {
RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
rte_free(te);
rte_errno = ENOMEM;
goto exit;
}
- lpm = &internal_lpm->lpm;
- lpm->rules_tbl = rte_zmalloc_socket(NULL,
+ i_lpm->rules_tbl = rte_zmalloc_socket(NULL,
(size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id);
- if (lpm->rules_tbl == NULL) {
+ if (i_lpm->rules_tbl == NULL) {
RTE_LOG(ERR, LPM, "LPM rules_tbl memory allocation failed\n");
- rte_free(internal_lpm);
- internal_lpm = NULL;
- lpm = NULL;
+ rte_free(i_lpm);
+ i_lpm = NULL;
rte_free(te);
rte_errno = ENOMEM;
goto exit;
}
- lpm->tbl8 = rte_zmalloc_socket(NULL,
+ i_lpm->lpm.tbl8 = rte_zmalloc_socket(NULL,
(size_t)tbl8s_size, RTE_CACHE_LINE_SIZE, socket_id);
- if (lpm->tbl8 == NULL) {
+ if (i_lpm->lpm.tbl8 == NULL) {
RTE_LOG(ERR, LPM, "LPM tbl8 memory allocation failed\n");
- rte_free(lpm->rules_tbl);
- rte_free(internal_lpm);
- internal_lpm = NULL;
- lpm = NULL;
+ rte_free(i_lpm->rules_tbl);
+ rte_free(i_lpm);
+ i_lpm = NULL;
rte_free(te);
rte_errno = ENOMEM;
goto exit;
}
/* Save user arguments. */
- lpm->max_rules = config->max_rules;
- lpm->number_tbl8s = config->number_tbl8s;
- strlcpy(lpm->name, name, sizeof(lpm->name));
+ i_lpm->max_rules = config->max_rules;
+ i_lpm->number_tbl8s = config->number_tbl8s;
+ strlcpy(i_lpm->name, name, sizeof(i_lpm->name));
- te->data = lpm;
+ te->data = i_lpm;
+ lpm = &i_lpm->lpm;
TAILQ_INSERT_TAIL(lpm_list, te, next);
@@ -241,13 +258,14 @@ rte_lpm_create(const char *name, int socket_id,
void
rte_lpm_free(struct rte_lpm *lpm)
{
- struct __rte_lpm *internal_lpm;
struct rte_lpm_list *lpm_list;
struct rte_tailq_entry *te;
+ struct __rte_lpm *i_lpm;
/* Check user arguments. */
if (lpm == NULL)
return;
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
@@ -255,7 +273,7 @@ rte_lpm_free(struct rte_lpm *lpm)
/* find our tailq entry */
TAILQ_FOREACH(te, lpm_list, next) {
- if (te->data == (void *) lpm)
+ if (te->data == (void *)i_lpm)
break;
}
if (te != NULL)
@@ -263,19 +281,18 @@ rte_lpm_free(struct rte_lpm *lpm)
rte_mcfg_tailq_write_unlock();
- internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
- if (internal_lpm->dq != NULL)
- rte_rcu_qsbr_dq_delete(internal_lpm->dq);
- rte_free(lpm->tbl8);
- rte_free(lpm->rules_tbl);
- rte_free(internal_lpm);
+ if (i_lpm->dq != NULL)
+ rte_rcu_qsbr_dq_delete(i_lpm->dq);
+ rte_free(i_lpm->lpm.tbl8);
+ rte_free(i_lpm->rules_tbl);
+ rte_free(i_lpm);
rte_free(te);
}
static void
__lpm_rcu_qsbr_free_resource(void *p, void *data, unsigned int n)
{
- struct rte_lpm_tbl_entry *tbl8 = ((struct rte_lpm *)p)->tbl8;
+ struct rte_lpm_tbl_entry *tbl8 = ((struct __rte_lpm *)p)->lpm.tbl8;
struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
uint32_t tbl8_group_index = *(uint32_t *)data;
@@ -292,15 +309,15 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg)
{
struct rte_rcu_qsbr_dq_parameters params = {0};
char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
- struct __rte_lpm *internal_lpm;
+ struct __rte_lpm *i_lpm;
if (lpm == NULL || cfg == NULL) {
rte_errno = EINVAL;
return 1;
}
- internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
- if (internal_lpm->v != NULL) {
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
+ if (i_lpm->v != NULL) {
rte_errno = EEXIST;
return 1;
}
@@ -310,21 +327,21 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg)
} else if (cfg->mode == RTE_LPM_QSBR_MODE_DQ) {
/* Init QSBR defer queue. */
snprintf(rcu_dq_name, sizeof(rcu_dq_name),
- "LPM_RCU_%s", lpm->name);
+ "LPM_RCU_%s", i_lpm->name);
params.name = rcu_dq_name;
params.size = cfg->dq_size;
if (params.size == 0)
- params.size = lpm->number_tbl8s;
+ params.size = i_lpm->number_tbl8s;
params.trigger_reclaim_limit = cfg->reclaim_thd;
params.max_reclaim_size = cfg->reclaim_max;
if (params.max_reclaim_size == 0)
params.max_reclaim_size = RTE_LPM_RCU_DQ_RECLAIM_MAX;
params.esize = sizeof(uint32_t); /* tbl8 group index */
params.free_fn = __lpm_rcu_qsbr_free_resource;
- params.p = lpm;
+ params.p = i_lpm;
params.v = cfg->v;
- internal_lpm->dq = rte_rcu_qsbr_dq_create(&params);
- if (internal_lpm->dq == NULL) {
+ i_lpm->dq = rte_rcu_qsbr_dq_create(&params);
+ if (i_lpm->dq == NULL) {
RTE_LOG(ERR, LPM, "LPM defer queue creation failed\n");
return 1;
}
@@ -332,8 +349,8 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg)
rte_errno = EINVAL;
return 1;
}
- internal_lpm->rcu_mode = cfg->mode;
- internal_lpm->v = cfg->v;
+ i_lpm->rcu_mode = cfg->mode;
+ i_lpm->v = cfg->v;
return 0;
}
@@ -349,7 +366,7 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg)
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+rule_add(struct __rte_lpm *i_lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
uint32_t rule_gindex, rule_index, last_rule;
@@ -358,68 +375,68 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
VERIFY_DEPTH(depth);
/* Scan through rule group to see if rule already exists. */
- if (lpm->rule_info[depth - 1].used_rules > 0) {
+ if (i_lpm->rule_info[depth - 1].used_rules > 0) {
/* rule_gindex stands for rule group index. */
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
+ rule_gindex = i_lpm->rule_info[depth - 1].first_rule;
/* Initialise rule_index to point to start of rule group. */
rule_index = rule_gindex;
/* Last rule = Last used rule in this rule group. */
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+ last_rule = rule_gindex + i_lpm->rule_info[depth - 1].used_rules;
for (; rule_index < last_rule; rule_index++) {
/* If rule already exists update next hop and return. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked) {
+ if (i_lpm->rules_tbl[rule_index].ip == ip_masked) {
- if (lpm->rules_tbl[rule_index].next_hop
+ if (i_lpm->rules_tbl[rule_index].next_hop
== next_hop)
return -EEXIST;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
+ i_lpm->rules_tbl[rule_index].next_hop = next_hop;
return rule_index;
}
}
- if (rule_index == lpm->max_rules)
+ if (rule_index == i_lpm->max_rules)
return -ENOSPC;
} else {
/* Calculate the position in which the rule will be stored. */
rule_index = 0;
for (i = depth - 1; i > 0; i--) {
- if (lpm->rule_info[i - 1].used_rules > 0) {
- rule_index = lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules;
+ if (i_lpm->rule_info[i - 1].used_rules > 0) {
+ rule_index = i_lpm->rule_info[i - 1].first_rule
+ + i_lpm->rule_info[i - 1].used_rules;
break;
}
}
- if (rule_index == lpm->max_rules)
+ if (rule_index == i_lpm->max_rules)
return -ENOSPC;
- lpm->rule_info[depth - 1].first_rule = rule_index;
+ i_lpm->rule_info[depth - 1].first_rule = rule_index;
}
/* Make room for the new rule in the array. */
for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
- if (lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
+ if (i_lpm->rule_info[i - 1].first_rule
+ + i_lpm->rule_info[i - 1].used_rules == i_lpm->max_rules)
return -ENOSPC;
- if (lpm->rule_info[i - 1].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules]
- = lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
- lpm->rule_info[i - 1].first_rule++;
+ if (i_lpm->rule_info[i - 1].used_rules > 0) {
+ i_lpm->rules_tbl[i_lpm->rule_info[i - 1].first_rule
+ + i_lpm->rule_info[i - 1].used_rules]
+ = i_lpm->rules_tbl[i_lpm->rule_info[i - 1].first_rule];
+ i_lpm->rule_info[i - 1].first_rule++;
}
}
/* Add the new rule. */
- lpm->rules_tbl[rule_index].ip = ip_masked;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
+ i_lpm->rules_tbl[rule_index].ip = ip_masked;
+ i_lpm->rules_tbl[rule_index].next_hop = next_hop;
/* Increment the used rules counter for this rule group. */
- lpm->rule_info[depth - 1].used_rules++;
+ i_lpm->rule_info[depth - 1].used_rules++;
return rule_index;
}
@@ -429,26 +446,26 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static void
-rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
+rule_delete(struct __rte_lpm *i_lpm, int32_t rule_index, uint8_t depth)
{
int i;
VERIFY_DEPTH(depth);
- lpm->rules_tbl[rule_index] =
- lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
- + lpm->rule_info[depth - 1].used_rules - 1];
+ i_lpm->rules_tbl[rule_index] =
+ i_lpm->rules_tbl[i_lpm->rule_info[depth - 1].first_rule
+ + i_lpm->rule_info[depth - 1].used_rules - 1];
for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
- if (lpm->rule_info[i].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
- lpm->rules_tbl[lpm->rule_info[i].first_rule
- + lpm->rule_info[i].used_rules - 1];
- lpm->rule_info[i].first_rule--;
+ if (i_lpm->rule_info[i].used_rules > 0) {
+ i_lpm->rules_tbl[i_lpm->rule_info[i].first_rule - 1] =
+ i_lpm->rules_tbl[i_lpm->rule_info[i].first_rule
+ + i_lpm->rule_info[i].used_rules - 1];
+ i_lpm->rule_info[i].first_rule--;
}
}
- lpm->rule_info[depth - 1].used_rules--;
+ i_lpm->rule_info[depth - 1].used_rules--;
}
/*
@@ -456,19 +473,19 @@ rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
+rule_find(struct __rte_lpm *i_lpm, uint32_t ip_masked, uint8_t depth)
{
uint32_t rule_gindex, last_rule, rule_index;
VERIFY_DEPTH(depth);
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+ rule_gindex = i_lpm->rule_info[depth - 1].first_rule;
+ last_rule = rule_gindex + i_lpm->rule_info[depth - 1].used_rules;
/* Scan used rules at given depth to find rule. */
for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
/* If rule is found return the rule index. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked)
+ if (i_lpm->rules_tbl[rule_index].ip == ip_masked)
return rule_index;
}
@@ -480,14 +497,14 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
* Find, clean and allocate a tbl8.
*/
static int32_t
-_tbl8_alloc(struct rte_lpm *lpm)
+_tbl8_alloc(struct __rte_lpm *i_lpm)
{
uint32_t group_idx; /* tbl8 group index. */
struct rte_lpm_tbl_entry *tbl8_entry;
/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
- for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
- tbl8_entry = &lpm->tbl8[group_idx *
+ for (group_idx = 0; group_idx < i_lpm->number_tbl8s; group_idx++) {
+ tbl8_entry = &i_lpm->lpm.tbl8[group_idx *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
/* If a free tbl8 group is found clean it and set as VALID. */
if (!tbl8_entry->valid_group) {
@@ -515,45 +532,41 @@ _tbl8_alloc(struct rte_lpm *lpm)
}
static int32_t
-tbl8_alloc(struct rte_lpm *lpm)
+tbl8_alloc(struct __rte_lpm *i_lpm)
{
int32_t group_idx; /* tbl8 group index. */
- struct __rte_lpm *internal_lpm;
- internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
- group_idx = _tbl8_alloc(lpm);
- if (group_idx == -ENOSPC && internal_lpm->dq != NULL) {
+ group_idx = _tbl8_alloc(i_lpm);
+ if (group_idx == -ENOSPC && i_lpm->dq != NULL) {
/* If there are no tbl8 groups try to reclaim one. */
- if (rte_rcu_qsbr_dq_reclaim(internal_lpm->dq, 1,
+ if (rte_rcu_qsbr_dq_reclaim(i_lpm->dq, 1,
NULL, NULL, NULL) == 0)
- group_idx = _tbl8_alloc(lpm);
+ group_idx = _tbl8_alloc(i_lpm);
}
return group_idx;
}
static int32_t
-tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
+tbl8_free(struct __rte_lpm *i_lpm, uint32_t tbl8_group_start)
{
struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
- struct __rte_lpm *internal_lpm;
int status;
- internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
- if (internal_lpm->v == NULL) {
+ if (i_lpm->v == NULL) {
/* Set tbl8 group invalid*/
- __atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
+ __atomic_store(&i_lpm->lpm.tbl8[tbl8_group_start], &zero_tbl8_entry,
__ATOMIC_RELAXED);
- } else if (internal_lpm->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {
+ } else if (i_lpm->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {
/* Wait for quiescent state change. */
- rte_rcu_qsbr_synchronize(internal_lpm->v,
+ rte_rcu_qsbr_synchronize(i_lpm->v,
RTE_QSBR_THRID_INVALID);
/* Set tbl8 group invalid*/
- __atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
+ __atomic_store(&i_lpm->lpm.tbl8[tbl8_group_start], &zero_tbl8_entry,
__ATOMIC_RELAXED);
- } else if (internal_lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
+ } else if (i_lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
/* Push into QSBR defer queue. */
- status = rte_rcu_qsbr_dq_enqueue(internal_lpm->dq,
+ status = rte_rcu_qsbr_dq_enqueue(i_lpm->dq,
(void *)&tbl8_group_start);
if (status == 1) {
RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n");
@@ -565,7 +578,7 @@ tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
}
static __rte_noinline int32_t
-add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+add_depth_small(struct __rte_lpm *i_lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -580,8 +593,8 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
* For invalid OR valid and non-extended tbl 24 entries set
* entry.
*/
- if (!lpm->tbl24[i].valid || (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth)) {
+ if (!i_lpm->lpm.tbl24[i].valid || (i_lpm->lpm.tbl24[i].valid_group == 0 &&
+ i_lpm->lpm.tbl24[i].depth <= depth)) {
struct rte_lpm_tbl_entry new_tbl24_entry = {
.next_hop = next_hop,
@@ -593,24 +606,24 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
/* Setting tbl24 entry in one go to avoid race
* conditions
*/
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
+ __atomic_store(&i_lpm->lpm.tbl24[i], &new_tbl24_entry,
__ATOMIC_RELEASE);
continue;
}
- if (lpm->tbl24[i].valid_group == 1) {
+ if (i_lpm->lpm.tbl24[i].valid_group == 1) {
/* If tbl24 entry is valid and extended calculate the
* index into tbl8.
*/
- tbl8_index = lpm->tbl24[i].group_idx *
+ tbl8_index = i_lpm->lpm.tbl24[i].group_idx *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
tbl8_group_end = tbl8_index +
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
for (j = tbl8_index; j < tbl8_group_end; j++) {
- if (!lpm->tbl8[j].valid ||
- lpm->tbl8[j].depth <= depth) {
+ if (!i_lpm->lpm.tbl8[j].valid ||
+ i_lpm->lpm.tbl8[j].depth <= depth) {
struct rte_lpm_tbl_entry
new_tbl8_entry = {
.valid = VALID,
@@ -623,7 +636,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
* Setting tbl8 entry in one go to avoid
* race conditions
*/
- __atomic_store(&lpm->tbl8[j],
+ __atomic_store(&i_lpm->lpm.tbl8[j],
&new_tbl8_entry,
__ATOMIC_RELAXED);
@@ -637,7 +650,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
static __rte_noinline int32_t
-add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+add_depth_big(struct __rte_lpm *i_lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -648,9 +661,9 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
tbl24_index = (ip_masked >> 8);
tbl8_range = depth_to_range(depth);
- if (!lpm->tbl24[tbl24_index].valid) {
+ if (!i_lpm->lpm.tbl24[tbl24_index].valid) {
/* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc(lpm);
+ tbl8_group_index = tbl8_alloc(i_lpm);
/* Check tbl8 allocation was successful. */
if (tbl8_group_index < 0) {
@@ -667,10 +680,10 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
struct rte_lpm_tbl_entry new_tbl8_entry = {
.valid = VALID,
.depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
+ .valid_group = i_lpm->lpm.tbl8[i].valid_group,
.next_hop = next_hop,
};
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
+ __atomic_store(&i_lpm->lpm.tbl8[i], &new_tbl8_entry,
__ATOMIC_RELAXED);
}
@@ -690,13 +703,13 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
/* The tbl24 entry must be written only after the
* tbl8 entries are written.
*/
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
+ __atomic_store(&i_lpm->lpm.tbl24[tbl24_index], &new_tbl24_entry,
__ATOMIC_RELEASE);
} /* If valid entry but not extended calculate the index into Table8. */
- else if (lpm->tbl24[tbl24_index].valid_group == 0) {
+ else if (i_lpm->lpm.tbl24[tbl24_index].valid_group == 0) {
/* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc(lpm);
+ tbl8_group_index = tbl8_alloc(i_lpm);
if (tbl8_group_index < 0) {
return tbl8_group_index;
@@ -711,11 +724,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
for (i = tbl8_group_start; i < tbl8_group_end; i++) {
struct rte_lpm_tbl_entry new_tbl8_entry = {
.valid = VALID,
- .depth = lpm->tbl24[tbl24_index].depth,
- .valid_group = lpm->tbl8[i].valid_group,
- .next_hop = lpm->tbl24[tbl24_index].next_hop,
+ .depth = i_lpm->lpm.tbl24[tbl24_index].depth,
+ .valid_group = i_lpm->lpm.tbl8[i].valid_group,
+ .next_hop = i_lpm->lpm.tbl24[tbl24_index].next_hop,
};
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
+ __atomic_store(&i_lpm->lpm.tbl8[i], &new_tbl8_entry,
__ATOMIC_RELAXED);
}
@@ -726,10 +739,10 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
struct rte_lpm_tbl_entry new_tbl8_entry = {
.valid = VALID,
.depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
+ .valid_group = i_lpm->lpm.tbl8[i].valid_group,
.next_hop = next_hop,
};
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
+ __atomic_store(&i_lpm->lpm.tbl8[i], &new_tbl8_entry,
__ATOMIC_RELAXED);
}
@@ -749,33 +762,33 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
/* The tbl24 entry must be written only after the
* tbl8 entries are written.
*/
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
+ __atomic_store(&i_lpm->lpm.tbl24[tbl24_index], &new_tbl24_entry,
__ATOMIC_RELEASE);
} else { /*
* If the entry is valid and extended, calculate the index into tbl8.
*/
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
+ tbl8_group_index = i_lpm->lpm.tbl24[tbl24_index].group_idx;
tbl8_group_start = tbl8_group_index *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (!lpm->tbl8[i].valid ||
- lpm->tbl8[i].depth <= depth) {
+ if (!i_lpm->lpm.tbl8[i].valid ||
+ i_lpm->lpm.tbl8[i].depth <= depth) {
struct rte_lpm_tbl_entry new_tbl8_entry = {
.valid = VALID,
.depth = depth,
.next_hop = next_hop,
- .valid_group = lpm->tbl8[i].valid_group,
+ .valid_group = i_lpm->lpm.tbl8[i].valid_group,
};
/*
* Setting tbl8 entry in one go to avoid race
* condition
*/
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
+ __atomic_store(&i_lpm->lpm.tbl8[i], &new_tbl8_entry,
__ATOMIC_RELAXED);
continue;
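The stores above follow a strict publish order: a tbl8 group is filled with
RELAXED stores first, and only then is the tbl24 entry that references it
written with a RELEASE store, so a lock-free reader can never follow a group
index into half-initialised entries. A minimal sketch of that publish order,
using the same GCC builtins but a simplified placeholder entry type:

    #include <stdint.h>

    struct tbl_entry { uint32_t word; };  /* placeholder, not the real layout */

    static void
    publish_group(struct tbl_entry *tbl8_slot, struct tbl_entry *tbl24_slot,
                  struct tbl_entry e8, struct tbl_entry e24)
    {
        /* 1. Fill the tbl8 group; RELAXED is enough between group slots. */
        __atomic_store(tbl8_slot, &e8, __ATOMIC_RELAXED);
        /* 2. Only then publish the tbl24 entry pointing at the group; the
         * RELEASE store orders step 1 before step 2 for concurrent readers. */
        __atomic_store(tbl24_slot, &e24, __ATOMIC_RELEASE);
    }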
@@ -794,16 +807,18 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
int32_t rule_index, status = 0;
+ struct __rte_lpm *i_lpm;
uint32_t ip_masked;
/* Check user arguments. */
if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
return -EINVAL;
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
ip_masked = ip & depth_to_mask(depth);
/* Add the rule to the rule table. */
- rule_index = rule_add(lpm, ip_masked, depth, next_hop);
+ rule_index = rule_add(i_lpm, ip_masked, depth, next_hop);
/* Skip table entries update if the rule is the same as
* the rule in the rules table.
@@ -817,16 +832,16 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small(lpm, ip_masked, depth, next_hop);
+ status = add_depth_small(i_lpm, ip_masked, depth, next_hop);
} else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big(lpm, ip_masked, depth, next_hop);
+ status = add_depth_big(i_lpm, ip_masked, depth, next_hop);
/*
* If add fails due to exhaustion of tbl8 extensions delete
* rule that was added to rule table.
*/
if (status < 0) {
- rule_delete(lpm, rule_index, depth);
+ rule_delete(i_lpm, rule_index, depth);
return status;
}
@@ -842,6 +857,7 @@ int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t *next_hop)
{
+ struct __rte_lpm *i_lpm;
uint32_t ip_masked;
int32_t rule_index;
@@ -852,11 +868,12 @@ uint32_t *next_hop)
return -EINVAL;
/* Look for the rule using rule_find. */
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find(lpm, ip_masked, depth);
+ rule_index = rule_find(i_lpm, ip_masked, depth);
if (rule_index >= 0) {
- *next_hop = lpm->rules_tbl[rule_index].next_hop;
+ *next_hop = i_lpm->rules_tbl[rule_index].next_hop;
return 1;
}
@@ -865,7 +882,7 @@ uint32_t *next_hop)
}
static int32_t
-find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+find_previous_rule(struct __rte_lpm *i_lpm, uint32_t ip, uint8_t depth,
uint8_t *sub_rule_depth)
{
int32_t rule_index;
@@ -875,7 +892,7 @@ find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
ip_masked = ip & depth_to_mask(prev_depth);
- rule_index = rule_find(lpm, ip_masked, prev_depth);
+ rule_index = rule_find(i_lpm, ip_masked, prev_depth);
if (rule_index >= 0) {
*sub_rule_depth = prev_depth;
@@ -887,7 +904,7 @@ find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
static int32_t
-delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_small(struct __rte_lpm *i_lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -909,26 +926,26 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
*/
for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- __atomic_store(&lpm->tbl24[i],
+ if (i_lpm->lpm.tbl24[i].valid_group == 0 &&
+ i_lpm->lpm.tbl24[i].depth <= depth) {
+ __atomic_store(&i_lpm->lpm.tbl24[i],
&zero_tbl24_entry, __ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
+ } else if (i_lpm->lpm.tbl24[i].valid_group == 1) {
/*
* If TBL24 entry is extended, then there has
* to be a rule with depth >= 25 in the
* associated TBL8 group.
*/
- tbl8_group_index = lpm->tbl24[i].group_idx;
+ tbl8_group_index = i_lpm->lpm.tbl24[i].group_idx;
tbl8_index = tbl8_group_index *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
for (j = tbl8_index; j < (tbl8_index +
RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
- if (lpm->tbl8[j].depth <= depth)
- lpm->tbl8[j].valid = INVALID;
+ if (i_lpm->lpm.tbl8[j].depth <= depth)
+ i_lpm->lpm.tbl8[j].valid = INVALID;
}
}
}
@@ -939,7 +956,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
*/
struct rte_lpm_tbl_entry new_tbl24_entry = {
- .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+ .next_hop = i_lpm->rules_tbl[sub_rule_index].next_hop,
.valid = VALID,
.valid_group = 0,
.depth = sub_rule_depth,
@@ -949,32 +966,32 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
.valid = VALID,
.valid_group = VALID,
.depth = sub_rule_depth,
- .next_hop = lpm->rules_tbl
+ .next_hop = i_lpm->rules_tbl
[sub_rule_index].next_hop,
};
for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
+ if (i_lpm->lpm.tbl24[i].valid_group == 0 &&
+ i_lpm->lpm.tbl24[i].depth <= depth) {
+ __atomic_store(&i_lpm->lpm.tbl24[i], &new_tbl24_entry,
__ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
+ } else if (i_lpm->lpm.tbl24[i].valid_group == 1) {
/*
* If TBL24 entry is extended, then there has
* to be a rule with depth >= 25 in the
* associated TBL8 group.
*/
- tbl8_group_index = lpm->tbl24[i].group_idx;
+ tbl8_group_index = i_lpm->lpm.tbl24[i].group_idx;
tbl8_index = tbl8_group_index *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
for (j = tbl8_index; j < (tbl8_index +
RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
- if (lpm->tbl8[j].depth <= depth)
- __atomic_store(&lpm->tbl8[j],
+ if (i_lpm->lpm.tbl8[j].depth <= depth)
+ __atomic_store(&i_lpm->lpm.tbl8[j],
&new_tbl8_entry,
__ATOMIC_RELAXED);
}
@@ -1041,7 +1058,7 @@ tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8,
}
static int32_t
-delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_big(struct __rte_lpm *i_lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -1056,7 +1073,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
tbl24_index = ip_masked >> 8;
/* Calculate the index into tbl8 and range. */
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
+ tbl8_group_index = i_lpm->lpm.tbl24[tbl24_index].group_idx;
tbl8_group_start = tbl8_group_index * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
tbl8_range = depth_to_range(depth);
@@ -1067,16 +1084,16 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
* rule_to_delete must be removed or modified.
*/
for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- lpm->tbl8[i].valid = INVALID;
+ if (i_lpm->lpm.tbl8[i].depth <= depth)
+ i_lpm->lpm.tbl8[i].valid = INVALID;
}
} else {
/* Set new tbl8 entry. */
struct rte_lpm_tbl_entry new_tbl8_entry = {
.valid = VALID,
.depth = sub_rule_depth,
- .valid_group = lpm->tbl8[tbl8_group_start].valid_group,
- .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+ .valid_group = i_lpm->lpm.tbl8[tbl8_group_start].valid_group,
+ .next_hop = i_lpm->rules_tbl[sub_rule_index].next_hop,
};
/*
@@ -1084,8 +1101,8 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
* rule_to_delete must be modified.
*/
for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
+ if (i_lpm->lpm.tbl8[i].depth <= depth)
+ __atomic_store(&i_lpm->lpm.tbl8[i], &new_tbl8_entry,
__ATOMIC_RELAXED);
}
}
@@ -1096,31 +1113,31 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
* associated tbl24 entry.
*/
- tbl8_recycle_index = tbl8_recycle_check(lpm->tbl8, tbl8_group_start);
+ tbl8_recycle_index = tbl8_recycle_check(i_lpm->lpm.tbl8, tbl8_group_start);
if (tbl8_recycle_index == -EINVAL) {
/* Set tbl24 before freeing tbl8 to avoid race condition.
* Prevent the free of the tbl8 group from hoisting.
*/
- lpm->tbl24[tbl24_index].valid = 0;
+ i_lpm->lpm.tbl24[tbl24_index].valid = 0;
__atomic_thread_fence(__ATOMIC_RELEASE);
- status = tbl8_free(lpm, tbl8_group_start);
+ status = tbl8_free(i_lpm, tbl8_group_start);
} else if (tbl8_recycle_index > -1) {
/* Update tbl24 entry. */
struct rte_lpm_tbl_entry new_tbl24_entry = {
- .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop,
+ .next_hop = i_lpm->lpm.tbl8[tbl8_recycle_index].next_hop,
.valid = VALID,
.valid_group = 0,
- .depth = lpm->tbl8[tbl8_recycle_index].depth,
+ .depth = i_lpm->lpm.tbl8[tbl8_recycle_index].depth,
};
/* Set tbl24 before freeing tbl8 to avoid race condition.
* Prevent the free of the tbl8 group from hoisting.
*/
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
+ __atomic_store(&i_lpm->lpm.tbl24[tbl24_index], &new_tbl24_entry,
__ATOMIC_RELAXED);
__atomic_thread_fence(__ATOMIC_RELEASE);
- status = tbl8_free(lpm, tbl8_group_start);
+ status = tbl8_free(i_lpm, tbl8_group_start);
}
#undef group_idx
return status;
@@ -1133,6 +1150,7 @@ int
rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
{
int32_t rule_to_delete_index, sub_rule_index;
+ struct __rte_lpm *i_lpm;
uint32_t ip_masked;
uint8_t sub_rule_depth;
/*
@@ -1143,13 +1161,14 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
return -EINVAL;
}
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
ip_masked = ip & depth_to_mask(depth);
/*
* Find the index of the input rule, that needs to be deleted, in the
* rule table.
*/
- rule_to_delete_index = rule_find(lpm, ip_masked, depth);
+ rule_to_delete_index = rule_find(i_lpm, ip_masked, depth);
/*
* Check if rule_to_delete_index was found. If no rule was found the
@@ -1159,7 +1178,7 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
return -EINVAL;
/* Delete the rule from the rule table. */
- rule_delete(lpm, rule_to_delete_index, depth);
+ rule_delete(i_lpm, rule_to_delete_index, depth);
/*
* Find rule to replace the rule_to_delete. If there is no rule to
@@ -1167,17 +1186,17 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
* entries associated with this rule.
*/
sub_rule_depth = 0;
- sub_rule_index = find_previous_rule(lpm, ip, depth, &sub_rule_depth);
+ sub_rule_index = find_previous_rule(i_lpm, ip, depth, &sub_rule_depth);
/*
* If the input depth value is less than 25 use function
* delete_depth_small otherwise use delete_depth_big.
*/
if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small(lpm, ip_masked, depth,
+ return delete_depth_small(i_lpm, ip_masked, depth,
sub_rule_index, sub_rule_depth);
} else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big(lpm, ip_masked, depth, sub_rule_index,
+ return delete_depth_big(i_lpm, ip_masked, depth, sub_rule_index,
sub_rule_depth);
}
}
@@ -1188,16 +1207,19 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
void
rte_lpm_delete_all(struct rte_lpm *lpm)
{
+ struct __rte_lpm *i_lpm;
+
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
/* Zero rule information. */
- memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
+ memset(i_lpm->rule_info, 0, sizeof(i_lpm->rule_info));
/* Zero tbl24. */
- memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
+ memset(i_lpm->lpm.tbl24, 0, sizeof(i_lpm->lpm.tbl24));
/* Zero tbl8. */
- memset(lpm->tbl8, 0, sizeof(lpm->tbl8[0])
- * RTE_LPM_TBL8_GROUP_NUM_ENTRIES * lpm->number_tbl8s);
+ memset(i_lpm->lpm.tbl8, 0, sizeof(i_lpm->lpm.tbl8[0])
+ * RTE_LPM_TBL8_GROUP_NUM_ENTRIES * i_lpm->number_tbl8s);
/* Delete all rules from the rules table. */
- memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
+ memset(i_lpm->rules_tbl, 0, sizeof(i_lpm->rules_tbl[0]) * i_lpm->max_rules);
}
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 5b3b7b5b58..1afe55cdcb 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -118,31 +118,12 @@ struct rte_lpm_config {
int flags; /**< This field is currently unused. */
};
-/** @internal Rule structure. */
-struct rte_lpm_rule {
- uint32_t ip; /**< Rule IP address. */
- uint32_t next_hop; /**< Rule next hop. */
-};
-
-/** @internal Contains metadata about the rules table. */
-struct rte_lpm_rule_info {
- uint32_t used_rules; /**< Used rules so far. */
- uint32_t first_rule; /**< Indexes the first rule of a given depth. */
-};
-
/** @internal LPM structure. */
struct rte_lpm {
- /* LPM metadata. */
- char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
- uint32_t max_rules; /**< Max. balanced rules per lpm. */
- uint32_t number_tbl8s; /**< Number of tbl8s. */
- struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
-
/* LPM Tables. */
struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
__rte_cache_aligned; /**< LPM tbl24 table. */
struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
- struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
};
/** LPM RCU QSBR configuration structure. */
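Taken together, the rte_lpm.c and rte_lpm.h changes leave only the hot-path
tables in the public struct and recover the hidden metadata via
container_of() at each API entry point. A minimal sketch of the pattern with
simplified fields (not the real DPDK definitions; DPDK provides its own
container_of in rte_common.h):

    #include <stddef.h>
    #include <stdint.h>

    struct rte_lpm {                 /* public: only what the fast path reads */
        uint32_t tbl24[16];
    };

    struct __rte_lpm {               /* internal: may change without ABI break */
        uint32_t max_rules;          /* hidden metadata */
        struct rte_lpm lpm;          /* public struct embedded as a member */
    };

    #define container_of(ptr, type, member) \
        ((type *)((uintptr_t)(ptr) - offsetof(type, member)))

    static uint32_t
    get_max_rules(struct rte_lpm *lpm)
    {
        /* recover the internal wrapper from the public pointer */
        struct __rte_lpm *i_lpm = container_of(lpm, struct __rte_lpm, lpm);
        return i_lpm->max_rules;
    }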
--
2.23.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v4 5/5] doc: change references to blacklist and whitelist
@ 2020-10-22 20:40 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-22 20:40 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Luca Boccassi, Akhil Goyal, Hemant Agrawal,
John Griffin, Fiona Trahe, Deepak Kumar Jain, Pavan Nikhilesh,
Jerin Jacob, Bruce Richardson, Nithin Dabilpuram, Ajit Khaparde,
Somnath Kotur, Rahul Lakkireddy, Sachin Saxena, John Daley,
Hyong Youb Kim, Gaetan Rivet, Beilei Xing, Jeff Guo, Qiming Yang,
Qi Zhang, Haiyue Wang, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Martin Spinler, Kiran Kumar K,
Andrew Rybchenko, Keith Wiles, Maciej Czekaj, Anatoly Burakov,
Thomas Monjalon, Ferruh Yigit, Nicolas Chautru, Harry van Haaren,
Radu Nicolau, Konstantin Ananyev, David Hunt, Maxime Coquelin,
Chenbo Xia, Declan Doherty, Wisam Jaddo
There are two areas where the documentation needed updating.
The first was the use of whitelist when describing address
filtering.
The other is the legacy -w whitelist option for PCI devices,
which is used in many examples.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
---
doc/guides/cryptodevs/dpaa2_sec.rst | 6 ++--
doc/guides/cryptodevs/dpaa_sec.rst | 6 ++--
doc/guides/cryptodevs/qat.rst | 12 ++++----
doc/guides/eventdevs/octeontx2.rst | 20 ++++++-------
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/eal_args.include.rst | 14 +++++-----
doc/guides/linux_gsg/linux_drivers.rst | 4 +--
doc/guides/mempool/octeontx2.rst | 4 +--
doc/guides/nics/bnxt.rst | 18 ++++++------
doc/guides/nics/cxgbe.rst | 12 ++++----
doc/guides/nics/dpaa.rst | 6 ++--
doc/guides/nics/dpaa2.rst | 6 ++--
doc/guides/nics/enic.rst | 6 ++--
doc/guides/nics/fail_safe.rst | 16 +++++------
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/i40e.rst | 16 +++++------
doc/guides/nics/ice.rst | 28 +++++++++++++------
doc/guides/nics/ixgbe.rst | 4 +--
doc/guides/nics/mlx4.rst | 18 ++++++------
doc/guides/nics/mlx5.rst | 14 +++++-----
doc/guides/nics/nfb.rst | 2 +-
doc/guides/nics/octeontx2.rst | 22 +++++++--------
doc/guides/nics/sfc_efx.rst | 2 +-
doc/guides/nics/tap.rst | 2 +-
doc/guides/nics/thunderx.rst | 4 +--
.../prog_guide/env_abstraction_layer.rst | 6 ++--
doc/guides/prog_guide/multi_proc_support.rst | 4 +--
doc/guides/prog_guide/poll_mode_drv.rst | 6 ++--
.../prog_guide/switch_representation.rst | 6 ++--
doc/guides/rel_notes/release_20_11.rst | 5 ++++
doc/guides/sample_app_ug/bbdev_app.rst | 14 +++++-----
.../sample_app_ug/eventdev_pipeline.rst | 2 +-
doc/guides/sample_app_ug/ipsec_secgw.rst | 12 ++++----
doc/guides/sample_app_ug/l3_forward.rst | 6 ++--
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
.../sample_app_ug/l3_forward_power_man.rst | 2 +-
doc/guides/sample_app_ug/vdpa.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 6 ++--
doc/guides/tools/flow-perf.rst | 2 +-
doc/guides/tools/testregex.rst | 2 +-
41 files changed, 171 insertions(+), 154 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
index 3053636b8295..b50fee76954a 100644
--- a/doc/guides/cryptodevs/dpaa2_sec.rst
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -134,10 +134,10 @@ Supported DPAA2 SoCs
* LS2088A/LS2048A
* LS1088A/LS1048A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 SEC device, following commands can be used.
+The DPAA2 SEC device can be blocked with the following:
.. code-block:: console
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index db3c8e918945..38ad45e66d76 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -82,10 +82,10 @@ Supported DPAA SoCs
* LS1046A/LS1026A
* LS1043A/LS1023A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index a0becf689109..d41ee82aff52 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -127,7 +127,7 @@ Limitations
optimisations in the GEN3 device. And if a GCM session is initialised on a
GEN3 device, then attached to an op sent to a GEN1/GEN2 device, it will not be
enqueued to the device and will be marked as failed. The simplest way to
- mitigate this is to use the bdf whitelist to avoid mixing devices of different
+ mitigate this is to use the PCI allowlist to avoid mixing devices of different
generations in the same process if planning to use for GCM.
* The mixed algo feature on GEN2 is not supported by all kernel drivers. Check
the notes under the Available Kernel Drivers table below for specific details.
@@ -264,7 +264,7 @@ adjusted to the number of VFs which the QAT common code will need to handle.
QAT VF may expose two crypto devices, sym and asym, it may happen that the
number of devices will be bigger than MAX_DEVS and the process will show an error
during PMD initialisation. To avoid this problem CONFIG_RTE_CRYPTO_MAX_DEVS may be
- increased or -w, pci-whitelist domain:bus:devid:func option may be used.
+ increased, or the -a (allow) domain:bus:devid:func option may be used.
QAT compression PMD needs intermediate buffers to support Deflate compression
@@ -302,7 +302,7 @@ return 0 (thereby avoiding an MMIO) if the device is congested and number of pac
possible to enqueue is smaller.
To use this feature the user must set the parameter on process start as a device additional parameter::
- -w 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
+ -a 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
All parameters can be used with the same device regardless of order. Parameters are separated
by comma. When the same parameter is used more than once first occurrence of the parameter
@@ -662,7 +662,7 @@ QAT SYM crypto PMD can be tested by running the test application::
make defconfig
make -j
cd ./build/app
- ./test -l1 -n1 -w <your qat bdf>
+ ./test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_autotest
QAT ASYM crypto PMD can be tested by running the test application::
@@ -670,7 +670,7 @@ QAT ASYM crypto PMD can be tested by running the test application::
make defconfig
make -j
cd ./build/app
- ./test -l1 -n1 -w <your qat bdf>
+ ./test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_asym_autotest
QAT compression PMD can be tested by running the test application::
@@ -679,7 +679,7 @@ QAT compression PMD can be tested by running the test application::
sed -i 's,\(CONFIG_RTE_COMPRESSDEV_TEST\)=n,\1=y,' build/.config
make -j
cd ./build/app
- ./test -l1 -n1 -w <your qat bdf>
+ ./test -l1 -n1 -a <your qat bdf>
RTE>>compressdev_autotest
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 6502f6415fb4..1c671518a4db 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -66,7 +66,7 @@ Runtime Config Options
upper limit for in-flight events.
For example::
- -w 0002:0e:00.0,xae_cnt=16384
+ -a 0002:0e:00.0,xae_cnt=16384
- ``Force legacy mode``
@@ -74,7 +74,7 @@ Runtime Config Options
single workslot mode in SSO and disable the default dual workslot mode.
For example::
- -w 0002:0e:00.0,single_ws=1
+ -a 0002:0e:00.0,single_ws=1
- ``Event Group QoS support``
@@ -89,7 +89,7 @@ Runtime Config Options
default.
For example::
- -w 0002:0e:00.0,qos=[1-50-50-50]
+ -a 0002:0e:00.0,qos=[1-50-50-50]
- ``Selftest``
@@ -98,7 +98,7 @@ Runtime Config Options
The tests are run once the vdev creation is successfully complete.
For example::
- -w 0002:0e:00.0,selftest=1
+ -a 0002:0e:00.0,selftest=1
- ``TIM disable NPA``
@@ -107,7 +107,7 @@ Runtime Config Options
parameter disables NPA and uses software mempool to manage chunks
For example::
- -w 0002:0e:00.0,tim_disable_npa=1
+ -a 0002:0e:00.0,tim_disable_npa=1
- ``TIM modify chunk slots``
@@ -118,7 +118,7 @@ Runtime Config Options
to SSO. The default value is 255 and the max value is 4095.
For example::
- -w 0002:0e:00.0,tim_chnk_slots=1023
+ -a 0002:0e:00.0,tim_chnk_slots=1023
- ``TIM enable arm/cancel statistics``
@@ -126,7 +126,7 @@ Runtime Config Options
event timer adapter.
For example::
- -w 0002:0e:00.0,tim_stats_ena=1
+ -a 0002:0e:00.0,tim_stats_ena=1
- ``TIM limit max rings reserved``
@@ -136,7 +136,7 @@ Runtime Config Options
rings.
For example::
- -w 0002:0e:00.0,tim_rings_lmt=5
+ -a 0002:0e:00.0,tim_rings_lmt=5
- ``TIM ring control internal parameters``
@@ -146,7 +146,7 @@ Runtime Config Options
default values.
For Example::
- -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
- ``Lock NPA contexts in NDC``
@@ -156,7 +156,7 @@ Runtime Config Options
For example::
- -w 0002:0e:00.0,npa_lock_mask=0xf
+ -a 0002:0e:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 2a68f5fc3820..4fba671e4f5b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -67,7 +67,7 @@ DPDK application. Some of the EAL options for FreeBSD are as follows:
is a list of cores to use instead of a core mask.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 542246df686a..043a1dcee109 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -53,7 +53,7 @@ The EAL options are as follows:
Number of memory channels per processor socket.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 01afa1b42f94..dbd48ab4fafa 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -44,20 +44,20 @@ Lcore-related options
Device-related options
~~~~~~~~~~~~~~~~~~~~~~
-* ``-b, --pci-blacklist <[domain:]bus:devid.func>``
+* ``-b, --block <[domain:]bus:devid.func>``
- Blacklist a PCI device to prevent EAL from using it. Multiple -b options are
- allowed.
+ Skip probing a PCI device to prevent EAL from using it.
+ Multiple -b options are allowed.
.. Note::
- PCI blacklist cannot be used with ``-w`` option.
+ PCI skip probe cannot be used together with the allow list ``-a`` option.
-* ``-w, --pci-whitelist <[domain:]bus:devid.func>``
+* ``-a, --allow <[domain:]bus:devid.func>``
- Add a PCI device in white list.
+ Add a PCI device to the list of devices to be probed.
.. Note::
- PCI whitelist cannot be used with ``-b`` option.
+ The PCI allow list cannot be used with the skip probe ``-b`` option.
* ``--vdev <device arguments>``
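As a quick before/after for script authors, a sketch with a placeholder PCI
address (the old spellings remain accepted for now):

    # old spelling (deprecated)
    testpmd -l 0-1 -n 4 -w 0000:03:00.0 -- -i
    # new spelling
    testpmd -l 0-1 -n 4 -a 0000:03:00.0 -- -i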
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 080b44955a11..ef8798569a80 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -93,11 +93,11 @@ parameter ``--vfio-vf-token``.
3. echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
4. Start the PF:
- <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -w 86:00.0 \
+ <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -a 86:00.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=pf -- -i
5. Start the VF:
- <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -w 86:02.0 \
+ <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -a 86:02.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=vf0 -- -i
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 49b45a04e8ec..efaef85f90fc 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -50,7 +50,7 @@ Runtime Config Options
for the application.
For example::
- -w 0002:02:00.0,max_pools=512
+ -a 0002:02:00.0,max_pools=512
With the above configuration, the driver will set up only 512 mempools for
the given application to save HW resources.
@@ -69,7 +69,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 28973fc3e2e9..37c632c3f046 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -258,8 +258,8 @@ The BNXT PMD supports hardware-based packet filtering:
Unicast MAC Filter
^^^^^^^^^^^^^^^^^^
-The application adds (or removes) MAC addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) MAC addresses to enable (or disable)
+filtering on the MAC addresses used to accept packets.
.. code-block:: console
@@ -269,8 +269,8 @@ whitelist filtering to accept packets.
Multicast MAC Filter
^^^^^^^^^^^^^^^^^^^^
-Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) Multicast addresses that enable (or disable)
+filtering on the multicast MAC addresses used to accept packets.
.. code-block:: console
@@ -278,7 +278,7 @@ whitelist filtering to accept packets.
testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+allowlist filtering to accept packets.
Note that the BNXT PMD supports up to 16 MC MAC filters. If the user adds more
than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
@@ -686,7 +686,7 @@ The feature uses a newly implemented control-plane firmware interface which
optimizes flow insertions and deletions.
This is a tech preview feature, and is disabled by default. It can be enabled
-using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
+using bnxt devargs. For example: "-a 0000:0d:00.0,host-based-truflow=1".
Notes
-----
@@ -728,7 +728,7 @@ when the PMD is initialized on a PF or trusted-VF. The user can specify the list
of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Note that currently hot-plugging of representor ports is not supported so all
the required representors must be specified on the creation of the PF or the
@@ -753,12 +753,12 @@ same host domain, additional dev args have been added to the PMD.
The sample command line with the new ``devargs`` looks like this::
- -w 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
+ -a 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
.. code-block:: console
- testpmd -l1-4 -n2 -w 0008:01:00.0,host-based-truflow=1,\
+ testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
representor=[0], rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 54a4c138998c..ee91c85ebfee 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -40,8 +40,8 @@ expose a single PCI bus address, thus, librte_pmd_cxgbe registers
itself as a PCI driver that allocates one Ethernet device per detected
port.
-For this reason, one cannot whitelist/blacklist a single port without
-whitelisting/blacklisting the other ports on the same device.
+For this reason, one cannot allow/block a single port without
+allowing/blocking the other ports on the same device.
.. _t5-nics:
@@ -112,7 +112,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w 02:00.4,keep_ovlan=1 -- -i
+ testpmd -a 02:00.4,keep_ovlan=1 -- -i
Common Runtime Options
^^^^^^^^^^^^^^^^^^^^^^
@@ -317,7 +317,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- testpmd -w 02:00.4,filtermode=0x88 -- -i
+ testpmd -a 02:00.4,filtermode=0x88 -- -i
- ``filtermask`` (default **0**)
@@ -344,7 +344,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+ testpmd -a 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
.. _driver-compilation:
@@ -776,7 +776,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
.. code-block:: console
- ./x86_64-native-freebsd-clang/app/testpmd -l 0-3 -n 4 -w 0000:02:00.4 -- -i
+ ./x86_64-native-freebsd-clang/app/testpmd -l 0-3 -n 4 -a 0000:02:00.4 -- -i
Example output:
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 74d4a6058ef0..eb9defca0f09 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -163,10 +163,10 @@ Manager.
this pool.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index ca6ba5b5e291..693be5ce8707 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -527,10 +527,10 @@ which are lower than logging ``level``.
Using ``pmd.net.dpaa2`` as log matching criteria, all PMD logs can be enabled
which are lower than logging ``level``.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 device, following commands can be used.
+For blocking a DPAA2 device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index a28a7f4e477a..fa8459435730 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -312,7 +312,7 @@ enables overlay offload, it prints the following message on the console.
By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::
- -w 12:00.0,disable-overlay=1
+ -a 12:00.0,disable-overlay=1
By default, the NIC uses 4789 as the VXLAN port. The user may change
it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
@@ -378,7 +378,7 @@ vectorized handler, take the following steps.
PMD consider the vectorized handler when selecting the receive handler.
For example::
- -w 12:00.0,enable-avx2-rx=1
+ -a 12:00.0,enable-avx2-rx=1
As the current implementation is intended for field trials, by default, the
vectorized handler is not considered (``enable-avx2-rx=0``).
@@ -427,7 +427,7 @@ DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
- -w 12:00.0,ig-vlan-rewrite=untag
+ -a 12:00.0,ig-vlan-rewrite=untag
- **SR-IOV**
diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst
index f80346a35898..25525ef19aad 100644
--- a/doc/guides/nics/fail_safe.rst
+++ b/doc/guides/nics/fail_safe.rst
@@ -60,7 +60,7 @@ Fail-safe command line parameters
This parameter allows the user to define a sub-device. The ``<iface>`` part of
this parameter must be a valid device definition. It follows the same format
- provided to any ``-w`` or ``--vdev`` options.
+ provided to any ``-a`` or ``--vdev`` options.
Enclosing the device definition within parentheses here allows using
additional sub-device parameters if need be. They will be passed on to the
@@ -68,11 +68,11 @@ Fail-safe command line parameters
.. note::
- In case where the sub-device is also used as a whitelist device, using ``-w``
+ In case where the sub-device is also used as an allowed device, using ``-a``
on the EAL command line, the fail-safe PMD will use the device with the
options provided to the EAL instead of its own parameters.
- When trying to use a PCI device automatically probed by the blacklist mode,
+ When trying to use a PCI device automatically probed by the command line,
the name for the fail-safe sub-device must be the full PCI id:
Domain:Bus:Device.Function, *i.e.* ``00:00:00.0`` instead of ``00:00.0``,
as the second form is historically accepted by the DPDK.
@@ -123,8 +123,8 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
#. To build a PMD and configure DPDK, refer to the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.
-#. Start testpmd. The sub-device ``84:00.0`` should be blacklisted from normal EAL
- operations to avoid probing it twice, as the PCI bus is in blacklist mode.
+#. Start testpmd. The sub-device ``84:00.0`` should be blocked from normal EAL
+ operations to avoid probing it twice, as the PCI bus is in blocklist mode.
.. code-block:: console
@@ -132,13 +132,13 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
--vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
-b 84:00.0 -b 00:04.0 -- -i
- If the sub-device ``84:00.0`` is not blacklisted, it will be probed by the
+ If the sub-device ``84:00.0`` is not blocked, it will be probed by the
EAL first. When the fail-safe then tries to initialize it the probe operation
fails.
- Note that PCI blacklist mode is the default PCI operating mode.
+ Note that PCI blocklist mode is the default PCI operating mode.
-#. Alternatively, it can be used alongside any other device in whitelist mode.
+#. Alternatively, it can be used alongside any other device in allow mode.
.. code-block:: console
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 16e00b8f64b5..14b8d0f33fae 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -261,7 +261,7 @@ Supports enabling/disabling receiving multicast frames.
Unicast MAC filter
------------------
-Supports adding MAC addresses to enable whitelist filtering to accept packets.
+Supports adding MAC addresses to enable filtering of incoming packets.
* **[implements] eth_dev_ops**: ``mac_addr_set``, ``mac_addr_add``, ``mac_addr_remove``.
* **[implements] rte_eth_dev_data**: ``mac_addrs``.
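At the API level this feature maps to rte_eth_dev_mac_addr_add(); a minimal
sketch, assuming a locally administered placeholder address and the default
pool 0:

    #include <rte_ethdev.h>

    static int
    add_unicast_filter(uint16_t port_id)
    {
        struct rte_ether_addr addr = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
        };

        /* accept packets destined to this MAC on the given port */
        return rte_eth_dev_mac_addr_add(port_id, &addr, 0);
    }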
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index a0b81e66950f..0eb1d7c1af2f 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -194,7 +194,7 @@ Runtime Config Options
The number of reserved queue per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
- VF can be configured with EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
+ VF can be configured with EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If VF request more than
reserved queues per VF, PF will able to allocate max to 16 queues after a VF
@@ -207,7 +207,7 @@ Runtime Config Options
Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
parameter ``support-multi-driver`` is introduced, for example::
- -w 84:00.0,support-multi-driver=1
+ -a 84:00.0,support-multi-driver=1
With the above configuration, DPDK PMD will not change global registers, and
will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
@@ -222,7 +222,7 @@ Runtime Config Options
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
@@ -234,7 +234,7 @@ Runtime Config Options
since it can get better perf in some real work loading cases. So ``devargs`` param
``use-latest-supported-vec`` is introduced, for example::
- -w 84:00.0,use-latest-supported-vec=1
+ -a 84:00.0,use-latest-supported-vec=1
- ``Enable validation for VF message`` (default ``not enabled``)
@@ -244,7 +244,7 @@ Runtime Config Options
Format -- "maximal-message@period-seconds:ignore-seconds"
For example::
- -w 84:00.0,vf_msg_cfg=80@120:180
+ -a 84:00.0,vf_msg_cfg=80@120:180
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -479,7 +479,7 @@ no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::
- -w 84:00.0,enable_floating_veb=1
+ -a 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.
@@ -487,7 +487,7 @@ VFs created by this PF device.
Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::
- -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
+ -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
@@ -822,7 +822,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
7. The command line of running l3fwd would be something like the following::
- ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+ ./l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 25a821177a4c..bb76d62a0139 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -47,7 +47,7 @@ Runtime Config Options
But if user intend to use the device without OS package, user can take ``devargs``
parameter ``safe-mode-support``, for example::
- -w 80:00.0,safe-mode-support=1
+ -a 80:00.0,safe-mode-support=1
Then the driver will be initialized successfully and the device will enter Safe Mode.
NOTE: In Safe mode, only very limited features are available, features like RSS,
@@ -58,7 +58,7 @@ Runtime Config Options
In pipeline mode, a flow can be set at one specific stage by setting parameter
``priority``. Currently, we support two stages: priority = 0 or !0. Flows with
priority 0 located at the first pipeline stage which typically be used as a firewall
- to drop the packet on a blacklist(we called it permission stage). At this stage,
+ to drop the packet on a blocklist (we called it permission stage). At this stage,
flow rules are created for the device's exact match engine: switch. Flows with priority
!0 located at the second stage, typically packets are classified here and be steered to
specific queue or queue group (we called it distribution stage), At this stage, flow
@@ -70,7 +70,19 @@ Runtime Config Options
use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``,
for example::
- -w 80:00.0,pipeline-mode-support=1
+ -a 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+ This is a hint to the driver to select the data path that supports flow mark extraction
+ by default.
+ NOTE: This is an experimental devarg; it will be removed when either of the conditions
+ below is met.
+ 1) all data paths support flow mark (currently vPMD does not)
+ 2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.
+ Example::
+
+ -a 80:00.0,flow-mark-support=1
- ``Protocol extraction for per queue``
@@ -79,8 +91,8 @@ Runtime Config Options
The argument format is::
- -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
- -w 18:00.0,proto_xtr=<protocol>
+ -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+ -a 18:00.0,proto_xtr=<protocol>
Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
is used as a range separator and ``,`` is used as a single number separator.
@@ -91,14 +103,14 @@ Runtime Config Options
.. code-block:: console
- testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+ testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
VLAN extraction, other queues run with no protocol extraction.
.. code-block:: console
- testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+ testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
IPv6 extraction, other queues use the default VLAN extraction.
@@ -250,7 +262,7 @@ responses for the same from PF.
#. Bind the VF0, and run testpmd with 'cap=dcf' devarg::
- testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i
+ testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i
#. Monitor the VF2 interface network traffic::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 1f424b38ac3d..c801dbae8146 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,7 +89,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w af:10.0,pflink_fullchk=1 -- -i
+ testpmd -a af:10.0,pflink_fullchk=1 -- -i
- ``pflink_fullchk`` (default **0**)
@@ -277,7 +277,7 @@ option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index 6818b6af515e..428e71d88687 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -29,8 +29,8 @@ Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.
-For this reason, one cannot white/blacklist a single port without also
-white/blacklisting the others on the same device.
+For this reason, one cannot block (or allow) a single port without also
+blocking (or allowing) the others on the same device.
Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
@@ -422,7 +422,7 @@ devices managed by librte_pmd_mlx4.
eth4
eth5
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow argument::
{
for intf in eth2 eth3 eth4 eth5;
@@ -430,14 +430,14 @@ devices managed by librte_pmd_mlx4.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:83:00.0
- -w 0000:83:00.0
- -w 0000:84:00.0
- -w 0000:84:00.0
+ -a 0000:83:00.0
+ -a 0000:83:00.0
+ -a 0000:84:00.0
+ -a 0000:84:00.0
.. note::
@@ -450,7 +450,7 @@ devices managed by librte_pmd_mlx4.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 0000:83:00.0 -a 0000:84:00.0 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index a071db276fe4..9f0dc8388951 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1537,7 +1537,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
eth32
eth33
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow list::
{
for intf in eth2 eth3 eth4 eth5;
@@ -1545,14 +1545,14 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
(cd "/sys/class/net/${intf}/device/" && pwd -P);
done;
} |
- sed -n 's,.*/\(.*\),-w \1,p'
+ sed -n 's,.*/\(.*\),-a \1,p'
Example output::
- -w 0000:05:00.1
- -w 0000:06:00.0
- -w 0000:06:00.1
- -w 0000:05:00.0
+ -a 0000:05:00.1
+ -a 0000:06:00.0
+ -a 0000:06:00.1
+ -a 0000:05:00.0
#. Request huge pages::
@@ -1560,7 +1560,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/nfb.rst b/doc/guides/nics/nfb.rst
index 10f33a025ede..7766a76d7a6d 100644
--- a/doc/guides/nics/nfb.rst
+++ b/doc/guides/nics/nfb.rst
@@ -78,7 +78,7 @@ products) and the device argument `timestamp=1` must be used.
.. code-block:: console
- $RTE_TARGET/app/testpmd -w b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
+ $RTE_TARGET/app/testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the MBUFs
containing received frames and timestamp is inserted into the `rte_mbuf` struct.
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index f3be79bbb8a3..9862a1d4508c 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -74,7 +74,7 @@ use arm64-octeontx2-linux-gcc as target.
.. code-block:: console
- ./build/app/testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ ./build/app/testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
@@ -127,7 +127,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,reta_size=256
+ -a 0002:02:00.0,reta_size=256
With the above configuration, reta table of size 256 is populated.
@@ -138,7 +138,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_max_priority=10
+ -a 0002:02:00.0,flow_max_priority=10
With the above configuration, priority level was set to 10 (0-9). Max
priority level supported is 32.
@@ -150,7 +150,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_prealloc_size=4
+ -a 0002:02:00.0,flow_prealloc_size=4
With the above configuration, pre alloc size was set to 4. Max pre alloc
size supported is 32.
@@ -162,7 +162,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,max_sqb_count=64
+ -a 0002:02:00.0,max_sqb_count=64
With the above configuration, each send queue's descriptor buffer count is
limited to a maximum of 64 buffers.
@@ -174,7 +174,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,switch_header="higig2"
+ -a 0002:02:00.0,switch_header="higig2"
With the above configuration, higig2 will be enabled on that port and the
traffic on this port should be higig2 traffic only. Supported switch header
@@ -196,7 +196,7 @@ Runtime Config Options
For example to select the legacy mode(RSS tag adder as XOR)::
- -w 0002:02:00.0,tag_as_xor=1
+ -a 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -205,7 +205,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,ipsec_in_max_spi=128
+ -a 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
@@ -216,7 +216,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_rx_ctx=1
+ -a 0002:02:00.0,lock_rx_ctx=1
- ``Lock Tx contexts in NDC cache``
@@ -224,7 +224,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_tx_ctx=1
+ -a 0002:02:00.0,lock_tx_ctx=1
.. note::
@@ -240,7 +240,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
.. _otx2_tmapi:
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 959b52c1c333..64322442a003 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -295,7 +295,7 @@ Per-Device Parameters
~~~~~~~~~~~~~~~~~~~~~
The following per-device parameters can be passed via EAL PCI device
-whitelist option like "-w 02:00.0,arg1=value1,...".
+allow option like "-a 02:00.0,arg1=value1,...".
Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
boolean parameters value.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 7e44f846206c..3ce696b605d1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -191,7 +191,7 @@ following::
.. Note:
- Change the ``-b`` options to blacklist all of your physical ports. The
+ Change the ``-b`` options to exclude all of your physical ports. The
following command line is all one line.
Also, ``-f themes/black-yellow.theme`` is optional if the default colors
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index b1ef9eba59b8..db64503a9ab8 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -178,7 +178,7 @@ This section provides instructions to configure SR-IOV with Linux OS.
.. code-block:: console
- ./arm64-thunderx-linux-gcc/app/testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
+ ./arm64-thunderx-linux-gcc/app/testpmd -l 0-3 -n 4 -a 0002:01:00.2 \
-- -i --no-flush-rx \
--port-topology=loop
@@ -398,7 +398,7 @@ This scheme is useful when application would like to insert vlan header without
Example:
.. code-block:: console
- -w 0002:01:00.2,skip_data_bytes=8
+ -a 0002:01:00.2,skip_data_bytes=8
Limitations
-----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index a470fd7f29bb..9af4d6192fd4 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -407,12 +407,12 @@ device having emitted a Device Removal Event. In such case, calling
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such closing operation.
-Blacklisting
+Blocklisting
~~~~~~~~~~~~
-The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
+The EAL PCI device blocklist functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
-The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
+The ports to be blocklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
Misc Functions
~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a84083b96c8a..2d083b8a4f68 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -30,7 +30,7 @@ after a primary process has already configured the hugepage shared memory for th
Secondary processes should run alongside the primary process with the same DPDK version.
Secondary processes which require access to physical devices in the primary process must
- be passed with the same whitelist and blacklist options.
+ be passed with the same allow and block options.
To support these two process types, and other multi-process setups described later,
two additional command-line parameters are available to the EAL:
@@ -131,7 +131,7 @@ can use).
.. note::
Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
- Any network ports being used by one process should be blacklisted in every other process.
+ Any network ports being used by one process should be blocklisted in every other process.
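A sketch of that rule, with two independent instances and placeholder
addresses; each instance blocks the port the other one uses:

    testpmd -l 0-1 --file-prefix=app1 -b 0000:03:00.1 -- -i
    testpmd -l 2-3 --file-prefix=app2 -b 0000:03:00.0 -- -i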
Running Multiple Independent Groups of DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 86e0a141e6c7..239ec820eaf5 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
this argument allows user to specify which switch ports to enable port
representors for.::
- -w DBDF,representor=0
- -w DBDF,representor=[0,4,6,9]
- -w DBDF,representor=[0-31]
+ -a DBDF,representor=0
+ -a DBDF,representor=[0,4,6,9]
+ -a DBDF,representor=[0-31]
Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see support devargs.
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index cc1d0d7569cb..07ba12bea67e 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -59,9 +59,9 @@ which can be thought as a software "patch panel" front-end for applications.
::
- -w pci:dbdf,representor=0
- -w pci:dbdf,representor=[0-3]
- -w pci:dbdf,representor=[0,5-11]
+ -a pci:dbdf,representor=0
+ -a pci:dbdf,representor=[0-3]
+ -a pci:dbdf,representor=[0,5-11]
- As virtual devices, they may be more limited than their physical
counterparts, for instance by exposing only a subset of device
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0d45b500325f..28ab5a03be8c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -539,6 +539,11 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The definitions related to including and excluding devices
+ have been changed from blacklist/whitelist to include/exclude.
+ There are compatibility macros and command-line mappings to accept
+ the old values, but applications and scripts are strongly encouraged
+ to migrate to the new names.
ABI Changes
-----------
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 54ff6574aed8..f0947a7544e4 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -79,19 +79,19 @@ This means that HW baseband device/s must be bound to a DPDK driver or
a SW baseband device/s (virtual BBdev) must be created (using --vdev).
To run the application in linux environment with the turbo_sw baseband device
-using the whitelisted port running on 1 encoding lcore and 1 decoding lcore
+using the allow option for the PCI device, running on 1 encoding lcore and 1 decoding lcore,
issue the command:
.. code-block:: console
- $ ./build/bbdev --vdev='baseband_turbo_sw' -w <NIC0PCIADDR> -c 0x38 --socket-mem=2,2 \
+ $ ./build/bbdev --vdev='baseband_turbo_sw' -a <NIC0PCIADDR> -c 0x38 --socket-mem=2,2 \
--file-prefix=bbdev -- -e 0x10 -d 0x20
where, NIC0PCIADDR is the PCI address of the Rx port
This command creates one virtual bbdev devices ``baseband_turbo_sw`` where the
-device gets linked to a corresponding ethernet port as whitelisted by
-the parameter -w.
+device gets linked to a corresponding ethernet port as allowed by
+the parameter -a.
3 cores are allocated to the application, and assigned as:
- core 3 is the main and used to print the stats live on screen,
@@ -111,20 +111,20 @@ Using Packet Generator with baseband device sample application
To allow the bbdev sample app to do the loopback, an influx of traffic is required.
This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
it will print the transmitted along with the looped-back traffic on Rx ports.
-Executing the command below will generate traffic on the two whitelisted ethernet
+Executing the command below will generate traffic on the two allowed ethernet
ports.
.. code-block:: console
$ ./pktgen-3.4.0/app/x86_64-native-linux-gcc/pktgen -c 0x3 \
- --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+ --socket-mem=1,1 --file-prefix=pg -a <NIC1PCIADDR> -- -m 1.0 -P
where:
* ``-c COREMASK``: A hexadecimal bitmask of cores to run on
* ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
* ``--file-prefix``: Prefix for hugepage filenames
-* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-a <NIC1PCIADDR>``: Add a PCI device to the allow list. The argument format is <[domain:]bus:devid.func>.
* ``-m <string>``: Matrix for mapping ports to logical cores.
* ``-P``: PROMISCUOUS mode
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index dc7972aa9a5c..e4c23da9ebcb 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -46,7 +46,7 @@ these settings is shown below:
.. code-block:: console
- ./build/eventdev_pipeline --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+ ./build/eventdev_pipeline --vdev event_sw0 -- -r1 -t1 -e4 -i FF00 -s4 -n0 -c32 -W1000 -D
The application has some sanity checking built-in, so if there is a function
(e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 434f484138d0..db2685660ff7 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -329,15 +329,15 @@ This means that if the application is using a single core and both hardware
and software crypto devices are detected, hardware devices will be used.
A way to achieve the case where you want to force the use of virtual crypto
-devices is to whitelist the Ethernet devices needed and therefore implicitly
-blacklisting all hardware crypto devices.
+devices is to allow only the Ethernet devices needed and therefore implicitly
+block all hardware crypto devices.
For example, something like the following command line:
.. code-block:: console
./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
- -w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
+ -a 81:00.0 -a 81:00.1 -a 81:00.2 -a 81:00.3 \
--vdev "crypto_aesni_mb" --vdev "crypto_null" \
-- \
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
@@ -935,13 +935,13 @@ The user must setup the following environment variables:
* ``REMOTE_IFACE``: interface name for the test-port on the DUT.
-* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-w <pci-id>')
+* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-a <pci-id>')
Also the user can optionally setup:
* ``SGW_LCORE``: lcore to run ipsec-secgw on (default value is 0)
-* ``CRYPTO_DEV``: crypto device to be used ('-w <pci-id>'). If none specified
+* ``CRYPTO_DEV``: crypto device to be used ('-a <pci-id>'). If none specified
appropriate vdevs will be created by the script
Scripts can be used for multiple test scenarios. To check all available
@@ -1029,4 +1029,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
\ No newline at end of file
+list of available modes please refer to run_test.sh.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 07c8d44936d6..5173da8b108a 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -138,17 +138,17 @@ Following is the sample command:
.. code-block:: console
- ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+ ./build/l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x3 --eventq-sched=ordered
or
.. code-block:: console
- ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+ ./build/l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
In this command:
-* -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform.
+* The -a option adds the event device supported by the platform to the allow list. The way to pass this device may vary based on the platform.
* The --mode option defines PMD to be used for packet I/O.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index c2d4ca73abde..1e580ff86cf4 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -18,7 +18,7 @@ The application loads two types of rules at initialization:
* Route information rules, which are used for L3 forwarding
-* Access Control List (ACL) rules that blacklist (or block) packets with a specific characteristic
+* Access Control List (ACL) rules that block packets with a specific characteristic
When packets are received from a port,
the application extracts the necessary information from the TCP/IP header of the received packet and
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index f05816d9b24e..bc162a0118ac 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -378,7 +378,7 @@ See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Program
.. code-block:: console
- ./l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+ ./l3fwd-power -l xxx -n 4 -a 0000:xx:00.0 -a 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
Where,
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index d66a724827af..60a7eb227db2 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -52,7 +52,7 @@ Take IFCVF driver for example:
.. code-block:: console
./vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
- -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+ -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
-- --interactive
.. note::
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 28b729dbda8b..72707e9a4a9d 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -417,7 +417,7 @@ Call application for performance throughput test of single Aesni MB PMD
for cipher encryption aes-cbc and auth generation sha1-hmac,
one million operations, burst size 32, packet size 64::
- dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -a 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 12
@@ -427,7 +427,7 @@ Call application for performance latency test of two Aesni MB PMD executed
on two cores for cipher encryption aes-cbc, ten operations in silent mode::
dpdk-test-crypto-perf -l 4-7 --vdev crypto_aesni_mb1
- --vdev crypto_aesni_mb2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
+ --vdev crypto_aesni_mb2 -a 0000:00:00.0 -- --devtype crypto_aesni_mb
--cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
--cipher-op encrypt --optype cipher-only --silent
--ptest latency --total-ops 10
@@ -437,7 +437,7 @@ for cipher encryption aes-gcm and auth generation aes-gcm,ten operations
in silent mode, test vector provide in file "test_aes_gcm.data"
with packet verification::
- dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -a 0000:00:00.0 --
--devtype crypto_openssl --aead-algo aes-gcm --aead-key-sz 16
--aead-iv-sz 16 --aead-op encrypt --aead-aad-sz 16 --digest-sz 16
--optype aead --silent --ptest verify --total-ops 10
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 7e5dc0c54b1a..4771e8ecf04d 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -59,7 +59,7 @@ with a ``--`` separator:
.. code-block:: console
- sudo ./dpdk-test-flow_perf -n 4 -w 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
+ sudo ./dpdk-test-flow_perf -n 4 -a 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
The command line options are:
diff --git a/doc/guides/tools/testregex.rst b/doc/guides/tools/testregex.rst
index 4317aab533e2..112b2bb773e7 100644
--- a/doc/guides/tools/testregex.rst
+++ b/doc/guides/tools/testregex.rst
@@ -70,4 +70,4 @@ The data file, will be used as a source data for the RegEx to work on.
The tool has a number of command line options. Here is the sample command line::
- ./dpdk-test-regex -w 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
+ ./dpdk-test-regex -a 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
--
2.27.0
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v3 5/5] doc: change references to blacklist and whitelist
@ 2020-10-22 14:39 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-22 14:39 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Luca Boccassi, Akhil Goyal, Hemant Agrawal,
John Griffin, Fiona Trahe, Deepak Kumar Jain, Pavan Nikhilesh,
Jerin Jacob, Bruce Richardson, Nithin Dabilpuram, Ajit Khaparde,
Somnath Kotur, Rahul Lakkireddy, Sachin Saxena, John Daley,
Hyong Youb Kim, Gaetan Rivet, Beilei Xing, Jeff Guo, Qiming Yang,
Qi Zhang, Haiyue Wang, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Martin Spinler, Kiran Kumar K,
Andrew Rybchenko, Keith Wiles, Maciej Czekaj, Anatoly Burakov,
Thomas Monjalon, Ferruh Yigit, Nicolas Chautru, Harry van Haaren,
Radu Nicolau, Konstantin Ananyev, David Hunt, Maxime Coquelin,
Chenbo Xia, Declan Doherty, Wisam Jaddo
There are two areas where the documentation needed updating.
The first is the use of whitelist when describing address
filtering.
The other is the legacy -w whitelist option for PCI devices,
which is used in many examples.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
---
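A minimal before/after sketch of the renamed EAL options (the device
addresses are placeholders):

    # before this series
    ./dpdk-testpmd -w 0000:03:00.0 -b 0000:05:00.0 -- -i
    # after this series
    ./dpdk-testpmd -a 0000:03:00.0 -b 0000:05:00.0 -- -i

The short -b spelling is unchanged; only its long form moves from
--pci-blacklist to --block.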
doc/guides/cryptodevs/dpaa2_sec.rst | 6 ++--
doc/guides/cryptodevs/dpaa_sec.rst | 6 ++--
doc/guides/cryptodevs/qat.rst | 12 ++++----
doc/guides/eventdevs/octeontx2.rst | 20 ++++++-------
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/eal_args.include.rst | 14 +++++-----
doc/guides/linux_gsg/linux_drivers.rst | 4 +--
doc/guides/mempool/octeontx2.rst | 4 +--
doc/guides/nics/bnxt.rst | 12 ++++----
doc/guides/nics/cxgbe.rst | 12 ++++----
doc/guides/nics/dpaa.rst | 6 ++--
doc/guides/nics/dpaa2.rst | 6 ++--
doc/guides/nics/enic.rst | 6 ++--
doc/guides/nics/fail_safe.rst | 16 +++++------
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/i40e.rst | 16 +++++------
doc/guides/nics/ice.rst | 28 +++++++++++++------
doc/guides/nics/ixgbe.rst | 4 +--
doc/guides/nics/mlx4.rst | 16 +++++------
doc/guides/nics/mlx5.rst | 12 ++++----
doc/guides/nics/nfb.rst | 2 +-
doc/guides/nics/octeontx2.rst | 22 +++++++--------
doc/guides/nics/sfc_efx.rst | 2 +-
doc/guides/nics/tap.rst | 2 +-
doc/guides/nics/thunderx.rst | 4 +--
.../prog_guide/env_abstraction_layer.rst | 6 ++--
doc/guides/prog_guide/multi_proc_support.rst | 4 +--
doc/guides/prog_guide/poll_mode_drv.rst | 6 ++--
.../prog_guide/switch_representation.rst | 6 ++--
doc/guides/rel_notes/release_20_11.rst | 5 ++++
doc/guides/sample_app_ug/bbdev_app.rst | 14 +++++-----
.../sample_app_ug/eventdev_pipeline.rst | 2 +-
doc/guides/sample_app_ug/ipsec_secgw.rst | 12 ++++----
doc/guides/sample_app_ug/l3_forward.rst | 6 ++--
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
.../sample_app_ug/l3_forward_power_man.rst | 2 +-
doc/guides/sample_app_ug/vdpa.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 6 ++--
doc/guides/tools/flow-perf.rst | 2 +-
doc/guides/tools/testregex.rst | 2 +-
41 files changed, 166 insertions(+), 149 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
index 3053636b8295..b50fee76954a 100644
--- a/doc/guides/cryptodevs/dpaa2_sec.rst
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -134,10 +134,10 @@ Supported DPAA2 SoCs
* LS2088A/LS2048A
* LS1088A/LS1048A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 SEC device, following commands can be used.
+The DPAA2 SEC device can be blocked with the following:
.. code-block:: console
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index db3c8e918945..38ad45e66d76 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -82,10 +82,10 @@ Supported DPAA SoCs
* LS1046A/LS1026A
* LS1043A/LS1023A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, following commands can be used.
.. code-block:: console
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index a0becf689109..d41ee82aff52 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -127,7 +127,7 @@ Limitations
optimisations in the GEN3 device. And if a GCM session is initialised on a
GEN3 device, then attached to an op sent to a GEN1/GEN2 device, it will not be
enqueued to the device and will be marked as failed. The simplest way to
- mitigate this is to use the bdf whitelist to avoid mixing devices of different
+ mitigate this is to use the PCI allowlist to avoid mixing devices of different
generations in the same process if planning to use for GCM.
* The mixed algo feature on GEN2 is not supported by all kernel drivers. Check
the notes under the Available Kernel Drivers table below for specific details.
@@ -264,7 +264,7 @@ adjusted to the number of VFs which the QAT common code will need to handle.
QAT VF may expose two crypto devices, sym and asym, it may happen that the
number of devices will be bigger than MAX_DEVS and the process will show an error
during PMD initialisation. To avoid this problem CONFIG_RTE_CRYPTO_MAX_DEVS may be
- increased or -w, pci-whitelist domain:bus:devid:func option may be used.
+ increased, or the -a, --allow domain:bus:devid.func option may be used.
QAT compression PMD needs intermediate buffers to support Deflate compression
@@ -302,7 +302,7 @@ return 0 (thereby avoiding an MMIO) if the device is congested and number of pac
possible to enqueue is smaller.
To use this feature the user must set the parameter on process start as a device additional parameter::
- -w 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
+ -a 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
All parameters can be used with the same device regardless of order. Parameters are separated
by comma. When the same parameter is used more than once first occurrence of the parameter
@@ -662,7 +662,7 @@ QAT SYM crypto PMD can be tested by running the test application::
make defconfig
make -j
cd ./build/app
- ./test -l1 -n1 -w <your qat bdf>
+ ./test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_autotest
QAT ASYM crypto PMD can be tested by running the test application::
@@ -670,7 +670,7 @@ QAT ASYM crypto PMD can be tested by running the test application::
make defconfig
make -j
cd ./build/app
- ./test -l1 -n1 -w <your qat bdf>
+ ./test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_asym_autotest
QAT compression PMD can be tested by running the test application::
@@ -679,7 +679,7 @@ QAT compression PMD can be tested by running the test application::
sed -i 's,\(CONFIG_RTE_COMPRESSDEV_TEST\)=n,\1=y,' build/.config
make -j
cd ./build/app
- ./test -l1 -n1 -w <your qat bdf>
+ ./test -l1 -n1 -a <your qat bdf>
RTE>>compressdev_autotest
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 6502f6415fb4..1c671518a4db 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -66,7 +66,7 @@ Runtime Config Options
upper limit for in-flight events.
For example::
- -w 0002:0e:00.0,xae_cnt=16384
+ -a 0002:0e:00.0,xae_cnt=16384
- ``Force legacy mode``
@@ -74,7 +74,7 @@ Runtime Config Options
single workslot mode in SSO and disable the default dual workslot mode.
For example::
- -w 0002:0e:00.0,single_ws=1
+ -a 0002:0e:00.0,single_ws=1
- ``Event Group QoS support``
@@ -89,7 +89,7 @@ Runtime Config Options
default.
For example::
- -w 0002:0e:00.0,qos=[1-50-50-50]
+ -a 0002:0e:00.0,qos=[1-50-50-50]
- ``Selftest``
@@ -98,7 +98,7 @@ Runtime Config Options
The tests are run once the vdev creation is successfully complete.
For example::
- -w 0002:0e:00.0,selftest=1
+ -a 0002:0e:00.0,selftest=1
- ``TIM disable NPA``
@@ -107,7 +107,7 @@ Runtime Config Options
parameter disables NPA and uses software mempool to manage chunks
For example::
- -w 0002:0e:00.0,tim_disable_npa=1
+ -a 0002:0e:00.0,tim_disable_npa=1
- ``TIM modify chunk slots``
@@ -118,7 +118,7 @@ Runtime Config Options
to SSO. The default value is 255 and the max value is 4095.
For example::
- -w 0002:0e:00.0,tim_chnk_slots=1023
+ -a 0002:0e:00.0,tim_chnk_slots=1023
- ``TIM enable arm/cancel statistics``
@@ -126,7 +126,7 @@ Runtime Config Options
event timer adapter.
For example::
- -w 0002:0e:00.0,tim_stats_ena=1
+ -a 0002:0e:00.0,tim_stats_ena=1
- ``TIM limit max rings reserved``
@@ -136,7 +136,7 @@ Runtime Config Options
rings.
For example::
- -w 0002:0e:00.0,tim_rings_lmt=5
+ -a 0002:0e:00.0,tim_rings_lmt=5
- ``TIM ring control internal parameters``
@@ -146,7 +146,7 @@ Runtime Config Options
default values.
For Example::
- -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
- ``Lock NPA contexts in NDC``
@@ -156,7 +156,7 @@ Runtime Config Options
For example::
- -w 0002:0e:00.0,npa_lock_mask=0xf
+ -a 0002:0e:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 2a68f5fc3820..4fba671e4f5b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -67,7 +67,7 @@ DPDK application. Some of the EAL options for FreeBSD are as follows:
is a list of cores to use instead of a core mask.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 542246df686a..043a1dcee109 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -53,7 +53,7 @@ The EAL options are as follows:
Number of memory channels per processor socket.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 01afa1b42f94..dbd48ab4fafa 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -44,20 +44,20 @@ Lcore-related options
Device-related options
~~~~~~~~~~~~~~~~~~~~~~
-* ``-b, --pci-blacklist <[domain:]bus:devid.func>``
+* ``-b, --block <[domain:]bus:devid.func>``
- Blacklist a PCI device to prevent EAL from using it. Multiple -b options are
- allowed.
+ Skip probing a PCI device to prevent EAL from using it.
+ Multiple -b options are allowed.
.. Note::
- PCI blacklist cannot be used with ``-w`` option.
+ PCI block list ``-b`` cannot be used with the allow ``-a`` option.
-* ``-w, --pci-whitelist <[domain:]bus:devid.func>``
+* ``-a, --allow <[domain:]bus:devid.func>``
- Add a PCI device in white list.
+ Add a PCI device to the list of devices to probe.
.. Note::
- PCI whitelist cannot be used with ``-b`` option.
+ PCI allow list ``-a`` cannot be used with the block ``-b`` option.
* ``--vdev <device arguments>``
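As the notes above state, the two lists are mutually exclusive. A
sketch with placeholder addresses:

    ./dpdk-testpmd -a 0000:03:00.0 -a 0000:03:00.1    # allow list only: valid
    ./dpdk-testpmd -b 0000:05:00.0                    # block list only: valid
    ./dpdk-testpmd -a 0000:03:00.0 -b 0000:05:00.0    # mixing -a and -b: rejected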
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 080b44955a11..ef8798569a80 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -93,11 +93,11 @@ parameter ``--vfio-vf-token``.
3. echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
4. Start the PF:
- <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -w 86:00.0 \
+ <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -a 86:00.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=pf -- -i
5. Start the VF:
- <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -w 86:02.0 \
+ <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -a 86:02.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=vf0 -- -i
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 49b45a04e8ec..efaef85f90fc 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -50,7 +50,7 @@ Runtime Config Options
for the application.
For example::
- -w 0002:02:00.0,max_pools=512
+ -a 0002:02:00.0,max_pools=512
With the above configuration, the driver will set up only 512 mempools for
the given application to save HW resources.
@@ -69,7 +69,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 28973fc3e2e9..82cabab6885d 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -258,8 +258,8 @@ The BNXT PMD supports hardware-based packet filtering:
Unicast MAC Filter
^^^^^^^^^^^^^^^^^^
-The application adds (or removes) MAC addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) MAC addresses to enable (or disable)
+filtering on MAC address used to accept packets.
.. code-block:: console
@@ -269,8 +269,8 @@ whitelist filtering to accept packets.
Multicast MAC Filter
^^^^^^^^^^^^^^^^^^^^
-Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) Multicast addresses that enable (or disable)
+filtering on multicast MAC address used to accept packets.
.. code-block:: console
@@ -278,7 +278,7 @@ whitelist filtering to accept packets.
testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+allowlist filtering to accept packets.
Note that the BNXT PMD supports up to 16 MC MAC filters. if the user adds more
than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
@@ -728,7 +728,7 @@ when the PMD is initialized on a PF or trusted-VF. The user can specify the list
of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Note that currently hot-plugging of representor ports is not supported so all
the required representors must be specified on the creation of the PF or the
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 54a4c138998c..ee91c85ebfee 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -40,8 +40,8 @@ expose a single PCI bus address, thus, librte_pmd_cxgbe registers
itself as a PCI driver that allocates one Ethernet device per detected
port.
-For this reason, one cannot whitelist/blacklist a single port without
-whitelisting/blacklisting the other ports on the same device.
+For this reason, one cannot allow/block a single port without
+allowing/blocking the other ports on the same device.
.. _t5-nics:
@@ -112,7 +112,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w 02:00.4,keep_ovlan=1 -- -i
+ testpmd -a 02:00.4,keep_ovlan=1 -- -i
Common Runtime Options
^^^^^^^^^^^^^^^^^^^^^^
@@ -317,7 +317,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- testpmd -w 02:00.4,filtermode=0x88 -- -i
+ testpmd -a 02:00.4,filtermode=0x88 -- -i
- ``filtermask`` (default **0**)
@@ -344,7 +344,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+ testpmd -a 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
.. _driver-compilation:
@@ -776,7 +776,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
.. code-block:: console
- ./x86_64-native-freebsd-clang/app/testpmd -l 0-3 -n 4 -w 0000:02:00.4 -- -i
+ ./x86_64-native-freebsd-clang/app/testpmd -l 0-3 -n 4 -a 0000:02:00.4 -- -i
Example output:
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 74d4a6058ef0..eb9defca0f09 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -163,10 +163,10 @@ Manager.
this pool.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index ca6ba5b5e291..693be5ce8707 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -527,10 +527,10 @@ which are lower than logging ``level``.
Using ``pmd.net.dpaa2`` as log matching criteria, all PMD logs can be enabled
which are lower than logging ``level``.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 device, following commands can be used.
+For blocking a DPAA2 device, following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index a28a7f4e477a..fa8459435730 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -312,7 +312,7 @@ enables overlay offload, it prints the following message on the console.
By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::
- -w 12:00.0,disable-overlay=1
+ -a 12:00.0,disable-overlay=1
By default, the NIC uses 4789 as the VXLAN port. The user may change
it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
@@ -378,7 +378,7 @@ vectorized handler, take the following steps.
PMD consider the vectorized handler when selecting the receive handler.
For example::
- -w 12:00.0,enable-avx2-rx=1
+ -a 12:00.0,enable-avx2-rx=1
As the current implementation is intended for field trials, by default, the
vectorized handler is not considered (``enable-avx2-rx=0``).
@@ -427,7 +427,7 @@ DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
- -w 12:00.0,ig-vlan-rewrite=untag
+ -a 12:00.0,ig-vlan-rewrite=untag
- **SR-IOV**
diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst
index f80346a35898..25525ef19aad 100644
--- a/doc/guides/nics/fail_safe.rst
+++ b/doc/guides/nics/fail_safe.rst
@@ -60,7 +60,7 @@ Fail-safe command line parameters
This parameter allows the user to define a sub-device. The ``<iface>`` part of
this parameter must be a valid device definition. It follows the same format
- provided to any ``-w`` or ``--vdev`` options.
+ provided to any ``-a`` or ``--vdev`` options.
Enclosing the device definition within parentheses here allows using
additional sub-device parameters if need be. They will be passed on to the
@@ -68,11 +68,11 @@ Fail-safe command line parameters
.. note::
- In case where the sub-device is also used as a whitelist device, using ``-w``
+ In the case where the sub-device is also used as an allowed device, using ``-a``
on the EAL command line, the fail-safe PMD will use the device with the
options provided to the EAL instead of its own parameters.
- When trying to use a PCI device automatically probed by the blacklist mode,
+ When trying to use a PCI device automatically probed by the command line,
the name for the fail-safe sub-device must be the full PCI id:
Domain:Bus:Device.Function, *i.e.* ``00:00:00.0`` instead of ``00:00.0``,
as the second form is historically accepted by the DPDK.
@@ -123,8 +123,8 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
#. To build a PMD and configure DPDK, refer to the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.
-#. Start testpmd. The sub-device ``84:00.0`` should be blacklisted from normal EAL
- operations to avoid probing it twice, as the PCI bus is in blacklist mode.
+#. Start testpmd. The sub-device ``84:00.0`` should be blocked from normal EAL
+ operations to avoid probing it twice, as the PCI bus is in blocklist mode.
.. code-block:: console
@@ -132,13 +132,13 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
--vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
-b 84:00.0 -b 00:04.0 -- -i
- If the sub-device ``84:00.0`` is not blacklisted, it will be probed by the
+ If the sub-device ``84:00.0`` is not blocked, it will be probed by the
EAL first. When the fail-safe then tries to initialize it the probe operation
fails.
- Note that PCI blacklist mode is the default PCI operating mode.
+ Note that PCI blocklist mode is the default PCI operating mode.
-#. Alternatively, it can be used alongside any other device in whitelist mode.
+#. Alternatively, it can be used alongside any other device in allow mode.
.. code-block:: console
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 16e00b8f64b5..14b8d0f33fae 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -261,7 +261,7 @@ Supports enabling/disabling receiving multicast frames.
Unicast MAC filter
------------------
-Supports adding MAC addresses to enable whitelist filtering to accept packets.
+Supports adding MAC addresses to enable filtering of incoming packets.
* **[implements] eth_dev_ops**: ``mac_addr_set``, ``mac_addr_add``, ``mac_addr_remove``.
* **[implements] rte_eth_dev_data**: ``mac_addrs``.
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index a0b81e66950f..0eb1d7c1af2f 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -194,7 +194,7 @@ Runtime Config Options
The number of reserved queue per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
- VF can be configured with EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
+ VF can be configured with EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If VF request more than
reserved queues per VF, PF will able to allocate max to 16 queues after a VF
@@ -207,7 +207,7 @@ Runtime Config Options
Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
parameter ``support-multi-driver`` is introduced, for example::
- -w 84:00.0,support-multi-driver=1
+ -a 84:00.0,support-multi-driver=1
With the above configuration, DPDK PMD will not change global registers, and
will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
@@ -222,7 +222,7 @@ Runtime Config Options
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
@@ -234,7 +234,7 @@ Runtime Config Options
since it can get better perf in some real work loading cases. So ``devargs`` param
``use-latest-supported-vec`` is introduced, for example::
- -w 84:00.0,use-latest-supported-vec=1
+ -a 84:00.0,use-latest-supported-vec=1
- ``Enable validation for VF message`` (default ``not enabled``)
@@ -244,7 +244,7 @@ Runtime Config Options
Format -- "maximal-message@period-seconds:ignore-seconds"
For example::
- -w 84:00.0,vf_msg_cfg=80@120:180
+ -a 84:00.0,vf_msg_cfg=80@120:180
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -479,7 +479,7 @@ no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::
- -w 84:00.0,enable_floating_veb=1
+ -a 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.
@@ -487,7 +487,7 @@ VFs created by this PF device.
Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::
- -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
+ -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
@@ -822,7 +822,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
7. The command line of running l3fwd would be something like the following::
- ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+ ./l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 25a821177a4c..bb76d62a0139 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -47,7 +47,7 @@ Runtime Config Options
But if user intend to use the device without OS package, user can take ``devargs``
parameter ``safe-mode-support``, for example::
- -w 80:00.0,safe-mode-support=1
+ -a 80:00.0,safe-mode-support=1
Then the driver will be initialized successfully and the device will enter Safe Mode.
NOTE: In Safe mode, only very limited features are available, features like RSS,
@@ -58,7 +58,7 @@ Runtime Config Options
In pipeline mode, a flow can be set at one specific stage by setting parameter
``priority``. Currently, we support two stages: priority = 0 or !0. Flows with
priority 0 located at the first pipeline stage which typically be used as a firewall
- to drop the packet on a blacklist(we called it permission stage). At this stage,
+ to drop the packet on a blocklist (we called it permission stage). At this stage,
flow rules are created for the device's exact match engine: switch. Flows with priority
!0 located at the second stage, typically packets are classified here and be steered to
specific queue or queue group (we called it distribution stage), At this stage, flow
@@ -70,7 +70,19 @@ Runtime Config Options
use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``,
for example::
- -w 80:00.0,pipeline-mode-support=1
+ -a 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+ This is a hint to the driver to select the data path that supports flow mark extraction
+ by default.
+ NOTE: This is an experimental devarg; it will be removed when any of the conditions below
+ is met.
+ 1) all data paths support flow mark (currently vPMD does not)
+ 2) a new offload like RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.
+ Example::
+
+ -a 80:00.0,flow-mark-support=1
- ``Protocol extraction for per queue``
@@ -79,8 +91,8 @@ Runtime Config Options
The argument format is::
- -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
- -w 18:00.0,proto_xtr=<protocol>
+ -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+ -a 18:00.0,proto_xtr=<protocol>
Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
is used as a range separator and ``,`` is used as a single number separator.
@@ -91,14 +103,14 @@ Runtime Config Options
.. code-block:: console
- testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+ testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
VLAN extraction, other queues run with no protocol extraction.
.. code-block:: console
- testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+ testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
IPv6 extraction, other queues use the default VLAN extraction.
@@ -250,7 +262,7 @@ responses for the same from PF.
#. Bind the VF0, and run testpmd with 'cap=dcf' devarg::
- testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i
+ testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i
#. Monitor the VF2 interface network traffic::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 1f424b38ac3d..c801dbae8146 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,7 +89,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w af:10.0,pflink_fullchk=1 -- -i
+ testpmd -a af:10.0,pflink_fullchk=1 -- -i
- ``pflink_fullchk`` (default **0**)
@@ -277,7 +277,7 @@ option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index 6818b6af515e..67e3964b2b3b 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -29,8 +29,8 @@ Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.
-For this reason, one cannot white/blacklist a single port without also
-white/blacklisting the others on the same device.
+For this reason, one cannot block (or allow) a single port without also
+blocking (or allowing) the others on the same device.
Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
@@ -422,7 +422,7 @@ devices managed by librte_pmd_mlx4.
eth4
eth5
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow argument::
{
for intf in eth2 eth3 eth4 eth5;
@@ -434,10 +434,10 @@ devices managed by librte_pmd_mlx4.
Example output::
- -w 0000:83:00.0
- -w 0000:83:00.0
- -w 0000:84:00.0
- -w 0000:84:00.0
+ -a 0000:83:00.0
+ -a 0000:83:00.0
+ -a 0000:84:00.0
+ -a 0000:84:00.0
.. note::
@@ -450,7 +450,7 @@ devices managed by librte_pmd_mlx4.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 0000:83:00.0 -a 0000:84:00.0 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index a071db276fe4..b44490cfe5e4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1537,7 +1537,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
eth32
eth33
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow list::
{
for intf in eth2 eth3 eth4 eth5;
@@ -1549,10 +1549,10 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
Example output::
- -w 0000:05:00.1
- -w 0000:06:00.0
- -w 0000:06:00.1
- -w 0000:05:00.0
+ -a 0000:05:00.1
+ -a 0000:06:00.0
+ -a 0000:06:00.1
+ -a 0000:05:00.0
#. Request huge pages::
@@ -1560,7 +1560,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/nfb.rst b/doc/guides/nics/nfb.rst
index 10f33a025ede..7766a76d7a6d 100644
--- a/doc/guides/nics/nfb.rst
+++ b/doc/guides/nics/nfb.rst
@@ -78,7 +78,7 @@ products) and the device argument `timestamp=1` must be used.
.. code-block:: console
- $RTE_TARGET/app/testpmd -w b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
+ $RTE_TARGET/app/testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the MBUFs
containing received frames and timestamp is inserted into the `rte_mbuf` struct.
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index f3be79bbb8a3..9862a1d4508c 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -74,7 +74,7 @@ use arm64-octeontx2-linux-gcc as target.
.. code-block:: console
- ./build/app/testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ ./build/app/testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
@@ -127,7 +127,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,reta_size=256
+ -a 0002:02:00.0,reta_size=256
With the above configuration, reta table of size 256 is populated.
@@ -138,7 +138,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_max_priority=10
+ -a 0002:02:00.0,flow_max_priority=10
With the above configuration, priority level was set to 10 (0-9). Max
priority level supported is 32.
@@ -150,7 +150,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_prealloc_size=4
+ -a 0002:02:00.0,flow_prealloc_size=4
With the above configuration, pre alloc size was set to 4. Max pre alloc
size supported is 32.
@@ -162,7 +162,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,max_sqb_count=64
+ -a 0002:02:00.0,max_sqb_count=64
With the above configuration, each send queue's decscriptor buffer count is
limited to a maximum of 64 buffers.
@@ -174,7 +174,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,switch_header="higig2"
+ -a 0002:02:00.0,switch_header="higig2"
With the above configuration, higig2 will be enabled on that port and the
traffic on this port should be higig2 traffic only. Supported switch header
@@ -196,7 +196,7 @@ Runtime Config Options
For example to select the legacy mode(RSS tag adder as XOR)::
- -w 0002:02:00.0,tag_as_xor=1
+ -a 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -205,7 +205,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,ipsec_in_max_spi=128
+ -a 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
@@ -216,7 +216,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_rx_ctx=1
+ -a 0002:02:00.0,lock_rx_ctx=1
- ``Lock Tx contexts in NDC cache``
@@ -224,7 +224,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_tx_ctx=1
+ -a 0002:02:00.0,lock_tx_ctx=1
.. note::
@@ -240,7 +240,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
.. _otx2_tmapi:
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 959b52c1c333..64322442a003 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -295,7 +295,7 @@ Per-Device Parameters
~~~~~~~~~~~~~~~~~~~~~
The following per-device parameters can be passed via EAL PCI device
-whitelist option like "-w 02:00.0,arg1=value1,...".
+allow option like "-a 02:00.0,arg1=value1,...".
Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
boolean parameters value.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 7e44f846206c..3ce696b605d1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -191,7 +191,7 @@ following::
.. Note:
- Change the ``-b`` options to blacklist all of your physical ports. The
+ Change the ``-b`` options to exclude all of your physical ports. The
following command line is all one line.
Also, ``-f themes/black-yellow.theme`` is optional if the default colors
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index b1ef9eba59b8..db64503a9ab8 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -178,7 +178,7 @@ This section provides instructions to configure SR-IOV with Linux OS.
.. code-block:: console
- ./arm64-thunderx-linux-gcc/app/testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
+ ./arm64-thunderx-linux-gcc/app/testpmd -l 0-3 -n 4 -a 0002:01:00.2 \
-- -i --no-flush-rx \
--port-topology=loop
@@ -398,7 +398,7 @@ This scheme is useful when application would like to insert vlan header without
Example:
.. code-block:: console
- -w 0002:01:00.2,skip_data_bytes=8
+ -a 0002:01:00.2,skip_data_bytes=8
Limitations
-----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index a470fd7f29bb..9af4d6192fd4 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -407,12 +407,12 @@ device having emitted a Device Removal Event. In such case, calling
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such closing operation.
-Blacklisting
+Blocklisting
~~~~~~~~~~~~
-The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
+The EAL PCI device blocklist functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
-The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
+The ports to be blocklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
Misc Functions
~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a84083b96c8a..2d083b8a4f68 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -30,7 +30,7 @@ after a primary process has already configured the hugepage shared memory for th
Secondary processes should run alongside primary process with same DPDK version.
Secondary processes which requires access to physical devices in Primary process, must
- be passed with the same whitelist and blacklist options.
+ be passed with the same allow and block options.
To support these two process types, and other multi-process setups described later,
two additional command-line parameters are available to the EAL:
@@ -131,7 +131,7 @@ can use).
.. note::
Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
- Any network ports being used by one process should be blacklisted in every other process.
+ Any network ports being used by one process should be blocklisted in every other process.
Running Multiple Independent Groups of DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 86e0a141e6c7..239ec820eaf5 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
this argument allows user to specify which switch ports to enable port
representors for.::
- -w DBDF,representor=0
- -w DBDF,representor=[0,4,6,9]
- -w DBDF,representor=[0-31]
+ -a DBDF,representor=0
+ -a DBDF,representor=[0,4,6,9]
+ -a DBDF,representor=[0-31]
Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see support devargs.
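With the new spelling, a testpmd invocation requesting representors
might look like the following (the address and VF IDs are placeholders):

    dpdk-testpmd -a 0000:82:00.0,representor=[0-3] -- -i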
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index cc1d0d7569cb..07ba12bea67e 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -59,9 +59,9 @@ which can be thought as a software "patch panel" front-end for applications.
::
- -w pci:dbdf,representor=0
- -w pci:dbdf,representor=[0-3]
- -w pci:dbdf,representor=[0,5-11]
+ -a pci:dbdf,representor=0
+ -a pci:dbdf,representor=[0-3]
+ -a pci:dbdf,representor=[0,5-11]
- As virtual devices, they may be more limited than their physical
counterparts, for instance by exposing only a subset of device
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0d45b500325f..28ab5a03be8c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -539,6 +539,11 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The definitions related to including and excluding devices
+ have been changed from blacklist/whitelist to allow/block.
+ There are compatibility macros and command-line mappings to accept
+ the old values, but applications and scripts are strongly encouraged
+ to migrate to the new names.
ABI Changes
-----------
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 54ff6574aed8..f0947a7544e4 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -79,19 +79,19 @@ This means that HW baseband device/s must be bound to a DPDK driver or
a SW baseband device/s (virtual BBdev) must be created (using --vdev).
To run the application in linux environment with the turbo_sw baseband device
-using the whitelisted port running on 1 encoding lcore and 1 decoding lcore
+using the allow option for the PCI device, running on 1 encoding lcore and 1 decoding lcore,
issue the command:
.. code-block:: console
- $ ./build/bbdev --vdev='baseband_turbo_sw' -w <NIC0PCIADDR> -c 0x38 --socket-mem=2,2 \
+ $ ./build/bbdev --vdev='baseband_turbo_sw' -a <NIC0PCIADDR> -c 0x38 --socket-mem=2,2 \
--file-prefix=bbdev -- -e 0x10 -d 0x20
where, NIC0PCIADDR is the PCI address of the Rx port
This command creates one virtual bbdev devices ``baseband_turbo_sw`` where the
-device gets linked to a corresponding ethernet port as whitelisted by
-the parameter -w.
+device gets linked to a corresponding ethernet port as allowed by
+the parameter -a.
3 cores are allocated to the application, and assigned as:
- core 3 is the main and used to print the stats live on screen,
@@ -111,20 +111,20 @@ Using Packet Generator with baseband device sample application
To allow the bbdev sample app to do the loopback, an influx of traffic is required.
This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
it will print the transmitted along with the looped-back traffic on Rx ports.
-Executing the command below will generate traffic on the two whitelisted ethernet
+Executing the command below will generate traffic on the two allowed ethernet
ports.
.. code-block:: console
$ ./pktgen-3.4.0/app/x86_64-native-linux-gcc/pktgen -c 0x3 \
- --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+ --socket-mem=1,1 --file-prefix=pg -a <NIC1PCIADDR> -- -m 1.0 -P
where:
* ``-c COREMASK``: A hexadecimal bitmask of cores to run on
* ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
* ``--file-prefix``: Prefix for hugepage filenames
-* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-a <NIC1PCIADDR>``: Add a PCI device to the allow list. The argument format is <[domain:]bus:devid.func>.
* ``-m <string>``: Matrix for mapping ports to logical cores.
* ``-P``: PROMISCUOUS mode
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index dc7972aa9a5c..e4c23da9ebcb 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -46,7 +46,7 @@ these settings is shown below:
.. code-block:: console
- ./build/eventdev_pipeline --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+ ./build/eventdev_pipeline --vdev event_sw0 -- -r1 -t1 -e4 -i FF00 -s4 -n0 -c32 -W1000 -D
The application has some sanity checking built-in, so if there is a function
(e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 434f484138d0..db2685660ff7 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -329,15 +329,15 @@ This means that if the application is using a single core and both hardware
and software crypto devices are detected, hardware devices will be used.
A way to achieve the case where you want to force the use of virtual crypto
-devices is to whitelist the Ethernet devices needed and therefore implicitly
-blacklisting all hardware crypto devices.
+devices is to allow only the Ethernet devices needed and therefore implicitly
+block all hardware crypto devices.
For example, something like the following command line:
.. code-block:: console
./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
- -w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
+ -a 81:00.0 -a 81:00.1 -a 81:00.2 -a 81:00.3 \
--vdev "crypto_aesni_mb" --vdev "crypto_null" \
-- \
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
@@ -935,13 +935,13 @@ The user must setup the following environment variables:
* ``REMOTE_IFACE``: interface name for the test-port on the DUT.
-* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-w <pci-id>')
+* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-a <pci-id>')
Also the user can optionally setup:
* ``SGW_LCORE``: lcore to run ipsec-secgw on (default value is 0)
-* ``CRYPTO_DEV``: crypto device to be used ('-w <pci-id>'). If none specified
+* ``CRYPTO_DEV``: crypto device to be used ('-a <pci-id>'). If none specified
appropriate vdevs will be created by the script
Scripts can be used for multiple test scenarios. To check all available
@@ -1029,4 +1029,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
\ No newline at end of file
+list of available modes please refer to run_test.sh.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 07c8d44936d6..5173da8b108a 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -138,17 +138,17 @@ Following is the sample command:
.. code-block:: console
- ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+ ./build/l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x3 --eventq-sched=ordered
or
.. code-block:: console
- ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+ ./build/l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
In this command:
-* -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform.
+* The -a option adds the event device supported by the platform to the allow list. The way to pass this device may vary based on the platform.
* The --mode option defines PMD to be used for packet I/O.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index c2d4ca73abde..1e580ff86cf4 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -18,7 +18,7 @@ The application loads two types of rules at initialization:
* Route information rules, which are used for L3 forwarding
-* Access Control List (ACL) rules that blacklist (or block) packets with a specific characteristic
+* Access Control List (ACL) rules that block packets with a specific characteristic
When packets are received from a port,
the application extracts the necessary information from the TCP/IP header of the received packet and
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index f05816d9b24e..bc162a0118ac 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -378,7 +378,7 @@ See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Program
.. code-block:: console
- ./l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+ ./l3fwd-power -l xxx -n 4 -a 0000:xx:00.0 -a 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
Where,
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index d66a724827af..60a7eb227db2 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -52,7 +52,7 @@ Take IFCVF driver for example:
.. code-block:: console
./vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
- -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+ -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
-- --interactive
.. note::
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 28b729dbda8b..72707e9a4a9d 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -417,7 +417,7 @@ Call application for performance throughput test of single Aesni MB PMD
for cipher encryption aes-cbc and auth generation sha1-hmac,
one million operations, burst size 32, packet size 64::
- dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -a 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 12
@@ -427,7 +427,7 @@ Call application for performance latency test of two Aesni MB PMD executed
on two cores for cipher encryption aes-cbc, ten operations in silent mode::
dpdk-test-crypto-perf -l 4-7 --vdev crypto_aesni_mb1
- --vdev crypto_aesni_mb2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
+ --vdev crypto_aesni_mb2 -a 0000:00:00.0 -- --devtype crypto_aesni_mb
--cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
--cipher-op encrypt --optype cipher-only --silent
--ptest latency --total-ops 10
@@ -437,7 +437,7 @@ for cipher encryption aes-gcm and auth generation aes-gcm,ten operations
in silent mode, test vector provide in file "test_aes_gcm.data"
with packet verification::
- dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -a 0000:00:00.0 --
--devtype crypto_openssl --aead-algo aes-gcm --aead-key-sz 16
--aead-iv-sz 16 --aead-op encrypt --aead-aad-sz 16 --digest-sz 16
--optype aead --silent --ptest verify --total-ops 10
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 7e5dc0c54b1a..4771e8ecf04d 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -59,7 +59,7 @@ with a ``--`` separator:
.. code-block:: console
- sudo ./dpdk-test-flow_perf -n 4 -w 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
+ sudo ./dpdk-test-flow_perf -n 4 -a 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
The command line options are:
diff --git a/doc/guides/tools/testregex.rst b/doc/guides/tools/testregex.rst
index 4317aab533e2..112b2bb773e7 100644
--- a/doc/guides/tools/testregex.rst
+++ b/doc/guides/tools/testregex.rst
@@ -70,4 +70,4 @@ The data file, will be used as a source data for the RegEx to work on.
The tool has a number of command line options. Here is the sample command line::
- ./dpdk-test-regex -w 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
+ ./dpdk-test-regex -a 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
--
2.27.0
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH] build: fix version map file references in documentation
2020-10-22 12:11 3% ` David Marchand
@ 2020-10-22 14:24 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-22 14:24 UTC (permalink / raw)
To: David Marchand, dev
Cc: Thomas Monjalon, Bruce Richardson, Neil Horman, Rosen Xu,
Andrew Rybchenko, Luca Boccassi
No worries, happy to.
On 22/10/2020 13:11, David Marchand wrote:
> On Thu, Oct 22, 2020 at 9:47 AM David Marchand
> <david.marchand@redhat.com> wrote:
>>
>> Fixes: 63b3907833d8 ("build: remove library name from version map file name")
>>
>> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>
> Applied, thanks.
>
> Ray, I'll let you update the documentation with better examples on the
> ABI version.
> Thanks.
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] build: fix version map file references in documentation
2020-10-22 7:47 3% [dpdk-dev] [PATCH] build: fix version map file references in documentation David Marchand
2020-10-22 7:52 0% ` Kinsella, Ray
@ 2020-10-22 12:11 3% ` David Marchand
2020-10-22 14:24 0% ` Kinsella, Ray
1 sibling, 1 reply; 200+ results
From: David Marchand @ 2020-10-22 12:11 UTC (permalink / raw)
To: dev, Ray Kinsella
Cc: Thomas Monjalon, Bruce Richardson, Neil Horman, Rosen Xu,
Andrew Rybchenko, Luca Boccassi
On Thu, Oct 22, 2020 at 9:47 AM David Marchand
<david.marchand@redhat.com> wrote:
>
> Fixes: 63b3907833d8 ("build: remove library name from version map file name")
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Applied, thanks.
Ray, I'll let you update the documentation with better examples on the
ABI version.
Thanks.
--
David Marchand
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] build: fix version map file references in documentation
2020-10-22 7:47 3% [dpdk-dev] [PATCH] build: fix version map file references in documentation David Marchand
@ 2020-10-22 7:52 0% ` Kinsella, Ray
2020-10-22 12:11 3% ` David Marchand
1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-22 7:52 UTC (permalink / raw)
To: David Marchand, dev
Cc: thomas, bruce.richardson, Neil Horman, Rosen Xu,
Andrew Rybchenko, Luca Boccassi
On 22/10/2020 08:47, David Marchand wrote:
> Fixes: 63b3907833d8 ("build: remove library name from version map file name")
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> Note: we might want to update the ABI version in the examples shown in
> the documentation. I can send a followup patch.
I was thinking similarly, I can do it also.
> ---
> doc/guides/contributing/abi_versioning.rst | 14 +++++++-------
> lib/librte_eal/include/rte_function_versioning.h | 2 +-
> 2 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
> index 7a771dba10..b8b35761e2 100644
> --- a/doc/guides/contributing/abi_versioning.rst
> +++ b/doc/guides/contributing/abi_versioning.rst
> @@ -58,12 +58,12 @@ persists over multiple releases.
>
> .. code-block:: none
>
> - $ head ./lib/librte_acl/rte_acl_version.map
> + $ head ./lib/librte_acl/version.map
> DPDK_20 {
> global:
> ...
>
> - $ head ./lib/librte_eal/rte_eal_version.map
> + $ head ./lib/librte_eal/version.map
> DPDK_20 {
> global:
> ...
> @@ -77,7 +77,7 @@ that library.
>
> .. code-block:: none
>
> - $ head ./lib/librte_acl/rte_acl_version.map
> + $ head ./lib/librte_acl/version.map
> DPDK_20 {
> global:
> ...
> @@ -88,7 +88,7 @@ that library.
> } DPDK_20;
> ...
>
> - $ head ./lib/librte_eal/rte_eal_version.map
> + $ head ./lib/librte_eal/version.map
> DPDK_20 {
> global:
> ...
> @@ -100,12 +100,12 @@ how this may be done.
>
> .. code-block:: none
>
> - $ head ./lib/librte_acl/rte_acl_version.map
> + $ head ./lib/librte_acl/version.map
> DPDK_21 {
> global:
> ...
>
> - $ head ./lib/librte_eal/rte_eal_version.map
> + $ head ./lib/librte_eal/version.map
> DPDK_21 {
> global:
> ...
> @@ -134,7 +134,7 @@ linked to the DPDK.
>
> To support backward compatibility the ``rte_function_versioning.h``
> header file provides macros to use when updating exported functions. These
> -macros are used in conjunction with the ``rte_<library>_version.map`` file for
> +macros are used in conjunction with the ``version.map`` file for
> a given library to allow multiple versions of a symbol to exist in a shared
> library so that older binaries need not be immediately recompiled.
>
> diff --git a/lib/librte_eal/include/rte_function_versioning.h b/lib/librte_eal/include/rte_function_versioning.h
> index f588f2643b..746a1e1992 100644
> --- a/lib/librte_eal/include/rte_function_versioning.h
> +++ b/lib/librte_eal/include/rte_function_versioning.h
> @@ -22,7 +22,7 @@
> * allow for backwards compatibility for a time with older binaries that are
> * dynamically linked to the dpdk. To support that, the __vsym and
> * VERSION_SYMBOL macros are created. They, in conjunction with the
> - * <library>_version.map file for a given library allow for multiple versions of
> + * version.map file for a given library allow for multiple versions of
> * a symbol to exist in a shared library so that older binaries need not be
> * immediately recompiled.
> *
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] build: fix version map file references in documentation
@ 2020-10-22 7:47 3% David Marchand
2020-10-22 7:52 0% ` Kinsella, Ray
2020-10-22 12:11 3% ` David Marchand
0 siblings, 2 replies; 200+ results
From: David Marchand @ 2020-10-22 7:47 UTC (permalink / raw)
To: dev
Cc: thomas, bruce.richardson, Ray Kinsella, Neil Horman, Rosen Xu,
Andrew Rybchenko, Luca Boccassi
Fixes: 63b3907833d8 ("build: remove library name from version map file name")
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Note: we might want to update the ABI version in the examples shown in
the documentation. I can send a followup patch.
---
doc/guides/contributing/abi_versioning.rst | 14 +++++++-------
lib/librte_eal/include/rte_function_versioning.h | 2 +-
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 7a771dba10..b8b35761e2 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -58,12 +58,12 @@ persists over multiple releases.
.. code-block:: none
- $ head ./lib/librte_acl/rte_acl_version.map
+ $ head ./lib/librte_acl/version.map
DPDK_20 {
global:
...
- $ head ./lib/librte_eal/rte_eal_version.map
+ $ head ./lib/librte_eal/version.map
DPDK_20 {
global:
...
@@ -77,7 +77,7 @@ that library.
.. code-block:: none
- $ head ./lib/librte_acl/rte_acl_version.map
+ $ head ./lib/librte_acl/version.map
DPDK_20 {
global:
...
@@ -88,7 +88,7 @@ that library.
} DPDK_20;
...
- $ head ./lib/librte_eal/rte_eal_version.map
+ $ head ./lib/librte_eal/version.map
DPDK_20 {
global:
...
@@ -100,12 +100,12 @@ how this may be done.
.. code-block:: none
- $ head ./lib/librte_acl/rte_acl_version.map
+ $ head ./lib/librte_acl/version.map
DPDK_21 {
global:
...
- $ head ./lib/librte_eal/rte_eal_version.map
+ $ head ./lib/librte_eal/version.map
DPDK_21 {
global:
...
@@ -134,7 +134,7 @@ linked to the DPDK.
To support backward compatibility the ``rte_function_versioning.h``
header file provides macros to use when updating exported functions. These
-macros are used in conjunction with the ``rte_<library>_version.map`` file for
+macros are used in conjunction with the ``version.map`` file for
a given library to allow multiple versions of a symbol to exist in a shared
library so that older binaries need not be immediately recompiled.
diff --git a/lib/librte_eal/include/rte_function_versioning.h b/lib/librte_eal/include/rte_function_versioning.h
index f588f2643b..746a1e1992 100644
--- a/lib/librte_eal/include/rte_function_versioning.h
+++ b/lib/librte_eal/include/rte_function_versioning.h
@@ -22,7 +22,7 @@
* allow for backwards compatibility for a time with older binaries that are
* dynamically linked to the dpdk. To support that, the __vsym and
* VERSION_SYMBOL macros are created. They, in conjunction with the
- * <library>_version.map file for a given library allow for multiple versions of
+ * version.map file for a given library allow for multiple versions of
* a symbol to exist in a shared library so that older binaries need not be
* immediately recompiled.
*
--
2.23.0
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [v3 1/2] cryptodev: support enqueue callback functions
@ 2020-10-21 19:33 3% ` Ananyev, Konstantin
2020-10-23 12:36 0% ` Gujjar, Abhinandan S
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2020-10-21 19:33 UTC (permalink / raw)
To: Gujjar, Abhinandan S, dev, Doherty, Declan, akhil.goyal,
Honnappa.Nagarahalli
Cc: Vangati, Narender, jerinj
Hi Abhinandan,
Thanks for the effort, good progress.
Though a few more comments, see below.
> This patch adds APIs to add/remove callback functions. The callback
> function will be called for each burst of crypto ops received on a
> given crypto device queue pair.
>
> Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> ---
> config/rte_config.h | 1 +
> lib/librte_cryptodev/meson.build | 2 +-
> lib/librte_cryptodev/rte_cryptodev.c | 201 +++++++++++++++++++++++++
> lib/librte_cryptodev/rte_cryptodev.h | 153 ++++++++++++++++++-
> lib/librte_cryptodev/rte_cryptodev_version.map | 2 +
> 5 files changed, 357 insertions(+), 2 deletions(-)
Don't forget to update Release Notes and probably Prog Guide too.
>
> diff --git a/config/rte_config.h b/config/rte_config.h
> index 03d90d7..e999d93 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -61,6 +61,7 @@
> /* cryptodev defines */
> #define RTE_CRYPTO_MAX_DEVS 64
> #define RTE_CRYPTODEV_NAME_LEN 64
> +#define RTE_CRYPTO_CALLBACKS 1
>
> /* compressdev defines */
> #define RTE_COMPRESS_MAX_DEVS 64
> diff --git a/lib/librte_cryptodev/meson.build b/lib/librte_cryptodev/meson.build
> index c4c6b3b..8c5493f 100644
> --- a/lib/librte_cryptodev/meson.build
> +++ b/lib/librte_cryptodev/meson.build
> @@ -9,4 +9,4 @@ headers = files('rte_cryptodev.h',
> 'rte_crypto.h',
> 'rte_crypto_sym.h',
> 'rte_crypto_asym.h')
> -deps += ['kvargs', 'mbuf']
> +deps += ['kvargs', 'mbuf', 'rcu']
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
> index 3d95ac6..5ba774a 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -448,6 +448,10 @@ struct rte_cryptodev_sym_session_pool_private_data {
> return 0;
> }
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +/* spinlock for crypto device enq callbacks */
> +static rte_spinlock_t rte_cryptodev_enq_cb_lock = RTE_SPINLOCK_INITIALIZER;
> +#endif
>
> const char *
> rte_cryptodev_get_feature_name(uint64_t flag)
> @@ -1136,6 +1140,203 @@ struct rte_cryptodev *
> socket_id);
> }
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +
> +struct rte_cryptodev_cb *
> +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> + uint16_t qp_id,
> + rte_cryptodev_callback_fn cb_fn,
> + void *cb_arg)
> +{
> + struct rte_cryptodev *dev;
> + struct rte_cryptodev_cb *cb, *tail;
> + struct rte_cryptodev_enq_cb_rcu *list;
> + struct rte_rcu_qsbr *qsbr;
> + size_t size;
> +
> + /* Max thread set to 1, as one DP thread accessing a queue-pair */
> + const uint32_t max_threads = 1;
> +
> + if (!cb_fn)
> + return NULL;
> +
> + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> + return NULL;
> + }
> +
> + dev = &rte_crypto_devices[dev_id];
> + if (qp_id >= dev->data->nb_queue_pairs) {
> + CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> + return NULL;
> + }
> +
> + rte_spinlock_lock(&rte_cryptodev_enq_cb_lock);
> + if (dev->enq_cbs == NULL) {
> + dev->enq_cbs = rte_zmalloc(NULL, sizeof(cb) *
> + dev->data->nb_queue_pairs, 0);
> + if (dev->enq_cbs == NULL) {
> + CDEV_LOG_ERR("Failed to allocate memory for callbacks");
> + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
It is a bit clumsy to do unlock() for every error return.
Probably an easier way is to create an internal function that would do the actual job, and then:
lock(); ret = actual_job_internal_function(...); unlock(); ...
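A minimal sketch of that split, with toy types and names (not the patch's actual ones), just to show the shape:

#include <errno.h>
#include <stdlib.h>

#include <rte_spinlock.h>

struct cb {
	struct cb *next;
	void (*fn)(void *);
	void *arg;
};

static rte_spinlock_t cb_lock = RTE_SPINLOCK_INITIALIZER;
static struct cb *cb_head;

/* Does the actual job; the caller holds cb_lock, so every error
 * path is a plain return with no unlock bookkeeping. */
static int
add_cb_locked(void (*fn)(void *), void *arg)
{
	struct cb *c;

	if (fn == NULL)
		return -EINVAL;

	c = malloc(sizeof(*c));
	if (c == NULL)
		return -ENOMEM;

	c->fn = fn;
	c->arg = arg;
	c->next = cb_head;
	cb_head = c;
	return 0;
}

int
add_cb(void (*fn)(void *), void *arg)
{
	int ret;

	rte_spinlock_lock(&cb_lock);
	ret = add_cb_locked(fn, arg);
	rte_spinlock_unlock(&cb_lock);
	return ret;
}

That way rte_cryptodev_add_enq_callback() itself stays a thin lock/call/unlock wrapper.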
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + list = rte_zmalloc(NULL, sizeof(*list), 0);
As I understand, list is per queue, while enq_cbs[] is per port.
So even if enq_cbs is not NULL, it doesn't mean that the list for that particular queue is
already properly initialized.
Another thing - is there any point for dev->enq_cbs[] to be an array of pointers to
rte_cryptodev_enq_cb_rcu? Considering that rte_cryptodev_enq_cb_rcu itself contains
just two pointers inside, I think enq_cbs can point just to an array of rte_cryptodev_enq_cb_rcu:
struct rte_cryptodev {
...
struct rte_cryptodev_enq_cb_rcu *enq_cbs;
And you can remove one level of indirection here and in other places.
> + if (list == NULL) {
> + CDEV_LOG_ERR("Failed to allocate memory for list on "
> + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> + rte_errno = ENOMEM;
> + rte_free(dev->enq_cbs);
Here and in other places: you free dev->enq_cbs, but do not set it to NULL.
In fact, it is probably a good idea to have one cleanup() function that would free
all the necessary stuff and set it to NULL, and then use it in all such places.
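A hypothetical helper along those lines, based on the structures in this patch (untested, just to illustrate):

/* Free one queue pair's callback state and reset the pointer,
 * so later NULL checks stay meaningful. */
static void
cryptodev_enq_cb_cleanup(struct rte_cryptodev *dev, uint16_t qp_id)
{
	struct rte_cryptodev_enq_cb_rcu *list = dev->enq_cbs[qp_id];

	if (list == NULL)
		return;

	rte_free(list->qsbr);
	rte_free(list);
	dev->enq_cbs[qp_id] = NULL;
}

Then every error path just calls that helper instead of repeating the frees.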
> + return NULL;
> + }
> +
> + /* Create RCU QSBR variable */
> + size = rte_rcu_qsbr_get_memsize(max_threads);
> + qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
> + if (qsbr == NULL) {
> + CDEV_LOG_ERR("Failed to allocate memory for RCU on "
> + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> + rte_errno = ENOMEM;
> + rte_free(list);
> + rte_free(dev->enq_cbs);
> + dev->enq_cbs[qp_id] = NULL;
> + return NULL;
> + }
> +
> + if (rte_rcu_qsbr_init(qsbr, max_threads)) {
> + CDEV_LOG_ERR("Failed to initialize for RCU on "
> + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> + rte_free(qsbr);
> + rte_free(list);
> + rte_free(dev->enq_cbs);
> + dev->enq_cbs[qp_id] = NULL;
> + return NULL;
> + }
> +
> + dev->enq_cbs[qp_id] = list;
> + list->qsbr = qsbr;
> + }
> +
> + cb = rte_zmalloc(NULL, sizeof(*cb), 0);
> + if (cb == NULL) {
> + CDEV_LOG_ERR("Failed to allocate memory for callback on "
> + "dev=%d, queue_pair_id=%d", dev_id, qp_id);
> + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + cb->fn = cb_fn;
> + cb->arg = cb_arg;
> +
> + /* Add the callbacks in fifo order. */
> + list = dev->enq_cbs[qp_id];
> + tail = list->next;
> + if (tail) {
> + while (tail->next)
> + tail = tail->next;
> + tail->next = cb;
> + } else
> + list->next = cb;
> +
> + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> +
> + return cb;
> +}
> +
> +int
> +rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> + uint16_t qp_id,
> + struct rte_cryptodev_cb *cb)
> +{
> + struct rte_cryptodev *dev;
> + struct rte_cryptodev_cb **prev_cb, *curr_cb;
> + struct rte_cryptodev_enq_cb_rcu *list;
> + uint16_t qp;
> + int free_mem;
> + int ret;
> +
> + free_mem = 1;
> + ret = -EINVAL;
> +
> + if (!cb) {
> + CDEV_LOG_ERR("cb is NULL");
> + return ret;
> + }
> +
> + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> + CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
> + return ret;
> + }
> +
> + dev = &rte_crypto_devices[dev_id];
> + if (qp_id >= dev->data->nb_queue_pairs) {
> + CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
> + return ret;
> + }
> +
> + list = dev->enq_cbs[qp_id];
> + if (list == NULL) {
> + CDEV_LOG_ERR("Callback list is NULL");
> + return ret;
> + }
> +
> + if (list->qsbr == NULL) {
> + CDEV_LOG_ERR("Rcu qsbr is NULL");
> + return ret;
> + }
> +
> + rte_spinlock_lock(&rte_cryptodev_enq_cb_lock);
> + if (dev->enq_cbs == NULL) {
> + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> + return ret;
> + }
> +
> + prev_cb = &list->next;
> + for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
> + curr_cb = *prev_cb;
> + if (curr_cb == cb) {
> + /* Remove the user cb from the callback list. */
> + *prev_cb = curr_cb->next;
> + ret = 0;
> + break;
> + }
> + }
> +
> + if (!ret) {
> + /* Call sync with invalid thread id as this is part of
> + * control plane API
> + */
> + rte_rcu_qsbr_synchronize(list->qsbr, RTE_QSBR_THRID_INVALID);
> + rte_free(cb);
> + }
> +
> + if (list->next == NULL) {
> + rte_free(list->qsbr);
We can't destroy our sync variable until the device is stopped or destroyed;
it can still be used by the DP.
Probably the easiest way to deal with it is to allocate and initialize enq_cbs[] and all
related qsbrs at the first add_callback() and free all that memory only on dev_destroy().
> + rte_free(list);
> + dev->enq_cbs[qp_id] = NULL;
> + }
> +
> + for (qp = 0; qp < dev->data->nb_queue_pairs; qp++)
> + if (dev->enq_cbs[qp] != NULL) {
> + free_mem = 0;
> + break;
> + }
> +
> + if (free_mem) {
> + rte_free(dev->enq_cbs);
Again, not safe to do here, see above.
> + dev->enq_cbs = NULL;
> + }
> +
> + rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
> +
> + return ret;
> +}
> +#endif
>
> int
> rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index 0935fd5..669746d 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -23,6 +23,7 @@
> #include "rte_dev.h"
> #include <rte_common.h>
> #include <rte_config.h>
> +#include <rte_rcu_qsbr.h>
>
> #include "rte_cryptodev_trace_fp.h"
>
> @@ -522,6 +523,34 @@ struct rte_cryptodev_qp_conf {
> /**< The mempool for creating sess private data in sessionless mode */
> };
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +/**
> + * Function type used for pre processing crypto ops when enqueue burst is
> + * called.
> + *
> + * The callback function is called on enqueue burst immediately
> + * before the crypto ops are put onto the hardware queue for processing.
> + *
> + * @param dev_id The identifier of the device.
> + * @param qp_id The index of the queue pair in which ops are
> + * to be enqueued for processing. The value
> + * must be in the range [0, nb_queue_pairs - 1]
> + * previously supplied to
> + * *rte_cryptodev_configure*.
> + * @param ops The address of an array of *nb_ops* pointers
> + * to *rte_crypto_op* structures which contain
> + * the crypto operations to be processed.
> + * @param nb_ops The number of operations to process.
> + * @param user_param The arbitrary user parameter passed in by the
> + * application when the callback was originally
> + * registered.
> + * @return The number of ops to be enqueued to the
> + * crypto device.
> + */
> +typedef uint16_t (*rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t qp_id,
> + struct rte_crypto_op **ops, uint16_t nb_ops, void *user_param);
> +#endif
> +
> /**
> * Typedef for application callback function to be registered by application
> * software for notification of device events
> @@ -822,7 +851,6 @@ struct rte_cryptodev_config {
> enum rte_cryptodev_event_type event,
> rte_cryptodev_cb_fn cb_fn, void *cb_arg);
>
> -
> typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
> struct rte_crypto_op **ops, uint16_t nb_ops);
> /**< Dequeue processed packets from queue pair of a device. */
> @@ -839,6 +867,33 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
> /** Structure to keep track of registered callbacks */
> TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +/**
> + * @internal
> + * Structure used to hold information about the callbacks to be called for a
> + * queue pair on enqueue.
> + */
> +struct rte_cryptodev_cb {
> + struct rte_cryptodev_cb *next;
> + /** < Pointer to next callback */
> + rte_cryptodev_callback_fn fn;
> + /** < Pointer to callback function */
> + void *arg;
> + /** < Pointer to argument */
> +};
> +
> +/**
> + * @internal
> + * Structure used to hold information about the RCU for a queue pair.
> + */
> +struct rte_cryptodev_enq_cb_rcu {
> + struct rte_cryptodev_cb *next;
> + /** < Pointer to next callback */
> + struct rte_rcu_qsbr *qsbr;
> + /** < RCU QSBR variable per queue pair */
> +};
> +#endif
> +
> /** The data structure associated with each crypto device. */
> struct rte_cryptodev {
> dequeue_pkt_burst_t dequeue_burst;
> @@ -867,6 +922,11 @@ struct rte_cryptodev {
> __extension__
> uint8_t attached : 1;
> /**< Flag indicating the device is attached */
> +
> +#ifdef RTE_CRYPTO_CALLBACKS
I'd *always* reserve space for it,
no matter whether RTE_CRYPTO_CALLBACKS is defined or not,
to avoid a difference in the public structure layout.
> + struct rte_cryptodev_enq_cb_rcu **enq_cbs;
As I said above, no need for extra level of indirection.
> + /**< User application callback for pre enqueue processing */
> +#endif
As I understand, it is not an ABI breakage, as there is some free space right now
at the end of struct rte_cryptodev (due to its alignment), but we definitely need to update the Release Notes.
> } __rte_cache_aligned;
>
> void *
> @@ -989,6 +1049,25 @@ struct rte_cryptodev_data {
> {
> struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> + if (unlikely(dev->enq_cbs != NULL && dev->enq_cbs[qp_id] != NULL)) {
Agree with Honnappa's comment for that piece of code.
It probably needs to be something like:
if (unlikely(dev->enq_cbs != NULL && dev->enq_cbs[qp_id].next != NULL) {
list = &dev->enq_cbs[qp_id];
rte_rcu_qsbr_thread_online(list->qsbr, 0);
for (cb = list->next; cb != NULL; cb = cb->next)
....
rte_rcu_qsbr_thread_offline(list->qsbr, 0);
}
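Filling in the loop body from the patch itself, and assuming enq_cbs becomes a flat array of
struct rte_cryptodev_enq_cb_rcu as suggested above, the fast path would look roughly like:

if (unlikely(dev->enq_cbs != NULL &&
		dev->enq_cbs[qp_id].next != NULL)) {
	struct rte_cryptodev_enq_cb_rcu *list = &dev->enq_cbs[qp_id];
	struct rte_cryptodev_cb *cb;

	/* Mark this DP thread online only while callbacks may run. */
	rte_rcu_qsbr_thread_online(list->qsbr, 0);
	for (cb = list->next; cb != NULL; cb = cb->next)
		nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops, cb->arg);
	rte_rcu_qsbr_thread_offline(list->qsbr, 0);
}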
> + struct rte_cryptodev_enq_cb_rcu *list;
> + struct rte_cryptodev_cb *cb;
> +
> + list = dev->enq_cbs[qp_id];
> + cb = list->next;
> + rte_rcu_qsbr_thread_online(list->qsbr, 0);
> +
> + do {
> + nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
> + cb->arg);
> + cb = cb->next;
> + } while (cb != NULL);
> +
> + rte_rcu_qsbr_thread_offline(list->qsbr, 0);
> + }
> +#endif
> +
> rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
> return (*dev->enqueue_burst)(
> dev->data->queue_pairs[qp_id], ops, nb_ops);
> @@ -1730,6 +1809,78 @@ struct rte_crypto_raw_dp_ctx {
> rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
> uint32_t n);
>
> +#ifdef RTE_CRYPTO_CALLBACKS
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a user callback for a given crypto device and queue pair which will be
> + * called on crypto ops enqueue.
> + *
> + * This API configures a function to be called for each burst of crypto ops
> + * received on a given crypto device queue pair. The return value is a pointer
> + * that can be used later to remove the callback using
> + * rte_cryptodev_remove_enq_callback().
> + *
> + * Multiple functions are called in the order that they are added.
> + *
> + * @param dev_id The identifier of the device.
> + * @param qp_id The index of the queue pair in which ops are
> + * to be enqueued for processing. The value
> + * must be in the range [0, nb_queue_pairs - 1]
> + * previously supplied to
> + * *rte_cryptodev_configure*.
> + * @param cb_fn The callback function
> + * @param cb_arg A generic pointer parameter which will be passed
> + * to each invocation of the callback function on
> + * this crypto device and queue pair.
> + *
> + * @return
> + * NULL on error.
> + * On success, a pointer value which can later be used to remove the callback.
> + */
> +
> +__rte_experimental
> +struct rte_cryptodev_cb *
> +rte_cryptodev_add_enq_callback(uint8_t dev_id,
> + uint16_t qp_id,
> + rte_cryptodev_callback_fn cb_fn,
> + void *cb_arg);
> +
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a user callback function for given crypto device and queue pair.
> + *
> + * This function is used to removed callbacks that were added to a crypto
> + * device queue pair using rte_cryptodev_add_enq_callback().
> + *
> + *
> + *
> + * @param dev_id The identifier of the device.
> + * @param qp_id The index of the queue pair in which ops are
> + * to be enqueued for processing. The value
> + * must be in the range [0, nb_queue_pairs - 1]
> + * previously supplied to
> + * *rte_cryptodev_configure*.
> + * @param cb Pointer to user supplied callback created via
> + * rte_cryptodev_add_enq_callback().
> + *
> + * @return
> + * - 0: Success. Callback was removed.
> + * - -EINVAL: The dev_id or the qp_id is out of range, or the callback
> + * is NULL or not found for the crypto device queue pair.
> + */
> +
> +__rte_experimental
> +int rte_cryptodev_remove_enq_callback(uint8_t dev_id,
> + uint16_t qp_id,
> + struct rte_cryptodev_cb *cb);
> +
> +#endif
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
> index 7e4360f..5d8d6b0 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> @@ -101,6 +101,7 @@ EXPERIMENTAL {
> rte_cryptodev_get_qp_status;
>
> # added in 20.11
> + rte_cryptodev_add_enq_callback;
> rte_cryptodev_configure_raw_dp_ctx;
> rte_cryptodev_get_raw_dp_ctx_size;
> rte_cryptodev_raw_dequeue;
> @@ -109,4 +110,5 @@ EXPERIMENTAL {
> rte_cryptodev_raw_enqueue;
> rte_cryptodev_raw_enqueue_burst;
> rte_cryptodev_raw_enqueue_done;
> + rte_cryptodev_remove_enq_callback;
> };
> --
> 1.9.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v7 14/14] doc: update patch cheatsheet to use meson
2020-10-21 8:17 9% ` [dpdk-dev] [PATCH v7 12/14] doc: remove references to make from contributing guide Ciara Power
@ 2020-10-21 8:17 2% ` Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2020-10-21 8:17 UTC (permalink / raw)
To: dev; +Cc: Kevin Laatz
From: Kevin Laatz <kevin.laatz@intel.com>
With 'make' being removed, the patch cheatsheet needs to be updated to
remove any references to 'make'. These references have been replaced with
meson alternatives in this patch.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
.../contributing/img/patch_cheatsheet.svg | 582 ++++++++----------
1 file changed, 270 insertions(+), 312 deletions(-)
diff --git a/doc/guides/contributing/img/patch_cheatsheet.svg b/doc/guides/contributing/img/patch_cheatsheet.svg
index 85225923e1..986e4db815 100644
--- a/doc/guides/contributing/img/patch_cheatsheet.svg
+++ b/doc/guides/contributing/img/patch_cheatsheet.svg
@@ -1,6 +1,4 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
@@ -13,7 +11,7 @@
width="210mm"
height="297mm"
id="svg2985"
- inkscape:version="0.48.4 r9939"
+ inkscape:version="1.0.1 (3bc2e813f5, 2020-09-07)"
sodipodi:docname="patch_cheatsheet.svg">
<sodipodi:namedview
pagecolor="#ffffff"
@@ -24,17 +22,19 @@
guidetolerance="10"
inkscape:pageopacity="0"
inkscape:pageshadow="2"
- inkscape:window-width="1184"
- inkscape:window-height="1822"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
id="namedview274"
showgrid="false"
- inkscape:zoom="1.2685914"
- inkscape:cx="289.93958"
- inkscape:cy="509.84194"
- inkscape:window-x="0"
- inkscape:window-y="19"
- inkscape:window-maximized="0"
- inkscape:current-layer="g3272" />
+ inkscape:zoom="0.89702958"
+ inkscape:cx="246.07409"
+ inkscape:cy="416.76022"
+ inkscape:window-x="1072"
+ inkscape:window-y="-8"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="layer1"
+ inkscape:document-rotation="0"
+ inkscape:snap-grids="false" />
<defs
id="defs3">
<linearGradient
@@ -549,347 +549,336 @@
</g>
</switch>
<g
- transform="matrix(0.89980358,0,0,0.89980358,45.57817,-2.8793563)"
+ transform="matrix(0.89980358,0,0,0.89980358,57.57817,-2.8793563)"
id="g4009">
<text
x="325.02054"
y="107.5126"
id="text3212"
xml:space="preserve"
- style="font-size:43.11383057px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
- sodipodi:linespacing="125%"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:43.1138px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
transform="scale(1.193782,0.83767389)"><tspan
x="325.02054"
y="107.5126"
- id="tspan3214">CHEATSHEET</tspan></text>
+ id="tspan3214"
+ style="font-family:monospace">CHEATSHEET</tspan></text>
<text
x="386.51117"
y="58.178116"
transform="scale(1.0054999,0.99453018)"
id="text3212-1"
xml:space="preserve"
- style="font-size:42.11373901px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:42.1137px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="386.51117"
y="58.178116"
- id="tspan3214-7">PATCH SUBMIT</tspan></text>
+ id="tspan3214-7"
+ style="font-family:monospace">PATCH SUBMIT</tspan></text>
</g>
<rect
- width="714.94495"
- height="88.618027"
- rx="20.780111"
- ry="15.96909"
- x="14.574773"
- y="7.0045133"
+ width="759.50977"
+ height="88.591248"
+ rx="22.075403"
+ ry="15.964265"
+ x="14.588161"
+ y="7.0179014"
id="rect3239"
- style="fill:none;stroke:#00233b;stroke-width:0.87678075;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:0.903557;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<rect
- width="713.28113"
- height="887.29156"
- rx="17.656931"
- ry="17.280584"
- x="15.406689"
- y="104.73515"
+ width="757.84167"
+ height="887.2605"
+ rx="18.760006"
+ ry="17.27998"
+ x="15.422211"
+ y="104.75068"
id="rect3239-0"
- style="fill:none;stroke:#00233b;stroke-width:1.00973284;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:1.04078;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<rect
- width="694.94904"
- height="381.31"
- rx="9.4761629"
- ry="9.0904856"
- x="24.336016"
- y="601.75836"
+ width="732.82446"
+ height="381.28253"
+ rx="9.9926233"
+ ry="9.0898304"
+ x="24.349754"
+ y="601.77209"
id="rect3239-0-9-4"
- style="fill:none;stroke:#00233b;stroke-width:1.02322531;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:1.0507;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<path
- d="m 386.3921,327.23442 323.14298,0"
+ d="M 422.0654,327.23442 H 709.53508"
id="path4088"
- style="fill:none;stroke:#00233b;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="fill:none;stroke:#00233b;stroke-width:0.943189px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
inkscape:connector-curvature="0" />
<text
- x="396.18015"
+ x="428.18015"
y="314.45731"
id="text4090"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
- x="396.18015"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+ x="428.18015"
y="314.45731"
id="tspan4092"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Patch Pre-Checks</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Patch Pre-Checks</tspan></text>
<text
x="43.44949"
y="147.32129"
id="text4090-4"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="43.44949"
y="147.32129"
id="tspan4092-3"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Commit Pre-Checks</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Commit Pre-Checks</tspan></text>
<text
- x="397.1235"
+ x="429.1235"
y="144.8549"
id="text4090-4-3"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
- x="397.1235"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+ x="429.1235"
y="144.8549"
id="tspan4092-3-3"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Bugfix?</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Bugfix?</tspan></text>
<text
x="41.215897"
y="634.38617"
id="text4090-1"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="41.215897"
y="634.38617"
id="tspan4092-38"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Git send-email </tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Git send-email </tspan></text>
<path
d="m 31.232443,642.80575 376.113467,0"
id="path4088-7"
style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
inkscape:connector-curvature="0" />
<rect
- width="342.13785"
- height="230.74609"
- rx="10.411126"
- ry="10.411126"
- x="25.418407"
- y="114.92036"
+ width="376.65033"
+ height="230.70007"
+ rx="11.461329"
+ ry="10.40905"
+ x="25.441414"
+ y="114.94337"
id="rect3239-0-9-4-2"
- style="fill:none;stroke:#00233b;stroke-width:0.93674862;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:0.982762;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<text
x="43.44949"
y="385.8045"
id="text4090-86"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="43.44949"
y="385.8045"
id="tspan4092-5"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Compile Pre-Checks</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Compile Pre-Checks</tspan></text>
<g
- transform="translate(352.00486,-348.25973)"
+ transform="matrix(1.0077634,0,0,1,384.57109,-348.25973)"
id="g3295">
<text
x="43.87738"
y="568.03088"
id="text4090-8-14"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="43.87738"
y="568.03088"
id="tspan4289"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Include warning/error</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Include warning/error</tspan></text>
<text
x="43.87738"
y="537.71906"
id="text4090-8-14-4"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="43.87738"
y="537.71906"
id="tspan4289-1"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Fixes: line</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Fixes: line</tspan></text>
<text
x="43.87738"
y="598.9939"
id="text4090-8-14-0"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="43.87738"
y="598.9939"
id="tspan4289-2"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ How to reproduce</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ How to reproduce</tspan></text>
</g>
<g
- transform="translate(-2.6258125,-26.708615)"
+ transform="matrix(0.88614399,0,0,1.0199334,-5.7864591,-38.84504)"
id="g4115">
<g
id="g3272">
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-1"
y="454.36987"
x="49.093246"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-7"
y="454.36987"
x="49.093246">+ build gcc icc clang </tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
+ xml:space="preserve"
+ id="text581"
+ y="454.36987"
+ x="49.093246" />
+ <text
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-2"
y="516.59979"
x="49.093246"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-79"
y="516.59979"
- x="49.093246">+ make test doc </tspan></text>
+ x="49.093246">+ meson -Denable_docs=true</tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-2-0-0"
y="544.71033"
x="49.093246"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-79-9-0"
y="544.71033"
- x="49.093246">+ make examples</tspan></text>
+ x="49.093246">+ meson -Dexamples=all</tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-2-0-07"
y="576.83533"
x="49.093246"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-79-9-3"
y="576.83533"
- x="49.093246">+ make shared-lib</tspan></text>
+ x="49.093246"
+ transform="matrix(1.0305467,0,0,1,-1.5447426,0)">+ meson -Ddefault_library=shared</tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-2-0-07-4"
y="604.88947"
x="49.093246"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-79-9-3-9"
y="604.88947"
x="49.093246">+ library ABI version</tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-2-9"
y="486.56659"
x="49.093246"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-79-3"
y="486.56659"
x="49.093246">+ build 32 and 64 bits</tspan></text>
</g>
</g>
<text
- x="74.388756"
- y="914.65686"
+ x="72.598656"
+ y="937.21002"
id="text4090-8-1-8-65-9"
xml:space="preserve"
- style="font-size:19px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.2959px;line-height:0%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.02466"
+ transform="scale(1.0246575,0.97593587)"><tspan
sodipodi:role="line"
id="tspan3268"
- x="74.388756"
- y="914.65686">git send-email *.patch --annotate --to <maintainer></tspan><tspan
+ x="72.598656"
+ y="937.21002"
+ style="font-size:19.4685px;line-height:1.25;font-family:monospace;stroke-width:1.02466">git send-email *.patch --annotate --to <maintainer></tspan><tspan
sodipodi:role="line"
id="tspan3272"
- x="74.388756"
- y="938.40686"> --cc dev@dpdk.org [ --cc other@participants.com</tspan><tspan
+ x="72.598656"
+ y="961.54565"
+ style="font-size:19.4685px;line-height:1.25;font-family:monospace;stroke-width:1.02466"> --cc dev@dpdk.org [ --cc other@participants.com</tspan><tspan
sodipodi:role="line"
- x="74.388756"
- y="962.15686"
- id="tspan3266"> --cover-letter -v[N] --in-reply-to <message ID> ]</tspan></text>
+ x="72.598656"
+ y="985.88129"
+ id="tspan3266"
+ style="font-size:19.4685px;line-height:1.25;font-family:monospace;stroke-width:1.02466"> --cover-letter -v[N] --in-reply-to <message ID> ]</tspan></text>
<text
x="543.47675"
y="1032.3459"
id="text4090-8-7-8-7-6-3-8-2-5"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="543.47675"
y="1032.3459"
id="tspan4092-8-6-3-1-8-4-4-5-3"
- style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">harry.van.haaren@intel.com</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">harry.van.haaren@intel.com</tspan></text>
<rect
- width="678.14105"
- height="87.351799"
- rx="6.7972355"
- ry="6.7972355"
- x="31.865864"
- y="888.44696"
+ width="711.56055"
+ height="87.327599"
+ rx="7.1322103"
+ ry="6.795352"
+ x="31.877964"
+ y="888.45905"
id="rect3239-0-9-4-3"
- style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:1.0242;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<text
x="543.29498"
y="1018.1843"
id="text4090-8-7-8-7-6-3-8-2-5-3"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="543.29498"
y="1018.1843"
id="tspan4092-8-6-3-1-8-4-4-5-3-7"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Suggestions / Updates?</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Suggestions / Updates?</tspan></text>
<g
id="g3268"
transform="translate(0,-6)">
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-1-8"
y="704.07019"
x="41.658669"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-7-6"
y="704.07019"
x="41.658669">+ Patch version ( eg: -v2 ) </tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-1-8-0"
y="736.29175"
x="41.658669"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-7-6-2"
y="736.29175"
x="41.658669">+ Patch version annotations</tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-1-8-6"
y="766.70355"
x="41.911205"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-7-6-1"
y="766.70355"
x="41.911205">+ Send --to maintainer </tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-1-8-6-3"
y="795.30548"
x="41.658669"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-7-6-1-8"
y="795.30548"
x="41.658669">+ Send --cc dev@dpdk.org </tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-1-8-9"
y="675.25287"
x="41.658669"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-7-6-9"
y="675.25287"
x="41.658669">+ Cover letter</tspan></text>
@@ -897,73 +886,70 @@
id="g3303"
transform="translate(1.0962334,-40.034939)">
<text
- sodipodi:linespacing="125%"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-1-8-65"
y="868.70337"
x="41.572586"><tspan
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-7-6-10"
y="868.70337"
x="41.572586">+ Send --in-reply-to <message ID><tspan
- style="font-size:20px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:20px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan3184" /></tspan></text>
<text
- sodipodi:linespacing="125%"
- style="font-size:25.6917057px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ style="font-style:normal;font-weight:normal;font-size:25.6917px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
xml:space="preserve"
id="text4090-8-1-8-9-1"
y="855.79816"
x="460.18405"><tspan
- style="font-size:11.56126785px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:11.5613px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start"
id="tspan4092-8-7-6-9-7"
y="855.79816"
x="460.18405">****</tspan></text>
</g>
</g>
<text
- x="685.67828"
+ x="697.67828"
y="76.55056"
id="text4090-8-1-8-9-1-9"
xml:space="preserve"
- style="font-size:20.20989037px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
- x="685.67828"
+ style="font-style:normal;font-weight:normal;font-size:20.2099px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+ x="697.67828"
y="76.55056"
id="tspan4092-8-7-6-9-7-4"
- style="font-size:9.09445095px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">v1.0</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:9.09445px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">v2.0</tspan></text>
<rect
- width="342.3053"
- height="155.54948"
- rx="9.2344503"
- ry="9.2344503"
- x="377.58942"
- y="114.55766"
+ width="347.40179"
+ height="155.50351"
+ rx="9.3719397"
+ ry="9.2317209"
+ x="412.60239"
+ y="114.58065"
id="rect3239-0-9-4-2-1"
- style="fill:none;stroke:#00233b;stroke-width:0.76930124;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:0.774892;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<rect
- width="342.12564"
- height="236.79482"
- rx="10.647112"
- ry="9.584527"
- x="25.642178"
- y="356.86249"
+ width="377.75555"
+ height="234.52185"
+ rx="11.755931"
+ ry="9.4925261"
+ x="25.663876"
+ y="356.88416"
id="rect3239-0-9-4-2-0"
- style="fill:none;stroke:#00233b;stroke-width:0.9489302;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:0.99232;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<rect
- width="341.98428"
- height="312.73181"
- rx="8.5358429"
- ry="8.5358429"
- x="377.96762"
- y="280.45331"
+ width="343.53604"
+ height="312.67508"
+ rx="8.5745735"
+ ry="8.5342941"
+ x="414.29037"
+ y="280.48166"
id="rect3239-0-9-4-2-1-9"
- style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:1.00217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<path
- d="m 387.02742,157.3408 323.14298,0"
+ d="M 419.35634,157.3408 H 710.1704"
id="path4088-8"
- style="fill:none;stroke:#00233b;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="fill:none;stroke:#00233b;stroke-width:0.94866px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
d="m 36.504486,397.33869 323.142974,0"
@@ -971,9 +957,9 @@
style="fill:none;stroke:#00233b;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
- d="m 35.494337,156.92238 323.142983,0"
+ d="M 35.494337,156.92238 H 372.01481"
id="path4088-4"
- style="fill:none;stroke:#00233b;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ style="fill:none;stroke:#00233b;stroke-width:1.02049px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
inkscape:connector-curvature="0" />
<g
transform="translate(1.0962334,-30.749225)"
@@ -983,45 +969,41 @@
y="214.1572"
id="text4090-8-11"
xml:space="preserve"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="45.371201"
y="214.1572"
id="tspan4092-8-52"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Signed-off-by: </tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Signed-off-by: </tspan></text>
<text
x="45.371201"
y="243.81795"
id="text4090-8-7-8"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="45.371201"
y="243.81795"
id="tspan4092-8-6-3"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Suggested-by:</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Suggested-by:</tspan></text>
<text
x="45.371201"
y="273.90939"
id="text4090-8-7-8-7"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="45.371201"
y="273.90939"
id="tspan4092-8-6-3-1"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Reported-by:</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Reported-by:</tspan></text>
<text
x="45.371201"
y="304.00082"
id="text4090-8-7-8-7-6"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="45.371201"
y="304.00082"
id="tspan4092-8-6-3-1-8"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Tested-by:</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Tested-by:</tspan></text>
<g
id="g3297"
transform="translate(1.1147904,-7.2461378)">
@@ -1030,110 +1012,102 @@
y="368.8187"
id="text4090-8-7-8-7-6-3"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="45.371201"
y="368.8187"
id="tspan4092-8-6-3-1-8-4"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Previous Acks</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Previous Acks</tspan></text>
<text
x="235.24362"
y="360.3028"
id="text4090-8-1-8-9-1-4"
xml:space="preserve"
- style="font-size:25.6917057px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:25.6917px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="235.24362"
y="360.3028"
id="tspan4092-8-7-6-9-7-0"
- style="font-size:11.56126785px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:11.5613px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
</g>
<text
x="45.371201"
y="334.52298"
id="text4090-8-7-8-7-6-3-4"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="45.371201"
y="334.52298"
id="tspan4092-8-6-3-1-8-4-0"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Commit message</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Commit message</tspan></text>
</g>
<rect
width="295.87207"
height="164.50136"
rx="7.3848925"
ry="4.489974"
- x="414.80502"
+ x="444.80502"
y="611.47064"
id="rect3239-0-9-4-2-1-9-9"
- style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<text
- x="439.4429"
+ x="469.4429"
y="638.35608"
id="text4090-1-4"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
- x="439.4429"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+ x="469.4429"
y="638.35608"
id="tspan4092-38-8"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Mailing List</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Mailing List</tspan></text>
<text
- x="431.55353"
+ x="461.55353"
y="675.59857"
id="text4090-8-5-6-9-4-6-6-8"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
- x="431.55353"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+ x="461.55353"
y="675.59857"
id="tspan4092-8-5-5-3-4-0-6-2"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Acked-by:</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Acked-by:</tspan></text>
<text
- x="431.39734"
+ x="461.39734"
y="734.18231"
id="text4090-8-5-6-9-4-6-6-8-5"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
- x="431.39734"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+ x="461.39734"
y="734.18231"
id="tspan4092-8-5-5-3-4-0-6-2-1"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Reviewed-by:</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Reviewed-by:</tspan></text>
<text
- x="450.8428"
+ x="480.8428"
y="766.5578"
id="text4090-8-5-6-9-4-6-6-8-7"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
- x="450.8428"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+ x="480.8428"
y="766.5578"
id="tspan4092-8-5-5-3-4-0-6-2-11"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">Nack (refuse patch)</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">Nack (refuse patch)</tspan></text>
<path
- d="m 426.99385,647.80575 272.72607,0"
+ d="M 456.99385,647.80575 H 729.71992"
id="path4088-7-5"
- style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
- d="m 424.7332,742.35699 272.72607,0"
+ d="M 454.7332,742.35699 H 727.45927"
id="path4088-7-5-2"
- style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ style="fill:none;stroke:#00233b;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
<text
- x="431.39734"
+ x="461.39734"
y="704.78278"
id="text4090-8-5-6-9-4-6-6-8-5-1"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
- x="431.39734"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+ x="461.39734"
y="704.78278"
id="tspan4092-8-5-5-3-4-0-6-2-1-7"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Tested-by:</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Tested-by:</tspan></text>
<g
transform="translate(1.0962334,-2.7492248)"
id="g3613">
@@ -1142,22 +1116,21 @@
y="1007.5879"
id="text4090-8-7-8-7-6-3-8"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="43.146141"
y="1007.5879"
id="tspan4092-8-6-3-1-8-4-4"
- style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">Previous Acks only when fixing typos, rebased, or checkpatch issues.</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">Previous Acks only when fixing typos, rebased, or checkpatch issues.</tspan></text>
<text
x="30.942892"
y="1011.3757"
id="text4090-8-7-8-7-6-3-8-4-1"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="30.942892"
y="1011.3757"
id="tspan4092-8-6-3-1-8-4-4-55-7"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
</g>
<g
transform="translate(1.0962334,-2.7492248)"
@@ -1167,35 +1140,34 @@
y="1020.4383"
id="text4090-8-7-8-7-6-3-8-4"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="42.176418"
y="1020.4383"
id="tspan4092-8-6-3-1-8-4-4-55"
- style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">The version.map function names must be in alphabetical order.</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">The version.map function names must be in alphabetical order.</tspan></text>
<text
x="30.942892"
y="1024.2014"
id="text4090-8-7-8-7-6-3-8-4-1-5"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="30.942892"
y="1024.2014"
id="tspan4092-8-6-3-1-8-4-4-55-7-2"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
<text
x="25.247679"
y="1024.2014"
id="text4090-8-7-8-7-6-3-8-4-1-5-6"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="25.247679"
y="1024.2014"
id="tspan4092-8-6-3-1-8-4-4-55-7-2-8"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
</g>
<g
- transform="translate(1.0962334,-30.749225)"
+ transform="matrix(1.0211743,0,0,1,25.427515,-30.749225)"
id="g3275">
<g
id="g3341">
@@ -1204,67 +1176,61 @@
y="390.17807"
id="text4090-8"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="394.78601"
y="390.17807"
id="tspan4092-8"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Rebase to git </tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Rebase to git </tspan></text>
<text
x="394.78601"
y="420.24835"
id="text4090-8-5"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="394.78601"
y="420.24835"
id="tspan4092-8-5"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Checkpatch </tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Checkpatch </tspan></text>
<text
x="394.78601"
y="450.53394"
id="text4090-8-5-6"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="394.78601"
y="450.53394"
id="tspan4092-8-5-5"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ ABI breakage </tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ ABI breakage </tspan></text>
<text
x="394.78601"
y="513.13031"
id="text4090-8-5-6-9-4"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="394.78601"
y="513.13031"
id="tspan4092-8-5-5-3-4"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Maintainers file</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Maintainers file</tspan></text>
<text
x="394.78601"
y="573.48621"
id="text4090-8-5-6-9-4-6"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="394.78601"
y="573.48621"
id="tspan4092-8-5-5-3-4-0"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Release notes</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Release notes</tspan></text>
<text
x="395.79617"
y="603.98718"
id="text4090-8-5-6-9-4-6-6"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="395.79617"
y="603.98718"
id="tspan4092-8-5-5-3-4-0-6"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Documentation</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Documentation</tspan></text>
<g
transform="translate(0,-0.83470152)"
id="g3334">
@@ -1276,24 +1242,22 @@
y="468.01297"
id="text4090-8-1-8-9-1-4-1"
xml:space="preserve"
- style="font-size:25.6917057px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:25.6917px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="660.46729"
y="468.01297"
id="tspan4092-8-7-6-9-7-0-7"
- style="font-size:11.56126785px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">**</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:11.5613px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">**</tspan></text>
</g>
<text
x="394.78601"
y="483.59955"
id="text4090-8-5-6-9"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="394.78601"
y="483.59955"
id="tspan4092-8-5-5-3"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Update version.map</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Update version.map</tspan></text>
</g>
<g
id="g3428"
@@ -1303,12 +1267,11 @@
y="541.38928"
id="text4090-8-5-6-9-4-6-1"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="394.78601"
y="541.38928"
id="tspan4092-8-5-5-3-4-0-7"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+ Doxygen</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+ Doxygen</tspan></text>
<g
transform="translate(-119.92979,57.949844)"
id="g3267-9">
@@ -1317,28 +1280,26 @@
y="473.13675"
id="text4090-8-1-8-9-1-4-1-4"
xml:space="preserve"
- style="font-size:25.6917057px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-weight:normal;font-size:25.6917px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="628.93628"
y="473.13675"
id="tspan4092-8-7-6-9-7-0-7-8"
- style="font-size:11.56126785px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">***</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:11.5613px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">***</tspan></text>
</g>
</g>
</g>
</g>
<text
- x="840.1828"
- y="234.34692"
- transform="matrix(0.70710678,0.70710678,-0.70710678,0.70710678,0,0)"
+ x="861.39557"
+ y="213.1337"
+ transform="rotate(45)"
id="text4090-8-5-6-9-4-6-6-8-7-4"
xml:space="preserve"
- style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
- sodipodi:linespacing="125%"><tspan
- x="840.1828"
- y="234.34692"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"><tspan
+ x="861.39557"
+ y="213.1337"
id="tspan4092-8-5-5-3-4-0-6-2-11-0"
- style="font-size:21px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">+</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:21px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">+</tspan></text>
<g
transform="translate(1.0962334,-2.7492248)"
id="g3595">
@@ -1347,42 +1308,41 @@
y="1037.0271"
id="text4090-8-7-8-7-6-3-8-4-1-2"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="30.942892"
y="1037.0271"
id="tspan4092-8-6-3-1-8-4-4-55-7-3"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
<text
x="25.247679"
y="1037.0271"
id="text4090-8-7-8-7-6-3-8-4-1-2-5"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="25.247679"
y="1037.0271"
id="tspan4092-8-6-3-1-8-4-4-55-7-3-7"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
<text
x="19.552465"
y="1037.0271"
id="text4090-8-7-8-7-6-3-8-4-1-2-7"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="19.552465"
y="1037.0271"
id="tspan4092-8-6-3-1-8-4-4-55-7-3-9"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
<text
x="42.830166"
y="1033.2393"
id="text4090-8-7-8-7-6-3-8-4-8"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="42.830166"
y="1033.2393"
id="tspan4092-8-6-3-1-8-4-4-55-2"
- style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">New header files must get a new page in the API docs.</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">New header files must get a new page in the API docs.</tspan></text>
</g>
<g
transform="translate(1.0962334,-2.7492248)"
@@ -1392,52 +1352,51 @@
y="1046.0962"
id="text4090-8-7-8-7-6-3-8-2"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="42.212418"
y="1046.0962"
id="tspan4092-8-6-3-1-8-4-4-5"
- style="font-size:11px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">Available from patchwork, or email header. Reply to Cover letters.</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:11px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">Available from patchwork, or email header. Reply to Cover letters.</tspan></text>
<text
x="31.140535"
y="1049.8527"
id="text4090-8-7-8-7-6-3-8-4-1-2-2"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="31.140535"
y="1049.8527"
id="tspan4092-8-6-3-1-8-4-4-55-7-3-3"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
<text
x="25.445322"
y="1049.8527"
id="text4090-8-7-8-7-6-3-8-4-1-2-5-2"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="25.445322"
y="1049.8527"
id="tspan4092-8-6-3-1-8-4-4-55-7-3-7-2"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
<text
x="19.750109"
y="1049.8527"
id="text4090-8-7-8-7-6-3-8-4-1-2-7-1"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="19.750109"
y="1049.8527"
id="tspan4092-8-6-3-1-8-4-4-55-7-3-9-6"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
<text
x="14.016749"
y="1049.8527"
id="text4090-8-7-8-7-6-3-8-4-1-2-7-1-8"
xml:space="preserve"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace Bold"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:0%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
x="14.016749"
y="1049.8527"
id="tspan4092-8-6-3-1-8-4-4-55-7-3-9-6-5"
- style="font-size:13px;font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace Bold">*</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:300;font-stretch:normal;font-size:13px;line-height:125%;font-family:monospace;-inkscape-font-specification:'Monospace Bold';text-align:start;writing-mode:lr-tb;text-anchor:start">*</tspan></text>
</g>
<rect
width="196.44218"
@@ -1449,36 +1408,35 @@
id="rect3239-0-9-4-2-1-9-9-7"
style="fill:none;stroke:#00233b;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
<rect
- width="678.43036"
- height="43.497677"
- rx="7.8557949"
- ry="6.7630997"
- x="31.274473"
- y="836.69745"
+ width="710.73767"
+ height="43.476074"
+ rx="8.2298937"
+ ry="6.7597408"
+ x="31.285275"
+ y="836.70825"
id="rect3239-0-9-4-3-6"
- style="fill:none;stroke:#00233b;stroke-width:0.92794865;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ style="fill:none;stroke:#00233b;stroke-width:0.949551;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<text
x="73.804535"
y="864.28137"
id="text4090-8-1-8-65-9-1"
xml:space="preserve"
- style="font-size:19px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:0%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
sodipodi:role="line"
x="73.804535"
y="864.28137"
- id="tspan3266-8">git format-patch -[N]</tspan></text>
+ id="tspan3266-8"
+ style="font-size:19px;line-height:1.25;font-family:monospace">git format-patch -[N]</tspan></text>
<text
x="342.70221"
y="862.83478"
id="text4090-8-1-8-65-9-1-7"
xml:space="preserve"
- style="font-size:19px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Monospace;-inkscape-font-specification:Monospace"
- sodipodi:linespacing="125%"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:0%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"><tspan
sodipodi:role="line"
x="342.70221"
y="862.83478"
id="tspan3266-8-2"
- style="font-size:14px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Monospace;-inkscape-font-specification:Monospace">// creates .patch files for final review</tspan></text>
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:14px;line-height:125%;font-family:monospace;-inkscape-font-specification:Monospace;text-align:start;writing-mode:lr-tb;text-anchor:start">// creates .patch files for final review</tspan></text>
</g>
</svg>
--
2.22.0
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v7 12/14] doc: remove references to make from contributing guide
@ 2020-10-21 8:17 9% ` Ciara Power
2020-10-21 8:17 2% ` [dpdk-dev] [PATCH v7 14/14] doc: update patch cheatsheet to use meson Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2020-10-21 8:17 UTC (permalink / raw)
To: dev; +Cc: Ciara Power, Louise Kilheeney
Make is no longer supported for compiling DPDK, so the remaining
references to it are removed from the documentation.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Louise Kilheeney <louise.kilheeney@intel.com>
---
v7:
- Updated exec_env and arch lists.
- Updated documentation build instruction.
v5:
- Removed reference to test-build.sh used for Make.
- Added point back in for handling specific code, reworded as
necessary.
- Added library statistics section, removing only the mention of
CONFIG options.
---
doc/guides/contributing/design.rst | 37 ++++++++---------------
doc/guides/contributing/documentation.rst | 31 ++++---------------
doc/guides/contributing/patches.rst | 6 ++--
3 files changed, 21 insertions(+), 53 deletions(-)
diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index 5fe7f63942..cbd0c3dd8e 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -21,7 +21,7 @@ A file located in a subdir of "linux" is specific to this execution environment.
When absolutely necessary, there are several ways to handle specific code:
-* Use a ``#ifdef`` with the CONFIG option in the C code.
+* Use a ``#ifdef`` with a build definition macro in the C code.
This can be done when the differences are small and they can be embedded in the same C file:
.. code-block:: c
@@ -32,30 +32,25 @@ When absolutely necessary, there are several ways to handle specific code:
titi();
#endif
-* Use the CONFIG option in the Makefile. This is done when the differences are more significant.
+* Use build definition macros and conditions in the Meson build file. This is done when the differences are more significant.
In this case, the code is split into two separate files that are architecture or environment specific.
This should only apply inside the EAL library.
-.. note::
-
- As in the linux kernel, the ``CONFIG_`` prefix is not used in C code.
- This is only needed in Makefiles or shell scripts.
-
Per Architecture Sources
~~~~~~~~~~~~~~~~~~~~~~~~
-The following config options can be used:
+The following macro options can be used:
-* ``CONFIG_RTE_ARCH`` is a string that contains the name of the architecture.
-* ``CONFIG_RTE_ARCH_I686``, ``CONFIG_RTE_ARCH_X86_64``, ``CONFIG_RTE_ARCH_X86_64_32`` or ``CONFIG_RTE_ARCH_PPC_64`` are defined only if we are building for those architectures.
+* ``RTE_ARCH`` is a string that contains the name of the architecture.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_64_32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only if we are building for those architectures.
Per Execution Environment Sources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The following config options can be used:
+The following macro options can be used:
-* ``CONFIG_RTE_EXEC_ENV`` is a string that contains the name of the executive environment.
-* ``CONFIG_RTE_EXEC_ENV_FREEBSD`` or ``CONFIG_RTE_EXEC_ENV_LINUX`` are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV`` is a string that contains the name of the executive environment.
+* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only if we are building for this execution environment.
Mbuf features
-------------
@@ -87,22 +82,14 @@ requirements for preventing ABI changes when implementing statistics.
Mechanism to allow the application to turn library statistics on and off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each library that maintains statistics counters should provide a single build
-time flag that decides whether the statistics counter collection is enabled or
-not. This flag should be exposed as a variable within the DPDK configuration
-file. When this flag is set, all the counters supported by current library are
+Having runtime support for enabling/disabling library statistics is recommended,
+as build-time options should be avoided. However, if build-time options are used,
+for example as in the table library, the options can be set using c_args.
+When this flag is set, all the counters supported by current library are
collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library
are collected for any instance of any object type provided by the library:
-.. code-block:: console
-
- # DPDK file config/common_linux, config/common_freebsd, etc.
- CONFIG_RTE_<LIBRARY_NAME>_STATS_COLLECT=y/n
-
-The default value for this DPDK configuration file variable (either "yes" or
-"no") is decided by each library.
-
Prevention of ABI changes due to library statistics support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index be985e6cf8..a4e6be6aca 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -218,25 +218,14 @@ Build commands
~~~~~~~~~~~~~~
The documentation is built using the standard DPDK build system.
-Some examples are shown below:
-* Generate all the documentation targets::
+To build the documentation::
- make doc
+ ninja -C build doc
-* Generate the Doxygen API documentation in Html::
+See :doc:`../linux_gsg/build_dpdk` for more detail on compiling DPDK with meson.
- make doc-api-html
-
-* Generate the guides documentation in Html::
-
- make doc-guides-html
-
-* Generate the guides documentation in Pdf::
-
- make doc-guides-pdf
-
-The output of these commands is generated in the ``build`` directory::
+The output is generated in the ``build`` directory::
build/doc
|-- html
@@ -251,10 +240,6 @@ The output of these commands is generated in the ``build`` directory::
Make sure to fix any Sphinx or Doxygen warnings when adding or updating documentation.
-The documentation output files can be removed as follows::
-
- make doc-clean
-
Document Guidelines
-------------------
@@ -304,7 +289,7 @@ Line Length
Long literal command lines can be shown wrapped with backslashes. For
example::
- testpmd -l 2-3 -n 4 \
+ dpdk-testpmd -l 2-3 -n 4 \
--vdev=virtio_user0,path=/dev/vhost-net,queues=2,queue_size=1024 \
-- -i --tx-offloads=0x0000002c --enable-lro --txq=2 --rxq=2 \
--txd=1024 --rxd=1024
@@ -456,7 +441,7 @@ Code and Literal block sections
For long literal lines that exceed that limit try to wrap the text at sensible locations.
For example a long command line could be documented like this and still work if copied directly from the docs::
- build/app/testpmd -l 0-2 -n3 --vdev=net_pcap0,iface=eth0 \
+ ./<build_dir>/app/dpdk-testpmd -l 0-2 -n3 --vdev=net_pcap0,iface=eth0 \
--vdev=net_pcap1,iface=eth1 \
-- -i --nb-cores=2 --nb-ports=2 \
--total-num-mbufs=2048
@@ -739,9 +724,5 @@ The following are some guidelines for use of Doxygen in the DPDK API documentati
/** Array of physical page addresses for the mempool buffer. */
phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
-* Check for Doxygen warnings in new code by checking the API documentation build::
-
- make doc-api-html >/dev/null
-
* Read the rendered section of the documentation that you have added for correctness, clarity and consistency
with the surrounding text.
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 9ff60944c3..9fa5a79c85 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -486,9 +486,9 @@ By default, ABI compatibility checks are disabled.
To enable them, a reference version must be selected via the environment
variable ``DPDK_ABI_REF_VERSION``.
-The ``devtools/test-build.sh`` and ``devtools/test-meson-builds.sh`` scripts
-then build this reference version in a temporary directory and store the
-results in a subfolder of the current working directory.
+The ``devtools/test-meson-builds.sh`` script then builds this reference version
+in a temporary directory and stores the results in a subfolder of the current
+working directory.
The environment variable ``DPDK_ABI_REF_DIR`` can be set so that the results go
to a different location.
--
2.22.0
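
For reference, the renamed build macros above are consumed directly in C
sources. A minimal sketch (the function and its output strings are
hypothetical; only the RTE_ARCH_* and RTE_EXEC_ENV_* macro names come from
the patch):

    #include <stdio.h>

    /* These macros are defined by the Meson build system; as the patch
     * notes, the former CONFIG_ prefix is gone from C code entirely. */
    static void
    print_build_target(void)
    {
    #ifdef RTE_ARCH_X86_64
            printf("built for x86_64\n");
    #elif defined(RTE_ARCH_ARM64)
            printf("built for arm64\n");
    #else
            printf("built for another architecture\n");
    #endif

    #ifdef RTE_EXEC_ENV_LINUX
            printf("execution environment: Linux\n");
    #elif defined(RTE_EXEC_ENV_FREEBSD)
            printf("execution environment: FreeBSD\n");
    #elif defined(RTE_EXEC_ENV_WINDOWS)
            printf("execution environment: Windows\n");
    #endif
    }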
^ permalink raw reply [relevance 9%]
* Re: [dpdk-dev] [PATCH v2 2/2] lpm: hide internal data
2020-10-21 7:58 0% ` Thomas Monjalon
@ 2020-10-21 8:15 0% ` Ruifeng Wang
0 siblings, 0 replies; 200+ results
From: Ruifeng Wang @ 2020-10-21 8:15 UTC (permalink / raw)
To: thomas
Cc: Bruce Richardson, Vladimir Medvedkin, dev, Honnappa Nagarahalli,
nd, David Marchand, Kevin Traynor, nd
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, October 21, 2020 3:59 PM
> To: Ruifeng Wang <Ruifeng.Wang@arm.com>
> Cc: Bruce Richardson <bruce.richardson@intel.com>; Vladimir Medvedkin
> <vladimir.medvedkin@intel.com>; dev@dpdk.org; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>; David Marchand
> <david.marchand@redhat.com>; Kevin Traynor <ktraynor@redhat.com>
> Subject: Re: [PATCH v2 2/2] lpm: hide internal data
>
> 21/10/2020 05:02, Ruifeng Wang:
> > --- a/doc/guides/rel_notes/release_20_11.rst
> > +++ b/doc/guides/rel_notes/release_20_11.rst
> > @@ -602,6 +602,8 @@ ABI Changes
> >
> > * sched: Added new fields to ``struct rte_sched_subport_port_params``.
> >
> > +* lpm: Removed fields other than ``tbl24`` and ``tbl8`` from the struct
> ``rte_lpm``.
> > + The removed fields were made internal.
> >
> > Known Issues
> > ------------
>
> Can be changed on apply, but please note when adding a new paragraph,
> that you should add a new blank line, keeping 2 blank lines before the next
> section as it was before your patch.
>
> PS: having this kind of minor comment means you are a regular contributor,
> so we expect perfect patches :)
>
Sorry for the extra burden added to the merge. I will pay more attention to formatting and other details.
Thanks.
> Thanks
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 2/2] lpm: hide internal data
2020-10-21 3:02 5% ` [dpdk-dev] [PATCH v2 2/2] lpm: hide internal data Ruifeng Wang
@ 2020-10-21 7:58 0% ` Thomas Monjalon
2020-10-21 8:15 0% ` Ruifeng Wang
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-10-21 7:58 UTC (permalink / raw)
To: Ruifeng Wang
Cc: Bruce Richardson, Vladimir Medvedkin, dev, honnappa.nagarahalli,
nd, David Marchand, Kevin Traynor
21/10/2020 05:02, Ruifeng Wang:
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -602,6 +602,8 @@ ABI Changes
>
> * sched: Added new fields to ``struct rte_sched_subport_port_params``.
>
> +* lpm: Removed fields other than ``tbl24`` and ``tbl8`` from the struct ``rte_lpm``.
> + The removed fields were made internal.
>
> Known Issues
> ------------
Can be changed on apply, but please note when adding a new paragraph,
that you should add a new blank line, keeping 2 blank lines
before the next section as it was before your patch.
PS: having this kind of minor comment means you are a regular
contributor, so we expect perfect patches :)
Thanks
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 2/2] lpm: hide internal data
@ 2020-10-21 3:02 5% ` Ruifeng Wang
2020-10-21 7:58 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ruifeng Wang @ 2020-10-21 3:02 UTC (permalink / raw)
To: Bruce Richardson, Vladimir Medvedkin
Cc: dev, honnappa.nagarahalli, nd, Ruifeng Wang, David Marchand,
Kevin Traynor, Thomas Monjalon
Fields other than tbl24 and tbl8 in the rte_lpm structure do not
need to be exposed to the user.
Hide these internal fields for better
ABI maintainability.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
v2:
Added release notes.
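
The hiding pattern applied throughout this patch embeds the public struct
inside a private one and recovers the private pointer with container_of().
A minimal sketch of the idiom, with reduced field sets and a hypothetical
to_internal() helper (the patch itself open-codes container_of() at each
call site):

    #include <stdint.h>
    #include <rte_common.h> /* container_of() */

    /* Public type: only the tables needed on the lookup fast path. */
    struct pub_lpm {
            void *tbl8;
    };

    /* Private type embedding the public one; metadata stays hidden. */
    struct priv_lpm {
            struct pub_lpm lpm;     /* exposed part */
            uint32_t max_rules;     /* hidden from the application */
    };

    /* Recover the private struct from a public pointer. */
    static inline struct priv_lpm *
    to_internal(struct pub_lpm *lpm)
    {
            return container_of(lpm, struct priv_lpm, lpm);
    }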
doc/guides/rel_notes/release_20_11.rst | 2 +
lib/librte_lpm/rte_lpm.c | 152 +++++++++++++++----------
lib/librte_lpm/rte_lpm.h | 7 --
3 files changed, 93 insertions(+), 68 deletions(-)
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0d45b5003..3b5034ce5 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -602,6 +602,8 @@ ABI Changes
* sched: Added new fields to ``struct rte_sched_subport_port_params``.
+* lpm: Removed fields other than ``tbl24`` and ``tbl8`` from the struct ``rte_lpm``.
+ The removed fields were made internal.
Known Issues
------------
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 51a0ae578..88d31df6d 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -42,9 +42,17 @@ enum valid_flag {
/** @internal LPM structure. */
struct __rte_lpm {
- /* LPM metadata. */
+ /* Exposed LPM data. */
struct rte_lpm lpm;
+ /* LPM metadata. */
+ char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
+ uint32_t max_rules; /**< Max. balanced rules per lpm. */
+ uint32_t number_tbl8s; /**< Number of tbl8s. */
+ /**< Rule info table. */
+ struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH];
+ struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
+
/* RCU config. */
struct rte_rcu_qsbr *v; /* RCU QSBR variable. */
enum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. */
@@ -104,7 +112,7 @@ depth_to_range(uint8_t depth)
struct rte_lpm *
rte_lpm_find_existing(const char *name)
{
- struct rte_lpm *l = NULL;
+ struct __rte_lpm *l = NULL;
struct rte_tailq_entry *te;
struct rte_lpm_list *lpm_list;
@@ -123,7 +131,7 @@ rte_lpm_find_existing(const char *name)
return NULL;
}
- return l;
+ return &l->lpm;
}
/*
@@ -157,8 +165,8 @@ rte_lpm_create(const char *name, int socket_id,
/* guarantee there's no existing */
TAILQ_FOREACH(te, lpm_list, next) {
- lpm = te->data;
- if (strncmp(name, lpm->name, RTE_LPM_NAMESIZE) == 0)
+ internal_lpm = te->data;
+ if (strncmp(name, internal_lpm->name, RTE_LPM_NAMESIZE) == 0)
break;
}
@@ -193,10 +201,10 @@ rte_lpm_create(const char *name, int socket_id,
}
lpm = &internal_lpm->lpm;
- lpm->rules_tbl = rte_zmalloc_socket(NULL,
+ internal_lpm->rules_tbl = rte_zmalloc_socket(NULL,
(size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id);
- if (lpm->rules_tbl == NULL) {
+ if (internal_lpm->rules_tbl == NULL) {
RTE_LOG(ERR, LPM, "LPM rules_tbl memory allocation failed\n");
rte_free(internal_lpm);
internal_lpm = NULL;
@@ -211,7 +219,7 @@ rte_lpm_create(const char *name, int socket_id,
if (lpm->tbl8 == NULL) {
RTE_LOG(ERR, LPM, "LPM tbl8 memory allocation failed\n");
- rte_free(lpm->rules_tbl);
+ rte_free(internal_lpm->rules_tbl);
rte_free(internal_lpm);
internal_lpm = NULL;
lpm = NULL;
@@ -221,11 +229,11 @@ rte_lpm_create(const char *name, int socket_id,
}
/* Save user arguments. */
- lpm->max_rules = config->max_rules;
- lpm->number_tbl8s = config->number_tbl8s;
- strlcpy(lpm->name, name, sizeof(lpm->name));
+ internal_lpm->max_rules = config->max_rules;
+ internal_lpm->number_tbl8s = config->number_tbl8s;
+ strlcpy(internal_lpm->name, name, sizeof(internal_lpm->name));
- te->data = lpm;
+ te->data = internal_lpm;
TAILQ_INSERT_TAIL(lpm_list, te, next);
@@ -241,7 +249,7 @@ rte_lpm_create(const char *name, int socket_id,
void
rte_lpm_free(struct rte_lpm *lpm)
{
- struct __rte_lpm *internal_lpm;
+ struct __rte_lpm *internal_lpm = NULL;
struct rte_lpm_list *lpm_list;
struct rte_tailq_entry *te;
@@ -255,7 +263,8 @@ rte_lpm_free(struct rte_lpm *lpm)
/* find our tailq entry */
TAILQ_FOREACH(te, lpm_list, next) {
- if (te->data == (void *) lpm)
+ internal_lpm = te->data;
+ if (&internal_lpm->lpm == lpm)
break;
}
if (te != NULL)
@@ -263,11 +272,10 @@ rte_lpm_free(struct rte_lpm *lpm)
rte_mcfg_tailq_write_unlock();
- internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
if (internal_lpm->dq != NULL)
rte_rcu_qsbr_dq_delete(internal_lpm->dq);
rte_free(lpm->tbl8);
- rte_free(lpm->rules_tbl);
+ rte_free(internal_lpm->rules_tbl);
rte_free(internal_lpm);
rte_free(te);
}
@@ -310,11 +318,11 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg)
} else if (cfg->mode == RTE_LPM_QSBR_MODE_DQ) {
/* Init QSBR defer queue. */
snprintf(rcu_dq_name, sizeof(rcu_dq_name),
- "LPM_RCU_%s", lpm->name);
+ "LPM_RCU_%s", internal_lpm->name);
params.name = rcu_dq_name;
params.size = cfg->dq_size;
if (params.size == 0)
- params.size = lpm->number_tbl8s;
+ params.size = internal_lpm->number_tbl8s;
params.trigger_reclaim_limit = cfg->reclaim_thd;
params.max_reclaim_size = cfg->reclaim_max;
if (params.max_reclaim_size == 0)
@@ -352,74 +360,79 @@ static int32_t
rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
- uint32_t rule_gindex, rule_index, last_rule;
+ uint32_t rule_gindex, rule_index, last_rule, first_index;
+ struct __rte_lpm *i_lpm;
int i;
VERIFY_DEPTH(depth);
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
/* Scan through rule group to see if rule already exists. */
- if (lpm->rule_info[depth - 1].used_rules > 0) {
+ if (i_lpm->rule_info[depth - 1].used_rules > 0) {
/* rule_gindex stands for rule group index. */
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
+ rule_gindex = i_lpm->rule_info[depth - 1].first_rule;
/* Initialise rule_index to point to start of rule group. */
rule_index = rule_gindex;
/* Last rule = Last used rule in this rule group. */
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+ last_rule = rule_gindex
+ + i_lpm->rule_info[depth - 1].used_rules;
for (; rule_index < last_rule; rule_index++) {
/* If rule already exists update next hop and return. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked) {
+ if (i_lpm->rules_tbl[rule_index].ip == ip_masked) {
- if (lpm->rules_tbl[rule_index].next_hop
+ if (i_lpm->rules_tbl[rule_index].next_hop
== next_hop)
return -EEXIST;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
+ i_lpm->rules_tbl[rule_index].next_hop
+ = next_hop;
return rule_index;
}
}
- if (rule_index == lpm->max_rules)
+ if (rule_index == i_lpm->max_rules)
return -ENOSPC;
} else {
/* Calculate the position in which the rule will be stored. */
rule_index = 0;
for (i = depth - 1; i > 0; i--) {
- if (lpm->rule_info[i - 1].used_rules > 0) {
- rule_index = lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules;
+ if (i_lpm->rule_info[i - 1].used_rules > 0) {
+ rule_index = i_lpm->rule_info[i - 1].first_rule
+ + i_lpm->rule_info[i - 1].used_rules;
break;
}
}
- if (rule_index == lpm->max_rules)
+ if (rule_index == i_lpm->max_rules)
return -ENOSPC;
- lpm->rule_info[depth - 1].first_rule = rule_index;
+ i_lpm->rule_info[depth - 1].first_rule = rule_index;
}
/* Make room for the new rule in the array. */
for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
- if (lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
+ first_index = i_lpm->rule_info[i - 1].first_rule;
+ if (first_index + i_lpm->rule_info[i - 1].used_rules
+ == i_lpm->max_rules)
return -ENOSPC;
- if (lpm->rule_info[i - 1].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules]
- = lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
- lpm->rule_info[i - 1].first_rule++;
+ if (i_lpm->rule_info[i - 1].used_rules > 0) {
+ i_lpm->rules_tbl[first_index
+ + i_lpm->rule_info[i - 1].used_rules]
+ = i_lpm->rules_tbl[first_index];
+ i_lpm->rule_info[i - 1].first_rule++;
}
}
/* Add the new rule. */
- lpm->rules_tbl[rule_index].ip = ip_masked;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
+ i_lpm->rules_tbl[rule_index].ip = ip_masked;
+ i_lpm->rules_tbl[rule_index].next_hop = next_hop;
/* Increment the used rules counter for this rule group. */
- lpm->rule_info[depth - 1].used_rules++;
+ i_lpm->rule_info[depth - 1].used_rules++;
return rule_index;
}
@@ -432,23 +445,25 @@ static void
rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
{
int i;
+ struct __rte_lpm *i_lpm;
VERIFY_DEPTH(depth);
- lpm->rules_tbl[rule_index] =
- lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
- + lpm->rule_info[depth - 1].used_rules - 1];
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
+ i_lpm->rules_tbl[rule_index] =
+ i_lpm->rules_tbl[i_lpm->rule_info[depth - 1].first_rule
+ + i_lpm->rule_info[depth - 1].used_rules - 1];
for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
- if (lpm->rule_info[i].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
- lpm->rules_tbl[lpm->rule_info[i].first_rule
- + lpm->rule_info[i].used_rules - 1];
- lpm->rule_info[i].first_rule--;
+ if (i_lpm->rule_info[i].used_rules > 0) {
+ i_lpm->rules_tbl[i_lpm->rule_info[i].first_rule - 1] =
+ i_lpm->rules_tbl[i_lpm->rule_info[i].first_rule
+ + i_lpm->rule_info[i].used_rules - 1];
+ i_lpm->rule_info[i].first_rule--;
}
}
- lpm->rule_info[depth - 1].used_rules--;
+ i_lpm->rule_info[depth - 1].used_rules--;
}
/*
@@ -459,16 +474,18 @@ static int32_t
rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
{
uint32_t rule_gindex, last_rule, rule_index;
+ struct __rte_lpm *internal_lpm;
VERIFY_DEPTH(depth);
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+ internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
+ rule_gindex = internal_lpm->rule_info[depth - 1].first_rule;
+ last_rule = rule_gindex + internal_lpm->rule_info[depth - 1].used_rules;
/* Scan used rules at given depth to find rule. */
for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
/* If rule is found return the rule index. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked)
+ if (internal_lpm->rules_tbl[rule_index].ip == ip_masked)
return rule_index;
}
@@ -484,9 +501,11 @@ _tbl8_alloc(struct rte_lpm *lpm)
{
uint32_t group_idx; /* tbl8 group index. */
struct rte_lpm_tbl_entry *tbl8_entry;
+ struct __rte_lpm *i_lpm;
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
- for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
+ for (group_idx = 0; group_idx < i_lpm->number_tbl8s; group_idx++) {
tbl8_entry = &lpm->tbl8[group_idx *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
/* If a free tbl8 group is found clean it and set as VALID. */
@@ -844,6 +863,7 @@ uint32_t *next_hop)
{
uint32_t ip_masked;
int32_t rule_index;
+ struct __rte_lpm *internal_lpm;
/* Check user arguments. */
if ((lpm == NULL) ||
@@ -855,8 +875,9 @@ uint32_t *next_hop)
ip_masked = ip & depth_to_mask(depth);
rule_index = rule_find(lpm, ip_masked, depth);
+ internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
if (rule_index >= 0) {
- *next_hop = lpm->rules_tbl[rule_index].next_hop;
+ *next_hop = internal_lpm->rules_tbl[rule_index].next_hop;
return 1;
}
@@ -897,7 +918,9 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
tbl24_range = depth_to_range(depth);
tbl24_index = (ip_masked >> 8);
struct rte_lpm_tbl_entry zero_tbl24_entry = {0};
+ struct __rte_lpm *i_lpm;
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
/*
* Firstly check the sub_rule_index. A -1 indicates no replacement rule
* and a positive number indicates a sub_rule_index.
@@ -939,7 +962,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
*/
struct rte_lpm_tbl_entry new_tbl24_entry = {
- .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+ .next_hop = i_lpm->rules_tbl[sub_rule_index].next_hop,
.valid = VALID,
.valid_group = 0,
.depth = sub_rule_depth,
@@ -949,7 +972,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
.valid = VALID,
.valid_group = VALID,
.depth = sub_rule_depth,
- .next_hop = lpm->rules_tbl
+ .next_hop = i_lpm->rules_tbl
[sub_rule_index].next_hop,
};
@@ -1048,6 +1071,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
tbl8_range, i;
int32_t tbl8_recycle_index, status = 0;
+ struct __rte_lpm *i_lpm;
/*
* Calculate the index into tbl24 and range. Note: All depths larger
@@ -1061,6 +1085,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
tbl8_range = depth_to_range(depth);
+ i_lpm = container_of(lpm, struct __rte_lpm, lpm);
if (sub_rule_index < 0) {
/*
* Loop through the range of entries on tbl8 for which the
@@ -1076,7 +1101,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
.valid = VALID,
.depth = sub_rule_depth,
.valid_group = lpm->tbl8[tbl8_group_start].valid_group,
- .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+ .next_hop = i_lpm->rules_tbl[sub_rule_index].next_hop,
};
/*
@@ -1188,16 +1213,21 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
void
rte_lpm_delete_all(struct rte_lpm *lpm)
{
+ struct __rte_lpm *internal_lpm;
+
+ internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
/* Zero rule information. */
- memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
+ memset(internal_lpm->rule_info, 0, sizeof(internal_lpm->rule_info));
/* Zero tbl24. */
memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
/* Zero tbl8. */
memset(lpm->tbl8, 0, sizeof(lpm->tbl8[0])
- * RTE_LPM_TBL8_GROUP_NUM_ENTRIES * lpm->number_tbl8s);
+ * RTE_LPM_TBL8_GROUP_NUM_ENTRIES
+ * internal_lpm->number_tbl8s);
/* Delete all rules form the rules table. */
- memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
+ memset(internal_lpm->rules_tbl, 0,
+ sizeof(internal_lpm->rules_tbl[0]) * internal_lpm->max_rules);
}
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 5b3b7b5b5..9a0ac97ab 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
/** @internal LPM structure. */
struct rte_lpm {
- /* LPM metadata. */
- char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
- uint32_t max_rules; /**< Max. balanced rules per lpm. */
- uint32_t number_tbl8s; /**< Number of tbl8s. */
- struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
-
/* LPM Tables. */
struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
__rte_cache_aligned; /**< LPM tbl24 table. */
struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
- struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
};
/** LPM RCU QSBR configuration structure. */
--
2.20.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v2 5/5] doc: change references to blacklist and whitelist
@ 2020-10-20 16:20 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-20 16:20 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Luca Boccassi
There are two areas where the documentation needed updating.
The first is the use of whitelist when describing address
filtering.
The other is the legacy -w whitelist option for PCI,
which is used in many examples.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
doc: replace -w with -a in the documentation
The -w option is deprecated and replaced with -a
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/cryptodevs/dpaa2_sec.rst | 6 ++--
doc/guides/cryptodevs/dpaa_sec.rst | 6 ++--
doc/guides/cryptodevs/qat.rst | 12 ++++----
doc/guides/eventdevs/octeontx2.rst | 20 ++++++-------
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/eal_args.include.rst | 14 +++++-----
doc/guides/linux_gsg/linux_drivers.rst | 4 +--
doc/guides/mempool/octeontx2.rst | 4 +--
doc/guides/nics/bnxt.rst | 12 ++++----
doc/guides/nics/cxgbe.rst | 12 ++++----
doc/guides/nics/dpaa.rst | 6 ++--
doc/guides/nics/dpaa2.rst | 6 ++--
doc/guides/nics/enic.rst | 6 ++--
doc/guides/nics/fail_safe.rst | 16 +++++------
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/i40e.rst | 16 +++++------
doc/guides/nics/ice.rst | 28 +++++++++++++------
doc/guides/nics/ixgbe.rst | 4 +--
doc/guides/nics/mlx4.rst | 16 +++++------
doc/guides/nics/mlx5.rst | 12 ++++----
doc/guides/nics/nfb.rst | 2 +-
doc/guides/nics/octeontx2.rst | 22 +++++++--------
doc/guides/nics/sfc_efx.rst | 2 +-
doc/guides/nics/tap.rst | 2 +-
doc/guides/nics/thunderx.rst | 4 +--
.../prog_guide/env_abstraction_layer.rst | 6 ++--
doc/guides/prog_guide/multi_proc_support.rst | 4 +--
doc/guides/prog_guide/poll_mode_drv.rst | 6 ++--
.../prog_guide/switch_representation.rst | 6 ++--
doc/guides/rel_notes/release_20_11.rst | 5 ++++
doc/guides/sample_app_ug/bbdev_app.rst | 14 +++++-----
.../sample_app_ug/eventdev_pipeline.rst | 2 +-
doc/guides/sample_app_ug/ipsec_secgw.rst | 12 ++++----
doc/guides/sample_app_ug/l3_forward.rst | 6 ++--
.../sample_app_ug/l3_forward_access_ctrl.rst | 2 +-
.../sample_app_ug/l3_forward_power_man.rst | 2 +-
doc/guides/sample_app_ug/vdpa.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 6 ++--
doc/guides/tools/flow-perf.rst | 2 +-
doc/guides/tools/testregex.rst | 2 +-
41 files changed, 166 insertions(+), 149 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa2_sec.rst b/doc/guides/cryptodevs/dpaa2_sec.rst
index 3053636b8295..b50fee76954a 100644
--- a/doc/guides/cryptodevs/dpaa2_sec.rst
+++ b/doc/guides/cryptodevs/dpaa2_sec.rst
@@ -134,10 +134,10 @@ Supported DPAA2 SoCs
* LS2088A/LS2048A
* LS1088A/LS1048A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 SEC device, following commands can be used.
+The DPAA2 SEC device can be blocked with the following:
.. code-block:: console
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index db3c8e918945..38ad45e66d76 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -82,10 +82,10 @@ Supported DPAA SoCs
* LS1046A/LS1026A
* LS1043A/LS1023A
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, the following commands can be used.
.. code-block:: console
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index a0becf689109..d41ee82aff52 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -127,7 +127,7 @@ Limitations
optimisations in the GEN3 device. And if a GCM session is initialised on a
GEN3 device, then attached to an op sent to a GEN1/GEN2 device, it will not be
enqueued to the device and will be marked as failed. The simplest way to
- mitigate this is to use the bdf whitelist to avoid mixing devices of different
+ mitigate this is to use the PCI allowlist to avoid mixing devices of different
generations in the same process if planning to use for GCM.
* The mixed algo feature on GEN2 is not supported by all kernel drivers. Check
the notes under the Available Kernel Drivers table below for specific details.
@@ -264,7 +264,7 @@ adjusted to the number of VFs which the QAT common code will need to handle.
QAT VF may expose two crypto devices, sym and asym, it may happen that the
number of devices will be bigger than MAX_DEVS and the process will show an error
during PMD initialisation. To avoid this problem CONFIG_RTE_CRYPTO_MAX_DEVS may be
- increased or -w, pci-whitelist domain:bus:devid:func option may be used.
+ increased or -a, allow domain:bus:devid:func option may be used.
QAT compression PMD needs intermediate buffers to support Deflate compression
@@ -302,7 +302,7 @@ return 0 (thereby avoiding an MMIO) if the device is congested and number of pac
possible to enqueue is smaller.
To use this feature the user must set the parameter on process start as a device additional parameter::
- -w 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
+ -a 03:01.1,qat_sym_enq_threshold=32,qat_comp_enq_threshold=16
All parameters can be used with the same device regardless of order. Parameters are separated
by comma. When the same parameter is used more than once first occurrence of the parameter
@@ -662,7 +662,7 @@ QAT SYM crypto PMD can be tested by running the test application::
make defconfig
make -j
cd ./build/app
- ./test -l1 -n1 -w <your qat bdf>
+ ./test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_autotest
QAT ASYM crypto PMD can be tested by running the test application::
@@ -670,7 +670,7 @@ QAT ASYM crypto PMD can be tested by running the test application::
make defconfig
make -j
cd ./build/app
- ./test -l1 -n1 -w <your qat bdf>
+ ./test -l1 -n1 -a <your qat bdf>
RTE>>cryptodev_qat_asym_autotest
QAT compression PMD can be tested by running the test application::
@@ -679,7 +679,7 @@ QAT compression PMD can be tested by running the test application::
sed -i 's,\(CONFIG_RTE_COMPRESSDEV_TEST\)=n,\1=y,' build/.config
make -j
cd ./build/app
- ./test -l1 -n1 -w <your qat bdf>
+ ./test -l1 -n1 -a <your qat bdf>
RTE>>compressdev_autotest
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 6502f6415fb4..1c671518a4db 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -66,7 +66,7 @@ Runtime Config Options
upper limit for in-flight events.
For example::
- -w 0002:0e:00.0,xae_cnt=16384
+ -a 0002:0e:00.0,xae_cnt=16384
- ``Force legacy mode``
@@ -74,7 +74,7 @@ Runtime Config Options
single workslot mode in SSO and disable the default dual workslot mode.
For example::
- -w 0002:0e:00.0,single_ws=1
+ -a 0002:0e:00.0,single_ws=1
- ``Event Group QoS support``
@@ -89,7 +89,7 @@ Runtime Config Options
default.
For example::
- -w 0002:0e:00.0,qos=[1-50-50-50]
+ -a 0002:0e:00.0,qos=[1-50-50-50]
- ``Selftest``
@@ -98,7 +98,7 @@ Runtime Config Options
The tests are run once the vdev creation is successfully complete.
For example::
- -w 0002:0e:00.0,selftest=1
+ -a 0002:0e:00.0,selftest=1
- ``TIM disable NPA``
@@ -107,7 +107,7 @@ Runtime Config Options
parameter disables NPA and uses software mempool to manage chunks
For example::
- -w 0002:0e:00.0,tim_disable_npa=1
+ -a 0002:0e:00.0,tim_disable_npa=1
- ``TIM modify chunk slots``
@@ -118,7 +118,7 @@ Runtime Config Options
to SSO. The default value is 255 and the max value is 4095.
For example::
- -w 0002:0e:00.0,tim_chnk_slots=1023
+ -a 0002:0e:00.0,tim_chnk_slots=1023
- ``TIM enable arm/cancel statistics``
@@ -126,7 +126,7 @@ Runtime Config Options
event timer adapter.
For example::
- -w 0002:0e:00.0,tim_stats_ena=1
+ -a 0002:0e:00.0,tim_stats_ena=1
- ``TIM limit max rings reserved``
@@ -136,7 +136,7 @@ Runtime Config Options
rings.
For example::
- -w 0002:0e:00.0,tim_rings_lmt=5
+ -a 0002:0e:00.0,tim_rings_lmt=5
- ``TIM ring control internal parameters``
@@ -146,7 +146,7 @@ Runtime Config Options
default values.
For Example::
- -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
- ``Lock NPA contexts in NDC``
@@ -156,7 +156,7 @@ Runtime Config Options
For example::
- -w 0002:0e:00.0,npa_lock_mask=0xf
+ -a 0002:0e:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 2a68f5fc3820..4fba671e4f5b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -67,7 +67,7 @@ DPDK application. Some of the EAL options for FreeBSD are as follows:
is a list of cores to use instead of a core mask.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 542246df686a..043a1dcee109 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -53,7 +53,7 @@ The EAL options are as follows:
Number of memory channels per processor socket.
* ``-b <domain:bus:devid.func>``:
- Blacklisting of ports; prevent EAL from using specified PCI device
+ Blocklisting of ports; prevent EAL from using specified PCI device
(multiple ``-b`` options are allowed).
* ``--use-device``:
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 01afa1b42f94..dbd48ab4fafa 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -44,20 +44,20 @@ Lcore-related options
Device-related options
~~~~~~~~~~~~~~~~~~~~~~
-* ``-b, --pci-blacklist <[domain:]bus:devid.func>``
+* ``-b, --block <[domain:]bus:devid.func>``
- Blacklist a PCI device to prevent EAL from using it. Multiple -b options are
- allowed.
+ Skip probing a PCI device to prevent EAL from using it.
+ Multiple -b options are allowed.
.. Note::
- PCI blacklist cannot be used with ``-w`` option.
+ PCI skip probe cannot be used with the only list ``-a`` option.
-* ``-w, --pci-whitelist <[domain:]bus:devid.func>``
+* ``-a, --allow <[domain:]bus:devid.func>``
- Add a PCI device in white list.
+ Add a PCI device in to the list of probed devices.
.. Note::
- PCI whitelist cannot be used with ``-b`` option.
+ PCI only list cannot be used with the skip probe ``-b`` option.
* ``--vdev <device arguments>``
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 080b44955a11..ef8798569a80 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -93,11 +93,11 @@ parameter ``--vfio-vf-token``.
3. echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
4. Start the PF:
- <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -w 86:00.0 \
+ <build_dir>/app/dpdk-testpmd -l 22-25 -n 4 -a 86:00.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=pf -- -i
5. Start the VF:
- <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -w 86:02.0 \
+ <build_dir>/app/dpdk-testpmd -l 26-29 -n 4 -a 86:02.0 \
--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d --file-prefix=vf0 -- -i
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 49b45a04e8ec..efaef85f90fc 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -50,7 +50,7 @@ Runtime Config Options
for the application.
For example::
- -w 0002:02:00.0,max_pools=512
+ -a 0002:02:00.0,max_pools=512
With the above configuration, the driver will set up only 512 mempools for
the given application to save HW resources.
@@ -69,7 +69,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 28973fc3e2e9..82cabab6885d 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -258,8 +258,8 @@ The BNXT PMD supports hardware-based packet filtering:
Unicast MAC Filter
^^^^^^^^^^^^^^^^^^
-The application adds (or removes) MAC addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) MAC addresses to enable (or disable)
+filtering on the MAC addresses used to accept packets.
.. code-block:: console
@@ -269,8 +269,8 @@ whitelist filtering to accept packets.
Multicast MAC Filter
^^^^^^^^^^^^^^^^^^^^
-Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+The application can add (or remove) Multicast addresses that enable (or disable)
+filtering on the multicast MAC addresses used to accept packets.
.. code-block:: console
@@ -278,7 +278,7 @@ whitelist filtering to accept packets.
testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
Application adds (or removes) Multicast addresses to enable (or disable)
-whitelist filtering to accept packets.
+allowlist filtering to accept packets.
Note that the BNXT PMD supports up to 16 MC MAC filters. if the user adds more
than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
@@ -728,7 +728,7 @@ when the PMD is initialized on a PF or trusted-VF. The user can specify the list
of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Note that currently hot-plugging of representor ports is not supported so all
the required representors must be specified on the creation of the PF or the
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 54a4c138998c..ee91c85ebfee 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -40,8 +40,8 @@ expose a single PCI bus address, thus, librte_pmd_cxgbe registers
itself as a PCI driver that allocates one Ethernet device per detected
port.
-For this reason, one cannot whitelist/blacklist a single port without
-whitelisting/blacklisting the other ports on the same device.
+For this reason, one cannot allow/block a single port without
+allowing/blocking the other ports on the same device.
.. _t5-nics:
@@ -112,7 +112,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w 02:00.4,keep_ovlan=1 -- -i
+ testpmd -a 02:00.4,keep_ovlan=1 -- -i
Common Runtime Options
^^^^^^^^^^^^^^^^^^^^^^
@@ -317,7 +317,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- testpmd -w 02:00.4,filtermode=0x88 -- -i
+ testpmd -a 02:00.4,filtermode=0x88 -- -i
- ``filtermask`` (default **0**)
@@ -344,7 +344,7 @@ CXGBE PF Only Runtime Options
.. code-block:: console
- testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+ testpmd -a 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
.. _driver-compilation:
@@ -776,7 +776,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
.. code-block:: console
- ./x86_64-native-freebsd-clang/app/testpmd -l 0-3 -n 4 -w 0000:02:00.4 -- -i
+ ./x86_64-native-freebsd-clang/app/testpmd -l 0-3 -n 4 -a 0000:02:00.4 -- -i
Example output:
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 74d4a6058ef0..eb9defca0f09 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -163,10 +163,10 @@ Manager.
this pool.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA device, following commands can be used.
+For blocking a DPAA device, following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index ca6ba5b5e291..693be5ce8707 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -527,10 +527,10 @@ which are lower than logging ``level``.
Using ``pmd.net.dpaa2`` as log matching criteria, all PMD logs can be enabled
which are lower than logging ``level``.
-Whitelisting & Blacklisting
----------------------------
+Allowing & Blocking
+-------------------
-For blacklisting a DPAA2 device, following commands can be used.
+For blocking a DPAA2 device, following commands can be used.
.. code-block:: console
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index a28a7f4e477a..fa8459435730 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -312,7 +312,7 @@ enables overlay offload, it prints the following message on the console.
By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::
- -w 12:00.0,disable-overlay=1
+ -a 12:00.0,disable-overlay=1
By default, the NIC uses 4789 as the VXLAN port. The user may change
it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
@@ -378,7 +378,7 @@ vectorized handler, take the following steps.
PMD consider the vectorized handler when selecting the receive handler.
For example::
- -w 12:00.0,enable-avx2-rx=1
+ -a 12:00.0,enable-avx2-rx=1
As the current implementation is intended for field trials, by default, the
vectorized handler is not considered (``enable-avx2-rx=0``).
@@ -427,7 +427,7 @@ DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
- -w 12:00.0,ig-vlan-rewrite=untag
+ -a 12:00.0,ig-vlan-rewrite=untag
- **SR-IOV**
diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst
index f80346a35898..25525ef19aad 100644
--- a/doc/guides/nics/fail_safe.rst
+++ b/doc/guides/nics/fail_safe.rst
@@ -60,7 +60,7 @@ Fail-safe command line parameters
This parameter allows the user to define a sub-device. The ``<iface>`` part of
this parameter must be a valid device definition. It follows the same format
- provided to any ``-w`` or ``--vdev`` options.
+ provided to any ``-a`` or ``--vdev`` options.
Enclosing the device definition within parentheses here allows using
additional sub-device parameters if need be. They will be passed on to the
@@ -68,11 +68,11 @@ Fail-safe command line parameters
.. note::
- In case where the sub-device is also used as a whitelist device, using ``-w``
+ In the case where the sub-device is also used as an allowed device, using ``-a``
on the EAL command line, the fail-safe PMD will use the device with the
options provided to the EAL instead of its own parameters.
- When trying to use a PCI device automatically probed by the blacklist mode,
+ When trying to use a PCI device automatically probed by the command line,
the name for the fail-safe sub-device must be the full PCI id:
Domain:Bus:Device.Function, *i.e.* ``00:00:00.0`` instead of ``00:00.0``,
as the second form is historically accepted by the DPDK.
@@ -123,8 +123,8 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
#. To build a PMD and configure DPDK, refer to the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`.
-#. Start testpmd. The sub-device ``84:00.0`` should be blacklisted from normal EAL
- operations to avoid probing it twice, as the PCI bus is in blacklist mode.
+#. Start testpmd. The sub-device ``84:00.0`` should be blocked from normal EAL
+ operations to avoid probing it twice, as the PCI bus is in blocklist mode.
.. code-block:: console
@@ -132,13 +132,13 @@ This section shows some example of using **testpmd** with a fail-safe PMD.
--vdev 'net_failsafe0,mac=de:ad:be:ef:01:02,dev(84:00.0),dev(net_ring0)' \
-b 84:00.0 -b 00:04.0 -- -i
- If the sub-device ``84:00.0`` is not blacklisted, it will be probed by the
+ If the sub-device ``84:00.0`` is not blocked, it will be probed by the
EAL first. When the fail-safe then tries to initialize it the probe operation
fails.
- Note that PCI blacklist mode is the default PCI operating mode.
+ Note that PCI blocklist mode is the default PCI operating mode.
-#. Alternatively, it can be used alongside any other device in whitelist mode.
+#. Alternatively, it can be used alongside any other device in allow mode.
.. code-block:: console
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 16e00b8f64b5..14b8d0f33fae 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -261,7 +261,7 @@ Supports enabling/disabling receiving multicast frames.
Unicast MAC filter
------------------
-Supports adding MAC addresses to enable whitelist filtering to accept packets.
+Supports adding MAC addresses to enable filtering of incoming packets.
* **[implements] eth_dev_ops**: ``mac_addr_set``, ``mac_addr_add``, ``mac_addr_remove``.
* **[implements] rte_eth_dev_data**: ``mac_addrs``.
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index a0b81e66950f..0eb1d7c1af2f 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -194,7 +194,7 @@ Runtime Config Options
The number of reserved queue per VF is determined by its host PF. If the
PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
- VF can be configured with EAL parameter like -w aaaa:bb.cc,queue-num-per-vf=n.
+ VF can be configured with EAL parameter like -a aaaa:bb.cc,queue-num-per-vf=n.
The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
number of reserved queues per VF is 4 by default. If VF request more than
reserved queues per VF, PF will able to allocate max to 16 queues after a VF
@@ -207,7 +207,7 @@ Runtime Config Options
Adapter with both Linux kernel and DPDK PMD. To fix this issue, ``devargs``
parameter ``support-multi-driver`` is introduced, for example::
- -w 84:00.0,support-multi-driver=1
+ -a 84:00.0,support-multi-driver=1
With the above configuration, DPDK PMD will not change global registers, and
will switch PF interrupt from IntN to Int0 to avoid interrupt conflict between
@@ -222,7 +222,7 @@ Runtime Config Options
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
@@ -234,7 +234,7 @@ Runtime Config Options
since it can get better perf in some real work loading cases. So ``devargs`` param
``use-latest-supported-vec`` is introduced, for example::
- -w 84:00.0,use-latest-supported-vec=1
+ -a 84:00.0,use-latest-supported-vec=1
- ``Enable validation for VF message`` (default ``not enabled``)
@@ -244,7 +244,7 @@ Runtime Config Options
Format -- "maximal-message@period-seconds:ignore-seconds"
For example::
- -w 84:00.0,vf_msg_cfg=80@120:180
+ -a 84:00.0,vf_msg_cfg=80@120:180
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -479,7 +479,7 @@ no physical uplink on the associated NIC port.
To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::
- -w 84:00.0,enable_floating_veb=1
+ -a 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.
@@ -487,7 +487,7 @@ VFs created by this PF device.
Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::
- -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
+ -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.
@@ -822,7 +822,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
7. The command line of running l3fwd would be something like the following::
- ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+ ./l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 25a821177a4c..bb76d62a0139 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -47,7 +47,7 @@ Runtime Config Options
But if user intend to use the device without OS package, user can take ``devargs``
parameter ``safe-mode-support``, for example::
- -w 80:00.0,safe-mode-support=1
+ -a 80:00.0,safe-mode-support=1
Then the driver will be initialized successfully and the device will enter Safe Mode.
NOTE: In Safe mode, only very limited features are available, features like RSS,
@@ -58,7 +58,7 @@ Runtime Config Options
In pipeline mode, a flow can be set at one specific stage by setting parameter
``priority``. Currently, we support two stages: priority = 0 or !0. Flows with
priority 0 located at the first pipeline stage which typically be used as a firewall
- to drop the packet on a blacklist(we called it permission stage). At this stage,
+ to drop the packet on a blocklist (we call it the permission stage). At this stage,
flow rules are created for the device's exact match engine: switch. Flows with priority
!0 located at the second stage, typically packets are classified here and be steered to
specific queue or queue group (we called it distribution stage), At this stage, flow
@@ -70,7 +70,19 @@ Runtime Config Options
use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``,
for example::
- -w 80:00.0,pipeline-mode-support=1
+ -a 80:00.0,pipeline-mode-support=1
+
+- ``Flow Mark Support`` (default ``0``)
+
+ This is a hint to the driver to select the data path that supports flow mark extraction
+ by default.
+ NOTE: This is an experimental devarg; it will be removed when either of the
+ conditions below is met.
+ 1) all data paths support flow mark (currently vPMD does not)
+ 2) a new offload such as RTE_DEV_RX_OFFLOAD_FLOW_MARK is introduced as a standard way to hint.
+ Example::
+
+ -a 80:00.0,flow-mark-support=1
- ``Protocol extraction for per queue``
@@ -79,8 +91,8 @@ Runtime Config Options
The argument format is::
- -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
- -w 18:00.0,proto_xtr=<protocol>
+ -a 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+ -a 18:00.0,proto_xtr=<protocol>
Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
is used as a range separator and ``,`` is used as a single number separator.
@@ -91,14 +103,14 @@ Runtime Config Options
.. code-block:: console
- testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+ testpmd -a 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-13 are
VLAN extraction, other queues run with no protocol extraction.
.. code-block:: console
- testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+ testpmd -a 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
This setting means queues 1, 2-3, 8-9 are TCP extraction, queues 10-23 are
IPv6 extraction, other queues use the default VLAN extraction.
@@ -250,7 +262,7 @@ responses for the same from PF.
#. Bind the VF0, and run testpmd with 'cap=dcf' devarg::
- testpmd -l 22-25 -n 4 -w 18:01.0,cap=dcf -- -i
+ testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf -- -i
#. Monitor the VF2 interface network traffic::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 1f424b38ac3d..c801dbae8146 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,7 +89,7 @@ be passed as part of EAL arguments. For example,
.. code-block:: console
- testpmd -w af:10.0,pflink_fullchk=1 -- -i
+ testpmd -a af:10.0,pflink_fullchk=1 -- -i
- ``pflink_fullchk`` (default **0**)
@@ -277,7 +277,7 @@ option ``representor`` the user can specify which virtual functions to create
port representors for on initialization of the PF PMD by passing the VF IDs of
the VFs which are required.::
- -w DBDF,representor=[0,1,4]
+ -a DBDF,representor=[0,1,4]
Currently hot-plugging of representor ports is not supported so all required
representors must be specified on the creation of the PF.
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index 6818b6af515e..67e3964b2b3b 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -29,8 +29,8 @@ Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI
bus address, thus unlike most drivers, librte_pmd_mlx4 registers itself as a
PCI driver that allocates one Ethernet device per detected port.
-For this reason, one cannot white/blacklist a single port without also
-white/blacklisting the others on the same device.
+For this reason, one cannot block (or allow) a single port without also
+blocking (or allowing) the others on the same device.
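+For instance (a sketch; the bus address is hypothetical), allowing a single
+ConnectX-3 address therefore probes both ports at once::
+
+ testpmd -l 8-15 -n 4 -a 0000:83:00.0 -- -i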
Besides its dependency on libibverbs (that implies libmlx4 and associated
kernel support), librte_pmd_mlx4 relies heavily on system calls for control
@@ -422,7 +422,7 @@ devices managed by librte_pmd_mlx4.
eth4
eth5
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow argument::
{
for intf in eth2 eth3 eth4 eth5;
@@ -434,10 +434,10 @@ devices managed by librte_pmd_mlx4.
Example output::
- -w 0000:83:00.0
- -w 0000:83:00.0
- -w 0000:84:00.0
- -w 0000:84:00.0
+ -a 0000:83:00.0
+ -a 0000:83:00.0
+ -a 0000:84:00.0
+ -a 0000:84:00.0
.. note::
@@ -450,7 +450,7 @@ devices managed by librte_pmd_mlx4.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 0000:83:00.0 -a 0000:84:00.0 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index a071db276fe4..b44490cfe5e4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1537,7 +1537,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
eth32
eth33
-#. Optionally, retrieve their PCI bus addresses for whitelisting::
+#. Optionally, retrieve their PCI bus addresses for use in the allow list::
{
for intf in eth2 eth3 eth4 eth5;
@@ -1549,10 +1549,10 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
Example output::
- -w 0000:05:00.1
- -w 0000:06:00.0
- -w 0000:06:00.1
- -w 0000:05:00.0
+ -a 0000:05:00.1
+ -a 0000:06:00.0
+ -a 0000:06:00.1
+ -a 0000:05:00.0
#. Request huge pages::
@@ -1560,7 +1560,7 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
#. Start testpmd with basic parameters::
- testpmd -l 8-15 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i
+ testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i
Example output::
diff --git a/doc/guides/nics/nfb.rst b/doc/guides/nics/nfb.rst
index 10f33a025ede..7766a76d7a6d 100644
--- a/doc/guides/nics/nfb.rst
+++ b/doc/guides/nics/nfb.rst
@@ -78,7 +78,7 @@ products) and the device argument `timestamp=1` must be used.
.. code-block:: console
- $RTE_TARGET/app/testpmd -w b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
+ $RTE_TARGET/app/testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>
When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the MBUFs
containing received frames and timestamp is inserted into the `rte_mbuf` struct.
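+A sketch of consuming the flag on the application side (field names follow the
+20.11-era static mbuf timestamp, which this assumes)::
+
+ uint64_t ts = 0;
+
+ if (mb->ol_flags & PKT_RX_TIMESTAMP)
+ ts = mb->timestamp;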
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index f3be79bbb8a3..9862a1d4508c 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -74,7 +74,7 @@ use arm64-octeontx2-linux-gcc as target.
.. code-block:: console
- ./build/app/testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ ./build/app/testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
@@ -127,7 +127,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,reta_size=256
+ -a 0002:02:00.0,reta_size=256
With the above configuration, reta table of size 256 is populated.
@@ -138,7 +138,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_max_priority=10
+ -a 0002:02:00.0,flow_max_priority=10
With the above configuration, priority level was set to 10 (0-9). Max
priority level supported is 32.
@@ -150,7 +150,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,flow_prealloc_size=4
+ -a 0002:02:00.0,flow_prealloc_size=4
With the above configuration, pre alloc size was set to 4. Max pre alloc
size supported is 32.
@@ -162,7 +162,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,max_sqb_count=64
+ -a 0002:02:00.0,max_sqb_count=64
With the above configuration, each send queue's descriptor buffer count is
limited to a maximum of 64 buffers.
@@ -174,7 +174,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,switch_header="higig2"
+ -a 0002:02:00.0,switch_header="higig2"
With the above configuration, higig2 will be enabled on that port and the
traffic on this port should be higig2 traffic only. Supported switch header
@@ -196,7 +196,7 @@ Runtime Config Options
For example to select the legacy mode(RSS tag adder as XOR)::
- -w 0002:02:00.0,tag_as_xor=1
+ -a 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -205,7 +205,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,ipsec_in_max_spi=128
+ -a 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
@@ -216,7 +216,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_rx_ctx=1
+ -a 0002:02:00.0,lock_rx_ctx=1
- ``Lock Tx contexts in NDC cache``
@@ -224,7 +224,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,lock_tx_ctx=1
+ -a 0002:02:00.0,lock_tx_ctx=1
.. note::
@@ -240,7 +240,7 @@ Runtime Config Options
For example::
- -w 0002:02:00.0,npa_lock_mask=0xf
+ -a 0002:02:00.0,npa_lock_mask=0xf
.. _otx2_tmapi:
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 959b52c1c333..64322442a003 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -295,7 +295,7 @@ Per-Device Parameters
~~~~~~~~~~~~~~~~~~~~~
The following per-device parameters can be passed via EAL PCI device
-whitelist option like "-w 02:00.0,arg1=value1,...".
+allow option like "-a 02:00.0,arg1=value1,...".
Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
boolean parameters value.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 7e44f846206c..3ce696b605d1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -191,7 +191,7 @@ following::
.. Note:
- Change the ``-b`` options to blacklist all of your physical ports. The
+ Change the ``-b`` options to exclude all of your physical ports. The
following command line is all one line.
Also, ``-f themes/black-yellow.theme`` is optional if the default colors
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index b1ef9eba59b8..db64503a9ab8 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -178,7 +178,7 @@ This section provides instructions to configure SR-IOV with Linux OS.
.. code-block:: console
- ./arm64-thunderx-linux-gcc/app/testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
+ ./arm64-thunderx-linux-gcc/app/testpmd -l 0-3 -n 4 -a 0002:01:00.2 \
-- -i --no-flush-rx \
--port-topology=loop
@@ -398,7 +398,7 @@ This scheme is useful when application would like to insert vlan header without
Example:
.. code-block:: console
- -w 0002:01:00.2,skip_data_bytes=8
+ -a 0002:01:00.2,skip_data_bytes=8
Limitations
-----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index a470fd7f29bb..9af4d6192fd4 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -407,12 +407,12 @@ device having emitted a Device Removal Event. In such case, calling
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such closing operation.
-Blacklisting
+Blocklisting
~~~~~~~~~~~~
-The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
+The EAL PCI device blocklist functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
-The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
+The ports to be blocklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
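+For example (a sketch; the addresses are hypothetical), two ports can be
+hidden from DPDK with the EAL ``-b`` option::
+
+ ./dpdk-testpmd -l 0-3 -n 4 -b 0000:82:00.0 -b 0000:82:00.1 -- -i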
Misc Functions
~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a84083b96c8a..2d083b8a4f68 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -30,7 +30,7 @@ after a primary process has already configured the hugepage shared memory for th
Secondary processes should run alongside primary process with same DPDK version.
Secondary processes which requires access to physical devices in Primary process, must
- be passed with the same whitelist and blacklist options.
+ be passed with the same allow and block options.
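+ For example (a sketch; application names and addresses are hypothetical),
+ both processes must see the same device set::
+
+ ./primary -l 0-1 --proc-type=primary -a 0000:03:00.0 -a 0000:03:00.1
+ ./secondary -l 2-3 --proc-type=secondary -a 0000:03:00.0 -a 0000:03:00.1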
To support these two process types, and other multi-process setups described later,
two additional command-line parameters are available to the EAL:
@@ -131,7 +131,7 @@ can use).
.. note::
Independent DPDK instances running side-by-side on a single machine cannot share any network ports.
- Any network ports being used by one process should be blacklisted in every other process.
+ Any network ports being used by one process should be blocklisted in every other process.
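+ A sketch (prefixes and addresses hypothetical) of two independent instances
+ partitioning the ports with disjoint allow lists::
+
+ ./app1 -l 0-1 --file-prefix=inst1 -a 0000:03:00.0
+ ./app2 -l 2-3 --file-prefix=inst2 -a 0000:03:00.1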
Running Multiple Independent Groups of DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 86e0a141e6c7..239ec820eaf5 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -374,9 +374,9 @@ parameters to those ports.
this argument allows user to specify which switch ports to enable port
representors for.::
- -w DBDF,representor=0
- -w DBDF,representor=[0,4,6,9]
- -w DBDF,representor=[0-31]
+ -a DBDF,representor=0
+ -a DBDF,representor=[0,4,6,9]
+ -a DBDF,representor=[0-31]
Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see support devargs.
diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
index cc1d0d7569cb..07ba12bea67e 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -59,9 +59,9 @@ which can be thought as a software "patch panel" front-end for applications.
::
- -w pci:dbdf,representor=0
- -w pci:dbdf,representor=[0-3]
- -w pci:dbdf,representor=[0,5-11]
+ -a pci:dbdf,representor=0
+ -a pci:dbdf,representor=[0-3]
+ -a pci:dbdf,representor=[0,5-11]
- As virtual devices, they may be more limited than their physical
counterparts, for instance by exposing only a subset of device
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0d45b500325f..28ab5a03be8c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -539,6 +539,11 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The definitions related to including and excluding devices
+ have been changed from blacklist/whitelist to block/allow.
+ There are compatibility macros and command line mapping to accept
+ the old values but applications and scripts are strongly encouraged
+ to migrate to the new names.
ABI Changes
-----------
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 54ff6574aed8..f0947a7544e4 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -79,19 +79,19 @@ This means that HW baseband device/s must be bound to a DPDK driver or
a SW baseband device/s (virtual BBdev) must be created (using --vdev).
To run the application in linux environment with the turbo_sw baseband device
-using the whitelisted port running on 1 encoding lcore and 1 decoding lcore
+using the allow option for the PCI device, running on 1 encoding lcore and 1 decoding lcore
issue the command:
.. code-block:: console
- $ ./build/bbdev --vdev='baseband_turbo_sw' -w <NIC0PCIADDR> -c 0x38 --socket-mem=2,2 \
+ $ ./build/bbdev --vdev='baseband_turbo_sw' -a <NIC0PCIADDR> -c 0x38 --socket-mem=2,2 \
--file-prefix=bbdev -- -e 0x10 -d 0x20
where, NIC0PCIADDR is the PCI address of the Rx port
This command creates one virtual bbdev devices ``baseband_turbo_sw`` where the
-device gets linked to a corresponding ethernet port as whitelisted by
-the parameter -w.
+device gets linked to a corresponding ethernet port as allowed by
+the parameter -a.
3 cores are allocated to the application, and assigned as:
- core 3 is the main and used to print the stats live on screen,
@@ -111,20 +111,20 @@ Using Packet Generator with baseband device sample application
To allow the bbdev sample app to do the loopback, an influx of traffic is required.
This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
it will print the transmitted along with the looped-back traffic on Rx ports.
-Executing the command below will generate traffic on the two whitelisted ethernet
+Executing the command below will generate traffic on the two allowed ethernet
ports.
.. code-block:: console
$ ./pktgen-3.4.0/app/x86_64-native-linux-gcc/pktgen -c 0x3 \
- --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+ --socket-mem=1,1 --file-prefix=pg -a <NIC1PCIADDR> -- -m 1.0 -P
where:
* ``-c COREMASK``: A hexadecimal bitmask of cores to run on
* ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
* ``--file-prefix``: Prefix for hugepage filenames
-* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-a <NIC1PCIADDR>``: Add a PCI device in allow list. The argument format is <[domain:]bus:devid.func>.
* ``-m <string>``: Matrix for mapping ports to logical cores.
* ``-P``: PROMISCUOUS mode
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index dc7972aa9a5c..e4c23da9ebcb 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -46,7 +46,7 @@ these settings is shown below:
.. code-block:: console
- ./build/eventdev_pipeline --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+ ./build/eventdev_pipeline --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
The application has some sanity checking built-in, so if there is a function
(e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 434f484138d0..db2685660ff7 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -329,15 +329,15 @@ This means that if the application is using a single core and both hardware
and software crypto devices are detected, hardware devices will be used.
A way to achieve the case where you want to force the use of virtual crypto
-devices is to whitelist the Ethernet devices needed and therefore implicitly
-blacklisting all hardware crypto devices.
+devices is to allow only the Ethernet devices needed, thereby implicitly
+blocklisting all hardware crypto devices.
For example, something like the following command line:
.. code-block:: console
./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
- -w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
+ -a 81:00.0 -a 81:00.1 -a 81:00.2 -a 81:00.3 \
--vdev "crypto_aesni_mb" --vdev "crypto_null" \
-- \
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
@@ -935,13 +935,13 @@ The user must setup the following environment variables:
* ``REMOTE_IFACE``: interface name for the test-port on the DUT.
-* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-w <pci-id>')
+* ``ETH_DEV``: ethernet device to be used on the SUT by DPDK ('-a <pci-id>')
Also the user can optionally setup:
* ``SGW_LCORE``: lcore to run ipsec-secgw on (default value is 0)
-* ``CRYPTO_DEV``: crypto device to be used ('-w <pci-id>'). If none specified
+* ``CRYPTO_DEV``: crypto device to be used ('-a <pci-id>'). If none specified
appropriate vdevs will be created by the script
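+
+For example (a sketch; the addresses are hypothetical)::
+
+ export ETH_DEV="-a 0000:81:00.0"
+ export CRYPTO_DEV="-a 0000:85:01.0"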
Scripts can be used for multiple test scenarios. To check all available
@@ -1029,4 +1029,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
\ No newline at end of file
+list of available modes please refer to run_test.sh.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 07c8d44936d6..5173da8b108a 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -138,17 +138,17 @@ Following is the sample command:
.. code-block:: console
- ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+ ./build/l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x3 --eventq-sched=ordered
or
.. code-block:: console
- ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+ ./build/l3fwd -l 0-3 -n 4 -a <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
In this command:
-* -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform.
+* -a option allows the event device supported by the platform. The way to pass this device may vary based on the platform.
* The --mode option defines PMD to be used for packet I/O.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index c2d4ca73abde..1e580ff86cf4 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -18,7 +18,7 @@ The application loads two types of rules at initialization:
* Route information rules, which are used for L3 forwarding
-* Access Control List (ACL) rules that blacklist (or block) packets with a specific characteristic
+* Access Control List (ACL) rules that block packets with a specific characteristic
When packets are received from a port,
the application extracts the necessary information from the TCP/IP header of the received packet and
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index f05816d9b24e..bc162a0118ac 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -378,7 +378,7 @@ See :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Program
.. code-block:: console
- ./l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
+ ./l3fwd-power -l xxx -n 4 -a 0000:xx:00.0 -a 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1
Where,
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index d66a724827af..60a7eb227db2 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -52,7 +52,7 @@ Take IFCVF driver for example:
.. code-block:: console
./vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
- -w 0000:06:00.3,vdpa=1 -w 0000:06:00.4,vdpa=1 \
+ -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
-- --interactive
.. note::
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 28b729dbda8b..72707e9a4a9d 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -417,7 +417,7 @@ Call application for performance throughput test of single Aesni MB PMD
for cipher encryption aes-cbc and auth generation sha1-hmac,
one million operations, burst size 32, packet size 64::
- dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb -a 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --digest-sz 12
@@ -427,7 +427,7 @@ Call application for performance latency test of two Aesni MB PMD executed
on two cores for cipher encryption aes-cbc, ten operations in silent mode::
dpdk-test-crypto-perf -l 4-7 --vdev crypto_aesni_mb1
- --vdev crypto_aesni_mb2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
+ --vdev crypto_aesni_mb2 -a 0000:00:00.0 -- --devtype crypto_aesni_mb
--cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
--cipher-op encrypt --optype cipher-only --silent
--ptest latency --total-ops 10
@@ -437,7 +437,7 @@ for cipher encryption aes-gcm and auth generation aes-gcm,ten operations
in silent mode, test vector provide in file "test_aes_gcm.data"
with packet verification::
- dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -w 0000:00:00.0 --
+ dpdk-test-crypto-perf -l 4-7 --vdev crypto_openssl -a 0000:00:00.0 --
--devtype crypto_openssl --aead-algo aes-gcm --aead-key-sz 16
--aead-iv-sz 16 --aead-op encrypt --aead-aad-sz 16 --digest-sz 16
--optype aead --silent --ptest verify --total-ops 10
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 7e5dc0c54b1a..4771e8ecf04d 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -59,7 +59,7 @@ with a ``--`` separator:
.. code-block:: console
- sudo ./dpdk-test-flow_perf -n 4 -w 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
+ sudo ./dpdk-test-flow_perf -n 4 -a 08:00.0 -- --ingress --ether --ipv4 --queue --flows-count=1000000
The command line options are:
diff --git a/doc/guides/tools/testregex.rst b/doc/guides/tools/testregex.rst
index 4317aab533e2..112b2bb773e7 100644
--- a/doc/guides/tools/testregex.rst
+++ b/doc/guides/tools/testregex.rst
@@ -70,4 +70,4 @@ The data file, will be used as a source data for the RegEx to work on.
The tool has a number of command line options. Here is the sample command line::
- ./dpdk-test-regex -w 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
+ ./dpdk-test-regex -a 83:00.0 -- --rules rule_file.rof2 --data data_file.txt --job 100
--
2.27.0
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH v3] mbuf: minor cleanup
@ 2020-10-20 11:55 0% ` Thomas Monjalon
2020-11-04 22:17 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-10-20 11:55 UTC (permalink / raw)
To: Morten Brørup; +Cc: dev, Olivier Matz
Hi Morten,
Any update about this patch please?
07/10/2020 11:16, Olivier Matz:
> Hi Morten,
>
> Thanks for this cleanup. Please see some comments below.
>
> On Wed, Sep 16, 2020 at 12:40:13PM +0200, Morten Brørup wrote:
> > The mbuf header files had some commenting style errors that affected the
> > API documentation.
> > Also, the RTE_ prefix was missing on a macro and a definition.
> >
> > Note: This patch does not touch the offload and attachment flags that are
> > also missing the RTE_ prefix.
> >
> > Changes only affecting documentation:
> > * Removed the MBUF_INVALID_PORT definition from rte_mbuf.h; it is
> > already defined in rte_mbuf_core.h.
> > This removal also reestablished the description of the
> > rte_pktmbuf_reset() function.
> > * Corrected the comment related to RTE_MBUF_MAX_NB_SEGS.
> > * Corrected the comment related to PKT_TX_QINQ_PKT.
> >
> > Changes regarding missing RTE_ prefix:
> > * Converted the MBUF_RAW_ALLOC_CHECK() macro to an
> > __rte_mbuf_raw_sanity_check() inline function.
> > Added backwards compatible macro with the original name.
> > * Renamed the MBUF_INVALID_PORT definition to RTE_MBUF_PORT_INVALID.
> > Added backwards compatible definition with the original name.
> >
> > v2:
> > * Use RTE_MBUF_PORT_INVALID instead of MBUF_INVALID_PORT in rte_mbuf.c.
> >
> > v3:
> > * The functions/macros used in __rte_mbuf_raw_sanity_check() require
> > RTE_ENABLE_ASSERT or RTE_LIBRTE_MBUF_DEBUG, or they don't use the mbuf
> > parameter, which generates a compiler warning. So mark the mbuf parameter
> > __rte_unused if none of them are defined.
> >
> > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 7 ----
> > lib/librte_mbuf/rte_mbuf.c | 4 +-
> > lib/librte_mbuf/rte_mbuf.h | 55 +++++++++++++++++++---------
> > lib/librte_mbuf/rte_mbuf_core.h | 9 +++--
> > 4 files changed, 45 insertions(+), 30 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 279eccb04..88d7d0761 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -294,13 +294,6 @@ Deprecation Notices
> > - https://patches.dpdk.org/patch/71457/
> > - https://patches.dpdk.org/patch/71456/
> >
> > -* rawdev: The rawdev APIs which take a device-specific structure as
> > - parameter directly, or indirectly via a "private" pointer inside another
> > - structure, will be modified to take an additional parameter of the
> > - structure size. The affected APIs will include ``rte_rawdev_info_get``,
> > - ``rte_rawdev_configure``, ``rte_rawdev_queue_conf_get`` and
> > - ``rte_rawdev_queue_setup``.
> > -
> > * acl: ``RTE_ACL_CLASSIFY_NUM`` enum value will be removed.
> > This enum value is not used inside DPDK, while it prevents to add new
> > classify algorithms without causing an ABI breakage.
>
> I think this change is not related.
>
> This makes me think that a deprecation notice could be done for the
> old names without the RTE_ prefix, to be removed in 21.11.
>
>
> > diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> > index 8a456e5e6..53a015311 100644
> > --- a/lib/librte_mbuf/rte_mbuf.c
> > +++ b/lib/librte_mbuf/rte_mbuf.c
> > @@ -104,7 +104,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
> > /* init some constant fields */
> > m->pool = mp;
> > m->nb_segs = 1;
> > - m->port = MBUF_INVALID_PORT;
> > + m->port = RTE_MBUF_PORT_INVALID;
> > rte_mbuf_refcnt_set(m, 1);
> > m->next = NULL;
> > }
> > @@ -207,7 +207,7 @@ __rte_pktmbuf_init_extmem(struct rte_mempool *mp,
> > /* init some constant fields */
> > m->pool = mp;
> > m->nb_segs = 1;
> > - m->port = MBUF_INVALID_PORT;
> > + m->port = RTE_MBUF_PORT_INVALID;
> > m->ol_flags = EXT_ATTACHED_MBUF;
> > rte_mbuf_refcnt_set(m, 1);
> > m->next = NULL;
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 7259575a7..406d3abb2 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -554,12 +554,36 @@ __rte_experimental
> > int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
> > const char **reason);
> >
> > -#define MBUF_RAW_ALLOC_CHECK(m) do { \
> > - RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1); \
> > - RTE_ASSERT((m)->next == NULL); \
> > - RTE_ASSERT((m)->nb_segs == 1); \
> > - __rte_mbuf_sanity_check(m, 0); \
> > -} while (0)
> > +#if defined(RTE_ENABLE_ASSERT) || defined(RTE_LIBRTE_MBUF_DEBUG)
>
> I don't see why this #if is needed. Wouldn't it work to have only
> one function definition with the __rte_unused attribute?
>
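> Untested sketch of what I mean, assuming RTE_ASSERT() and
> __rte_mbuf_sanity_check() expand to no-ops when debug is off:
>
> static __rte_always_inline void
> __rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
> {
> 	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
> 	RTE_ASSERT(m->next == NULL);
> 	RTE_ASSERT(m->nb_segs == 1);
> 	__rte_mbuf_sanity_check(m, 0);
> }
>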
> > +/**
> > + * Sanity checks on a reinitialized mbuf.
> > + *
> > + * Check the consistency of the given reinitialized mbuf.
> > + * The function will cause a panic if corruption is detected.
> > + *
> > + * Check that the mbuf is properly reinitialized (refcnt=1, next=NULL,
> > + * nb_segs=1), as done by rte_pktmbuf_prefree_seg().
> > + *
>
> Maybe indicate that these checks are only done when debug is on.
>
> > + * @param m
> > + * The mbuf to be checked.
> > + */
> > +static __rte_always_inline void
> > +__rte_mbuf_raw_sanity_check(const struct rte_mbuf *m)
> > +{
> > + RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
> > + RTE_ASSERT(m->next == NULL);
> > + RTE_ASSERT(m->nb_segs == 1);
> > + __rte_mbuf_sanity_check(m, 0);
> > +}
> > +#else
> > +static __rte_always_inline void
> > +__rte_mbuf_raw_sanity_check(const struct rte_mbuf *m __rte_unused)
> > +{
> > + /* Nothing here. */
> > +}
> > +#endif
> > +/** For backwards compatibility. */
> > +#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_sanity_check(m)
>
> It looks that MBUF_RAW_ALLOC_CHECK() is also used in drivers/net/sfc,
> I think it should be updated too.
>
> >
> > /**
> > * Allocate an uninitialized mbuf from mempool *mp*.
> > @@ -586,7 +610,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
> >
> > if (rte_mempool_get(mp, (void **)&m) < 0)
> > return NULL;
> > - MBUF_RAW_ALLOC_CHECK(m);
> > + __rte_mbuf_raw_sanity_check(m);
> > return m;
> > }
> >
> > @@ -609,10 +633,7 @@ rte_mbuf_raw_free(struct rte_mbuf *m)
> > {
> > RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
> > (!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
> > - RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
> > - RTE_ASSERT(m->next == NULL);
> > - RTE_ASSERT(m->nb_segs == 1);
> > - __rte_mbuf_sanity_check(m, 0);
> > + __rte_mbuf_raw_sanity_check(m);
> > rte_mempool_put(m->pool, m);
> > }
> >
> > @@ -858,8 +879,6 @@ static inline void rte_pktmbuf_reset_headroom(struct rte_mbuf *m)
> > * @param m
> > * The packet mbuf to be reset.
> > */
> > -#define MBUF_INVALID_PORT UINT16_MAX
> > -
> > static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
> > {
> > m->next = NULL;
> > @@ -868,7 +887,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
> > m->vlan_tci = 0;
> > m->vlan_tci_outer = 0;
> > m->nb_segs = 1;
> > - m->port = MBUF_INVALID_PORT;
> > + m->port = RTE_MBUF_PORT_INVALID;
> >
> > m->ol_flags &= EXT_ATTACHED_MBUF;
> > m->packet_type = 0;
> > @@ -931,22 +950,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
> > switch (count % 4) {
> > case 0:
> > while (idx != count) {
> > - MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> > + __rte_mbuf_raw_sanity_check(mbufs[idx]);
> > rte_pktmbuf_reset(mbufs[idx]);
> > idx++;
> > /* fall-through */
> > case 3:
> > - MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> > + __rte_mbuf_raw_sanity_check(mbufs[idx]);
> > rte_pktmbuf_reset(mbufs[idx]);
> > idx++;
> > /* fall-through */
> > case 2:
> > - MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> > + __rte_mbuf_raw_sanity_check(mbufs[idx]);
> > rte_pktmbuf_reset(mbufs[idx]);
> > idx++;
> > /* fall-through */
> > case 1:
> > - MBUF_RAW_ALLOC_CHECK(mbufs[idx]);
> > + __rte_mbuf_raw_sanity_check(mbufs[idx]);
> > rte_pktmbuf_reset(mbufs[idx]);
> > idx++;
> > /* fall-through */
> > diff --git a/lib/librte_mbuf/rte_mbuf_core.h b/lib/librte_mbuf/rte_mbuf_core.h
> > index 8cd7137ac..4ac5609e3 100644
> > --- a/lib/librte_mbuf/rte_mbuf_core.h
> > +++ b/lib/librte_mbuf/rte_mbuf_core.h
> > @@ -272,7 +272,7 @@ extern "C" {
> > * mbuf 'vlan_tci' & 'vlan_tci_outer' must be valid when this flag is set.
> > */
> > #define PKT_TX_QINQ (1ULL << 49)
> > -/* this old name is deprecated */
> > +/** This old name is deprecated. */
> > #define PKT_TX_QINQ_PKT PKT_TX_QINQ
> >
> > /**
> > @@ -686,7 +686,7 @@ struct rte_mbuf_ext_shared_info {
> > };
> > };
> >
> > -/**< Maximum number of nb_segs allowed. */
> > +/** Maximum number of nb_segs allowed. */
> > #define RTE_MBUF_MAX_NB_SEGS UINT16_MAX
> >
> > /**
> > @@ -714,7 +714,10 @@ struct rte_mbuf_ext_shared_info {
> > #define RTE_MBUF_DIRECT(mb) \
> > (!((mb)->ol_flags & (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF)))
> >
> > -#define MBUF_INVALID_PORT UINT16_MAX
> > +/** NULL value for the uint16_t port type. */
> > +#define RTE_MBUF_PORT_INVALID UINT16_MAX
>
> I don't really like talking about "NULL". What do you think instead of
> this wording?
>
> /** Uninitialized or unspecified port */
>
> > +/** For backwards compatibility. */
> > +#define MBUF_INVALID_PORT RTE_MBUF_PORT_INVALID
> >
> > /**
> > * A macro that points to an offset into the data in the mbuf.
>
> Thanks,
> Olivier
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 0/8] eal: cleanup resources on shutdown
@ 2020-10-19 22:24 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-10-19 22:24 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, David Marchand, ferruh.yigit, bruce.richardson, andrew.rybchenko
That's a pity this patchset is not concluded.
Please Stephen, could you respin with a fix?
03/05/2020 19:21, David Marchand:
> On Wed, Apr 29, 2020 at 1:58 AM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > Started using valgrind with DPDK, and there are lots of leftover
> > memory and file descriptors. This makes it hard to find application
> > leaks versus DPDK leaks.
> >
> > The DPDK has a function that applications can use to tell it
> > to cleanup resources on shutdown (rte_eal_cleanup). But the
> > current coverage of that API is spotty. Many internal parts of
> > DPDK leave files and allocated memory behind.
> >
> > This patch set is a first step at getting the sub-parts of
> > DPDK to cleanup after themselves. These are the easier ones,
> > the harder and more critical ones are in the drivers
> > and the memory subsystem.
> >
> > There are no new exposed API or ABI changes here.
> >
> > v3
> > - fix a couple of minor checkpatch complaints
> >
> > v2
> > - rebase after 20.05 file renames
> > - incorporate review comment feedback
> > - hold off some of the more involved patches for later
>
> Same segfault as v1.
>
> $ ./devtools/test-null.sh ./build/app/dpdk-testpmd 0x3 --plop
> ./build/app/dpdk-testpmd: unrecognized option '--plop'
> EAL: Detected 8 lcore(s)
> EAL: Detected 1 NUMA nodes
>
> Usage: ./build/app/dpdk-testpmd [options]
>
> (snip)
>
> EAL: FATAL: Invalid 'command line' arguments.
> EAL: Invalid 'command line' arguments.
> EAL: Error - exiting with code: 1
> Cause: Cannot init EAL: Invalid argument
> ./devtools/test-null.sh: line 32: 23134 Broken pipe (
> sleep 1 && echo stop )
> 23135 Segmentation fault (core dumped) | $testpmd -c
> $coremask --no-huge -m 20 $libs -w 0:0.0 --vdev net_null1 --vdev
> net_null2 $eal_options -- --no-mlockall --total-num-mbufs=2048
> $testpmd_options -ia
>
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
@ 2020-10-19 17:53 0% ` Honnappa Nagarahalli
1 sibling, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2020-10-19 17:53 UTC (permalink / raw)
To: Ruifeng Wang, Kevin Traynor, Medvedkin, Vladimir, Bruce Richardson
Cc: dev, nd, Honnappa Nagarahalli, nd
<snip>
> > >>
> > >> Hi Ruifeng,
> > >>
> > >> On 15/09/2020 17:02, Bruce Richardson wrote:
> > >>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
> > >>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
> > >>>> be exposed to the user.
> > >>>> Hide the unneeded exposure of structure fields for better ABI
> > >>>> maintainability.
> > >>>>
> > >>>> Suggested-by: David Marchand <david.marchand@redhat.com>
> > >>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > >>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
<snip>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate ABI version
@ 2020-10-19 9:41 9% ` David Marchand
2020-10-27 12:13 4% ` David Marchand
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: David Marchand @ 2020-10-19 9:41 UTC (permalink / raw)
To: dev; +Cc: thomas, Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko
The glue libraries are tightly bound to the mlx drivers of a dpdk version
and are packaged with them.
Keeping a separate ABI version prevents us from installing two versions of
dpdk.
Maintaining this separate version just adds confusion.
Align the glue library ABI version to the global ABI version.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
drivers/common/mlx5/linux/meson.build | 2 +-
drivers/common/mlx5/linux/mlx5_glue.h | 1 -
drivers/net/mlx4/meson.build | 2 +-
drivers/net/mlx4/mlx4_glue.h | 1 -
4 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 9ef8e181d7..483df0e181 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -6,7 +6,7 @@ includes += include_directories('.')
static_ibverbs = (get_option('ibverbs_link') == 'static')
dlopen_ibverbs = (get_option('ibverbs_link') == 'dlopen')
LIB_GLUE_BASE = 'librte_pmd_mlx5_glue.so'
-LIB_GLUE_VERSION = '20.02.0'
+LIB_GLUE_VERSION = abi_version
LIB_GLUE = LIB_GLUE_BASE + '.' + LIB_GLUE_VERSION
if dlopen_ibverbs
dpdk_conf.set('RTE_IBVERBS_LINK_DLOPEN', 1)
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index 42b2f61523..ace36c6b07 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -131,7 +131,6 @@ struct mlx5dv_var { uint32_t page_id; uint32_t length; off_t mmap_off;
#define IBV_ACCESS_RELAXED_ORDERING 0
#endif
-/* LIB_GLUE_VERSION must be updated every time this structure is modified. */
struct mlx5_glue {
const char *version;
int (*fork_init)(void);
diff --git a/drivers/net/mlx4/meson.build b/drivers/net/mlx4/meson.build
index 5a25e11a7b..395776a495 100644
--- a/drivers/net/mlx4/meson.build
+++ b/drivers/net/mlx4/meson.build
@@ -11,7 +11,7 @@ endif
static_ibverbs = (get_option('ibverbs_link') == 'static')
dlopen_ibverbs = (get_option('ibverbs_link') == 'dlopen')
LIB_GLUE_BASE = 'librte_pmd_mlx4_glue.so'
-LIB_GLUE_VERSION = '18.02.0'
+LIB_GLUE_VERSION = abi_version
LIB_GLUE = LIB_GLUE_BASE + '.' + LIB_GLUE_VERSION
if dlopen_ibverbs
dpdk_conf.set('RTE_IBVERBS_LINK_DLOPEN', 1)
diff --git a/drivers/net/mlx4/mlx4_glue.h b/drivers/net/mlx4/mlx4_glue.h
index 5d9e985495..96d5cb16b4 100644
--- a/drivers/net/mlx4/mlx4_glue.h
+++ b/drivers/net/mlx4/mlx4_glue.h
@@ -23,7 +23,6 @@
#define MLX4_GLUE_VERSION ""
#endif
-/* LIB_GLUE_VERSION must be updated every time this structure is modified. */
struct mlx4_glue {
const char *version;
int (*fork_init)(void);
--
2.23.0
^ permalink raw reply [relevance 9%]
* Re: [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int
2020-10-16 17:13 0% ` Andrew Rybchenko
@ 2020-10-19 9:37 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-19 9:37 UTC (permalink / raw)
To: Andrew Rybchenko, Andrew Rybchenko, Neil Horman, Thomas Monjalon,
Ferruh Yigit
Cc: dev, Ivan Ilchenko
On 16/10/2020 18:13, Andrew Rybchenko wrote:
> On 10/16/20 2:20 PM, Kinsella, Ray wrote:
>> On 15/10/2020 14:30, Andrew Rybchenko wrote:
>>> From: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
>>>
>>> Change rte_eth_dev_stop() return value from void to int
>>> and return negative errno values in case of error conditions.
>>> Also update the usage of the function in ethdev according to
>>> the new return type.
>>>
>>> Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>>> ---
>>> doc/guides/rel_notes/deprecation.rst | 1 -
>>> doc/guides/rel_notes/release_20_11.rst | 3 +++
>>> lib/librte_ethdev/rte_ethdev.c | 27 +++++++++++++++++++-------
>>> lib/librte_ethdev/rte_ethdev.h | 5 ++++-
>>> 4 files changed, 27 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>>> index d1f5ed39db..2e04e24374 100644
>>> --- a/doc/guides/rel_notes/deprecation.rst
>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>> @@ -127,7 +127,6 @@ Deprecation Notices
>>> negative errno values to indicate various error conditions (e.g.
>>> invalid port ID, unsupported operation, failed operation):
>>>
>>> - - ``rte_eth_dev_stop``
>>> - ``rte_eth_dev_close``
>>>
>>> * ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
>>> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
>>> index f8686a50db..c8c30937fa 100644
>>> --- a/doc/guides/rel_notes/release_20_11.rst
>>> +++ b/doc/guides/rel_notes/release_20_11.rst
>>> @@ -355,6 +355,9 @@ API Changes
>>> * vhost: Add a new function ``rte_vhost_crypto_driver_start`` to be called
>>> instead of ``rte_vhost_driver_start`` by crypto applications.
>>>
>>> +* ethdev: changed ``rte_eth_dev_stop`` return value from ``void`` to
>>> + ``int`` to provide a way to report various error conditions.
>>> +
>>>
>>> ABI Changes
>>> -----------
>>> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
>>> index d9b82df073..b8cf04ef4d 100644
>>> --- a/lib/librte_ethdev/rte_ethdev.c
>>> +++ b/lib/librte_ethdev/rte_ethdev.c
>>> @@ -1661,7 +1661,7 @@ rte_eth_dev_start(uint16_t port_id)
>>> struct rte_eth_dev *dev;
>>> struct rte_eth_dev_info dev_info;
>>> int diag;
>>> - int ret;
>>> + int ret, ret_stop;
>>>
>>> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
>>>
>>> @@ -1695,7 +1695,13 @@ rte_eth_dev_start(uint16_t port_id)
>>> RTE_ETHDEV_LOG(ERR,
>>> "Error during restoring configuration for device (port %u): %s\n",
>>> port_id, rte_strerror(-ret));
>>> - rte_eth_dev_stop(port_id);
>>> + ret_stop = rte_eth_dev_stop(port_id);
>>> + if (ret_stop != 0) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Failed to stop device (port %u): %s\n",
>>> + port_id, rte_strerror(-ret_stop));
>>> + }
>>> +
>>> return ret;
>>> }
>>>
>>> @@ -1708,26 +1714,28 @@ rte_eth_dev_start(uint16_t port_id)
>>> return 0;
>>> }
>>>
>>> -void
>>> +int
>>> rte_eth_dev_stop(uint16_t port_id)
>>> {
>>> struct rte_eth_dev *dev;
>>>
>>> - RTE_ETH_VALID_PORTID_OR_RET(port_id);
>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> dev = &rte_eth_devices[port_id];
>>>
>>> - RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
>>> + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_stop, -ENOTSUP);
>>>
>>> if (dev->data->dev_started == 0) {
>>> RTE_ETHDEV_LOG(INFO,
>>> "Device with port_id=%"PRIu16" already stopped\n",
>>> port_id);
>>> - return;
>>> + return 0;
>>> }
>>>
>>> dev->data->dev_started = 0;
>>> (*dev->dev_ops->dev_stop)(dev);
>>> rte_ethdev_trace_stop(port_id);
>>> +
>>> + return 0;
>>> }
>>>
>>> int
>>> @@ -1783,7 +1791,12 @@ rte_eth_dev_reset(uint16_t port_id)
>>>
>>> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
>>>
>>> - rte_eth_dev_stop(port_id);
>>> + ret = rte_eth_dev_stop(port_id);
>>> + if (ret != 0) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Failed to stop device (port %u) before reset: %s - ignore\n",
>>> + port_id, rte_strerror(-ret));
>> ABI change is 100%,
>> Just question the logic of continuing here to do a reset, if you failed to stop the device.
>
> In the case of reset I'm sure that we should ignore stop failure here.
> Typically reset is required to recover from bad state etc and stop
> failure in such condition could definitely happen.
Ok - thanks for that.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v5 00/22] Add DLB PMD
@ 2020-10-17 19:03 3% ` Timothy McDaniel
2020-10-23 18:32 3% ` [dpdk-dev] [PATCH v6 00/23] " Timothy McDaniel
1 sibling, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-17 19:03 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, jerinj
The following patch series adds support for a new eventdev PMD. The DLB
PMD adds support for the Intel Dynamic Load Balancer (DLB) hardware.
The DLB is a PCIe device that provides load-balanced, prioritized
scheduling of core-to-core communication. The device consists of
queues and arbiters that connect producer and consumer cores, and
implements load-balanced queueing features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e. multi-producer/single-consumer)
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- Queue element reordering feature allowing ordered load-balanced
distribution.
The DLB hardware supports both load balanced and directed ports and
queues. Unlike other eventdev devices already in the repo, not all
DLB ports and queues are equally capable. In particular, directed
ports are limited to a single link, and must be connected to a directed
queue.
Additionally, even though LDB ports may link multiple queues, the
number of queues that may be linked is limited by hardware. Another
difference is that DLB does not have a straightforward way of carrying
the flow_id in the queue elements (QE) that the hardware operates on.
While reviewing the code, please be aware that this PMD has full
control over the DLB hardware. Intel will be extending the DLB PMD
in the future (not as part of this first series) with a mode that we
refer to as the bifurcated PMD. The bifurcated PMD communicates with a
kernel driver to configure the device, ports, and queues, and memory
maps device MMIO so datapath operations occur purely in user-space.
The framework to support both the PF PMD and bifurcated PMD exists in
this patchset, and is why the iface.[ch] layer is present.
Major changes in v5 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- implement changes requested in code reviews by Gage Eads and Mike Chen
- fix a memzone leak
- convert to use eal rte-cpuflags patch from Liang Ma
Major changes in v4 after dpdk reviews and additional internal reviews
by colleagues at Intel:
================
- Remove make infrastructure
- shared code (pf/base) is now added incrementally
- flexible interface (iface.[ch]) is now added incrementally
- removed calls to rte_panic
- do not call pthread_create directly
- remove unused internal API, os_time
- convert rte_atomic to __atomic builtins
- broke out eventdev ABI changes, test/api changes, and new internal PCI
named probe API
- relocated enqueue logic to enqueue patch
Major Changes in V3:
================
- Fixed a memory corruption issue due to not allocating enough CQ
memory for depths < 8. Hardware requires minimum allocation to be
at least 8 entries.
- Address review comments from Gage and Mattias.
- Remove versioning
- minor formatting changes
Major changes in V2:
================
- Correct ABI break that was present in V1.
- Address some of the review comments received from Mattias.
I will address the remaining items identified by Mattias in the next
patch delivery.
- General code cleanup based on internal code reviews
Depends-on: patch-79539 ("eal: add new x86 cpuid support for WAITPKG")
Timothy McDaniel (22):
event/dlb: add documentation and meson infrastructure
event/dlb: add dynamic logging
event/dlb: add private data structures and constants
event/dlb: add definitions shared with LKM or shared code
event/dlb: add inline functions
event/dlb: add probe
event/dlb: add xstats
event/dlb: add infos get and configure
event/dlb: add queue and port default conf
event/dlb: add queue setup
event/dlb: add port setup
event/dlb: add port link
event/dlb: add port unlink and port unlinks in progress
event/dlb: add eventdev start
event/dlb: add enqueue and its burst variants
event/dlb: add dequeue and its burst variants
event/dlb: add eventdev stop and close
event/dlb: add PMD's token pop public interface
event/dlb: add PMD self-tests
event/dlb: add queue and port release
event/dlb: add timeout ticks entry point
doc: Add new DLB eventdev driver to relnotes
MAINTAINERS | 5 +
app/test/test_eventdev.c | 7 +
config/rte_config.h | 8 +-
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/dlb.rst | 341 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/event/dlb/dlb.c | 4129 ++++++++++++++
drivers/event/dlb/dlb_iface.c | 79 +
drivers/event/dlb/dlb_iface.h | 82 +
drivers/event/dlb/dlb_inline_fns.h | 79 +
drivers/event/dlb/dlb_log.h | 25 +
drivers/event/dlb/dlb_priv.h | 513 ++
drivers/event/dlb/dlb_selftest.c | 1551 +++++
drivers/event/dlb/dlb_user.h | 814 +++
drivers/event/dlb/dlb_xstats.c | 1222 ++++
drivers/event/dlb/meson.build | 15 +
drivers/event/dlb/pf/base/dlb_hw_types.h | 334 ++
drivers/event/dlb/pf/base/dlb_osdep.h | 326 ++
drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 441 ++
drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
drivers/event/dlb/pf/base/dlb_regs.h | 2368 ++++++++
drivers/event/dlb/pf/base/dlb_resource.c | 6902 +++++++++++++++++++++++
drivers/event/dlb/pf/base/dlb_resource.h | 876 +++
drivers/event/dlb/pf/dlb_main.c | 591 ++
drivers/event/dlb/pf/dlb_main.h | 52 +
drivers/event/dlb/pf/dlb_pf.c | 746 +++
drivers/event/dlb/rte_pmd_dlb.c | 38 +
drivers/event/dlb/rte_pmd_dlb.h | 72 +
drivers/event/dlb/rte_pmd_dlb_event_version.map | 9 +
drivers/event/meson.build | 4 +
32 files changed, 21797 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/eventdevs/dlb.rst
create mode 100644 drivers/event/dlb/dlb.c
create mode 100644 drivers/event/dlb/dlb_iface.c
create mode 100644 drivers/event/dlb/dlb_iface.h
create mode 100644 drivers/event/dlb/dlb_inline_fns.h
create mode 100644 drivers/event/dlb/dlb_log.h
create mode 100644 drivers/event/dlb/dlb_priv.h
create mode 100644 drivers/event/dlb/dlb_selftest.c
create mode 100644 drivers/event/dlb/dlb_user.h
create mode 100644 drivers/event/dlb/dlb_xstats.c
create mode 100644 drivers/event/dlb/meson.build
create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
create mode 100644 drivers/event/dlb/pf/dlb_main.c
create mode 100644 drivers/event/dlb/pf/dlb_main.h
create mode 100644 drivers/event/dlb/pf/dlb_pf.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.c
create mode 100644 drivers/event/dlb/rte_pmd_dlb.h
create mode 100644 drivers/event/dlb/rte_pmd_dlb_event_version.map
--
2.6.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int
2020-10-16 11:20 3% ` Kinsella, Ray
@ 2020-10-16 17:13 0% ` Andrew Rybchenko
2020-10-19 9:37 0% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-10-16 17:13 UTC (permalink / raw)
To: Kinsella, Ray, Andrew Rybchenko, Neil Horman, Thomas Monjalon,
Ferruh Yigit
Cc: dev, Ivan Ilchenko
On 10/16/20 2:20 PM, Kinsella, Ray wrote:
>
> On 15/10/2020 14:30, Andrew Rybchenko wrote:
>> From: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
>>
>> Change rte_eth_dev_stop() return value from void to int
>> and return negative errno values in case of error conditions.
>> Also update the usage of the function in ethdev according to
>> the new return type.
>>
>> Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>> ---
>> doc/guides/rel_notes/deprecation.rst | 1 -
>> doc/guides/rel_notes/release_20_11.rst | 3 +++
>> lib/librte_ethdev/rte_ethdev.c | 27 +++++++++++++++++++-------
>> lib/librte_ethdev/rte_ethdev.h | 5 ++++-
>> 4 files changed, 27 insertions(+), 9 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index d1f5ed39db..2e04e24374 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -127,7 +127,6 @@ Deprecation Notices
>> negative errno values to indicate various error conditions (e.g.
>> invalid port ID, unsupported operation, failed operation):
>>
>> - - ``rte_eth_dev_stop``
>> - ``rte_eth_dev_close``
>>
>> * ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
>> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
>> index f8686a50db..c8c30937fa 100644
>> --- a/doc/guides/rel_notes/release_20_11.rst
>> +++ b/doc/guides/rel_notes/release_20_11.rst
>> @@ -355,6 +355,9 @@ API Changes
>> * vhost: Add a new function ``rte_vhost_crypto_driver_start`` to be called
>> instead of ``rte_vhost_driver_start`` by crypto applications.
>>
>> +* ethdev: changed ``rte_eth_dev_stop`` return value from ``void`` to
>> + ``int`` to provide a way to report various error conditions.
>> +
>>
>> ABI Changes
>> -----------
>> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
>> index d9b82df073..b8cf04ef4d 100644
>> --- a/lib/librte_ethdev/rte_ethdev.c
>> +++ b/lib/librte_ethdev/rte_ethdev.c
>> @@ -1661,7 +1661,7 @@ rte_eth_dev_start(uint16_t port_id)
>> struct rte_eth_dev *dev;
>> struct rte_eth_dev_info dev_info;
>> int diag;
>> - int ret;
>> + int ret, ret_stop;
>>
>> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
>>
>> @@ -1695,7 +1695,13 @@ rte_eth_dev_start(uint16_t port_id)
>> RTE_ETHDEV_LOG(ERR,
>> "Error during restoring configuration for device (port %u): %s\n",
>> port_id, rte_strerror(-ret));
>> - rte_eth_dev_stop(port_id);
>> + ret_stop = rte_eth_dev_stop(port_id);
>> + if (ret_stop != 0) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "Failed to stop device (port %u): %s\n",
>> + port_id, rte_strerror(-ret_stop));
>> + }
>> +
>> return ret;
>> }
>>
>> @@ -1708,26 +1714,28 @@ rte_eth_dev_start(uint16_t port_id)
>> return 0;
>> }
>>
>> -void
>> +int
>> rte_eth_dev_stop(uint16_t port_id)
>> {
>> struct rte_eth_dev *dev;
>>
>> - RTE_ETH_VALID_PORTID_OR_RET(port_id);
>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>> dev = &rte_eth_devices[port_id];
>>
>> - RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
>> + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_stop, -ENOTSUP);
>>
>> if (dev->data->dev_started == 0) {
>> RTE_ETHDEV_LOG(INFO,
>> "Device with port_id=%"PRIu16" already stopped\n",
>> port_id);
>> - return;
>> + return 0;
>> }
>>
>> dev->data->dev_started = 0;
>> (*dev->dev_ops->dev_stop)(dev);
>> rte_ethdev_trace_stop(port_id);
>> +
>> + return 0;
>> }
>>
>> int
>> @@ -1783,7 +1791,12 @@ rte_eth_dev_reset(uint16_t port_id)
>>
>> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
>>
>> - rte_eth_dev_stop(port_id);
>> + ret = rte_eth_dev_stop(port_id);
>> + if (ret != 0) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "Failed to stop device (port %u) before reset: %s - ignore\n",
>> + port_id, rte_strerror(-ret));
> ABI change is 100%,
> Just question the logic of continuing here to do a reset, if you failed to stop the device.
In the case of reset I'm sure that we should ignore a stop failure here.
Typically reset is required to recover from a bad state, and a stop
failure in such a condition could definitely happen.
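For readers updating applications against this change, a minimal sketch of
the new call pattern (the port_id setup and surrounding loop are assumed,
not part of the patch):

	/* rte_eth_dev_stop() now reports errors instead of returning void */
	int ret = rte_eth_dev_stop(port_id);

	if (ret != 0)
		printf("stop failed (port %u): %s\n",
		       port_id, rte_strerror(-ret));

The possible errno values follow the patch above: -ENODEV for an invalid
port_id and -ENOTSUP when the driver does not implement dev_stop.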
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v8 2/3] ethdev: tunnel offload model
@ 2020-10-16 15:41 3% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-16 15:41 UTC (permalink / raw)
To: Gregory Etelson, dev
Cc: matan, rasland, elibr, ozsh, asafp, Eli Britstein, Ori Kam,
Viacheslav Ovsiienko, Neil Horman, Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko
On 16/10/2020 13:51, Gregory Etelson wrote:
> From: Eli Britstein <elibr@mellanox.com>
>
> rte_flow API provides the building blocks for vendor-agnostic flow
> classification offloads. The rte_flow "patterns" and "actions"
> primitives are fine-grained, thus enabling DPDK applications the
> flexibility to offload network stacks and complex pipelines.
> Applications wishing to offload tunneled traffic are required to use
> the rte_flow primitives, such as group, meta, mark, tag, and others to
> model their high-level objects. The hardware model design for
> high-level software objects is not trivial. Furthermore, an optimal
> design is often vendor-specific.
>
> When hardware offloads tunneled traffic in multi-group logic,
> partially offloaded packets may arrive to the application after they
> were modified in hardware. In this case, the application may need to
> restore the original packet headers. Consider the following sequence:
> The application decaps a packet in one group and jumps to a second
> group where it tries to match on a 5-tuple, that will miss and send
> the packet to the application. In this case, the application does not
> receive the original packet but a modified one. Also, in this case,
> the application cannot match on the outer header fields, such as VXLAN
> vni and 5-tuple.
>
> There are several possible ways to use rte_flow "patterns" and
> "actions" to resolve the issues above. For example:
> 1 Mapping headers to a hardware registers using the
> rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
> 2 Apply the decap only at the last offload stage after all the
> "patterns" were matched and the packet will be fully offloaded.
> Every approach has its pros and cons and is highly dependent on the
> hardware vendor. For example, some hardware may have a limited number
> of registers while other hardware could not support inner actions and
> must decap before accessing inner headers.
>
> The tunnel offload model resolves these issues. The model goals are:
> 1 Provide a unified application API to offload tunneled traffic that
> is capable to match on outer headers after decap.
> 2 Allow the application to restore the outer header of partially
> offloaded packets.
>
> The tunnel offload model does not introduce new elements to the
> existing RTE flow model and is implemented as a set of helper
> functions.
>
> For the application to work with the tunnel offload API it
> has to adjust flow rules in multi-table tunnel offload in the
> following way:
> 1 Remove explicit call to decap action and replace it with PMD actions
> obtained from rte_flow_tunnel_decap_and_set() helper.
> 2 Add PMD items obtained from rte_flow_tunnel_match() helper to all
> other rules in the tunnel offload sequence.
>
> VXLAN Code example:
>
> Assume application needs to do inner NAT on the VXLAN packet.
> The first rule in group 0:
>
> flow create <port id> ingress group 0
> pattern eth / ipv4 / udp dst is 4789 / vxlan / end
> actions {pmd actions} / jump group 3 / end
>
> The first VXLAN packet that arrives matches the rule in group 0 and
> jumps to group 3. In group 3 the packet will miss since there is no
> flow to match and will be sent to the application. Application will
> call rte_flow_get_restore_info() to get the packet outer header.
>
> Application will insert a new rule in group 3 to match outer and inner
> headers:
>
> flow create <port id> ingress group 3
> pattern {pmd items} / eth / ipv4 dst is 172.10.10.1 /
> udp dst 4789 / vxlan vni is 10 /
> ipv4 dst is 184.1.2.3 / end
> actions set_ipv4_dst 186.1.1.1 / queue index 3 / end
>
> Resulting of the rules will be that VXLAN packet with vni=10, outer
> IPv4 dst=172.10.10.1 and inner IPv4 dst=184.1.2.3 will be received
> decapped on queue 3 with IPv4 dst=186.1.1.1
>
> Note: The packet in group 3 is considered decapped. All actions in
> that group will be done on the header that was inner before decap. The
> application may specify an outer header to be matched on. It's PMD
> responsibility to translate these items to outer metadata.
>
> API usage:
>
> /**
> * 1. Initiate RTE flow tunnel object
> */
> const struct rte_flow_tunnel tunnel = {
> .type = RTE_FLOW_ITEM_TYPE_VXLAN,
> .tun_id = 10,
> }
>
> /**
> * 2. Obtain PMD tunnel actions
> *
> * pmd_actions is an intermediate variable application uses to
> * compile actions array
> */
> struct rte_flow_action **pmd_actions;
> rte_flow_tunnel_decap_and_set(&tunnel, &pmd_actions,
> &num_pmd_actions, &error);
> /**
> * 3. offload the first rule
> * matching on VXLAN traffic and jumps to group 3
> * (implicitly decaps packet)
> */
> app_actions = jump group 3
> rule_items = app_items; /** eth / ipv4 / udp / vxlan */
> rule_actions = { pmd_actions, app_actions };
> attr.group = 0;
> flow_1 = rte_flow_create(port_id, &attr,
> rule_items, rule_actions, &error);
>
> /**
> * 4. after flow creation application does not need to keep the
> * tunnel action resources.
> */
> rte_flow_tunnel_action_release(port_id, pmd_actions,
> num_pmd_actions);
> /**
> * 5. After partially offloaded packet miss because there was no
> * matching rule handle miss on group 3
> */
> struct rte_flow_restore_info info;
> rte_flow_get_restore_info(port_id, mbuf, &info, &error);
>
> /**
> * 6. Offload NAT rule:
> */
> app_items = { eth / ipv4 dst is 172.10.10.1 / udp dst 4789 /
> vxlan vni is 10 / ipv4 dst is 184.1.2.3 }
> app_actions = { set_ipv4_dst 186.1.1.1 / queue index 3 }
>
> rte_flow_tunnel_match(&info.tunnel, &pmd_items,
> &num_pmd_items, &error);
> rule_items = {pmd_items, app_items};
> rule_actions = app_actions;
> attr.group = info.group_id;
> flow_2 = rte_flow_create(port_id, &attr,
> rule_items, rule_actions, &error);
>
> /**
> * 7. Release PMD items after rule creation
> */
> rte_flow_tunnel_item_release(port_id,
> pmd_items, num_pmd_items);
>
> References
> 1. https://mails.dpdk.org/archives/dev/2020-June/index.html
>
> Signed-off-by: Eli Britstein <elibr@mellanox.com>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
>
> ---
> v5:
> * rebase to next-net
>
> v6:
> * update the patch comment
> * update tunnel offload section in rte_flow.rst
> ---
> doc/guides/prog_guide/rte_flow.rst | 78 +++++++++
> doc/guides/rel_notes/release_20_11.rst | 5 +
> lib/librte_ethdev/rte_ethdev_version.map | 5 +
> lib/librte_ethdev/rte_flow.c | 112 +++++++++++++
> lib/librte_ethdev/rte_flow.h | 195 +++++++++++++++++++++++
> lib/librte_ethdev/rte_flow_driver.h | 32 ++++
> 6 files changed, 427 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 7fb5ec9059..8dc048c6f4 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -3131,6 +3131,84 @@ operations include:
> - Duplication of a complete flow rule description.
> - Pattern item or action name retrieval.
>
> +Tunneled traffic offload
> +~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +rte_flow API provides the building blocks for vendor-agnostic flow
> +classification offloads. The rte_flow "patterns" and "actions"
> +primitives are fine-grained, thus enabling DPDK applications the
> +flexibility to offload network stacks and complex pipelines.
> +Applications wishing to offload tunneled traffic are required to use
> +the rte_flow primitives, such as group, meta, mark, tag, and others to
> +model their high-level objects. The hardware model design for
> +high-level software objects is not trivial. Furthermore, an optimal
> +design is often vendor-specific.
> +
> +When hardware offloads tunneled traffic in multi-group logic,
> +partially offloaded packets may arrive to the application after they
> +were modified in hardware. In this case, the application may need to
> +restore the original packet headers. Consider the following sequence:
> +The application decaps a packet in one group and jumps to a second
> +group where it tries to match on a 5-tuple, that will miss and send
> +the packet to the application. In this case, the application does not
> +receive the original packet but a modified one. Also, in this case,
> +the application cannot match on the outer header fields, such as VXLAN
> +vni and 5-tuple.
> +
> +There are several possible ways to use rte_flow "patterns" and
> +"actions" to resolve the issues above. For example:
> +
> +1 Mapping headers to a hardware registers using the
> +rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
> +
> +2 Apply the decap only at the last offload stage after all the
> +"patterns" were matched and the packet will be fully offloaded.
> +
> +Every approach has its pros and cons and is highly dependent on the
> +hardware vendor. For example, some hardware may have a limited number
> +of registers while other hardware could not support inner actions and
> +must decap before accessing inner headers.
> +
> +The tunnel offload model resolves these issues. The model goals are:
> +
> +1 Provide a unified application API to offload tunneled traffic that
> +is capable to match on outer headers after decap.
> +
> +2 Allow the application to restore the outer header of partially
> +offloaded packets.
> +
> +The tunnel offload model does not introduce new elements to the
> +existing RTE flow model and is implemented as a set of helper
> +functions.
> +
> +For the application to work with the tunnel offload API it
> +has to adjust flow rules in multi-table tunnel offload in the
> +following way:
> +
> +1 Remove explicit call to decap action and replace it with PMD actions
> +obtained from rte_flow_tunnel_decap_and_set() helper.
> +
> +2 Add PMD items obtained from rte_flow_tunnel_match() helper to all
> +other rules in the tunnel offload sequence.
> +
> +The model requirements:
> +
> +Software application must initialize
> +rte_tunnel object with tunnel parameters before calling
> +rte_flow_tunnel_decap_set() & rte_flow_tunnel_match().
> +
> +PMD actions array obtained in rte_flow_tunnel_decap_set() must be
> +released by application with rte_flow_action_release() call.
> +
> +PMD items array obtained with rte_flow_tunnel_match() must be released
Should be rte_flow_tunnel_item_release?
> +by application with rte_flow_item_release() call. Application can
> +release PMD items and actions after rule was created. However, if the
> +application needs to create additional rule for the same tunnel it
> +will need to obtain PMD items again.
> +
> +Application cannot destroy rte_tunnel object before it releases all
> +PMD actions & PMD items referencing that tunnel.
> +
> Caveats
> -------
>
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index 9155b468d6..f125ce79dd 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -121,6 +121,11 @@ New Features
> * Flow rule verification was updated to accept private PMD
> items and actions.
>
> +* **Added generic API to offload tunneled traffic and restore missed packet.**
> +
> + * Added a new hardware independent helper API to RTE flow library that
> + offloads tunneled traffic and restores missed packets.
> +
> * **Updated Cisco enic driver.**
>
> * Added support for VF representors with single-queue Tx/Rx and flow API
> diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
> index f64c379ac2..8ddda2547f 100644
> --- a/lib/librte_ethdev/rte_ethdev_version.map
> +++ b/lib/librte_ethdev/rte_ethdev_version.map
> @@ -239,6 +239,11 @@ EXPERIMENTAL {
> rte_flow_shared_action_destroy;
> rte_flow_shared_action_query;
> rte_flow_shared_action_update;
> + rte_flow_tunnel_decap_set;
> + rte_flow_tunnel_match;
> + rte_flow_get_restore_info;
> + rte_flow_tunnel_action_decap_release;
> + rte_flow_tunnel_item_release;
> };
>
> INTERNAL {
> diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
> index b74ea5593a..380c5cae2c 100644
> --- a/lib/librte_ethdev/rte_flow.c
> +++ b/lib/librte_ethdev/rte_flow.c
> @@ -1143,3 +1143,115 @@ rte_flow_shared_action_query(uint16_t port_id,
> data, error);
> return flow_err(port_id, ret, error);
> }
> +
> +int
> +rte_flow_tunnel_decap_set(uint16_t port_id,
> + struct rte_flow_tunnel *tunnel,
> + struct rte_flow_action **actions,
> + uint32_t *num_of_actions,
> + struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> + if (unlikely(!ops))
> + return -rte_errno;
> + if (likely(!!ops->tunnel_decap_set)) {
> + return flow_err(port_id,
> + ops->tunnel_decap_set(dev, tunnel, actions,
> + num_of_actions, error),
> + error);
> + }
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_tunnel_match(uint16_t port_id,
> + struct rte_flow_tunnel *tunnel,
> + struct rte_flow_item **items,
> + uint32_t *num_of_items,
> + struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> + if (unlikely(!ops))
> + return -rte_errno;
> + if (likely(!!ops->tunnel_match)) {
> + return flow_err(port_id,
> + ops->tunnel_match(dev, tunnel, items,
> + num_of_items, error),
> + error);
> + }
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_get_restore_info(uint16_t port_id,
> + struct rte_mbuf *m,
> + struct rte_flow_restore_info *restore_info,
> + struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> + if (unlikely(!ops))
> + return -rte_errno;
> + if (likely(!!ops->get_restore_info)) {
> + return flow_err(port_id,
> + ops->get_restore_info(dev, m, restore_info,
> + error),
> + error);
> + }
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_tunnel_action_decap_release(uint16_t port_id,
> + struct rte_flow_action *actions,
> + uint32_t num_of_actions,
> + struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> + if (unlikely(!ops))
> + return -rte_errno;
> + if (likely(!!ops->action_release)) {
> + return flow_err(port_id,
> + ops->action_release(dev, actions,
> + num_of_actions, error),
> + error);
> + }
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_tunnel_item_release(uint16_t port_id,
> + struct rte_flow_item *items,
> + uint32_t num_of_items,
> + struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> + if (unlikely(!ops))
> + return -rte_errno;
> + if (likely(!!ops->item_release)) {
> + return flow_err(port_id,
> + ops->item_release(dev, items,
> + num_of_items, error),
> + error);
> + }
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOTSUP));
> +}
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 48395284b5..a8eac4deb8 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -3620,6 +3620,201 @@ rte_flow_shared_action_query(uint16_t port_id,
> void *data,
> struct rte_flow_error *error);
>
> +/* Tunnel has a type and the key information. */
> +struct rte_flow_tunnel {
> + /**
> + * Tunnel type, for example RTE_FLOW_ITEM_TYPE_VXLAN,
> + * RTE_FLOW_ITEM_TYPE_NVGRE etc.
> + */
> + enum rte_flow_item_type type;
> + uint64_t tun_id; /**< Tunnel identification. */
> +
> + RTE_STD_C11
> + union {
> + struct {
> + rte_be32_t src_addr; /**< IPv4 source address. */
> + rte_be32_t dst_addr; /**< IPv4 destination address. */
> + } ipv4;
> + struct {
> + uint8_t src_addr[16]; /**< IPv6 source address. */
> + uint8_t dst_addr[16]; /**< IPv6 destination address. */
> + } ipv6;
> + };
> + rte_be16_t tp_src; /**< Tunnel port source. */
> + rte_be16_t tp_dst; /**< Tunnel port destination. */
> + uint16_t tun_flags; /**< Tunnel flags. */
> +
> + bool is_ipv6; /**< True for valid IPv6 fields. Otherwise IPv4. */
> +
> + /**
> + * the following members are required to restore packet
> + * after miss
> + */
> + uint8_t tos; /**< TOS for IPv4, TC for IPv6. */
> + uint8_t ttl; /**< TTL for IPv4, HL for IPv6. */
> + uint32_t label; /**< Flow Label for IPv6. */
> +};
> +
> +/**
> + * Indicate that the packet has a tunnel.
> + */
> +#define RTE_FLOW_RESTORE_INFO_TUNNEL (1ULL << 0)
> +
> +/**
> + * Indicate that the packet has a non decapsulated tunnel header.
> + */
> +#define RTE_FLOW_RESTORE_INFO_ENCAPSULATED (1ULL << 1)
> +
> +/**
> + * Indicate that the packet has a group_id.
> + */
> +#define RTE_FLOW_RESTORE_INFO_GROUP_ID (1ULL << 2)
> +
> +/**
> + * Restore information structure to communicate the current packet processing
> + * state when some of the processing pipeline is done in hardware and should
> + * continue in software.
> + */
> +struct rte_flow_restore_info {
> + /**
> + * Bitwise flags (RTE_FLOW_RESTORE_INFO_*) to indicate validation of
> + * other fields in struct rte_flow_restore_info.
> + */
> + uint64_t flags;
> + uint32_t group_id; /**< Group ID where packed missed */
> + struct rte_flow_tunnel tunnel; /**< Tunnel information. */
> +};
> +
> +/**
> + * Allocate an array of actions to be used in rte_flow_create, to implement
> + * tunnel-decap-set for the given tunnel.
> + * Sample usage:
> + * actions vxlan_decap / tunnel-decap-set(tunnel properties) /
> + * jump group 0 / end
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param[in] tunnel
> + * Tunnel properties.
> + * @param[out] actions
> + * Array of actions to be allocated by the PMD. This array should be
> + * concatenated with the actions array provided to rte_flow_create.
> + * @param[out] num_of_actions
> + * Number of actions allocated.
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_tunnel_decap_set(uint16_t port_id,
> + struct rte_flow_tunnel *tunnel,
> + struct rte_flow_action **actions,
> + uint32_t *num_of_actions,
> + struct rte_flow_error *error);
> +
> +/**
> + * Allocate an array of items to be used in rte_flow_create, to implement
> + * tunnel-match for the given tunnel.
> + * Sample usage:
> + * pattern tunnel-match(tunnel properties) / outer-header-matches /
> + * inner-header-matches / end
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param[in] tunnel
> + * Tunnel properties.
> + * @param[out] items
> + * Array of items to be allocated by the PMD. This array should be
> + * concatenated with the items array provided to rte_flow_create.
> + * @param[out] num_of_items
> + * Number of items allocated.
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_tunnel_match(uint16_t port_id,
> + struct rte_flow_tunnel *tunnel,
> + struct rte_flow_item **items,
> + uint32_t *num_of_items,
> + struct rte_flow_error *error);
> +
> +/**
> + * Populate the current packet processing state, if exists, for the given mbuf.
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param[in] m
> + * Mbuf struct.
> + * @param[out] info
> + * Restore information. Upon success contains the HW state.
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_get_restore_info(uint16_t port_id,
> + struct rte_mbuf *m,
> + struct rte_flow_restore_info *info,
> + struct rte_flow_error *error);
> +
> +/**
> + * Release the action array as allocated by rte_flow_tunnel_decap_set.
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param[in] actions
> + * Array of actions to be released.
> + * @param[in] num_of_actions
> + * Number of elements in actions array.
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_tunnel_action_decap_release(uint16_t port_id,
> + struct rte_flow_action *actions,
> + uint32_t num_of_actions,
> + struct rte_flow_error *error);
> +
> +/**
> + * Release the item array as allocated by rte_flow_tunnel_match.
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param[in] items
> + * Array of items to be released.
> + * @param[in] num_of_items
> + * Number of elements in item array.
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_tunnel_item_release(uint16_t port_id,
> + struct rte_flow_item *items,
> + uint32_t num_of_items,
> + struct rte_flow_error *error);
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/librte_ethdev/rte_flow_driver.h b/lib/librte_ethdev/rte_flow_driver.h
> index 58f56b0262..bd5ffc0bb1 100644
> --- a/lib/librte_ethdev/rte_flow_driver.h
> +++ b/lib/librte_ethdev/rte_flow_driver.h
> @@ -131,6 +131,38 @@ struct rte_flow_ops {
> const struct rte_flow_shared_action *shared_action,
> void *data,
> struct rte_flow_error *error);
> + /** See rte_flow_tunnel_decap_set() */
> + int (*tunnel_decap_set)
> + (struct rte_eth_dev *dev,
> + struct rte_flow_tunnel *tunnel,
> + struct rte_flow_action **pmd_actions,
> + uint32_t *num_of_actions,
> + struct rte_flow_error *err);
> + /** See rte_flow_tunnel_match() */
> + int (*tunnel_match)
> + (struct rte_eth_dev *dev,
> + struct rte_flow_tunnel *tunnel,
> + struct rte_flow_item **pmd_items,
> + uint32_t *num_of_items,
> + struct rte_flow_error *err);
Should be rte_flow_get_restore_info
> + /** See rte_flow_get_rte_flow_restore_info() */
> + int (*get_restore_info)
> + (struct rte_eth_dev *dev,
> + struct rte_mbuf *m,
> + struct rte_flow_restore_info *info,
> + struct rte_flow_error *err);
Should be rte_flow_tunnel_action_decap_release
> + /** See rte_flow_action_tunnel_decap_release() */
> + int (*action_release)
> + (struct rte_eth_dev *dev,
> + struct rte_flow_action *pmd_actions,
> + uint32_t num_of_actions,
> + struct rte_flow_error *err);
Should be rte_flow_tunnel_item_release?
> + /** See rte_flow_item_release() */
> + int (*item_release)
> + (struct rte_eth_dev *dev,
> + struct rte_flow_item *pmd_items,
> + uint32_t num_of_items,
> + struct rte_flow_error *err);
> };
>
> /**
>
ABI Changes Acked-by: Ray Kinsella <mdr@ashroe.eu>
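As a companion to the numbered usage steps in the commit message, a minimal
sketch of the miss-handling path on the receive side (error handling
shortened; port_id and the received mbuf are assumed, the remaining names
come from this patch):

	struct rte_flow_restore_info info;
	struct rte_flow_error error;

	/* a partially offloaded packet missed in hardware and reached us */
	if (rte_flow_get_restore_info(port_id, mbuf, &info, &error) == 0 &&
	    (info.flags & RTE_FLOW_RESTORE_INFO_TUNNEL)) {
		/* info.tunnel describes the outer headers; when
		 * RTE_FLOW_RESTORE_INFO_GROUP_ID is also set, info.group_id
		 * is the group in which the packet missed. */
	}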
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v9 1/6] ethdev: introduce Rx buffer split
2020-10-16 11:21 4% ` Ferruh Yigit
@ 2020-10-16 13:08 0% ` Slava Ovsiienko
0 siblings, 0 replies; 200+ results
From: Slava Ovsiienko @ 2020-10-16 13:08 UTC (permalink / raw)
To: Ferruh Yigit, dev
Cc: NBU-Contact-Thomas Monjalon, stephen, olivier.matz, jerinjacobk,
maxime.coquelin, david.marchand, arybchenko
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, October 16, 2020 14:21
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; dev@dpdk.org
> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> stephen@networkplumber.org; olivier.matz@6wind.com;
> jerinjacobk@gmail.com; maxime.coquelin@redhat.com;
> david.marchand@redhat.com; arybchenko@solarflare.com
> Subject: Re: [PATCH v9 1/6] ethdev: introduce Rx buffer split
>
> On 10/16/2020 11:22 AM, Viacheslav Ovsiienko wrote:
> > The DPDK datapath in the transmit direction is very flexible.
> > An application can build the multi-segment packet and manage almost
> > all data aspects - the memory pools where segments are allocated from,
> > the segment lengths, the memory attributes like external buffers,
> > registered for DMA, etc.
> >
[snip]
> > +
> > +* **[uses] rte_eth_rxconf,rte_eth_rxmode**:
> ``offloads:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> > +* **[uses] rte_eth_rxconf**: ``rx_conf.rx_seg, rx_conf.rx_nseg``.
> > +* **[implements] datapath**: ``Buffer Split functionality``.
> > +* **[provides] rte_eth_dev_info**:
> ``rx_offload_capa:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> > +* **[provides] eth_dev_ops**: ``rxq_info_get:buffer_split``.
>
> Previously you mentioned this is because 'rxq_info_get()' can provide
> buffer_split information, but with the current implementation it doesn't, and there
> is no field in the struct to report such.
>
> I suggest either add it now, while you can :) [with a techboard approval], or
> remove above documentation of it.
>
> <...>
>
Mmm, I messed up with rx_burst_mode_get(). Will fix, thanks.
> > /**
> > + * Ethernet device Rx buffer segmentation capabilities.
> > + */
> > +__rte_experimental
> > +struct rte_eth_rxseg_capa {
> > + __extension__
> > + uint32_t max_nseg:16; /**< Maximum amount of segments to split. */
> > + uint32_t multi_pools:1; /**< Supports receiving to multiple pools.*/
> > + uint32_t offset_allowed:1; /**< Supports buffer offsets. */
> > + uint32_t offset_align_log2:4; /**< Required offset alignment. */ };
>
> Now we are fiddling with details, but,
>
> I am not a fan of the bitfields [1], but I assumed Thomas' request was to enable
> expanding capabilities later without breaking the ABI, which makes sense and
> suits this kind of capability struct; if this is correct, why make the 'max_nseg'
> a bitfield too?
>
> Why not,
> uint16_t max_nseg;
> uint16_t multi_pools:1
> uint16_t offset_allowed:1;
> uint16_t offset_align_log2:4;
> < This still leaves 10 bits to expand without ABI break>
>
> [1]
> unless in a very space-critical use case; otherwise they just add more code to extract
> the same value, and are not as simple as a plain variable :)
It seems not to be the case; here is the listing from rte_eth_rx_queue_check_split():
8963 4b67 440FB784 movzwl 188(%rsp),%r8d ; [SO] max_nseg is fetched as regular uint16_t
8963 24BC0000
8963 00
8964 4b70 664539C1 cmpw %r8w,%r9w
8965 4b74 0F87A402 ja .L1749
8965 0000
I would prefer to keep uint32_t - it is more generic, IMO.
With best regards, Slava
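For context, a minimal sketch of how an application would consume these
capability bits before configuring a split (the rx_seg_capa field name is
from this patch; wanted_nseg, use_offsets and the surrounding setup are
assumed):

	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.rx_seg_capa.max_nseg < wanted_nseg ||
	    (use_offsets && !dev_info.rx_seg_capa.offset_allowed)) {
		/* split not supported as requested, fall back to a
		 * single-pool Rx configuration */
	}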
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v9 1/6] ethdev: introduce Rx buffer split
@ 2020-10-16 11:21 4% ` Ferruh Yigit
2020-10-16 13:08 0% ` Slava Ovsiienko
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-10-16 11:21 UTC (permalink / raw)
To: Viacheslav Ovsiienko, dev
Cc: thomas, stephen, olivier.matz, jerinjacobk, maxime.coquelin,
david.marchand, arybchenko
On 10/16/2020 11:22 AM, Viacheslav Ovsiienko wrote:
> The DPDK datapath in the transmit direction is very flexible.
> An application can build the multi-segment packet and manage
> almost all data aspects - the memory pools where segments
> are allocated from, the segment lengths, the memory attributes
> like external buffers, registered for DMA, etc.
>
> In the receiving direction, the datapath is much less flexible,
> an application can only specify the memory pool to configure the
> receiving queue and nothing more. In order to extend receiving
> datapath capabilities it is proposed to add the way to provide
> extended information how to split the packets being received.
>
> The new offload flag RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT in device
> capabilities is introduced to present the way for PMD to report to
> application about supporting Rx packet split to configurable
> segments. Prior invoking the rte_eth_rx_queue_setup() routine
> application should check RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT flag.
>
> The following structure is introduced to specify the Rx packet
> segment for RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT offload:
>
> struct rte_eth_rxseg_split {
>
> struct rte_mempool *mp; /* memory pools to allocate segment from */
> uint16_t length; /* segment maximal data length,
> configures "split point" */
> uint16_t offset; /* data offset from beginning
> of mbuf data buffer */
> uint32_t reserved; /* reserved field */
> };
>
> The segment descriptions are added to the rte_eth_rxconf structure:
> rx_seg - pointer the array of segment descriptions, each element
> describes the memory pool, maximal data length, initial
> data offset from the beginning of data buffer in mbuf.
> This array allows to specify the different settings for
> each segment in individual fashion.
> rx_nseg - number of elements in the array
>
> If the extended segment descriptions are provided with these new
> fields, the mp parameter of the rte_eth_rx_queue_setup must be
> specified as NULL to avoid ambiguity.
>
> There are two options to specify Rx buffer configuration:
> - mp is not NULL, rx_conf.rx_seg is NULL, rx_conf.rx_nseg is zero,
> it is compatible configuration, follows existing implementation,
> provides single pool and no description for segment sizes
> and offsets.
> - mp is NULL, rx_conf.rx_seg is not NULL, rx_conf.rx_nseg is not
> zero, it provides the extended configuration, individually for
> each segment.
>
> If the Rx queue is configured with the new settings, the packets being
> received will be split into multiple segments pushed to the mbufs
> with specified attributes. The PMD will split the received packets
> into multiple segments according to the specification in the
> description array.
>
> For example, let's suppose we configured the Rx queue with the
> following segments:
> seg0 - pool0, len0=14B, off0=2
> seg1 - pool1, len1=20B, off1=128B
> seg2 - pool2, len2=20B, off2=0B
> seg3 - pool3, len3=512B, off3=0B
>
> The packet 46 bytes long will look like the following:
> seg0 - 14B long @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
> seg1 - 20B long @ 128 in mbuf from pool1
> seg2 - 12B long @ 0 in mbuf from pool2
>
> The packet 1500 bytes long will look like the following:
> seg0 - 14B @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
> seg1 - 20B @ 128 in mbuf from pool1
> seg2 - 20B @ 0 in mbuf from pool2
> seg3 - 512B @ 0 in mbuf from pool3
> seg4 - 512B @ 0 in mbuf from pool3
> seg5 - 422B @ 0 in mbuf from pool3
>
> The offload RTE_ETH_RX_OFFLOAD_SCATTER must be present and
> configured to support new buffer split feature (if rx_nseg
> is greater than one).
>
> The split limitations imposed by underlying PMD is reported
> in the new introduced rte_eth_dev_info->rx_seg_capa field.
>
> The new approach would allow splitting the ingress packets into
> multiple parts pushed to the memory with different attributes.
> For example, the packet headers can be pushed to the embedded
> data buffers within mbufs and the application data into
> the external buffers attached to mbufs allocated from the
> different memory pools. The memory attributes for the split
> parts may differ either - for example the application data
> may be pushed into the external memory located on the dedicated
> physical device, say GPU or NVMe. This would improve the DPDK
> receiving datapath flexibility with preserving compatibility
> with existing API.
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
<...>
> +.. _nic_features_buffer_split:
> +
> +Buffer Split on Rx
> +------------------
> +
> +Scatters the packets being received on specified boundaries to segmented mbufs.
> +
> +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> +* **[uses] rte_eth_rxconf**: ``rx_conf.rx_seg, rx_conf.rx_nseg``.
> +* **[implements] datapath**: ``Buffer Split functionality``.
> +* **[provides] rte_eth_dev_info**: ``rx_offload_capa:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``.
> +* **[provides] eth_dev_ops**: ``rxq_info_get:buffer_split``.
Previously you mentioned this is because 'rxq_info_get()' can provide
buffer_split information, but with the current implementation it doesn't, and there
is no field in the struct to report such.
I suggest either add it now, while you can :) [with a techboard approval], or
remove above documentation of it.
<...>
> /**
> + * Ethernet device Rx buffer segmentation capabilities.
> + */
> +__rte_experimental
> +struct rte_eth_rxseg_capa {
> + __extension__
> + uint32_t max_nseg:16; /**< Maximum amount of segments to split. */
> + uint32_t multi_pools:1; /**< Supports receiving to multiple pools.*/
> + uint32_t offset_allowed:1; /**< Supports buffer offsets. */
> + uint32_t offset_align_log2:4; /**< Required offset alignment. */
> +};
Now we are fiddling with details, but,
I am not a fan of the bitfields [1], but I assumed Thomas' request was to enable
expanding capabilities later without breaking the ABI, which makes sense and
suits this kind of capability struct; if this is correct, why make the
'max_nseg' a bitfield too?
Why not,
uint16_t max_nseg;
uint16_t multi_pools:1
uint16_t offset_allowed:1;
uint16_t offset_align_log2:4;
< This still leaves 10 bits to expand without ABI break>
[1]
unless in a very space-critical use case; otherwise they just add more code to extract
the same value, and are not as simple as a plain variable :)
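To make the configuration contract above concrete, a minimal sketch of the
extended queue setup (pool creation and error handling omitted; the exact
type behind rx_conf.rx_seg follows the final patch, here it is assumed to
accept the rte_eth_rxseg_split descriptions directly):

	struct rte_eth_rxseg_split seg[2] = {
		{ .mp = hdr_pool,  .length = 64,   .offset = 0 },
		{ .mp = data_pool, .length = 2048, .offset = 0 },
	};
	struct rte_eth_rxconf rx_conf = {
		.rx_nseg = 2,
		.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT |
			    RTE_ETH_RX_OFFLOAD_SCATTER,
	};

	rx_conf.rx_seg = (void *)seg;	/* extended descriptions */
	/* the mp argument must be NULL when rx_seg/rx_nseg are set */
	rte_eth_rx_queue_setup(port_id, 0, nb_rx_desc, socket_id,
			       &rx_conf, NULL);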
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int
2020-10-15 13:30 4% ` [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int Andrew Rybchenko
2020-10-16 9:22 0% ` Ferruh Yigit
@ 2020-10-16 11:20 3% ` Kinsella, Ray
2020-10-16 17:13 0% ` Andrew Rybchenko
1 sibling, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-10-16 11:20 UTC (permalink / raw)
To: Andrew Rybchenko, Neil Horman, Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko
Cc: dev, Ivan Ilchenko
On 15/10/2020 14:30, Andrew Rybchenko wrote:
> From: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
>
> Change rte_eth_dev_stop() return value from void to int
> and return negative errno values in case of error conditions.
> Also update the usage of the function in ethdev according to
> the new return type.
>
> Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> doc/guides/rel_notes/deprecation.rst | 1 -
> doc/guides/rel_notes/release_20_11.rst | 3 +++
> lib/librte_ethdev/rte_ethdev.c | 27 +++++++++++++++++++-------
> lib/librte_ethdev/rte_ethdev.h | 5 ++++-
> 4 files changed, 27 insertions(+), 9 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index d1f5ed39db..2e04e24374 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -127,7 +127,6 @@ Deprecation Notices
> negative errno values to indicate various error conditions (e.g.
> invalid port ID, unsupported operation, failed operation):
>
> - - ``rte_eth_dev_stop``
> - ``rte_eth_dev_close``
>
> * ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index f8686a50db..c8c30937fa 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -355,6 +355,9 @@ API Changes
> * vhost: Add a new function ``rte_vhost_crypto_driver_start`` to be called
> instead of ``rte_vhost_driver_start`` by crypto applications.
>
> +* ethdev: changed ``rte_eth_dev_stop`` return value from ``void`` to
> + ``int`` to provide a way to report various error conditions.
> +
>
> ABI Changes
> -----------
> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
> index d9b82df073..b8cf04ef4d 100644
> --- a/lib/librte_ethdev/rte_ethdev.c
> +++ b/lib/librte_ethdev/rte_ethdev.c
> @@ -1661,7 +1661,7 @@ rte_eth_dev_start(uint16_t port_id)
> struct rte_eth_dev *dev;
> struct rte_eth_dev_info dev_info;
> int diag;
> - int ret;
> + int ret, ret_stop;
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
>
> @@ -1695,7 +1695,13 @@ rte_eth_dev_start(uint16_t port_id)
> RTE_ETHDEV_LOG(ERR,
> "Error during restoring configuration for device (port %u): %s\n",
> port_id, rte_strerror(-ret));
> - rte_eth_dev_stop(port_id);
> + ret_stop = rte_eth_dev_stop(port_id);
> + if (ret_stop != 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "Failed to stop device (port %u): %s\n",
> + port_id, rte_strerror(-ret_stop));
> + }
> +
> return ret;
> }
>
> @@ -1708,26 +1714,28 @@ rte_eth_dev_start(uint16_t port_id)
> return 0;
> }
>
> -void
> +int
> rte_eth_dev_stop(uint16_t port_id)
> {
> struct rte_eth_dev *dev;
>
> - RTE_ETH_VALID_PORTID_OR_RET(port_id);
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> dev = &rte_eth_devices[port_id];
>
> - RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
> + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_stop, -ENOTSUP);
>
> if (dev->data->dev_started == 0) {
> RTE_ETHDEV_LOG(INFO,
> "Device with port_id=%"PRIu16" already stopped\n",
> port_id);
> - return;
> + return 0;
> }
>
> dev->data->dev_started = 0;
> (*dev->dev_ops->dev_stop)(dev);
> rte_ethdev_trace_stop(port_id);
> +
> + return 0;
> }
>
> int
> @@ -1783,7 +1791,12 @@ rte_eth_dev_reset(uint16_t port_id)
>
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
>
> - rte_eth_dev_stop(port_id);
> + ret = rte_eth_dev_stop(port_id);
> + if (ret != 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "Failed to stop device (port %u) before reset: %s - ignore\n",
> + port_id, rte_strerror(-ret));
ABI change is 100%,
Just question the logic of continuing here to do a reset, if you failed to stop the device.
> + }
> ret = dev->dev_ops->dev_reset(dev);
>
> return eth_err(port_id, ret);
> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
> index a61ca115a0..b85861cf2b 100644
> --- a/lib/librte_ethdev/rte_ethdev.h
> +++ b/lib/librte_ethdev/rte_ethdev.h
> @@ -2277,8 +2277,11 @@ int rte_eth_dev_start(uint16_t port_id);
> *
> * @param port_id
> * The port identifier of the Ethernet device.
> + * @return
> + * - 0: Success, Ethernet device stopped.
> + * - <0: Error code of the driver device stop function.
> */
> -void rte_eth_dev_stop(uint16_t port_id);
> +int rte_eth_dev_stop(uint16_t port_id);
>
> /**
> * Link up an Ethernet device.
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int
2020-10-15 13:30 4% ` [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int Andrew Rybchenko
@ 2020-10-16 9:22 0% ` Ferruh Yigit
2020-10-16 11:20 3% ` Kinsella, Ray
1 sibling, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-10-16 9:22 UTC (permalink / raw)
To: Andrew Rybchenko, Ray Kinsella, Neil Horman, Thomas Monjalon,
Andrew Rybchenko
Cc: dev, Ivan Ilchenko
On 10/15/2020 2:30 PM, Andrew Rybchenko wrote:
> From: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
>
> Change rte_eth_dev_stop() return value from void to int
> and return negative errno values in case of error conditions.
> Also update the usage of the function in ethdev according to
> the new return type.
>
> Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
<...>
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index f8686a50db..c8c30937fa 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -355,6 +355,9 @@ API Changes
> * vhost: Add a new function ``rte_vhost_crypto_driver_start`` to be called
> instead of ``rte_vhost_driver_start`` by crypto applications.
>
> +* ethdev: changed ``rte_eth_dev_stop`` return value from ``void`` to
> + ``int`` to provide a way to report various error conditions.
> +
>
If there will be a new version, there is an ethdev block already in this
section; can you please move the paragraph up there?
> ABI Changes
> -----------
> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
> index d9b82df073..b8cf04ef4d 100644
> --- a/lib/librte_ethdev/rte_ethdev.c
> +++ b/lib/librte_ethdev/rte_ethdev.c
> @@ -1661,7 +1661,7 @@ rte_eth_dev_start(uint16_t port_id)
> struct rte_eth_dev *dev;
> struct rte_eth_dev_info dev_info;
> int diag;
> - int ret;
> + int ret, ret_stop;
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
>
> @@ -1695,7 +1695,13 @@ rte_eth_dev_start(uint16_t port_id)
> RTE_ETHDEV_LOG(ERR,
> "Error during restoring configuration for device (port %u): %s\n",
> port_id, rte_strerror(-ret));
> - rte_eth_dev_stop(port_id);
> + ret_stop = rte_eth_dev_stop(port_id);
> + if (ret_stop != 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "Failed to stop device (port %u): %s\n",
> + port_id, rte_strerror(-ret_stop));
> + }
> +
Again, if there will be a new version anyway:
this is the 'rte_eth_dev_start()' function, yet the error log is "Failed to stop
device .." :)
What do you think about adding a little more detail, like "failed to stop back
on error" etc...
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] performance degradation with fpic
@ 2020-10-16 8:35 3% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-10-16 8:35 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Thomas Monjalon, Ali Alnubani, dev, Asaf Penso, david.marchand,
arybchenko, ferruh.yigit, honnappa.nagarahalli, jerinj
On Thu, Oct 15, 2020 at 02:44:49PM -0700, Stephen Hemminger wrote:
> On Thu, 15 Oct 2020 19:14:48 +0200
> Thomas Monjalon <thomas@monjalon.net> wrote:
>
> > 15/10/2020 19:08, Bruce Richardson:
> > > On Thu, Oct 15, 2020 at 04:00:44PM +0000, Ali Alnubani wrote:
> > > > We have been seeing in some cases that the DPDK forwarding performance
> > > > is up to 9% lower when DPDK is built as static with meson compared to a
> > > > build with makefiles.
> > > >
> > > > The same degradation can be reproduced with makefiles on older DPDK
> > > > releases when building with EXTRA_CFLAGS set to “-fPIC”; it can also be
> > > > resolved in meson when passing “pic: false” to meson’s static_library
> > > > call (more tweaking needs to be done to prevent building shared
> > > > libraries because this change breaks them).
> > [...]
> > > > Should we disable PIC in static builds?
> > >
> > > thanks for reporting, though it's strange that you see such a big impact.
> > > In my previous tests with i40e driver I never noticed a difference between
> > > make and meson builds, and I and some others here have been using meson
> > > builds for any performance work for over a year now. That being said let me
> > > reverify what I see on my end.
> > >
> > > In terms of solutions, disabling the -fPIC flag globally implies that we
> > > can no longer build static and shared libs from the same sources, so we
> > > would need to revert to doing either a static or a shared library build
> > > but not both. If the issue is limited to only some drivers or some cases,
> > > we can perhaps add in a build option to have no-fpic-static builds, to be
> > > used in a cases where it is problematic.
> >
> > I assume only some Rx/Tx functions are impacted.
> > We probably need such disabling option per-file.
> >
> > > However, at this point, I think we need a little more investigation. Is
> > > there any testing you can do to see if it's just in your driver, or in
> > > perhaps a mempool driver/lib that the issue appears, or if it's just a
> > > global slowdown? Do you see the impact with both clang and gcc? I'll
> > > retest things a bit tomorrow on my end to see what I see.
> >
> > Yes we need to know which libs or files are impacted by -fPIC.
>
> The issue is that all shared libraries need to be built with PIC.
> So it is a question of static vs shared library build.
Well, partially yes, but really using fPIC should only have a very small
difference in drivers. Therefore I'd like to know what's causing this
massive drop because, while disabling fPIC in the static builds (perhaps
per-component to avoid doubling the build time) will improve perf in the
static case, it will still leave a perf drop when a user switches to shared
libs. Since we want to move to a model where people are using shared
libraries and can update seamlessly due to constant ABI, I therefore think
we need to root cause this so we can fix the shared lib builds too - since
disabling fPIC is not an option there.
/Bruce
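For reference, the workaround Ali mentioned maps to a per-library tweak of
this shape in the meson build (a sketch only, not a committed option, and as
noted it breaks the shared-library variant built from the same sources):

	lib = static_library(libname, sources,
			include_directories: includes,
			dependencies: static_deps,
			pic: false)	# drop -fPIC from the static objects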
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v7 03/20] eal: rename lcore word choices
@ 2020-10-15 22:57 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-15 22:57 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Anatoly Burakov
Replace master lcore with main lcore and
replace slave lcore with worker lcore.
Keep the old functions and macros but mark them as deprecated
for this release.
The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as replacement.
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
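A before/after sketch of the launch loop this rename touches in typical
applications (illustration only; lcore_fn is a placeholder, the macro and
function names come from this patch):

	/* old names, kept but deprecated in this release */
	RTE_LCORE_FOREACH_SLAVE(lcore_id)
		rte_eal_remote_launch(lcore_fn, NULL, lcore_id);
	rte_eal_mp_remote_launch(lcore_fn, NULL, CALL_MASTER);

	/* new names */
	RTE_LCORE_FOREACH_WORKER(lcore_id)
		rte_eal_remote_launch(lcore_fn, NULL, lcore_id);
	rte_eal_mp_remote_launch(lcore_fn, NULL, CALL_MAIN);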
doc/guides/rel_notes/deprecation.rst | 19 -------
doc/guides/rel_notes/release_20_11.rst | 11 ++++
lib/librte_eal/common/eal_common_dynmem.c | 10 ++--
lib/librte_eal/common/eal_common_launch.c | 36 ++++++------
lib/librte_eal/common/eal_common_lcore.c | 8 +--
lib/librte_eal/common/eal_common_options.c | 64 ++++++++++++----------
lib/librte_eal/common/eal_options.h | 2 +
lib/librte_eal/common/eal_private.h | 6 +-
lib/librte_eal/common/rte_random.c | 2 +-
lib/librte_eal/common/rte_service.c | 2 +-
lib/librte_eal/freebsd/eal.c | 28 +++++-----
lib/librte_eal/freebsd/eal_thread.c | 32 +++++------
lib/librte_eal/include/rte_eal.h | 4 +-
lib/librte_eal/include/rte_eal_trace.h | 4 +-
lib/librte_eal/include/rte_launch.h | 60 ++++++++++----------
lib/librte_eal/include/rte_lcore.h | 35 ++++++++----
lib/librte_eal/linux/eal.c | 28 +++++-----
lib/librte_eal/linux/eal_memory.c | 10 ++--
lib/librte_eal/linux/eal_thread.c | 32 +++++------
lib/librte_eal/rte_eal_version.map | 2 +-
lib/librte_eal/windows/eal.c | 16 +++---
lib/librte_eal/windows/eal_thread.c | 30 +++++-----
22 files changed, 230 insertions(+), 211 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 604f198059c5..1eb8bd3643f1 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -20,25 +20,6 @@ Deprecation Notices
* kvargs: The function ``rte_kvargs_process`` will get a new parameter
for returning key match count. It will ease handling of no-match case.
-* eal: To be more inclusive in choice of naming, the DPDK project
- will replace uses of master/slave in the API's and command line arguments.
-
- References to master/slave in relation to lcore will be renamed
- to initial/worker. The function ``rte_get_master_lcore()``
- will be renamed to ``rte_get_initial_lcore()``.
- For the 20.11 release, both names will be present and the
- old function will be marked with the deprecated tag.
- The old function will be removed in a future version.
-
- The iterator for worker lcores will also change:
- ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
- ``RTE_LCORE_FOREACH_WORKER``.
-
- The ``master-lcore`` argument to testpmd will be replaced
- with ``initial-lcore``. The old ``master-lcore`` argument
- will produce a runtime notification in 20.11 release, and
- be removed completely in a future release.
-
* eal: The terms blacklist and whitelist to describe devices used
by DPDK will be replaced in the 20.11 relase.
This will apply to command line arguments as well as macros.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 708ebb01c85d..c1a907390a79 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -430,6 +430,17 @@ API Changes
* sched: Removed ``tb_rate``, ``tc_rate``, ``tc_period`` and ``tb_size``
from ``struct rte_sched_subport_params``.
+* eal: The function ``rte_get_master_lcore()`` is
+ replaced by ``rte_get_main_lcore()``. The old function is deprecated.
+
+ The iterator for worker lcores will also change:
+ ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
+ ``RTE_LCORE_FOREACH_WORKER``.
+
+ The ``master-lcore`` argument to testpmd will be replaced
+ with ``main-lcore``. The old ``master-lcore`` argument
+ will produce a runtime notification in 20.11 release, and
+ be removed completely in a future release.
ABI Changes
-----------
diff --git a/lib/librte_eal/common/eal_common_dynmem.c b/lib/librte_eal/common/eal_common_dynmem.c
index 614648d8a4de..1cefe52443c4 100644
--- a/lib/librte_eal/common/eal_common_dynmem.c
+++ b/lib/librte_eal/common/eal_common_dynmem.c
@@ -427,19 +427,19 @@ eal_dynmem_calc_num_pages_per_socket(
total_size -= default_size;
}
#else
- /* in 32-bit mode, allocate all of the memory only on master
+ /* in 32-bit mode, allocate all of the memory only on main
* lcore socket
*/
total_size = internal_conf->memory;
for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
socket++) {
struct rte_config *cfg = rte_eal_get_configuration();
- unsigned int master_lcore_socket;
+ unsigned int main_lcore_socket;
- master_lcore_socket =
- rte_lcore_to_socket_id(cfg->master_lcore);
+ main_lcore_socket =
+ rte_lcore_to_socket_id(cfg->main_lcore);
- if (master_lcore_socket != socket)
+ if (main_lcore_socket != socket)
continue;
/* Update sizes */
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index cf52d717f68e..34f854ad80c8 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -21,55 +21,55 @@
* Wait until a lcore finished its job.
*/
int
-rte_eal_wait_lcore(unsigned slave_id)
+rte_eal_wait_lcore(unsigned worker_id)
{
- if (lcore_config[slave_id].state == WAIT)
+ if (lcore_config[worker_id].state == WAIT)
return 0;
- while (lcore_config[slave_id].state != WAIT &&
- lcore_config[slave_id].state != FINISHED)
+ while (lcore_config[worker_id].state != WAIT &&
+ lcore_config[worker_id].state != FINISHED)
rte_pause();
rte_rmb();
/* we are in finished state, go to wait state */
- lcore_config[slave_id].state = WAIT;
- return lcore_config[slave_id].ret;
+ lcore_config[worker_id].state = WAIT;
+ return lcore_config[worker_id].ret;
}
/*
- * Check that every SLAVE lcores are in WAIT state, then call
- * rte_eal_remote_launch() for all of them. If call_master is true
- * (set to CALL_MASTER), also call the function on the master lcore.
+ * Check that every WORKER lcores are in WAIT state, then call
+ * rte_eal_remote_launch() for all of them. If call_main is true
+ * (set to CALL_MAIN), also call the function on the main lcore.
*/
int
rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
- enum rte_rmt_call_master_t call_master)
+ enum rte_rmt_call_main_t call_main)
{
int lcore_id;
- int master = rte_get_master_lcore();
+ int main_lcore = rte_get_main_lcore();
/* check state of lcores */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (lcore_config[lcore_id].state != WAIT)
return -EBUSY;
}
/* send messages to cores */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
rte_eal_remote_launch(f, arg, lcore_id);
}
- if (call_master == CALL_MASTER) {
- lcore_config[master].ret = f(arg);
- lcore_config[master].state = FINISHED;
+ if (call_main == CALL_MAIN) {
+ lcore_config[main_lcore].ret = f(arg);
+ lcore_config[main_lcore].state = FINISHED;
}
return 0;
}
/*
- * Return the state of the lcore identified by slave_id.
+ * Return the state of the lcore identified by worker_id.
*/
enum rte_lcore_state_t
rte_eal_get_lcore_state(unsigned lcore_id)
@@ -86,7 +86,7 @@ rte_eal_mp_wait_lcore(void)
{
unsigned lcore_id;
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
rte_eal_wait_lcore(lcore_id);
}
}
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index d64569b3c758..66d6bad1a7d7 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -18,9 +18,9 @@
#include "eal_private.h"
#include "eal_thread.h"
-unsigned int rte_get_master_lcore(void)
+unsigned int rte_get_main_lcore(void)
{
- return rte_eal_get_configuration()->master_lcore;
+ return rte_eal_get_configuration()->main_lcore;
}
unsigned int rte_lcore_count(void)
@@ -93,7 +93,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
return cfg->lcore_role[lcore_id] == ROLE_RTE;
}
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
{
i++;
if (wrap)
@@ -101,7 +101,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
while (i < RTE_MAX_LCORE) {
if (!rte_lcore_is_enabled(i) ||
- (skip_master && (i == rte_get_master_lcore()))) {
+ (skip_main && (i == rte_get_main_lcore()))) {
i++;
if (wrap)
i %= RTE_MAX_LCORE;
diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index a5426e12346a..d221886eb22c 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -81,6 +81,7 @@ eal_long_options[] = {
{OPT_TRACE_BUF_SIZE, 1, NULL, OPT_TRACE_BUF_SIZE_NUM },
{OPT_TRACE_MODE, 1, NULL, OPT_TRACE_MODE_NUM },
{OPT_MASTER_LCORE, 1, NULL, OPT_MASTER_LCORE_NUM },
+ {OPT_MAIN_LCORE, 1, NULL, OPT_MAIN_LCORE_NUM },
{OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM},
{OPT_NO_HPET, 0, NULL, OPT_NO_HPET_NUM },
{OPT_NO_HUGE, 0, NULL, OPT_NO_HUGE_NUM },
@@ -144,7 +145,7 @@ struct device_option {
static struct device_option_list devopt_list =
TAILQ_HEAD_INITIALIZER(devopt_list);
-static int master_lcore_parsed;
+static int main_lcore_parsed;
static int mem_parsed;
static int core_parsed;
@@ -575,12 +576,12 @@ eal_parse_service_coremask(const char *coremask)
for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE;
j++, idx++) {
if ((1 << j) & val) {
- /* handle master lcore already parsed */
+ /* handle main lcore already parsed */
uint32_t lcore = idx;
- if (master_lcore_parsed &&
- cfg->master_lcore == lcore) {
+ if (main_lcore_parsed &&
+ cfg->main_lcore == lcore) {
RTE_LOG(ERR, EAL,
- "lcore %u is master lcore, cannot use as service core\n",
+ "lcore %u is main lcore, cannot use as service core\n",
idx);
return -1;
}
@@ -748,12 +749,12 @@ eal_parse_service_corelist(const char *corelist)
min = idx;
for (idx = min; idx <= max; idx++) {
if (cfg->lcore_role[idx] != ROLE_SERVICE) {
- /* handle master lcore already parsed */
+ /* handle main lcore already parsed */
uint32_t lcore = idx;
- if (cfg->master_lcore == lcore &&
- master_lcore_parsed) {
+ if (cfg->main_lcore == lcore &&
+ main_lcore_parsed) {
RTE_LOG(ERR, EAL,
- "Error: lcore %u is master lcore, cannot use as service core\n",
+ "Error: lcore %u is main lcore, cannot use as service core\n",
idx);
return -1;
}
@@ -836,25 +837,25 @@ eal_parse_corelist(const char *corelist, int *cores)
return 0;
}
-/* Changes the lcore id of the master thread */
+/* Changes the lcore id of the main thread */
static int
-eal_parse_master_lcore(const char *arg)
+eal_parse_main_lcore(const char *arg)
{
char *parsing_end;
struct rte_config *cfg = rte_eal_get_configuration();
errno = 0;
- cfg->master_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
+ cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
if (errno || parsing_end[0] != 0)
return -1;
- if (cfg->master_lcore >= RTE_MAX_LCORE)
+ if (cfg->main_lcore >= RTE_MAX_LCORE)
return -1;
- master_lcore_parsed = 1;
+ main_lcore_parsed = 1;
- /* ensure master core is not used as service core */
- if (lcore_config[cfg->master_lcore].core_role == ROLE_SERVICE) {
+ /* ensure main core is not used as service core */
+ if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) {
RTE_LOG(ERR, EAL,
- "Error: Master lcore is used as a service core\n");
+ "Error: Main lcore is used as a service core\n");
return -1;
}
@@ -1593,9 +1594,14 @@ eal_parse_common_option(int opt, const char *optarg,
break;
case OPT_MASTER_LCORE_NUM:
- if (eal_parse_master_lcore(optarg) < 0) {
+ fprintf(stderr,
+ "Option --" OPT_MASTER_LCORE
+ " is deprecated use " OPT_MAIN_LCORE "\n");
+ /* fallthrough */
+ case OPT_MAIN_LCORE_NUM:
+ if (eal_parse_main_lcore(optarg) < 0) {
RTE_LOG(ERR, EAL, "invalid parameter for --"
- OPT_MASTER_LCORE "\n");
+ OPT_MAIN_LCORE "\n");
return -1;
}
break;
@@ -1763,9 +1769,9 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
RTE_CPU_AND(cpuset, cpuset, &default_set);
- /* if no remaining cpu, use master lcore cpu affinity */
+ /* if no remaining cpu, use main lcore cpu affinity */
if (!CPU_COUNT(cpuset)) {
- memcpy(cpuset, &lcore_config[rte_get_master_lcore()].cpuset,
+ memcpy(cpuset, &lcore_config[rte_get_main_lcore()].cpuset,
sizeof(*cpuset));
}
}
@@ -1797,12 +1803,12 @@ eal_adjust_config(struct internal_config *internal_cfg)
if (internal_conf->process_type == RTE_PROC_AUTO)
internal_conf->process_type = eal_proc_type_detect();
- /* default master lcore is the first one */
- if (!master_lcore_parsed) {
- cfg->master_lcore = rte_get_next_lcore(-1, 0, 0);
- if (cfg->master_lcore >= RTE_MAX_LCORE)
+ /* default main lcore is the first one */
+ if (!main_lcore_parsed) {
+ cfg->main_lcore = rte_get_next_lcore(-1, 0, 0);
+ if (cfg->main_lcore >= RTE_MAX_LCORE)
return -1;
- lcore_config[cfg->master_lcore].core_role = ROLE_RTE;
+ lcore_config[cfg->main_lcore].core_role = ROLE_RTE;
}
compute_ctrl_threads_cpuset(internal_cfg);
@@ -1822,8 +1828,8 @@ eal_check_common_options(struct internal_config *internal_cfg)
const struct internal_config *internal_conf =
eal_get_internal_configuration();
- if (cfg->lcore_role[cfg->master_lcore] != ROLE_RTE) {
- RTE_LOG(ERR, EAL, "Master lcore is not enabled for DPDK\n");
+ if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) {
+ RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n");
return -1;
}
@@ -1921,7 +1927,7 @@ eal_common_usage(void)
" '( )' can be omitted for single element group,\n"
" '@' can be omitted if cpus and lcores have the same value\n"
" -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores\n"
- " --"OPT_MASTER_LCORE" ID Core ID that is used as master\n"
+ " --"OPT_MAIN_LCORE" ID Core ID that is used as main\n"
" --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n"
" -n CHANNELS Number of memory channels\n"
" -m MB Memory to allocate (see also --"OPT_SOCKET_MEM")\n"
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index 89769d48b487..d363228a7a25 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -43,6 +43,8 @@ enum {
OPT_TRACE_BUF_SIZE_NUM,
#define OPT_TRACE_MODE "trace-mode"
OPT_TRACE_MODE_NUM,
+#define OPT_MAIN_LCORE "main-lcore"
+ OPT_MAIN_LCORE_NUM,
#define OPT_MASTER_LCORE "master-lcore"
OPT_MASTER_LCORE_NUM,
#define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name"
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index a6a6381567f4..4684c4c7df19 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -20,8 +20,8 @@
*/
struct lcore_config {
pthread_t thread_id; /**< pthread identifier */
- int pipe_master2slave[2]; /**< communication pipe with master */
- int pipe_slave2master[2]; /**< communication pipe with master */
+ int pipe_main2worker[2]; /**< communication pipe with main */
+ int pipe_worker2main[2]; /**< communication pipe with main */
lcore_function_t * volatile f; /**< function to call */
void * volatile arg; /**< argument of function */
@@ -42,7 +42,7 @@ extern struct lcore_config lcore_config[RTE_MAX_LCORE];
* The global RTE configuration structure.
*/
struct rte_config {
- uint32_t master_lcore; /**< Id of the master lcore */
+ uint32_t main_lcore; /**< Id of the main lcore */
uint32_t lcore_count; /**< Number of available logical cores. */
uint32_t numa_node_count; /**< Number of detected NUMA nodes. */
uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
diff --git a/lib/librte_eal/common/rte_random.c b/lib/librte_eal/common/rte_random.c
index b2c5416b331d..ce21c2242a22 100644
--- a/lib/librte_eal/common/rte_random.c
+++ b/lib/librte_eal/common/rte_random.c
@@ -122,7 +122,7 @@ struct rte_rand_state *__rte_rand_get_state(void)
lcore_id = rte_lcore_id();
if (unlikely(lcore_id == LCORE_ID_ANY))
- lcore_id = rte_get_master_lcore();
+ lcore_id = rte_get_main_lcore();
return &rand_states[lcore_id];
}
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index 98565bbef340..6c955d319ad4 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -107,7 +107,7 @@ rte_service_init(void)
struct rte_config *cfg = rte_eal_get_configuration();
for (i = 0; i < RTE_MAX_LCORE; i++) {
if (lcore_config[i].core_role == ROLE_SERVICE) {
- if ((unsigned int)i == cfg->master_lcore)
+ if ((unsigned int)i == cfg->main_lcore)
continue;
rte_service_lcore_add(i);
count++;
diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
index ccea60afe77b..d6ea02375025 100644
--- a/lib/librte_eal/freebsd/eal.c
+++ b/lib/librte_eal/freebsd/eal.c
@@ -625,10 +625,10 @@ eal_check_mem_on_local_socket(void)
int socket_id;
const struct rte_config *config = rte_eal_get_configuration();
- socket_id = rte_lcore_to_socket_id(config->master_lcore);
+ socket_id = rte_lcore_to_socket_id(config->main_lcore);
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
- RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+ RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
}
@@ -851,29 +851,29 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
- &lcore_config[config->master_lcore].cpuset) != 0) {
+ &lcore_config[config->main_lcore].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
return -1;
}
- __rte_thread_init(config->master_lcore,
- &lcore_config[config->master_lcore].cpuset);
+ __rte_thread_init(config->main_lcore,
+ &lcore_config[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
- config->master_lcore, thread_id, cpuset,
+ RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
+ config->main_lcore, thread_id, cpuset,
ret == 0 ? "" : "...");
- RTE_LCORE_FOREACH_SLAVE(i) {
+ RTE_LCORE_FOREACH_WORKER(i) {
/*
- * create communication pipes between master thread
+ * create communication pipes between main thread
* and children
*/
- if (pipe(lcore_config[i].pipe_master2slave) < 0)
+ if (pipe(lcore_config[i].pipe_main2worker) < 0)
rte_panic("Cannot create pipe\n");
- if (pipe(lcore_config[i].pipe_slave2master) < 0)
+ if (pipe(lcore_config[i].pipe_worker2main) < 0)
rte_panic("Cannot create pipe\n");
lcore_config[i].state = WAIT;
@@ -886,7 +886,7 @@ rte_eal_init(int argc, char **argv)
/* Set thread_name for aid in debugging. */
snprintf(thread_name, sizeof(thread_name),
- "lcore-slave-%d", i);
+ "lcore-worker-%d", i);
rte_thread_setname(lcore_config[i].thread_id, thread_name);
ret = pthread_setaffinity_np(lcore_config[i].thread_id,
@@ -896,10 +896,10 @@ rte_eal_init(int argc, char **argv)
}
/*
- * Launch a dummy function on all slave lcores, so that master lcore
+ * Launch a dummy function on all worker lcores, so that main lcore
* knows they are all ready when this function returns.
*/
- rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+ rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();
/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/freebsd/eal_thread.c b/lib/librte_eal/freebsd/eal_thread.c
index 99b5fefc4c5b..1dce9b04f24a 100644
--- a/lib/librte_eal/freebsd/eal_thread.c
+++ b/lib/librte_eal/freebsd/eal_thread.c
@@ -26,35 +26,35 @@
#include "eal_thread.h"
/*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
* function f with argument arg. Once the execution is done, the
* remote lcore switches to the FINISHED state.
*/
int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned worker_id)
{
int n;
char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
+ int m2w = lcore_config[worker_id].pipe_main2worker[1];
+ int w2m = lcore_config[worker_id].pipe_worker2main[0];
int rc = -EBUSY;
- if (lcore_config[slave_id].state != WAIT)
+ if (lcore_config[worker_id].state != WAIT)
goto finish;
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
+ lcore_config[worker_id].f = f;
+ lcore_config[worker_id].arg = arg;
/* send message */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(m2s, &c, 1);
+ n = write(m2w, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
/* wait ack */
do {
- n = read(s2m, &c, 1);
+ n = read(w2m, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
rc = 0;
finish:
- rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+ rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
return rc;
}
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
int n, ret;
unsigned lcore_id;
pthread_t thread_id;
- int m2s, s2m;
+ int m2w, w2m;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
thread_id = pthread_self();
/* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (thread_id == lcore_config[lcore_id].thread_id)
break;
}
if (lcore_id == RTE_MAX_LCORE)
rte_panic("cannot retrieve lcore id\n");
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
+ m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ w2m = lcore_config[lcore_id].pipe_worker2main[1];
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
/* wait command */
do {
- n = read(m2s, &c, 1);
+ n = read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
/* send ack */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(s2m, &c, 1);
+ n = write(w2m, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
index e3c2ef185eed..0ae12cf4fbac 100644
--- a/lib/librte_eal/include/rte_eal.h
+++ b/lib/librte_eal/include/rte_eal.h
@@ -65,11 +65,11 @@ int rte_eal_iopl_init(void);
/**
* Initialize the Environment Abstraction Layer (EAL).
*
- * This function is to be executed on the MASTER lcore only, as soon
+ * This function is to be executed on the MAIN lcore only, as soon
* as possible in the application's main() function.
*
* The function finishes the initialization process before main() is called.
- * It puts the SLAVE lcores in the WAIT state.
+ * It puts the WORKER lcores in the WAIT state.
*
* When the multi-partition feature is supported, depending on the
* configuration (if CONFIG_RTE_EAL_MAIN_PARTITION is disabled), this
diff --git a/lib/librte_eal/include/rte_eal_trace.h b/lib/librte_eal/include/rte_eal_trace.h
index 19df549d29be..495ae1ee1d61 100644
--- a/lib/librte_eal/include/rte_eal_trace.h
+++ b/lib/librte_eal/include/rte_eal_trace.h
@@ -264,10 +264,10 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_eal_trace_thread_remote_launch,
RTE_TRACE_POINT_ARGS(int (*f)(void *), void *arg,
- unsigned int slave_id, int rc),
+ unsigned int worker_id, int rc),
rte_trace_point_emit_ptr(f);
rte_trace_point_emit_ptr(arg);
- rte_trace_point_emit_u32(slave_id);
+ rte_trace_point_emit_u32(worker_id);
rte_trace_point_emit_int(rc);
)
RTE_TRACE_POINT(
diff --git a/lib/librte_eal/include/rte_launch.h b/lib/librte_eal/include/rte_launch.h
index 06a671752ace..22a901ce62f6 100644
--- a/lib/librte_eal/include/rte_launch.h
+++ b/lib/librte_eal/include/rte_launch.h
@@ -32,12 +32,12 @@ typedef int (lcore_function_t)(void *);
/**
* Launch a function on another lcore.
*
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
*
- * Sends a message to a slave lcore (identified by the slave_id) that
+ * Sends a message to a worker lcore (identified by the worker_id) that
* is in the WAIT state (this is true after the first call to
* rte_eal_init()). This can be checked by first calling
- * rte_eal_wait_lcore(slave_id).
+ * rte_eal_wait_lcore(worker_id).
*
* When the remote lcore receives the message, it switches to
* the RUNNING state, then calls the function f with argument arg. Once the
@@ -45,7 +45,7 @@ typedef int (lcore_function_t)(void *);
* the return value of f is stored in a local variable to be read using
* rte_eal_wait_lcore().
*
- * The MASTER lcore returns as soon as the message is sent and knows
+ * The MAIN lcore returns as soon as the message is sent and knows
* nothing about the completion of f.
*
* Note: This function is not designed to offer optimum
@@ -56,37 +56,41 @@ typedef int (lcore_function_t)(void *);
* The function to be called.
* @param arg
* The argument for the function.
- * @param slave_id
+ * @param worker_id
* The identifier of the lcore on which the function should be executed.
* @return
* - 0: Success. Execution of function f started on the remote lcore.
* - (-EBUSY): The remote lcore is not in a WAIT state.
*/
-int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id);
+int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned worker_id);
/**
- * This enum indicates whether the master core must execute the handler
+ * This enum indicates whether the main core must execute the handler
* launched on all logical cores.
*/
-enum rte_rmt_call_master_t {
- SKIP_MASTER = 0, /**< lcore handler not executed by master core. */
- CALL_MASTER, /**< lcore handler executed by master core. */
+enum rte_rmt_call_main_t {
+ SKIP_MAIN = 0, /**< lcore handler not executed by main core. */
+ CALL_MAIN, /**< lcore handler executed by main core. */
};
+/* These legacy definitions will be removed in future release */
+#define SKIP_MASTER RTE_DEPRECATED(SKIP_MASTER) SKIP_MAIN
+#define CALL_MASTER RTE_DEPRECATED(CALL_MASTER) CALL_MAIN
+
/**
* Launch a function on all lcores.
*
- * Check that each SLAVE lcore is in a WAIT state, then call
+ * Check that each WORKER lcore is in a WAIT state, then call
* rte_eal_remote_launch() for each lcore.
*
* @param f
* The function to be called.
* @param arg
* The argument for the function.
- * @param call_master
- * If call_master set to SKIP_MASTER, the MASTER lcore does not call
- * the function. If call_master is set to CALL_MASTER, the function
- * is also called on master before returning. In any case, the master
+ * @param call_main
+ * If call_main is set to SKIP_MAIN, the MAIN lcore does not call
+ * the function. If call_main is set to CALL_MAIN, the function
+ * is also called on main before returning. In any case, the main
* lcore returns as soon as it finished its job and knows nothing
* about the completion of f on the other lcores.
* @return
@@ -95,49 +99,49 @@ enum rte_rmt_call_master_t {
* case, no message is sent to any of the lcores.
*/
int rte_eal_mp_remote_launch(lcore_function_t *f, void *arg,
- enum rte_rmt_call_master_t call_master);
+ enum rte_rmt_call_main_t call_main);
/**
- * Get the state of the lcore identified by slave_id.
+ * Get the state of the lcore identified by worker_id.
*
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
*
- * @param slave_id
+ * @param worker_id
* The identifier of the lcore.
* @return
* The state of the lcore.
*/
-enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned slave_id);
+enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned int worker_id);
/**
* Wait until an lcore finishes its job.
*
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
*
- * If the slave lcore identified by the slave_id is in a FINISHED state,
+ * If the worker lcore identified by the worker_id is in a FINISHED state,
* switch to the WAIT state. If the lcore is in RUNNING state, wait until
* the lcore finishes its job and moves to the FINISHED state.
*
- * @param slave_id
+ * @param worker_id
* The identifier of the lcore.
* @return
- * - 0: If the lcore identified by the slave_id is in a WAIT state.
+ * - 0: If the lcore identified by the worker_id is in a WAIT state.
* - The value that was returned by the previous remote launch
- * function call if the lcore identified by the slave_id was in a
+ * function call if the lcore identified by the worker_id was in a
* FINISHED or RUNNING state. In this case, it changes the state
* of the lcore to WAIT.
*/
-int rte_eal_wait_lcore(unsigned slave_id);
+int rte_eal_wait_lcore(unsigned worker_id);
/**
* Wait until all lcores finish their jobs.
*
- * To be executed on the MASTER lcore only. Issue an
+ * To be executed on the MAIN lcore only. Issue an
* rte_eal_wait_lcore() for every lcore. The return values are
* ignored.
*
* After a call to rte_eal_mp_wait_lcore(), the caller can assume
- * that all slave lcores are in a WAIT state.
+ * that all worker lcores are in a WAIT state.
*/
void rte_eal_mp_wait_lcore(void);
diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h
index b8b64a625200..48b87e253afa 100644
--- a/lib/librte_eal/include/rte_lcore.h
+++ b/lib/librte_eal/include/rte_lcore.h
@@ -78,12 +78,24 @@ rte_lcore_id(void)
}
/**
- * Get the id of the master lcore
+ * Get the id of the main lcore
*
* @return
- * the id of the master lcore
+ * the id of the main lcore
*/
-unsigned int rte_get_master_lcore(void);
+unsigned int rte_get_main_lcore(void);
+
+/**
+ * Deprecated function returning the id of the main lcore
+ *
+ * @return
+ * the id of the main lcore
+ */
+__rte_deprecated
+static inline unsigned int rte_get_master_lcore(void)
+{
+ return rte_get_main_lcore();
+}
/**
* Return the number of execution units (lcores) on the system.
@@ -203,32 +215,35 @@ int rte_lcore_is_enabled(unsigned int lcore_id);
*
* @param i
* The current lcore (reference).
- * @param skip_master
- * If true, do not return the ID of the master lcore.
+ * @param skip_main
+ * If true, do not return the ID of the main lcore.
* @param wrap
* If true, go back to 0 when RTE_MAX_LCORE is reached; otherwise,
* return RTE_MAX_LCORE.
* @return
* The next lcore_id or RTE_MAX_LCORE if not found.
*/
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap);
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap);
/**
* Macro to browse all running lcores.
*/
#define RTE_LCORE_FOREACH(i) \
for (i = rte_get_next_lcore(-1, 0, 0); \
- i<RTE_MAX_LCORE; \
+ i < RTE_MAX_LCORE; \
i = rte_get_next_lcore(i, 0, 0))
/**
- * Macro to browse all running lcores except the master lcore.
+ * Macro to browse all running lcores except the main lcore.
*/
-#define RTE_LCORE_FOREACH_SLAVE(i) \
+#define RTE_LCORE_FOREACH_WORKER(i) \
for (i = rte_get_next_lcore(-1, 1, 0); \
- i<RTE_MAX_LCORE; \
+ i < RTE_MAX_LCORE; \
i = rte_get_next_lcore(i, 1, 0))
+#define RTE_LCORE_FOREACH_SLAVE(l) \
+ RTE_DEPRECATED(RTE_LCORE_FOREACH_SLAVE) RTE_LCORE_FOREACH_WORKER(l)
+
/**
* Callback prototype for initializing lcores.
*
diff --git a/lib/librte_eal/linux/eal.c b/lib/librte_eal/linux/eal.c
index 9cf0e2ec0137..1c9dd8db1e6a 100644
--- a/lib/librte_eal/linux/eal.c
+++ b/lib/librte_eal/linux/eal.c
@@ -883,10 +883,10 @@ eal_check_mem_on_local_socket(void)
int socket_id;
const struct rte_config *config = rte_eal_get_configuration();
- socket_id = rte_lcore_to_socket_id(config->master_lcore);
+ socket_id = rte_lcore_to_socket_id(config->main_lcore);
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
- RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+ RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
}
static int
@@ -1215,28 +1215,28 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
- &lcore_config[config->master_lcore].cpuset) != 0) {
+ &lcore_config[config->main_lcore].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
return -1;
}
- __rte_thread_init(config->master_lcore,
- &lcore_config[config->master_lcore].cpuset);
+ __rte_thread_init(config->main_lcore,
+ &lcore_config[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
- config->master_lcore, (uintptr_t)thread_id, cpuset,
+ RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+ config->main_lcore, (uintptr_t)thread_id, cpuset,
ret == 0 ? "" : "...");
- RTE_LCORE_FOREACH_SLAVE(i) {
+ RTE_LCORE_FOREACH_WORKER(i) {
/*
- * create communication pipes between master thread
+ * create communication pipes between main thread
* and children
*/
- if (pipe(lcore_config[i].pipe_master2slave) < 0)
+ if (pipe(lcore_config[i].pipe_main2worker) < 0)
rte_panic("Cannot create pipe\n");
- if (pipe(lcore_config[i].pipe_slave2master) < 0)
+ if (pipe(lcore_config[i].pipe_worker2main) < 0)
rte_panic("Cannot create pipe\n");
lcore_config[i].state = WAIT;
@@ -1249,7 +1249,7 @@ rte_eal_init(int argc, char **argv)
/* Set thread_name for aid in debugging. */
snprintf(thread_name, sizeof(thread_name),
- "lcore-slave-%d", i);
+ "lcore-worker-%d", i);
ret = rte_thread_setname(lcore_config[i].thread_id,
thread_name);
if (ret != 0)
@@ -1263,10 +1263,10 @@ rte_eal_init(int argc, char **argv)
}
/*
- * Launch a dummy function on all slave lcores, so that master lcore
+ * Launch a dummy function on all worker lcores, so that main lcore
* knows they are all ready when this function returns.
*/
- rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+ rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();
/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 89725291b0ce..3e47efe58212 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -1737,7 +1737,7 @@ memseg_primary_init_32(void)
/* the allocation logic is a little bit convoluted, but here's how it
* works, in a nutshell:
* - if user hasn't specified on which sockets to allocate memory via
- * --socket-mem, we allocate all of our memory on master core socket.
+ * --socket-mem, we allocate all of our memory on main core socket.
* - if user has specified sockets to allocate memory on, there may be
* some "unused" memory left (e.g. if user has specified --socket-mem
* such that not all memory adds up to 2 gigabytes), so add it to all
@@ -1751,7 +1751,7 @@ memseg_primary_init_32(void)
for (i = 0; i < rte_socket_count(); i++) {
int hp_sizes = (int) internal_conf->num_hugepage_sizes;
uint64_t max_socket_mem, cur_socket_mem;
- unsigned int master_lcore_socket;
+ unsigned int main_lcore_socket;
struct rte_config *cfg = rte_eal_get_configuration();
bool skip;
@@ -1767,10 +1767,10 @@ memseg_primary_init_32(void)
skip = active_sockets != 0 &&
internal_conf->socket_mem[socket_id] == 0;
/* ...or if we didn't specifically request memory on *any*
- * socket, and this is not master lcore
+ * socket, and this is not main lcore
*/
- master_lcore_socket = rte_lcore_to_socket_id(cfg->master_lcore);
- skip |= active_sockets == 0 && socket_id != master_lcore_socket;
+ main_lcore_socket = rte_lcore_to_socket_id(cfg->main_lcore);
+ skip |= active_sockets == 0 && socket_id != main_lcore_socket;
if (skip) {
RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n",
diff --git a/lib/librte_eal/linux/eal_thread.c b/lib/librte_eal/linux/eal_thread.c
index 068de2559555..83c2034b93d5 100644
--- a/lib/librte_eal/linux/eal_thread.c
+++ b/lib/librte_eal/linux/eal_thread.c
@@ -26,35 +26,35 @@
#include "eal_thread.h"
/*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
* function f with argument arg. Once the execution is done, the
* remote lcore switches to the FINISHED state.
*/
int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned int worker_id)
{
int n;
char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
+ int m2w = lcore_config[worker_id].pipe_main2worker[1];
+ int w2m = lcore_config[worker_id].pipe_worker2main[0];
int rc = -EBUSY;
- if (lcore_config[slave_id].state != WAIT)
+ if (lcore_config[worker_id].state != WAIT)
goto finish;
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
+ lcore_config[worker_id].f = f;
+ lcore_config[worker_id].arg = arg;
/* send message */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(m2s, &c, 1);
+ n = write(m2w, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
/* wait ack */
do {
- n = read(s2m, &c, 1);
+ n = read(w2m, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
rc = 0;
finish:
- rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+ rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
return rc;
}
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
int n, ret;
unsigned lcore_id;
pthread_t thread_id;
- int m2s, s2m;
+ int m2w, w2m;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
thread_id = pthread_self();
/* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (thread_id == lcore_config[lcore_id].thread_id)
break;
}
if (lcore_id == RTE_MAX_LCORE)
rte_panic("cannot retrieve lcore id\n");
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
+ m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ w2m = lcore_config[lcore_id].pipe_worker2main[1];
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
/* wait command */
do {
- n = read(m2s, &c, 1);
+ n = read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
/* send ack */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(s2m, &c, 1);
+ n = write(w2m, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index f56de02d8f6c..cd41167b2121 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -73,7 +73,7 @@ DPDK_21 {
rte_free;
rte_get_hpet_cycles;
rte_get_hpet_hz;
- rte_get_master_lcore;
+ rte_get_main_lcore;
rte_get_next_lcore;
rte_get_tsc_hz;
rte_hexdump;
diff --git a/lib/librte_eal/windows/eal.c b/lib/librte_eal/windows/eal.c
index 141f22adb7dc..6334aca03df2 100644
--- a/lib/librte_eal/windows/eal.c
+++ b/lib/librte_eal/windows/eal.c
@@ -355,8 +355,8 @@ rte_eal_init(int argc, char **argv)
return -1;
}
- __rte_thread_init(config->master_lcore,
- &lcore_config[config->master_lcore].cpuset);
+ __rte_thread_init(config->main_lcore,
+ &lcore_config[config->main_lcore].cpuset);
bscan = rte_bus_scan();
if (bscan < 0) {
@@ -365,16 +365,16 @@ rte_eal_init(int argc, char **argv)
return -1;
}
- RTE_LCORE_FOREACH_SLAVE(i) {
+ RTE_LCORE_FOREACH_WORKER(i) {
/*
- * create communication pipes between master thread
+ * create communication pipes between main thread
* and children
*/
- if (_pipe(lcore_config[i].pipe_master2slave,
+ if (_pipe(lcore_config[i].pipe_main2worker,
sizeof(char), _O_BINARY) < 0)
rte_panic("Cannot create pipe\n");
- if (_pipe(lcore_config[i].pipe_slave2master,
+ if (_pipe(lcore_config[i].pipe_worker2main,
sizeof(char), _O_BINARY) < 0)
rte_panic("Cannot create pipe\n");
@@ -399,10 +399,10 @@ rte_eal_init(int argc, char **argv)
}
/*
- * Launch a dummy function on all slave lcores, so that master lcore
+ * Launch a dummy function on all worker lcores, so that main lcore
* knows they are all ready when this function returns.
*/
- rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+ rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();
return fctret;
}
diff --git a/lib/librte_eal/windows/eal_thread.c b/lib/librte_eal/windows/eal_thread.c
index 20889b6196c9..908e726d16cc 100644
--- a/lib/librte_eal/windows/eal_thread.c
+++ b/lib/librte_eal/windows/eal_thread.c
@@ -17,34 +17,34 @@
#include "eal_windows.h"
/*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
* function f with argument arg. Once the execution is done, the
* remote lcore switches to the FINISHED state.
*/
int
-rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int slave_id)
+rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
{
int n;
char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
+ int m2w = lcore_config[worker_id].pipe_main2worker[1];
+ int w2m = lcore_config[worker_id].pipe_worker2main[0];
- if (lcore_config[slave_id].state != WAIT)
+ if (lcore_config[worker_id].state != WAIT)
return -EBUSY;
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
+ lcore_config[worker_id].f = f;
+ lcore_config[worker_id].arg = arg;
/* send message */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = _write(m2s, &c, 1);
+ n = _write(m2w, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
/* wait ack */
do {
- n = _read(s2m, &c, 1);
+ n = _read(w2m, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -61,21 +61,21 @@ eal_thread_loop(void *arg __rte_unused)
int n, ret;
unsigned int lcore_id;
pthread_t thread_id;
- int m2s, s2m;
+ int m2w, w2m;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
thread_id = pthread_self();
/* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (thread_id == lcore_config[lcore_id].thread_id)
break;
}
if (lcore_id == RTE_MAX_LCORE)
rte_panic("cannot retrieve lcore id\n");
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
+ m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ w2m = lcore_config[lcore_id].pipe_worker2main[1];
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
@@ -88,7 +88,7 @@ eal_thread_loop(void *arg __rte_unused)
/* wait command */
do {
- n = _read(m2s, &c, 1);
+ n = _read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -99,7 +99,7 @@ eal_thread_loop(void *arg __rte_unused)
/* send ack */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = _write(s2m, &c, 1);
+ n = _write(w2m, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
--
2.27.0
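A minimal usage sketch of the renamed launch API, not part of the patch
above; it assumes the post-rename headers (DPDK 20.11 naming) and omits
error handling:

#include <stdio.h>

#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static int
worker_main(void *arg)
{
	(void)arg;
	printf("hello from worker lcore %u\n", rte_lcore_id());
	return 0;
}

int
main(int argc, char **argv)
{
	unsigned int worker_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Run the handler on every worker lcore; SKIP_MAIN keeps the
	 * main lcore from calling it as well. */
	rte_eal_mp_remote_launch(worker_main, NULL, SKIP_MAIN);
	/* All worker lcores are back in the WAIT state after this. */
	rte_eal_mp_wait_lcore();

	/* Target a single worker: the first lcore after the main one. */
	worker_id = rte_get_next_lcore(-1, 1, 0);
	if (worker_id < RTE_MAX_LCORE) {
		rte_eal_remote_launch(worker_main, NULL, worker_id);
		rte_eal_wait_lcore(worker_id);
	}

	return 0;
}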
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 13:07 0% ` Andrew Rybchenko
2020-10-15 13:57 0% ` Slava Ovsiienko
@ 2020-10-15 20:22 0% ` Slava Ovsiienko
1 sibling, 0 replies; 200+ results
From: Slava Ovsiienko @ 2020-10-15 20:22 UTC (permalink / raw)
To: Andrew Rybchenko, NBU-Contact-Thomas Monjalon, Ferruh Yigit,
Jerin Jacob, Andrew Rybchenko
Cc: dpdk-dev, Stephen Hemminger, Olivier Matz, Maxime Coquelin,
David Marchand
Hi,
Evening update:
- addressed code comments
- provided the union of the segmentation description with dedicated feature structures, according to Jerin's proposal
- added the reporting of split limitations
With best regards, Slava
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, October 15, 2020 16:07
> To: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Jerin Jacob <jerinjacobk@gmail.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Cc: dpdk-dev <dev@dpdk.org>; Stephen Hemminger
> <stephen@networkplumber.org>; Olivier Matz <olivier.matz@6wind.com>;
> Maxime Coquelin <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>
> Subject: Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
>
> On 10/15/20 3:49 PM, Thomas Monjalon wrote:
> > 15/10/2020 13:49, Slava Ovsiienko:
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>> On 10/15/2020 12:26 PM, Jerin Jacob wrote:
> >>>
> >>> <...>
> >>>
> >>>>>>>> If we see some of the features of such kind or other PMDs
> >>>>>>>> adopts the split feature - we'll try to find the common root
> >>>>>>>> and consider the way how
> >>>>>> to report it.
> >>>>>>>
> >>>>>>> My only concern with that approach will be ABI break again if
> >>>>>>> something needs to exposed over rte_eth_dev_info().
> >>>>>
> >>>>> Let's reserve the pointer to struct rte_eth_rxseg_limitations in
> >>>>> the rte_eth_dev_info to avoid ABI break?
> >>>>
> >>>> Works for me. If we add an additional reserved field.
> >>>>
> >>>> Due to RC1 time constraint, I am OK to leave it as a reserved filed
> >>>> and fill meat when it is required if other ethdev maintainers are OK.
> >>>> I will be required for feature complete.
> >>>>
> >>>
> >>> Sounds good to me.
> >
> > OK for me.
>
> OK as well, but I dislike the idea with pointer in dev_info.
> It sounds like it breaks existing practice.
> We should either reserve enough space or simply add dedicated API call to
> report Rx seg capabilities.
>
> >
> >> OK, let's introduce the pointer in the rte_eth_dev_info and define
> >> struct rte_eth_rxseg_limitations as experimental.
> >> Will it be allowed to update this one later (after 20.11)?
> >> Is an ABI break allowed for this case?
> >
> > If it is experimental, you can change it at anytime.
> >
> > Ideally, we could try to have a first version of the limitations
> > during 20.11-rc2.
>
> Yes, please.
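
For illustration only, a hypothetical shape of the structure discussed
above; nothing below is the agreed API, and every field name is an
assumption made for the sketch:

#include <stdint.h>

/* Hypothetical sketch: the exact limitations to report were still
 * under discussion in this thread. */
struct rte_eth_rxseg_limitations {
	uint16_t max_nseg;     /* max number of split segments per packet */
	uint16_t offset_align; /* required alignment of segment offsets */
	uint32_t reserved;     /* spare room while marked experimental */
};

/* Reserving a single pointer in struct rte_eth_dev_info would then be
 * enough to report these capabilities later without an ABI break:
 *
 *     const struct rte_eth_rxseg_limitations *rxseg_limits;
 */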
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement
2020-10-15 18:07 4% ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
@ 2020-10-15 18:27 4% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2020-10-15 18:27 UTC (permalink / raw)
To: Timothy McDaniel
Cc: Jerin Jacob, Mattias Rönnblom, Liang Ma, Peter Mccarthy,
Nipun Gupta, Pavan Nikhilesh, dpdk-dev, Erik Gabriel Carrillo,
Gage Eads, Van Haaren, Harry, Hemant Agrawal, Richardson, Bruce
On Thu, Oct 15, 2020 at 11:36 PM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> The announcement made in 20.08 is no longer required.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 13 -------------
Acked-by: Jerin Jacob <jerinj@marvell.com>
Series squashed and applied to dpdk-next-eventdev/for-main. Thanks.
> 1 file changed, 13 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index efd7710..08f1c04 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -189,19 +189,6 @@ Deprecation Notices
> ``rte_cryptodev_scheduler_worker_detach`` and
> ``rte_cryptodev_scheduler_workers_get`` accordingly.
>
> -* eventdev: Following structures will be modified to support DLB PMD
> - and future extensions:
> -
> - - ``rte_event_dev_info``
> - - ``rte_event_dev_config``
> - - ``rte_event_port_conf``
> -
> - Patches containing justification, documentation, and proposed modifications
> - can be found at:
> -
> - - https://patches.dpdk.org/patch/71457/
> - - https://patches.dpdk.org/patch/71456/
> -
> * sched: To allow more traffic classes, flexible mapping of pipe queues to
> traffic classes, and subport level configuration of pipes and queues
> changes will be made to macros, data structures and API functions defined
> --
> 2.6.4
>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes
2020-10-15 18:07 9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-15 18:07 1% ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-15 18:07 4% ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
@ 2020-10-15 18:07 13% ` Timothy McDaniel
2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 18:07 UTC (permalink / raw)
Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
harry.van.haaren, hemant.agrawal, bruce.richardson
The eventdev ABI changes announced in 20.08 have been implemented
in 20.11. This commit announces the implementation of those changes, and
lists the data structures that were modified.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
doc/guides/rel_notes/release_20_11.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7878e8e..0f8ee2a 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -352,6 +352,14 @@ ABI Changes
* ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
+* ``eventdev`` changes
+
+ * Following structures are modified to support DLB/DLB2 PMDs
+ and future extensions:
+
+ * ``rte_event_dev_info``
+ * ``rte_event_dev_config``
+ * ``rte_event_port_conf``
Known Issues
------------
--
2.6.4
^ permalink raw reply [relevance 13%]
* [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement
2020-10-15 18:07 9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-15 18:07 1% ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
@ 2020-10-15 18:07 4% ` Timothy McDaniel
2020-10-15 18:27 4% ` Jerin Jacob
2020-10-15 18:07 13% ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
2 siblings, 1 reply; 200+ results
From: Timothy McDaniel @ 2020-10-15 18:07 UTC (permalink / raw)
Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
harry.van.haaren, hemant.agrawal, bruce.richardson
The announcement made in 20.08 is no longer required.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 13 -------------
1 file changed, 13 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index efd7710..08f1c04 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -189,19 +189,6 @@ Deprecation Notices
``rte_cryptodev_scheduler_worker_detach`` and
``rte_cryptodev_scheduler_workers_get`` accordingly.
-* eventdev: Following structures will be modified to support DLB PMD
- and future extensions:
-
- - ``rte_event_dev_info``
- - ``rte_event_dev_config``
- - ``rte_event_port_conf``
-
- Patches containing justification, documentation, and proposed modifications
- can be found at:
-
- - https://patches.dpdk.org/patch/71457/
- - https://patches.dpdk.org/patch/71456/
-
* sched: To allow more traffic classes, flexible mapping of pipe queues to
traffic classes, and subport level configuration of pipes and queues
changes will be made to macros, data structures and API functions defined
--
2.6.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints
2020-10-15 18:07 9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
@ 2020-10-15 18:07 1% ` Timothy McDaniel
2020-10-15 18:07 4% ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
2020-10-15 18:07 13% ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 18:07 UTC (permalink / raw)
Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
harry.van.haaren, hemant.agrawal, bruce.richardson
This commit implements the eventdev ABI changes required by
the DLB/DLB2 PMDs. Several data structures and constants are modified
or added in this patch, thereby requiring modifications to the
dependent apps and examples.
The DLB/DLB2 hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) DLB does not have the ability to carry the flow_id as part
of the event (QE) payload. Note that the DLB2 hardware is capable of
carrying the flow_id.
Following is a detailed description of the changes that have been made.
1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.
struct rte_event_dev_info {
uint32_t max_event_port_links;
/**< Maximum number of queues that can be linked to a single event
* port by this device.
*/
uint8_t max_single_link_event_port_queue_pairs;
/**< Maximum number of event ports and queues that are optimized for
* (and only capable of) single-link configurations supported by this
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
}
2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.
/** Event device configuration structure */
struct rte_event_dev_config {
uint8_t nb_single_link_event_port_queues;
/**< Number of event ports and queues that will be singly-linked to
* each other. These are a subset of the overall event ports and
* queues; this value cannot exceed *nb_event_ports* or
* *nb_event_queues*. If the device has ports and queues that are
* optimized for single-link usage, this field is a hint for how many
* to allocate; otherwise, regular event ports and queues can be used.
*/
}
3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future assignment.
/* Event port configuration bitmap flags */
#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL (1ULL << 0)
/**< Configure the port not to release outstanding events in
* rte_event_dev_dequeue_burst(). If set, all events received through
* the port must be explicitly released with RTE_EVENT_OP_RELEASE or
* RTE_EVENT_OP_FORWARD. Must be unset if the device is not
* RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
*/
#define RTE_EVENT_PORT_CFG_SINGLE_LINK (1ULL << 1)
/**< This event port links only to a single event queue.
*
* @see rte_event_port_setup(), rte_event_port_link()
*/
#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
/**
* The implicit release disable attribute of the port
*/
struct rte_event_port_conf {
uint32_t event_port_cfg;
/**< Port cfg flags(EVENT_PORT_CFG_) */
}
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
app/test-eventdev/evt_common.h | 11 ++++
app/test-eventdev/test_order_atq.c | 28 +++++++---
app/test-eventdev/test_order_common.c | 1 +
app/test-eventdev/test_order_queue.c | 29 +++++++---
app/test/test_eventdev.c | 4 +-
drivers/event/dpaa/dpaa_eventdev.c | 3 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 5 +-
drivers/event/dsw/dsw_evdev.c | 3 +-
drivers/event/octeontx/ssovf_evdev.c | 5 +-
drivers/event/octeontx2/otx2_evdev.c | 3 +-
drivers/event/opdl/opdl_evdev.c | 3 +-
drivers/event/skeleton/skeleton_eventdev.c | 5 +-
drivers/event/sw/sw_evdev.c | 8 ++-
drivers/event/sw/sw_evdev_selftest.c | 6 +-
.../eventdev_pipeline/pipeline_worker_generic.c | 6 +-
examples/eventdev_pipeline/pipeline_worker_tx.c | 1 +
examples/l2fwd-event/l2fwd_event_generic.c | 7 ++-
examples/l2fwd-event/l2fwd_event_internal_port.c | 6 +-
examples/l3fwd/l3fwd_event_generic.c | 7 ++-
examples/l3fwd/l3fwd_event_internal_port.c | 6 +-
lib/librte_eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/librte_eventdev/rte_eventdev.c | 65 +++++++++++++++++++---
lib/librte_eventdev/rte_eventdev.h | 51 ++++++++++++++---
lib/librte_eventdev/rte_eventdev_pmd_pci.h | 1 -
lib/librte_eventdev/rte_eventdev_trace.h | 7 ++-
lib/librte_eventdev/rte_eventdev_version.map | 4 +-
26 files changed, 213 insertions(+), 64 deletions(-)
diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index f9d7378..a1da1cf 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
true : false;
}
+static inline bool
+evt_has_flow_id(uint8_t dev_id)
+{
+ struct rte_event_dev_info dev_info;
+
+ rte_event_dev_info_get(dev_id, &dev_info);
+ return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
+ true : false;
+}
+
static inline int
evt_service_setup(uint32_t service_id)
{
@@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
.dequeue_timeout_ns = opt->deq_tmo_nsec,
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 0,
.nb_events_limit = info.max_num_events,
.nb_event_queue_flows = opt->nb_flows,
.nb_event_port_dequeue_depth =
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 3366cfc..cfcb1dc 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
}
static int
-order_atq_worker(void *arg)
+order_atq_worker(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev;
@@ -34,6 +34,9 @@ order_atq_worker(void *arg)
continue;
}
+ if (!flow_id_cap)
+ ev.flow_id = ev.mbuf->udata64;
+
if (ev.sub_event_type == 0) { /* stage 0 from producer */
order_atq_process_stage_0(&ev);
while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_atq_worker(void *arg)
}
static int
-order_atq_worker_burst(void *arg)
+order_atq_worker_burst(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
}
for (i = 0; i < nb_rx; i++) {
+ if (!flow_id_cap)
+ ev[i].flow_id = ev[i].mbuf->udata64;
+
if (ev[i].sub_event_type == 0) { /*stage 0 */
order_atq_process_stage_0(&ev[i]);
} else if (ev[i].sub_event_type == 1) { /* stage 1 */
@@ -95,11 +101,19 @@ worker_wrapper(void *arg)
{
struct worker_data *w = arg;
const bool burst = evt_has_burst_mode(w->dev_id);
-
- if (burst)
- return order_atq_worker_burst(arg);
- else
- return order_atq_worker(arg);
+ const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+ if (burst) {
+ if (flow_id_cap)
+ return order_atq_worker_burst(arg, true);
+ else
+ return order_atq_worker_burst(arg, false);
+ } else {
+ if (flow_id_cap)
+ return order_atq_worker(arg, true);
+ else
+ return order_atq_worker(arg, false);
+ }
}
static int
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 4190f9a..7942390 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -49,6 +49,7 @@ order_producer(void *arg)
const uint32_t flow = (uintptr_t)m % nb_flows;
/* Maintain seq number per flow */
m->seqn = producer_flow_seq[flow]++;
+ m->udata64 = flow;
ev.flow_id = flow;
ev.mbuf = m;
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 495efd9..1511c00 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
}
static int
-order_queue_worker(void *arg)
+order_queue_worker(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev;
@@ -34,6 +34,9 @@ order_queue_worker(void *arg)
continue;
}
+ if (!flow_id_cap)
+ ev.flow_id = ev.mbuf->udata64;
+
if (ev.queue_id == 0) { /* from ordered queue */
order_queue_process_stage_0(&ev);
while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_queue_worker(void *arg)
}
static int
-order_queue_worker_burst(void *arg)
+order_queue_worker_burst(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
}
for (i = 0; i < nb_rx; i++) {
+
+ if (!flow_id_cap)
+ ev[i].flow_id = ev[i].mbuf->udata64;
+
if (ev[i].queue_id == 0) { /* from ordered queue */
order_queue_process_stage_0(&ev[i]);
} else if (ev[i].queue_id == 1) {/* from atomic queue */
@@ -95,11 +102,19 @@ worker_wrapper(void *arg)
{
struct worker_data *w = arg;
const bool burst = evt_has_burst_mode(w->dev_id);
-
- if (burst)
- return order_queue_worker_burst(arg);
- else
- return order_queue_worker(arg);
+ const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+ if (burst) {
+ if (flow_id_cap)
+ return order_queue_worker_burst(arg, true);
+ else
+ return order_queue_worker_burst(arg, false);
+ } else {
+ if (flow_id_cap)
+ return order_queue_worker(arg, true);
+ else
+ return order_queue_worker(arg, false);
+ }
}
static int
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 43ccb1c..62019c1 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
if (!(info.event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
pconf.enqueue_depth = info.max_event_port_enqueue_depth;
- pconf.disable_implicit_release = 1;
+ pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
- pconf.disable_implicit_release = 0;
+ pconf.event_port_cfg = 0;
}
ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index b5ae87a..07cd079 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -355,7 +355,8 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
RTE_EVENT_DEV_CAP_BURST_MODE |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index f7383ca..95f03c8 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -406,7 +406,8 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
- RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES;
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
@@ -536,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
port_conf->enqueue_depth =
DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index e796975..933a5a5 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -224,7 +224,8 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
RTE_EVENT_DEV_CAP_NONSEQ_MODE|
- RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
};
}
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 33cb502..6f242aa 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -152,7 +152,8 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES|
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
@@ -218,7 +219,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
port_conf->new_event_threshold = edev->max_num_events;
port_conf->dequeue_depth = 1;
port_conf->enqueue_depth = 1;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static void
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 256b6a5..b31c26e 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -501,7 +501,8 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
static void
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9b2f75f..3050578 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -374,7 +374,8 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
.max_event_port_dequeue_depth = MAX_OPDL_CONS_Q_DEPTH,
.max_event_port_enqueue_depth = MAX_OPDL_CONS_Q_DEPTH,
.max_num_events = OPDL_INFLIGHT_EVENTS_TOTAL,
- .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE,
+ .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID,
};
*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index c889220..6fd1102 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -101,7 +101,8 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
dev_info->max_num_events = (1ULL << 20);
dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
RTE_EVENT_DEV_CAP_BURST_MODE |
- RTE_EVENT_DEV_CAP_EVENT_QOS;
+ RTE_EVENT_DEV_CAP_EVENT_QOS |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
static int
@@ -209,7 +210,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
port_conf->new_event_threshold = 32 * 1024;
port_conf->dequeue_depth = 16;
port_conf->enqueue_depth = 16;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static void
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index e310c8c..0d8013a 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -179,7 +179,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
}
p->inflight_max = conf->new_event_threshold;
- p->implicit_release = !conf->disable_implicit_release;
+ p->implicit_release = !(conf->event_port_cfg &
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
/* check if ring exists, same as rx_worker above */
snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
@@ -501,7 +502,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
port_conf->new_event_threshold = 1024;
port_conf->dequeue_depth = 16;
port_conf->enqueue_depth = 16;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static int
@@ -608,7 +609,8 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE),
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
};
*info = evdev_sw_info;
diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
index 38c21fa..4a7d823 100644
--- a/drivers/event/sw/sw_evdev_selftest.c
+++ b/drivers/event/sw/sw_evdev_selftest.c
@@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
.new_event_threshold = 1024,
.dequeue_depth = 32,
.enqueue_depth = 64,
- .disable_implicit_release = 0,
};
if (num_ports > MAX_PORTS)
return -1;
@@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
.new_event_threshold = 128,
.dequeue_depth = 32,
.enqueue_depth = 64,
- .disable_implicit_release = 0,
};
if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
printf("%d Error setting up port\n", __LINE__);
@@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
.new_event_threshold = 128,
.dequeue_depth = 32,
.enqueue_depth = 64,
- .disable_implicit_release = 0,
};
if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
printf("%d Error setting up port\n", __LINE__);
@@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
* only be initialized once - and this needs to be set for multiple runs
*/
conf.new_event_threshold = 512;
- conf.disable_implicit_release = disable_implicit_release;
+ conf.event_port_cfg = disable_implicit_release ?
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
if (rte_event_port_setup(evdev, 0, &conf) < 0) {
printf("Error setting up RX port\n");
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 42ff4ee..f70ab0c 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
struct rte_event_dev_config config = {
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 1,
.nb_events_limit = 4096,
.nb_event_queue_flows = 1024,
.nb_event_port_dequeue_depth = 128,
@@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
.schedule_type = cdata.queue_type,
.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
.nb_atomic_flows = 1024,
- .nb_atomic_order_sequences = 1024,
+ .nb_atomic_order_sequences = 1024,
};
struct rte_event_queue_conf tx_q_conf = {
.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
@@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
disable_implicit_release = (dev_info.event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
- wkr_p_conf.disable_implicit_release = disable_implicit_release;
+ wkr_p_conf.event_port_cfg = disable_implicit_release ?
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
if (dev_info.max_num_events < config.nb_events_limit)
config.nb_events_limit = dev_info.max_num_events;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 55bb2f7..ca6cd20 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
struct rte_event_dev_config config = {
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 0,
.nb_events_limit = 4096,
.nb_event_queue_flows = 1024,
.nb_event_port_dequeue_depth = 128,
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2dc95e5..9a3167c 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 63d57b4..203a14c 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
event_p_id++) {
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index f8c9843..c80573f 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 03ac581..9916a7f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
event_p_id++) {
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
index 86287b4..cc27bbc 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
return ret;
}
- pc->disable_implicit_release = 0;
+ pc->event_port_cfg = 0;
ret = rte_event_port_setup(dev_id, port_id, pc);
if (ret) {
RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 557198f..322453c 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -438,9 +438,29 @@ rte_event_dev_configure(uint8_t dev_id,
dev_id);
return -EINVAL;
}
- if (dev_conf->nb_event_queues > info.max_event_queues) {
- RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
- dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+ if (dev_conf->nb_event_queues > info.max_event_queues +
+ info.max_single_link_event_port_queue_pairs) {
+ RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
+ dev_id, dev_conf->nb_event_queues,
+ info.max_event_queues,
+ info.max_single_link_event_port_queue_pairs);
+ return -EINVAL;
+ }
+ if (dev_conf->nb_event_queues -
+ dev_conf->nb_single_link_event_port_queues >
+ info.max_event_queues) {
+ RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
+ dev_id, dev_conf->nb_event_queues,
+ dev_conf->nb_single_link_event_port_queues,
+ info.max_event_queues);
+ return -EINVAL;
+ }
+ if (dev_conf->nb_single_link_event_port_queues >
+ dev_conf->nb_event_queues) {
+ RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
+ dev_id,
+ dev_conf->nb_single_link_event_port_queues,
+ dev_conf->nb_event_queues);
return -EINVAL;
}
@@ -449,9 +469,31 @@ rte_event_dev_configure(uint8_t dev_id,
RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
return -EINVAL;
}
- if (dev_conf->nb_event_ports > info.max_event_ports) {
- RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
- dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+ if (dev_conf->nb_event_ports > info.max_event_ports +
+ info.max_single_link_event_port_queue_pairs) {
+ RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
+ dev_id, dev_conf->nb_event_ports,
+ info.max_event_ports,
+ info.max_single_link_event_port_queue_pairs);
+ return -EINVAL;
+ }
+ if (dev_conf->nb_event_ports -
+ dev_conf->nb_single_link_event_port_queues
+ > info.max_event_ports) {
+ RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
+ dev_id, dev_conf->nb_event_ports,
+ dev_conf->nb_single_link_event_port_queues,
+ info.max_event_ports);
+ return -EINVAL;
+ }
+
+ if (dev_conf->nb_single_link_event_port_queues >
+ dev_conf->nb_event_ports) {
+ RTE_EDEV_LOG_ERR(
+ "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
+ dev_id,
+ dev_conf->nb_single_link_event_port_queues,
+ dev_conf->nb_event_ports);
return -EINVAL;
}
@@ -738,7 +780,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- if (port_conf && port_conf->disable_implicit_release &&
+ if (port_conf &&
+ (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
!(dev->data->event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
RTE_EDEV_LOG_ERR(
@@ -831,6 +874,14 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
*attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
break;
+ case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
+ {
+ uint32_t config;
+
+ config = dev->data->ports_cfg[port_id].event_port_cfg;
+ *attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
+ break;
+ }
default:
return -EINVAL;
};
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 7dc8323..ce1fc2c 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -291,6 +291,12 @@ struct rte_event;
* single queue to each port or map a single queue to many port.
*/
+#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
+/**< Event device preserves the flow ID from the enqueued
+ * event to the dequeued event if the flag is set. Otherwise,
+ * the content of this field is implementation dependent.
+ */
+
/* Event device priority levels */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
/**< Highest priority expressed across eventdev subsystem
@@ -380,6 +386,10 @@ struct rte_event_dev_info {
* event port by this device.
* A device that does not support bulk enqueue will set this as 1.
*/
+ uint8_t max_event_port_links;
+ /**< Maximum number of queues that can be linked to a single event
+ * port by this device.
+ */
int32_t max_num_events;
/**< A *closed system* event dev has a limit on the number of events it
* can manage at a time. An *open system* event dev does not have a
@@ -387,6 +397,12 @@ struct rte_event_dev_info {
*/
uint32_t event_dev_cap;
/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+ uint8_t max_single_link_event_port_queue_pairs;
+ /**< Maximum number of event ports and queues that are optimized for
+ * (and only capable of) single-link configurations supported by this
+ * device. These ports and queues are not accounted for in
+ * max_event_ports or max_event_queues.
+ */
};
/**
@@ -494,6 +510,14 @@ struct rte_event_dev_config {
*/
uint32_t event_dev_cfg;
/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+ uint8_t nb_single_link_event_port_queues;
+ /**< Number of event ports and queues that will be singly-linked to
+ * each other. These are a subset of the overall event ports and
+ * queues; this value cannot exceed *nb_event_ports* or
+ * *nb_event_queues*. If the device has ports and queues that are
+ * optimized for single-link usage, this field is a hint for how many
+ * to allocate; otherwise, regular event ports and queues can be used.
+ */
};
/**
@@ -519,7 +543,6 @@ int
rte_event_dev_configure(uint8_t dev_id,
const struct rte_event_dev_config *dev_conf);
-
/* Event queue specific APIs */
/* Event queue configuration bitmap flags */
@@ -671,6 +694,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
/* Event port specific APIs */
+/* Event port configuration bitmap flags */
+#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL (1ULL << 0)
+/**< Configure the port not to release outstanding events in
+ * rte_event_dev_dequeue_burst(). If set, all events received through
+ * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
+ * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
+ * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
+ */
+#define RTE_EVENT_PORT_CFG_SINGLE_LINK (1ULL << 1)
+/**< This event port links only to a single event queue.
+ *
+ * @see rte_event_port_setup(), rte_event_port_link()
+ */
+
/** Event port configuration structure */
struct rte_event_port_conf {
int32_t new_event_threshold;
@@ -698,13 +735,7 @@ struct rte_event_port_conf {
* which previously supplied to rte_event_dev_configure().
* Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
*/
- uint8_t disable_implicit_release;
- /**< Configure the port not to release outstanding events in
- * rte_event_dev_dequeue_burst(). If true, all events received through
- * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
- * RTE_EVENT_OP_FORWARD. Must be false when the device is not
- * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
- */
+ uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
};
/**
@@ -769,6 +800,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
* The new event threshold of the port
*/
#define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
+/**
+ * The implicit release disable attribute of the port
+ */
+#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
/**
* Get an attribute from a port.
diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
index 443cd38..a3f9244 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
@@ -88,7 +88,6 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
return -ENXIO;
}
-
/**
* @internal
* Wrapper for use by pci drivers as a .remove function to detach a event
diff --git a/lib/librte_eventdev/rte_eventdev_trace.h b/lib/librte_eventdev/rte_eventdev_trace.h
index 4de6341..5ec43d8 100644
--- a/lib/librte_eventdev/rte_eventdev_trace.h
+++ b/lib/librte_eventdev/rte_eventdev_trace.h
@@ -34,6 +34,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u32(dev_conf->nb_event_port_dequeue_depth);
rte_trace_point_emit_u32(dev_conf->nb_event_port_enqueue_depth);
rte_trace_point_emit_u32(dev_conf->event_dev_cfg);
+ rte_trace_point_emit_u8(dev_conf->nb_single_link_event_port_queues);
rte_trace_point_emit_int(rc);
)
@@ -59,7 +60,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_i32(port_conf->new_event_threshold);
rte_trace_point_emit_u16(port_conf->dequeue_depth);
rte_trace_point_emit_u16(port_conf->enqueue_depth);
- rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+ rte_trace_point_emit_u32(port_conf->event_port_cfg);
rte_trace_point_emit_int(rc);
)
@@ -165,7 +166,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_i32(port_conf->new_event_threshold);
rte_trace_point_emit_u16(port_conf->dequeue_depth);
rte_trace_point_emit_u16(port_conf->enqueue_depth);
- rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+ rte_trace_point_emit_u32(port_conf->event_port_cfg);
rte_trace_point_emit_ptr(conf_cb);
rte_trace_point_emit_int(rc);
)
@@ -257,7 +258,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_i32(port_conf->new_event_threshold);
rte_trace_point_emit_u16(port_conf->dequeue_depth);
rte_trace_point_emit_u16(port_conf->enqueue_depth);
- rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+ rte_trace_point_emit_u32(port_conf->event_port_cfg);
)
RTE_TRACE_POINT(
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 3d9d0ca..2846d04 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -100,7 +100,6 @@ EXPERIMENTAL {
# added in 20.05
__rte_eventdev_trace_configure;
__rte_eventdev_trace_queue_setup;
- __rte_eventdev_trace_port_setup;
__rte_eventdev_trace_port_link;
__rte_eventdev_trace_port_unlink;
__rte_eventdev_trace_start;
@@ -134,4 +133,7 @@ EXPERIMENTAL {
__rte_eventdev_trace_crypto_adapter_queue_pair_del;
__rte_eventdev_trace_crypto_adapter_start;
__rte_eventdev_trace_crypto_adapter_stop;
+
+ # changed in 20.11
+ __rte_eventdev_trace_port_setup;
};
--
2.6.4
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2
2020-10-14 21:36 9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-15 17:31 9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
@ 2020-10-15 18:07 9% ` Timothy McDaniel
2020-10-15 18:07 1% ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
` (2 more replies)
2 siblings, 3 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 18:07 UTC (permalink / raw)
Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
harry.van.haaren, hemant.agrawal, bruce.richardson
This series implements the eventdev ABI changes required by
the DLB and DLB2 PMDs. This ABI change was announced in the
20.08 release notes [1]. This patch was initially part of
the V1 DLB PMD patchset.
The DLB hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) It does not (currently) have the ability to carry the flow_id as part
of the event (QE) payload.
Due to the above, we would like to propose the following enhancements.
1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.
2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.
3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future
assignment.
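As a rough illustration of how an application consumes these changes, the
following sketch checks the device capability before requesting the new
port flag. It is a minimal example only: dev_id and port_id are assumed
valid and error handling is elided.

#include <rte_eventdev.h>

static int
setup_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event_dev_info info;
	struct rte_event_port_conf conf;

	rte_event_dev_info_get(dev_id, &info);
	rte_event_port_default_conf_get(dev_id, port_id, &conf);

	/* The uint8_t disable_implicit_release field becomes a bit in
	 * event_port_cfg; set it only if the device supports it.
	 */
	conf.event_port_cfg = 0;
	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
		conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;

	return rte_event_port_setup(dev_id, port_id, &conf);
}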
Major changes since V1:
Reworded commit message, as requested
Fixed errors reported by clang
Added blurb to relnotes announcing the changes contained in this patch
Removed ABI deprecation announcement
Resolved patch apply issues when applying to eventdev-next
Combined ABI patch and app/examples patch to remove dependencies
Testing showed no performance impact due to the flow_id template code
added to test app.
[1] http://mails.dpdk.org/archives/dev/2020-August/177261.html
Timothy McDaniel (3):
eventdev: eventdev: express DLB/DLB2 PMD constraints
doc: remove eventdev ABI change announcement
doc: announce new eventdev ABI changes
app/test-eventdev/evt_common.h | 11 ++++
app/test-eventdev/test_order_atq.c | 28 +++++++---
app/test-eventdev/test_order_common.c | 1 +
app/test-eventdev/test_order_queue.c | 29 +++++++---
app/test/test_eventdev.c | 4 +-
doc/guides/rel_notes/deprecation.rst | 13 -----
doc/guides/rel_notes/release_20_11.rst | 8 +++
drivers/event/dpaa/dpaa_eventdev.c | 3 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 5 +-
drivers/event/dsw/dsw_evdev.c | 3 +-
drivers/event/octeontx/ssovf_evdev.c | 5 +-
drivers/event/octeontx2/otx2_evdev.c | 3 +-
drivers/event/opdl/opdl_evdev.c | 3 +-
drivers/event/skeleton/skeleton_eventdev.c | 5 +-
drivers/event/sw/sw_evdev.c | 8 ++-
drivers/event/sw/sw_evdev_selftest.c | 6 +-
.../eventdev_pipeline/pipeline_worker_generic.c | 6 +-
examples/eventdev_pipeline/pipeline_worker_tx.c | 1 +
examples/l2fwd-event/l2fwd_event_generic.c | 7 ++-
examples/l2fwd-event/l2fwd_event_internal_port.c | 6 +-
examples/l3fwd/l3fwd_event_generic.c | 7 ++-
examples/l3fwd/l3fwd_event_internal_port.c | 6 +-
lib/librte_eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/librte_eventdev/rte_eventdev.c | 65 +++++++++++++++++++---
lib/librte_eventdev/rte_eventdev.h | 51 ++++++++++++++---
lib/librte_eventdev/rte_eventdev_pmd_pci.h | 1 -
lib/librte_eventdev/rte_eventdev_trace.h | 7 ++-
lib/librte_eventdev/rte_eventdev_version.map | 4 +-
28 files changed, 221 insertions(+), 77 deletions(-)
--
2.6.4
^ permalink raw reply [relevance 9%]
* [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes
2020-10-15 17:31 9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
2020-10-15 17:31 1% ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-15 17:31 4% ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
@ 2020-10-15 17:31 13% ` Timothy McDaniel
2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 17:31 UTC (permalink / raw)
Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
harry.van.haaren, hemant.agrawal, bruce.richardson
The eventdev ABI changes announced in 20.08 have been implemented
in 20.11. This commit announces the implementation of those changes, and
lists the data structures that were modified.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
doc/guides/rel_notes/release_20_11.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7878e8e..0f8ee2a 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -352,6 +352,14 @@ ABI Changes
* ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
+* ``eventdev`` changes
+
+ * Following structures are modified to support DLB/DLB2 PMDs
+ and future extensions:
+
+ * ``rte_event_dev_info``
+ * ``rte_event_dev_config``
+ * ``rte_event_port_conf``
Known Issues
------------
--
2.6.4
^ permalink raw reply [relevance 13%]
* [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints
2020-10-15 17:31 9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
@ 2020-10-15 17:31 1% ` Timothy McDaniel
2020-10-15 17:31 4% ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
2020-10-15 17:31 13% ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 17:31 UTC (permalink / raw)
Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
harry.van.haaren, hemant.agrawal, bruce.richardson
This commit implements the eventdev ABI changes required by
the DLB/DLB2 PMDs. Several data structures and constants are modified
or added in this patch, thereby requiring modifications to the
dependent apps and examples.
The DLB/DLB2 hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) DLB does not have the ability to carry the flow_id as part
of the event (QE) payload. Note that the DLB2 hardware is capable of
carrying the flow_id.
Following is a detailed description of the changes that have been made.
1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.
struct rte_event_dev_info {
uint8_t max_event_port_links;
/**< Maximum number of queues that can be linked to a single event
* port by this device.
*/
uint8_t max_single_link_event_port_queue_pairs;
/**< Maximum number of event ports and queues that are optimized for
* (and only capable of) single-link configurations supported by this
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
}
2) Add a new field to the rte_event_dev_config struct. This field allows the
application to specify how many of its ports are limited to a single link,
or will be used in single link mode.
/** Event device configuration structure */
struct rte_event_dev_config {
uint8_t nb_single_link_event_port_queues;
/**< Number of event ports and queues that will be singly-linked to
* each other. These are a subset of the overall event ports and
* queues; this value cannot exceed *nb_event_ports* or
* *nb_event_queues*. If the device has ports and queues that are
* optimized for single-link usage, this field is a hint for how many
* to allocate; otherwise, regular event ports and queues can be used.
*/
}
3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future assignment.
/* Event port configuration bitmap flags */
#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL (1ULL << 0)
/**< Configure the port not to release outstanding events in
* rte_event_dev_dequeue_burst(). If set, all events received through
* the port must be explicitly released with RTE_EVENT_OP_RELEASE or
* RTE_EVENT_OP_FORWARD. Must be unset if the device is not
* RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
*/
#define RTE_EVENT_PORT_CFG_SINGLE_LINK (1ULL << 1)
/**< This event port links only to a single event queue.
*
* @see rte_event_port_setup(), rte_event_port_link()
*/
/**
* The implicit release disable attribute of the port
*/
#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
struct rte_event_port_conf {
uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
}
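A short usage sketch of the new config field and port attribute follows.
The queue/port counts are illustrative placeholders and return codes are
ignored for brevity; they are not taken from this patch.

#include <rte_eventdev.h>

static void
configure_dev(uint8_t dev_id)
{
	struct rte_event_dev_config cfg = {
		.nb_event_queues = 8,
		.nb_event_ports = 8,
		/* Two ports/queues reserved for single-link use; must not
		 * exceed nb_event_ports or nb_event_queues.
		 */
		.nb_single_link_event_port_queues = 2,
		.nb_events_limit = 4096,
		.nb_event_queue_flows = 1024,
		.nb_event_port_dequeue_depth = 128,
		.nb_event_port_enqueue_depth = 128,
	};
	uint32_t impl_rel;

	rte_event_dev_configure(dev_id, &cfg);

	/* Query the implicit-release setting via the new attribute. */
	rte_event_port_attr_get(dev_id, 0,
			RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE,
			&impl_rel);
}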
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
app/test-eventdev/evt_common.h | 11 ++++
app/test-eventdev/test_order_atq.c | 28 +++++++---
app/test-eventdev/test_order_common.c | 1 +
app/test-eventdev/test_order_queue.c | 29 +++++++---
app/test/test_eventdev.c | 4 +-
drivers/event/dpaa/dpaa_eventdev.c | 3 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 5 +-
drivers/event/dsw/dsw_evdev.c | 3 +-
drivers/event/octeontx/ssovf_evdev.c | 5 +-
drivers/event/octeontx2/otx2_evdev.c | 3 +-
drivers/event/opdl/opdl_evdev.c | 3 +-
drivers/event/skeleton/skeleton_eventdev.c | 5 +-
drivers/event/sw/sw_evdev.c | 8 ++-
drivers/event/sw/sw_evdev_selftest.c | 6 +-
.../eventdev_pipeline/pipeline_worker_generic.c | 6 +-
examples/eventdev_pipeline/pipeline_worker_tx.c | 1 +
examples/l2fwd-event/l2fwd_event_generic.c | 7 ++-
examples/l2fwd-event/l2fwd_event_internal_port.c | 6 +-
examples/l3fwd/l3fwd_event_generic.c | 7 ++-
examples/l3fwd/l3fwd_event_internal_port.c | 6 +-
lib/librte_eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/librte_eventdev/rte_eventdev.c | 65 +++++++++++++++++++---
lib/librte_eventdev/rte_eventdev.h | 51 ++++++++++++++---
lib/librte_eventdev/rte_eventdev_pmd_pci.h | 1 -
lib/librte_eventdev/rte_eventdev_trace.h | 7 ++-
lib/librte_eventdev/rte_eventdev_version.map | 4 +-
26 files changed, 213 insertions(+), 64 deletions(-)
diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index f9d7378..a1da1cf 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
true : false;
}
+static inline bool
+evt_has_flow_id(uint8_t dev_id)
+{
+ struct rte_event_dev_info dev_info;
+
+ rte_event_dev_info_get(dev_id, &dev_info);
+ return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
+ true : false;
+}
+
static inline int
evt_service_setup(uint32_t service_id)
{
@@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
.dequeue_timeout_ns = opt->deq_tmo_nsec,
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 0,
.nb_events_limit = info.max_num_events,
.nb_event_queue_flows = opt->nb_flows,
.nb_event_port_dequeue_depth =
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 3366cfc..cfcb1dc 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
}
static int
-order_atq_worker(void *arg)
+order_atq_worker(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev;
@@ -34,6 +34,9 @@ order_atq_worker(void *arg)
continue;
}
+ if (!flow_id_cap)
+ ev.flow_id = ev.mbuf->udata64;
+
if (ev.sub_event_type == 0) { /* stage 0 from producer */
order_atq_process_stage_0(&ev);
while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_atq_worker(void *arg)
}
static int
-order_atq_worker_burst(void *arg)
+order_atq_worker_burst(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
}
for (i = 0; i < nb_rx; i++) {
+ if (!flow_id_cap)
+ ev[i].flow_id = ev[i].mbuf->udata64;
+
if (ev[i].sub_event_type == 0) { /*stage 0 */
order_atq_process_stage_0(&ev[i]);
} else if (ev[i].sub_event_type == 1) { /* stage 1 */
@@ -95,11 +101,19 @@ worker_wrapper(void *arg)
{
struct worker_data *w = arg;
const bool burst = evt_has_burst_mode(w->dev_id);
-
- if (burst)
- return order_atq_worker_burst(arg);
- else
- return order_atq_worker(arg);
+ const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+ if (burst) {
+ if (flow_id_cap)
+ return order_atq_worker_burst(arg, true);
+ else
+ return order_atq_worker_burst(arg, false);
+ } else {
+ if (flow_id_cap)
+ return order_atq_worker(arg, true);
+ else
+ return order_atq_worker(arg, false);
+ }
}
static int
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 4190f9a..7942390 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -49,6 +49,7 @@ order_producer(void *arg)
const uint32_t flow = (uintptr_t)m % nb_flows;
/* Maintain seq number per flow */
m->seqn = producer_flow_seq[flow]++;
+ m->udata64 = flow;
ev.flow_id = flow;
ev.mbuf = m;
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 495efd9..1511c00 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
}
static int
-order_queue_worker(void *arg)
+order_queue_worker(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev;
@@ -34,6 +34,9 @@ order_queue_worker(void *arg)
continue;
}
+ if (!flow_id_cap)
+ ev.flow_id = ev.mbuf->udata64;
+
if (ev.queue_id == 0) { /* from ordered queue */
order_queue_process_stage_0(&ev);
while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_queue_worker(void *arg)
}
static int
-order_queue_worker_burst(void *arg)
+order_queue_worker_burst(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
}
for (i = 0; i < nb_rx; i++) {
+
+ if (!flow_id_cap)
+ ev[i].flow_id = ev[i].mbuf->udata64;
+
if (ev[i].queue_id == 0) { /* from ordered queue */
order_queue_process_stage_0(&ev[i]);
} else if (ev[i].queue_id == 1) {/* from atomic queue */
@@ -95,11 +102,19 @@ worker_wrapper(void *arg)
{
struct worker_data *w = arg;
const bool burst = evt_has_burst_mode(w->dev_id);
-
- if (burst)
- return order_queue_worker_burst(arg);
- else
- return order_queue_worker(arg);
+ const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+ if (burst) {
+ if (flow_id_cap)
+ return order_queue_worker_burst(arg, true);
+ else
+ return order_queue_worker_burst(arg, false);
+ } else {
+ if (flow_id_cap)
+ return order_queue_worker(arg, true);
+ else
+ return order_queue_worker(arg, false);
+ }
}
static int
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 43ccb1c..62019c1 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
if (!(info.event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
pconf.enqueue_depth = info.max_event_port_enqueue_depth;
- pconf.disable_implicit_release = 1;
+ pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
- pconf.disable_implicit_release = 0;
+ pconf.event_port_cfg = 0;
}
ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index b5ae87a..07cd079 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -355,7 +355,8 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
RTE_EVENT_DEV_CAP_BURST_MODE |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index f7383ca..95f03c8 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -406,7 +406,8 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
- RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES;
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
@@ -536,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
port_conf->enqueue_depth =
DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index e796975..933a5a5 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -224,7 +224,8 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
RTE_EVENT_DEV_CAP_NONSEQ_MODE|
- RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
};
}
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 33cb502..6f242aa 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -152,7 +152,8 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES|
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
@@ -218,7 +219,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
port_conf->new_event_threshold = edev->max_num_events;
port_conf->dequeue_depth = 1;
port_conf->enqueue_depth = 1;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static void
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 256b6a5..b31c26e 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -501,7 +501,8 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
static void
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9b2f75f..3050578 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -374,7 +374,8 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
.max_event_port_dequeue_depth = MAX_OPDL_CONS_Q_DEPTH,
.max_event_port_enqueue_depth = MAX_OPDL_CONS_Q_DEPTH,
.max_num_events = OPDL_INFLIGHT_EVENTS_TOTAL,
- .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE,
+ .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID,
};
*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index c889220..6fd1102 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -101,7 +101,8 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
dev_info->max_num_events = (1ULL << 20);
dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
RTE_EVENT_DEV_CAP_BURST_MODE |
- RTE_EVENT_DEV_CAP_EVENT_QOS;
+ RTE_EVENT_DEV_CAP_EVENT_QOS |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
static int
@@ -209,7 +210,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
port_conf->new_event_threshold = 32 * 1024;
port_conf->dequeue_depth = 16;
port_conf->enqueue_depth = 16;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static void
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index e310c8c..0d8013a 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -179,7 +179,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
}
p->inflight_max = conf->new_event_threshold;
- p->implicit_release = !conf->disable_implicit_release;
+ p->implicit_release = !(conf->event_port_cfg &
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
/* check if ring exists, same as rx_worker above */
snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
@@ -501,7 +502,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
port_conf->new_event_threshold = 1024;
port_conf->dequeue_depth = 16;
port_conf->enqueue_depth = 16;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static int
@@ -608,7 +609,8 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE),
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
};
*info = evdev_sw_info;
diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
index 38c21fa..4a7d823 100644
--- a/drivers/event/sw/sw_evdev_selftest.c
+++ b/drivers/event/sw/sw_evdev_selftest.c
@@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
.new_event_threshold = 1024,
.dequeue_depth = 32,
.enqueue_depth = 64,
- .disable_implicit_release = 0,
};
if (num_ports > MAX_PORTS)
return -1;
@@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
.new_event_threshold = 128,
.dequeue_depth = 32,
.enqueue_depth = 64,
- .disable_implicit_release = 0,
};
if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
printf("%d Error setting up port\n", __LINE__);
@@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
.new_event_threshold = 128,
.dequeue_depth = 32,
.enqueue_depth = 64,
- .disable_implicit_release = 0,
};
if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
printf("%d Error setting up port\n", __LINE__);
@@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
* only be initialized once - and this needs to be set for multiple runs
*/
conf.new_event_threshold = 512;
- conf.disable_implicit_release = disable_implicit_release;
+ conf.event_port_cfg = disable_implicit_release ?
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
if (rte_event_port_setup(evdev, 0, &conf) < 0) {
printf("Error setting up RX port\n");
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 42ff4ee..f70ab0c 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
struct rte_event_dev_config config = {
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 1,
.nb_events_limit = 4096,
.nb_event_queue_flows = 1024,
.nb_event_port_dequeue_depth = 128,
@@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
.schedule_type = cdata.queue_type,
.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
.nb_atomic_flows = 1024,
- .nb_atomic_order_sequences = 1024,
+ .nb_atomic_order_sequences = 1024,
};
struct rte_event_queue_conf tx_q_conf = {
.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
@@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
disable_implicit_release = (dev_info.event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
- wkr_p_conf.disable_implicit_release = disable_implicit_release;
+ wkr_p_conf.event_port_cfg = disable_implicit_release ?
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
if (dev_info.max_num_events < config.nb_events_limit)
config.nb_events_limit = dev_info.max_num_events;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 55bb2f7..ca6cd20 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
struct rte_event_dev_config config = {
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 0,
.nb_events_limit = 4096,
.nb_event_queue_flows = 1024,
.nb_event_port_dequeue_depth = 128,
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2dc95e5..9a3167c 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 63d57b4..203a14c 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
event_p_id++) {
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index f8c9843..c80573f 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 03ac581..9916a7f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
event_p_id++) {
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
index 86287b4..cc27bbc 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
return ret;
}
- pc->disable_implicit_release = 0;
+ pc->event_port_cfg = 0;
ret = rte_event_port_setup(dev_id, port_id, pc);
if (ret) {
RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 557198f..322453c 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -438,9 +438,29 @@ rte_event_dev_configure(uint8_t dev_id,
dev_id);
return -EINVAL;
}
- if (dev_conf->nb_event_queues > info.max_event_queues) {
- RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
- dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+ if (dev_conf->nb_event_queues > info.max_event_queues +
+ info.max_single_link_event_port_queue_pairs) {
+ RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
+ dev_id, dev_conf->nb_event_queues,
+ info.max_event_queues,
+ info.max_single_link_event_port_queue_pairs);
+ return -EINVAL;
+ }
+ if (dev_conf->nb_event_queues -
+ dev_conf->nb_single_link_event_port_queues >
+ info.max_event_queues) {
+ RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
+ dev_id, dev_conf->nb_event_queues,
+ dev_conf->nb_single_link_event_port_queues,
+ info.max_event_queues);
+ return -EINVAL;
+ }
+ if (dev_conf->nb_single_link_event_port_queues >
+ dev_conf->nb_event_queues) {
+ RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
+ dev_id,
+ dev_conf->nb_single_link_event_port_queues,
+ dev_conf->nb_event_queues);
return -EINVAL;
}
@@ -449,9 +469,31 @@ rte_event_dev_configure(uint8_t dev_id,
RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
return -EINVAL;
}
- if (dev_conf->nb_event_ports > info.max_event_ports) {
- RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
- dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+ if (dev_conf->nb_event_ports > info.max_event_ports +
+ info.max_single_link_event_port_queue_pairs) {
+ RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
+ dev_id, dev_conf->nb_event_ports,
+ info.max_event_ports,
+ info.max_single_link_event_port_queue_pairs);
+ return -EINVAL;
+ }
+ if (dev_conf->nb_event_ports -
+ dev_conf->nb_single_link_event_port_queues
+ > info.max_event_ports) {
+ RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
+ dev_id, dev_conf->nb_event_ports,
+ dev_conf->nb_single_link_event_port_queues,
+ info.max_event_ports);
+ return -EINVAL;
+ }
+
+ if (dev_conf->nb_single_link_event_port_queues >
+ dev_conf->nb_event_ports) {
+ RTE_EDEV_LOG_ERR(
+ "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
+ dev_id,
+ dev_conf->nb_single_link_event_port_queues,
+ dev_conf->nb_event_ports);
return -EINVAL;
}
@@ -738,7 +780,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- if (port_conf && port_conf->disable_implicit_release &&
+ if (port_conf &&
+ (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
!(dev->data->event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
RTE_EDEV_LOG_ERR(
@@ -831,6 +874,14 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
*attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
break;
+ case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
+ {
+ uint32_t config;
+
+ config = dev->data->ports_cfg[port_id].event_port_cfg;
+ *attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
+ break;
+ }
default:
return -EINVAL;
};
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 7dc8323..ce1fc2c 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -291,6 +291,12 @@ struct rte_event;
* single queue to each port or map a single queue to many port.
*/
+#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
+/**< Event device preserves the flow ID from the enqueued
+ * event to the dequeued event if the flag is set. Otherwise,
+ * the content of this field is implementation dependent.
+ */
+
/* Event device priority levels */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
/**< Highest priority expressed across eventdev subsystem
@@ -380,6 +386,10 @@ struct rte_event_dev_info {
* event port by this device.
* A device that does not support bulk enqueue will set this as 1.
*/
+ uint8_t max_event_port_links;
+ /**< Maximum number of queues that can be linked to a single event
+ * port by this device.
+ */
int32_t max_num_events;
/**< A *closed system* event dev has a limit on the number of events it
* can manage at a time. An *open system* event dev does not have a
@@ -387,6 +397,12 @@ struct rte_event_dev_info {
*/
uint32_t event_dev_cap;
/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+ uint8_t max_single_link_event_port_queue_pairs;
+ /**< Maximum number of event ports and queues that are optimized for
+ * (and only capable of) single-link configurations supported by this
+ * device. These ports and queues are not accounted for in
+ * max_event_ports or max_event_queues.
+ */
};
/**
@@ -494,6 +510,14 @@ struct rte_event_dev_config {
*/
uint32_t event_dev_cfg;
/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+ uint8_t nb_single_link_event_port_queues;
+ /**< Number of event ports and queues that will be singly-linked to
+ * each other. These are a subset of the overall event ports and
+ * queues; this value cannot exceed *nb_event_ports* or
+ * *nb_event_queues*. If the device has ports and queues that are
+ * optimized for single-link usage, this field is a hint for how many
+ * to allocate; otherwise, regular event ports and queues can be used.
+ */
};
/**
@@ -519,7 +543,6 @@ int
rte_event_dev_configure(uint8_t dev_id,
const struct rte_event_dev_config *dev_conf);
-
/* Event queue specific APIs */
/* Event queue configuration bitmap flags */
@@ -671,6 +694,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
/* Event port specific APIs */
+/* Event port configuration bitmap flags */
+#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL (1ULL << 0)
+/**< Configure the port not to release outstanding events in
+ * rte_event_dev_dequeue_burst(). If set, all events received through
+ * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
+ * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
+ * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
+ */
+#define RTE_EVENT_PORT_CFG_SINGLE_LINK (1ULL << 1)
+/**< This event port links only to a single event queue.
+ *
+ * @see rte_event_port_setup(), rte_event_port_link()
+ */
+
/** Event port configuration structure */
struct rte_event_port_conf {
int32_t new_event_threshold;
@@ -698,13 +735,7 @@ struct rte_event_port_conf {
* which previously supplied to rte_event_dev_configure().
* Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
*/
- uint8_t disable_implicit_release;
- /**< Configure the port not to release outstanding events in
- * rte_event_dev_dequeue_burst(). If true, all events received through
- * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
- * RTE_EVENT_OP_FORWARD. Must be false when the device is not
- * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
- */
+ uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
};
/**
@@ -769,6 +800,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
* The new event threshold of the port
*/
#define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
+/**
+ * The implicit release disable attribute of the port
+ */
+#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
/**
* Get an attribute from a port.
diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
index 443cd38..a3f9244 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
@@ -88,7 +88,6 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
return -ENXIO;
}
-
/**
* @internal
* Wrapper for use by pci drivers as a .remove function to detach a event
diff --git a/lib/librte_eventdev/rte_eventdev_trace.h b/lib/librte_eventdev/rte_eventdev_trace.h
index 4de6341..5ec43d8 100644
--- a/lib/librte_eventdev/rte_eventdev_trace.h
+++ b/lib/librte_eventdev/rte_eventdev_trace.h
@@ -34,6 +34,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u32(dev_conf->nb_event_port_dequeue_depth);
rte_trace_point_emit_u32(dev_conf->nb_event_port_enqueue_depth);
rte_trace_point_emit_u32(dev_conf->event_dev_cfg);
+ rte_trace_point_emit_u8(dev_conf->nb_single_link_event_port_queues);
rte_trace_point_emit_int(rc);
)
@@ -59,7 +60,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_i32(port_conf->new_event_threshold);
rte_trace_point_emit_u16(port_conf->dequeue_depth);
rte_trace_point_emit_u16(port_conf->enqueue_depth);
- rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+ rte_trace_point_emit_u32(port_conf->event_port_cfg);
rte_trace_point_emit_int(rc);
)
@@ -165,7 +166,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_i32(port_conf->new_event_threshold);
rte_trace_point_emit_u16(port_conf->dequeue_depth);
rte_trace_point_emit_u16(port_conf->enqueue_depth);
- rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+ rte_trace_point_emit_u32(port_conf->event_port_cfg);
rte_trace_point_emit_ptr(conf_cb);
rte_trace_point_emit_int(rc);
)
@@ -257,7 +258,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_i32(port_conf->new_event_threshold);
rte_trace_point_emit_u16(port_conf->dequeue_depth);
rte_trace_point_emit_u16(port_conf->enqueue_depth);
- rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+ rte_trace_point_emit_u32(port_conf->event_port_cfg);
)
RTE_TRACE_POINT(
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 3d9d0ca..2846d04 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -100,7 +100,6 @@ EXPERIMENTAL {
# added in 20.05
__rte_eventdev_trace_configure;
__rte_eventdev_trace_queue_setup;
- __rte_eventdev_trace_port_setup;
__rte_eventdev_trace_port_link;
__rte_eventdev_trace_port_unlink;
__rte_eventdev_trace_start;
@@ -134,4 +133,7 @@ EXPERIMENTAL {
__rte_eventdev_trace_crypto_adapter_queue_pair_del;
__rte_eventdev_trace_crypto_adapter_start;
__rte_eventdev_trace_crypto_adapter_stop;
+
+ # changed in 20.11
+ __rte_eventdev_trace_port_setup;
};
--
2.6.4
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement
2020-10-15 17:31 9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
2020-10-15 17:31 1% ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
@ 2020-10-15 17:31 4% ` Timothy McDaniel
2020-10-15 17:31 13% ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 17:31 UTC (permalink / raw)
Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
harry.van.haaren, hemant.agrawal, bruce.richardson
The announcement made in 20.08 is no longer required.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 13 -------------
1 file changed, 13 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index efd7710..08f1c04 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -189,19 +189,6 @@ Deprecation Notices
``rte_cryptodev_scheduler_worker_detach`` and
``rte_cryptodev_scheduler_workers_get`` accordingly.
-* eventdev: Following structures will be modified to support DLB PMD
- and future extensions:
-
- - ``rte_event_dev_info``
- - ``rte_event_dev_config``
- - ``rte_event_port_conf``
-
- Patches containing justification, documentation, and proposed modifications
- can be found at:
-
- - https://patches.dpdk.org/patch/71457/
- - https://patches.dpdk.org/patch/71456/
-
* sched: To allow more traffic classes, flexible mapping of pipe queues to
traffic classes, and subport level configuration of pipes and queues
changes will be made to macros, data structures and API functions defined
--
2.6.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2
2020-10-14 21:36 9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
@ 2020-10-15 17:31 9% ` Timothy McDaniel
2020-10-15 17:31 1% ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
` (2 more replies)
2020-10-15 18:07 9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2 siblings, 3 replies; 200+ results
From: Timothy McDaniel @ 2020-10-15 17:31 UTC (permalink / raw)
Cc: jerinj, mattias.ronnblom, liang.j.ma, peter.mccarthy,
nipun.gupta, pbhagavatula, dev, erik.g.carrillo, gage.eads,
harry.van.haaren, hemant.agrawal, bruce.richardson
This series implements the eventdev ABI changes required by
the DLB and DLB2 PMDs. This ABI change was announced in the
20.08 release notes [1]. This patch was initially part of
the V1 DLB PMD patchset.
The DLB hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) It does not (currently) have the ability to carry the flow_id as part
of the event (QE) payload.
Due to the above, we would like to propose the following enhancements.
1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.
2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.
3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future
assignment.
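To make the proposal concrete, here is a minimal sketch of the application-side
usage implied by items 2) and 3) above. The RTE_EVENT_PORT_CFG_* flag names and
the helper function are assumptions based on this description, not text taken
from the patches:

#include <rte_eventdev.h>

static int
setup_single_link_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event_dev_info info;
	struct rte_event_port_conf pconf;
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	pconf.new_event_threshold = info.max_num_events;
	pconf.dequeue_depth = info.max_event_port_dequeue_depth;
	pconf.enqueue_depth = info.max_event_port_enqueue_depth;
	/* Bit field replacing the old dedicated 'implicit_release_disabled' field. */
	pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL |
			       RTE_EVENT_PORT_CFG_SINGLE_LINK;

	return rte_event_port_setup(dev_id, port_id, &pconf);
}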
Major changes since V1:
Reworded commit message, as requested
Fixed errors reported by clang
Added blurb to relnotes announcing the changes contained in this patch
Removed ABI deprecation announcement
Resolved patch apply issues when applying to eventdev-next
Combined ABI patch and app/examples patch to remove bi-directional dependency
Testing showed no performance impact due to the flow_id template code
added to test app.
[1] http://mails.dpdk.org/archives/dev/2020-August/177261.html
Timothy McDaniel (3):
eventdev: eventdev: express DLB/DLB2 PMD constraints
doc: remove eventdev ABI change announcement
doc: announce new eventdev ABI changes
app/test-eventdev/evt_common.h | 11 ++++
app/test-eventdev/test_order_atq.c | 28 +++++++---
app/test-eventdev/test_order_common.c | 1 +
app/test-eventdev/test_order_queue.c | 29 +++++++---
app/test/test_eventdev.c | 4 +-
doc/guides/rel_notes/deprecation.rst | 13 -----
doc/guides/rel_notes/release_20_11.rst | 8 +++
drivers/event/dpaa/dpaa_eventdev.c | 3 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 5 +-
drivers/event/dsw/dsw_evdev.c | 3 +-
drivers/event/octeontx/ssovf_evdev.c | 5 +-
drivers/event/octeontx2/otx2_evdev.c | 3 +-
drivers/event/opdl/opdl_evdev.c | 3 +-
drivers/event/skeleton/skeleton_eventdev.c | 5 +-
drivers/event/sw/sw_evdev.c | 8 ++-
drivers/event/sw/sw_evdev_selftest.c | 6 +-
.../eventdev_pipeline/pipeline_worker_generic.c | 6 +-
examples/eventdev_pipeline/pipeline_worker_tx.c | 1 +
examples/l2fwd-event/l2fwd_event_generic.c | 7 ++-
examples/l2fwd-event/l2fwd_event_internal_port.c | 6 +-
examples/l3fwd/l3fwd_event_generic.c | 7 ++-
examples/l3fwd/l3fwd_event_internal_port.c | 6 +-
lib/librte_eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/librte_eventdev/rte_eventdev.c | 65 +++++++++++++++++++---
lib/librte_eventdev/rte_eventdev.h | 51 ++++++++++++++---
lib/librte_eventdev/rte_eventdev_pmd_pci.h | 1 -
lib/librte_eventdev/rte_eventdev_trace.h | 7 ++-
lib/librte_eventdev/rte_eventdev_version.map | 4 +-
28 files changed, 221 insertions(+), 77 deletions(-)
--
2.6.4
^ permalink raw reply [relevance 9%]
* Re: [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines
2020-10-15 15:32 0% ` Luca Boccassi
@ 2020-10-15 15:34 0% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-10-15 15:34 UTC (permalink / raw)
To: Luca Boccassi; +Cc: dev, david.marchand, arybchenko, ferruh.yigit, thomas
On Thu, Oct 15, 2020 at 04:32:35PM +0100, Luca Boccassi wrote:
> On Thu, 2020-10-15 at 15:03 +0100, Bruce Richardson wrote:
> > On Thu, Oct 15, 2020 at 02:05:37PM +0100, Luca Boccassi wrote:
> > > On Thu, 2020-10-15 at 12:18 +0100, Bruce Richardson wrote:
> > > > On Thu, Oct 15, 2020 at 11:30:29AM +0100, Luca Boccassi wrote:
> > > > > On Wed, 2020-10-14 at 15:13 +0100, Bruce Richardson wrote:
> > > > > > As discussed on the dpdk-dev mailing list[1], we can make some easy
> > > > > > improvements in standardizing the naming of the various components in DPDK,
> > > > > > and their associated feature-enabled macros.
> > > > > >
> > > > > > Following this patch, each library will have the name in format,
> > > > > > 'librte_<name>.so', and the macro indicating that library is enabled in the
> > > > > > build will have the form 'RTE_LIB_<NAME>'.
> > > > > >
> > > > > > Similarly, for drivers, the equivalent name formats and macros are:
> > > > > > 'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
> > > > > > device type taken from the relevant driver subdirectory name, i.e. 'net',
> > > > > > 'crypto' etc.
> > > > > >
> > > > > > To avoid too many changes at once for end applications, the old macro names
> > > > > > will still be provided in the build in this release, but will be removed
> > > > > > subsequently.
> > > > > >
> > > > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > > >
> > > > > > [1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u
> > > > > > ---
> > > > > > app/test-bbdev/meson.build | 4 ++--
> > > > > > app/test-crypto-perf/meson.build | 2 +-
> > > > > > app/test-pmd/meson.build | 12 ++++++------
> > > > > > app/test/meson.build | 8 ++++----
> > > > > > doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> > > > > > drivers/baseband/meson.build | 1 -
> > > > > > drivers/bus/meson.build | 1 -
> > > > > > drivers/common/meson.build | 1 -
> > > > > > drivers/common/mlx5/meson.build | 1 -
> > > > > > drivers/common/qat/meson.build | 1 -
> > > > > > drivers/compress/meson.build | 1 -
> > > > > > drivers/compress/octeontx/meson.build | 2 +-
> > > > > > drivers/crypto/meson.build | 1 -
> > > > > > drivers/crypto/null/meson.build | 2 +-
> > > > > > drivers/crypto/octeontx/meson.build | 2 +-
> > > > > > drivers/crypto/octeontx2/meson.build | 2 +-
> > > > > > drivers/crypto/scheduler/meson.build | 2 +-
> > > > > > drivers/crypto/virtio/meson.build | 2 +-
> > > > > > drivers/event/dpaa/meson.build | 2 +-
> > > > > > drivers/event/dpaa2/meson.build | 2 +-
> > > > > > drivers/event/meson.build | 1 -
> > > > > > drivers/event/octeontx/meson.build | 2 +-
> > > > > > drivers/event/octeontx2/meson.build | 2 +-
> > > > > > drivers/mempool/meson.build | 1 -
> > > > > > drivers/meson.build | 9 ++++-----
> > > > > > drivers/net/meson.build | 1 -
> > > > > > drivers/net/mlx4/meson.build | 2 +-
> > > > > > drivers/raw/ifpga/meson.build | 2 +-
> > > > > > drivers/raw/meson.build | 1 -
> > > > > > drivers/regex/meson.build | 1 -
> > > > > > drivers/vdpa/meson.build | 1 -
> > > > > > examples/bond/meson.build | 2 +-
> > > > > > examples/ethtool/meson.build | 2 +-
> > > > > > examples/ioat/meson.build | 2 +-
> > > > > > examples/l2fwd-crypto/meson.build | 2 +-
> > > > > > examples/ntb/meson.build | 2 +-
> > > > > > examples/vm_power_manager/meson.build | 6 +++---
> > > > > > lib/librte_ethdev/meson.build | 1 -
> > > > > > lib/librte_graph/meson.build | 2 --
> > > > > > lib/meson.build | 3 ++-
> > > > > > 40 files changed, 47 insertions(+), 55 deletions(-)
> > > > >
> > > > > Does this change the share object file names too, or only the macros?
> > > > >
> > > >
> > > > It does indeed change the object file names, which is a little bit
> > > > concerning. However, the consensus based on the RFC seemed to be that the
> > > > benefit is likely worth the change. If we want, we can look to use symlinks
> > > > to the old names on install, but I think that just delays the pain since I
> > > > would expect few to actually change their build to the new names until the
> > > > old ones and the symlinks completely go away.
> > > >
> > > > /Bruce
> > >
> > > It is a backward incompatible change, so we need to provide symlinks,
> > > right? On upgrade, programs linked to librte_old.so will fail to start.
> > > Or was this targeted at 20.11 thus piggy-backing on the ABI change
> > > which forces a re-link?
> > >
> > More of the latter, and the fact that changing the build system involved a
> > few library renames anyway for those using make. Since the ABI is changing
> > this release, and all the libs have a new major version number, there is no
> > requirement for libs linked against an older version to work; and since
> > pkg-config should now be used for linking, the actual names should not be
> > a concern.
> >
> > That's the thinking anyway. :-)
> >
> > /Bruce
>
> Ok that makes sense, I wasn't sure if this series was targeted for
> 20.11 or for later. In that case,
>
> Acked-by: Luca Boccassi <bluca@debian.org>
>
Yes, if it doesn't make 20.11 we'll have to re-evaluate and be stricter
with the compatibility constraints. It might not be worth doing post-20.11.
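For readers adapting their builds, a small sketch of what the rename means in
conditional application code; the component names and helper functions below
are illustrative only, and the old make-era macro spellings varied per
component:

/* Old make-era macro, shown for contrast: */
#ifdef RTE_LIBRTE_IXGBE_PMD
	tune_for_ixgbe(); /* hypothetical helper */
#endif

/* After this series, driver macros follow RTE_<CLASS>_<NAME>: */
#ifdef RTE_NET_IXGBE
	tune_for_ixgbe();
#endif

/* ...and library macros follow RTE_LIB_<NAME>: */
#ifdef RTE_LIB_METRICS
	init_metrics(); /* hypothetical helper */
#endif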
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines
2020-10-15 14:03 3% ` Bruce Richardson
@ 2020-10-15 15:32 0% ` Luca Boccassi
2020-10-15 15:34 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Luca Boccassi @ 2020-10-15 15:32 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, david.marchand, arybchenko, ferruh.yigit, thomas
On Thu, 2020-10-15 at 15:03 +0100, Bruce Richardson wrote:
> On Thu, Oct 15, 2020 at 02:05:37PM +0100, Luca Boccassi wrote:
> > On Thu, 2020-10-15 at 12:18 +0100, Bruce Richardson wrote:
> > > On Thu, Oct 15, 2020 at 11:30:29AM +0100, Luca Boccassi wrote:
> > > > On Wed, 2020-10-14 at 15:13 +0100, Bruce Richardson wrote:
> > > > > As discussed on the dpdk-dev mailing list[1], we can make some easy
> > > > > improvements in standardizing the naming of the various components in DPDK,
> > > > > and their associated feature-enabled macros.
> > > > >
> > > > > Following this patch, each library will have the name in format,
> > > > > 'librte_<name>.so', and the macro indicating that library is enabled in the
> > > > > build will have the form 'RTE_LIB_<NAME>'.
> > > > >
> > > > > Similarly, for drivers, the equivalent name formats and macros are:
> > > > > 'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
> > > > > device type taken from the relevant driver subdirectory name, i.e. 'net',
> > > > > 'crypto' etc.
> > > > >
> > > > > To avoid too many changes at once for end applications, the old macro names
> > > > > will still be provided in the build in this release, but will be removed
> > > > > subsequently.
> > > > >
> > > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > >
> > > > > [1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u
> > > > > ---
> > > > > app/test-bbdev/meson.build | 4 ++--
> > > > > app/test-crypto-perf/meson.build | 2 +-
> > > > > app/test-pmd/meson.build | 12 ++++++------
> > > > > app/test/meson.build | 8 ++++----
> > > > > doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> > > > > drivers/baseband/meson.build | 1 -
> > > > > drivers/bus/meson.build | 1 -
> > > > > drivers/common/meson.build | 1 -
> > > > > drivers/common/mlx5/meson.build | 1 -
> > > > > drivers/common/qat/meson.build | 1 -
> > > > > drivers/compress/meson.build | 1 -
> > > > > drivers/compress/octeontx/meson.build | 2 +-
> > > > > drivers/crypto/meson.build | 1 -
> > > > > drivers/crypto/null/meson.build | 2 +-
> > > > > drivers/crypto/octeontx/meson.build | 2 +-
> > > > > drivers/crypto/octeontx2/meson.build | 2 +-
> > > > > drivers/crypto/scheduler/meson.build | 2 +-
> > > > > drivers/crypto/virtio/meson.build | 2 +-
> > > > > drivers/event/dpaa/meson.build | 2 +-
> > > > > drivers/event/dpaa2/meson.build | 2 +-
> > > > > drivers/event/meson.build | 1 -
> > > > > drivers/event/octeontx/meson.build | 2 +-
> > > > > drivers/event/octeontx2/meson.build | 2 +-
> > > > > drivers/mempool/meson.build | 1 -
> > > > > drivers/meson.build | 9 ++++-----
> > > > > drivers/net/meson.build | 1 -
> > > > > drivers/net/mlx4/meson.build | 2 +-
> > > > > drivers/raw/ifpga/meson.build | 2 +-
> > > > > drivers/raw/meson.build | 1 -
> > > > > drivers/regex/meson.build | 1 -
> > > > > drivers/vdpa/meson.build | 1 -
> > > > > examples/bond/meson.build | 2 +-
> > > > > examples/ethtool/meson.build | 2 +-
> > > > > examples/ioat/meson.build | 2 +-
> > > > > examples/l2fwd-crypto/meson.build | 2 +-
> > > > > examples/ntb/meson.build | 2 +-
> > > > > examples/vm_power_manager/meson.build | 6 +++---
> > > > > lib/librte_ethdev/meson.build | 1 -
> > > > > lib/librte_graph/meson.build | 2 --
> > > > > lib/meson.build | 3 ++-
> > > > > 40 files changed, 47 insertions(+), 55 deletions(-)
> > > >
> > > > Does this change the share object file names too, or only the macros?
> > > >
> > >
> > > It does indeed change the object file names, which is a little bit
> > > concerning. However, the consensus based on the RFC seemed to be that the
> > > benefit is likely worth the change. If we want, we can look to use symlinks
> > > to the old names on install, but I think that just delays the pain since I
> > > would expect few to actually change their build to the new names until the
> > > old ones and the symlinks completely go away.
> > >
> > > /Bruce
> >
> > It is a backward incompatible change, so we need to provide symlinks,
> > right? On upgrade, programs linked to librte_old.so will fail to start.
> > Or was this targeted at 20.11 thus piggy-backing on the ABI change
> > which forces a re-link?
> >
> More of the latter, and the fact that changing the build system involved a
> few library renames anyway for those using make. Since the ABI is changing
> this release, and all the libs have a new major version number, there is no
> requirement for libs linked against an older version to work; and since
> pkg-config should now be used for linking, the actual names should not be
> a concern.
>
> That's the thinking anyway. :-)
>
> /Bruce
Ok that makes sense, I wasn't sure if this series was targeted for
20.11 or for later. In that case,
Acked-by: Luca Boccassi <bluca@debian.org>
--
Kind regards,
Luca Boccassi
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 11:09 0% ` Andrew Rybchenko
@ 2020-10-15 14:39 0% ` Slava Ovsiienko
0 siblings, 0 replies; 200+ results
From: Slava Ovsiienko @ 2020-10-15 14:39 UTC (permalink / raw)
To: Andrew Rybchenko, Jerin Jacob
Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
Andrew Rybchenko
Hi, Andrew
> >> At least there are few simple limitations which are easy to
> >> express:
> >> 1. Maximum number of segments
> > We have scatter capability and we do not report the maximal number of
> > segments, it is on PMD own. We could add the field to the
> > rte_eth_dev_info, but not sure whether we have something special to report
> there even for mlx5 case.
>
> There is always a limitation in programming and HW. Nothing is unlimited.
> Limits could be high, but still exist.
> Number of descriptors? Width of field in HW interface?
> Maximum length of the config message to HW?
> All above could limit it directly or indirectly.
None of the above is applicable to the mlx5 buffer split feature - it just adjusts the Rx buffer pointers
and segment sizes, nothing beyond the generic limitations - the queue descriptor numbers
and mbuf buffer size. Presumably most HW from other vendors is capable of supporting
the buffer split feature with similar generic limitations.
>
> >> 2. Possibility to use the last segment many times if required
> >> (I was suggesting to use scatter for it, but you rejected
> >> the idea - may be time to reconsider :) )
> >
> > Mmm, sorry I do not follow, it might be I did not understand/missed your
> idea.
> > Some of the last segment attributes are used multiple times to scatter
> > the rest of the data in fashion very close to the existing scattering
> > approach - at least, pool and buffer size from this pool are used. The
> > beginning of the packet scattered according to the new descriptions,
> > the rest of the packet - according to the existing regular scattering
> > with pool settings from the last segment description.
>
> I believe that the possibility to split into a fixed segments
> (BUFFER_SPLIT) and possibility to use a mempool (just mp or last segment)
> many times if a packet does not fit (SCATTER) it is *different* features.
Sorry, what do you mean by "use mempool many times"? Allocate multiple
mbufs from the same mempool and build a chain of them?
We have the SCATTER offload and many PMDs advertise it.
Scattering is actually a split: it happens at well-defined points, into mbufs
from the same pool. BUFFER_SPLIT just extends SCATTER capabilities by allowing
arbitrary split point settings and multiple pools.
> I can easily imagine HW which could do BUFFER_SPLIT to fixed segments, but
> cannot use the last segment many times (i.e. no classical SCATTER).
Sorry, what do you mean by "BUFFER_SPLIT to fixed segments"?
The new BUFFER_SPLIT offload is intended to push data into flexible segments,
potentially allocated from different pools. The HW can be constrained in the
number of pools (say, it supports a pool alloc/free hardware accelerator for a
single pool only); in that case it will not be able to support BUFFER_SPLIT in
a multiple-pool config, but using a single pool does not raise the problem.
It seems I missed something - could you please provide an example of how you
would like to see the last segment used many times with BUFFER_SPLIT?
How should the packet be split, into mbufs with which (last-segment-inherited) attributes?
>
> >
> > 3. Maximum offset
> >> Frankly speaking I'm not sure why it cannot be handled on
> >> PMD level (i.e. provide descriptors with offset taken into
> >> account or guarantee that HW mempool objects initialized
> >> correctly with required headroom). May be in some corner
> >> cases when the same HW mempool is shared by various
> >> segments with different offset requirements.
> >
> > HW offsets are beyond the feature scope, the offsets in the segment
> > description is supposed to be added to the native pool offsets (if any).
>
> Are you saying that offsets are not passed to HW and just handled by PMD to
> provide correct IOVA addresses to put data to? If so, it is an implementation
> detail which is specific to mlx5. If so, no specific limitations except data room,
> size and offset consistency.
> But it could be passed to a HW and it could be, for example, just 8 bits for the
> value.
Yes, it could. But other vendors should be involved; it is not known for now
who is going to support BUFFER_SPLIT and in which way. We should not invent
theoretical limitations and merge dead code. And please note -
Tx segmentation has lived successfully for 10 years without any limitations:
no one cares about them, and there has been no request to report them. The same is expected for Rx.
>
> >
> >> 4. Offset alignment
> >> 5. Maximum/minimum length of a segment 6. Length alignment
> > In which form? Mask of lsbs ? 0 means no limitations ?
>
> log2, i.e. 0 => 1 (no limitations) 1 => 2 (even only),
> 6 => 64 (64-byte cache line aligned) etc.
>
Yes, a possible option.
> >
> >>
> >> I realize that 3, 4 and 5 could be per segment number.
> >> If it is really that complex, report common denominator which is
> >> guaranteed to work. If we have no checks on ethdev layer, application
> >> can ignore it if it knows better
> >
> > Currently it is not clear at all what kind of limitations should be
> > reported, we could include all of mentioned/proposed ones, and no one
> > will report there -
> > mlx5 has no any reasonable limitations to report for now.
> >
> > Should we reserve some pointer field in the rte_eth_dev_info to report
> > the limitations? (Limitation description should contain variable size
> > array, depending on the number of segments, so pointer seems to be
> appropriate).
> > It would allow us to avoid ABI break, and present the limitation structure
> once it is defined.
>
> I will let other ethdev maintainers to make a decision here.
> My vote would be to report limitations mentioned above.
> It looks like Jerin is also interested in limitations reporting. Not sure if my form
> looks OK or no.
For now I tend to think we could reserve a pointer for BUFFER_SPLIT limitations and that's it.
Reporting some silly generic limitations from mlx5 means introducing dead code, in my opinion.
If we see an actual request from applications to check and handle limitations, we can fill the
structure in then (in practice applications are very limited in this matter - they expect the
split point to be set at a strictly defined place in the packet format).
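For illustration only, a hypothetical shape such a reporting structure could
take, covering the limitations enumerated earlier in the thread; none of these
names exist in DPDK, and this is not a proposal of a final API:

struct rte_eth_rxseg_limitations {
	uint16_t max_nseg;          /* 1: maximum number of split segments */
	uint16_t last_seg_reuse:1;  /* 2: last segment may be used many times */
	uint16_t max_offset;        /* 3: maximum data offset, 0 = no limit */
	uint8_t  offset_align_log2; /* 4: log2 of required offset alignment */
	uint16_t min_seg_len;       /* 5: minimum segment length */
	uint16_t max_seg_len;       /* 5: maximum segment length, 0 = no limit */
	uint8_t  len_align_log2;    /* 6: log2 of required length alignment */
};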
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2
2020-10-15 14:26 7% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Jerin Jacob
@ 2020-10-15 14:38 4% ` McDaniel, Timothy
0 siblings, 0 replies; 200+ results
From: McDaniel, Timothy @ 2020-10-15 14:38 UTC (permalink / raw)
To: Jerin Jacob
Cc: dpdk-dev, Carrillo, Erik G, Eads, Gage, Van Haaren, Harry,
Hemant Agrawal
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, October 15, 2020 9:26 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Carrillo, Erik G <erik.g.carrillo@intel.com>; Eads,
> Gage <gage.eads@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2
>
> On Thu, Oct 15, 2020 at 3:04 AM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > This series implements the eventdev ABI changes required by
> > the DLB and DLB2 PMDs. This ABI change was announced in the
> > 20.08 release notes [1]. This patch was initially part of
> > the V1 DLB PMD patchset.
>
> Hi @McDaniel, Timothy ,
>
> The following things are missing from this patch set before it can be merged:
> - Update doc/guides/rel_notes/release_20_11.rst for the "API Changes"
> and/or "ABI Changes" section
> - Update doc/guides/rel_notes/deprecation.rst to remove this patch's
> deprecation note
> - Merge patches 1 and 2 into a single patch; patch 1 has a compilation
> error if built alone
> - Update the git commit message to give more detail on the combined patch.
> - Rebase the patch onto http://browse.dpdk.org/next/dpdk-next-eventdev/;
> it still has git-am apply issues.
>
> After fixing the above, I will merge this for RC1. Please send ASAP.
>
I will get on this straight away. Thanks.
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2
2020-10-14 21:36 9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-14 21:36 2% ` [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-14 21:36 6% ` [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
@ 2020-10-15 14:26 7% ` Jerin Jacob
2020-10-15 14:38 4% ` McDaniel, Timothy
2 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-15 14:26 UTC (permalink / raw)
To: Timothy McDaniel
Cc: dpdk-dev, Erik Gabriel Carrillo, Gage Eads, Van Haaren, Harry,
Hemant Agrawal
On Thu, Oct 15, 2020 at 3:04 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> This series implements the eventdev ABI changes required by
> the DLB and DLB2 PMDs. This ABI change was announced in the
> 20.08 release notes [1]. This patch was initially part of
> the V1 DLB PMD patchset.
Hi @McDaniel, Timothy ,
The following things are missing from this patch set before it can be merged:
- Update doc/guides/rel_notes/release_20_11.rst for the "API Changes"
and/or "ABI Changes" section
- Update doc/guides/rel_notes/deprecation.rst to remove this patch's
deprecation note
- Merge patches 1 and 2 into a single patch; patch 1 has a compilation
error if built alone
- Update the git commit message to give more detail on the combined patch.
- Rebase the patch onto http://browse.dpdk.org/next/dpdk-next-eventdev/;
it still has git-am apply issues.
After fixing the above, I will merge this for RC1. Please send ASAP.
>
> The DLB hardware does not conform exactly to the eventdev interface.
> 1) It has a limit on the number of queues that may be linked to a port.
> 2) Some ports are further restricted to a maximum of 1 linked queue.
> 3) It does not (currently) have the ability to carry the flow_id as part
> of the event (QE) payload.
>
> Due to the above, we would like to propose the following enhancements.
>
> 1) Add new fields to the rte_event_dev_info struct. These fields allow
> the device to advertise its capabilities so that applications can take
> the appropriate actions based on those capabilities.
>
> 2) Add a new field to the rte_event_dev_config struct. This field allows
> the application to specify how many of its ports are limited to a single
> link, or will be used in single link mode.
>
> 3) Replace the dedicated implicit_release_disabled field with a bit field
> of explicit port capabilities. The implicit_release_disable functionality
> is assigned to one bit, and a port-is-single-link-only attribute is
> assigned to another, with the remaining bits available for future
> assignment.
>
> Note that it was requested that we split these app/test
> changes out from the eventdev ABI patch. As a result,
> neither of these patches will build without the other
> also being applied.
>
> Major changes since V1:
> Reworded commit message, as requested
> Fixed errors reported by clang
>
> Testing showed no performance impact due to the flow_id template code
> added to test app.
>
> [1] http://mails.dpdk.org/archives/dev/2020-August/177261.html
>
>
> Timothy McDaniel (2):
> eventdev: eventdev: express DLB/DLB2 PMD constraints
> eventdev: update app and examples for new eventdev ABI
>
>
>
> Timothy McDaniel (2):
> eventdev: eventdev: express DLB/DLB2 PMD constraints
> eventdev: update app and examples for new eventdev ABI
>
> app/test-eventdev/evt_common.h | 11 ++++
> app/test-eventdev/test_order_atq.c | 28 ++++++---
> app/test-eventdev/test_order_common.c | 1 +
> app/test-eventdev/test_order_queue.c | 29 +++++++---
> app/test/test_eventdev.c | 4 +-
> drivers/event/dpaa/dpaa_eventdev.c | 3 +-
> drivers/event/dpaa2/dpaa2_eventdev.c | 5 +-
> drivers/event/dsw/dsw_evdev.c | 3 +-
> drivers/event/octeontx/ssovf_evdev.c | 5 +-
> drivers/event/octeontx2/otx2_evdev.c | 3 +-
> drivers/event/opdl/opdl_evdev.c | 3 +-
> drivers/event/skeleton/skeleton_eventdev.c | 5 +-
> drivers/event/sw/sw_evdev.c | 8 ++-
> drivers/event/sw/sw_evdev_selftest.c | 6 +-
> .../eventdev_pipeline/pipeline_worker_generic.c | 6 +-
> examples/eventdev_pipeline/pipeline_worker_tx.c | 1 +
> examples/l2fwd-event/l2fwd_event_generic.c | 7 ++-
> examples/l2fwd-event/l2fwd_event_internal_port.c | 6 +-
> examples/l3fwd/l3fwd_event_generic.c | 7 ++-
> examples/l3fwd/l3fwd_event_internal_port.c | 6 +-
> lib/librte_eventdev/rte_event_eth_tx_adapter.c | 2 +-
> lib/librte_eventdev/rte_eventdev.c | 66 +++++++++++++++++++---
> lib/librte_eventdev/rte_eventdev.h | 51 ++++++++++++++---
> lib/librte_eventdev/rte_eventdev_pmd_pci.h | 1 -
> lib/librte_eventdev/rte_eventdev_trace.h | 7 ++-
> lib/librte_eventdev/rte_eventdev_version.map | 4 +-
> 26 files changed, 214 insertions(+), 64 deletions(-)
>
> --
> 2.6.4
>
^ permalink raw reply [relevance 7%]
* Re: [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines
2020-10-15 13:05 3% ` Luca Boccassi
@ 2020-10-15 14:03 3% ` Bruce Richardson
2020-10-15 15:32 0% ` Luca Boccassi
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2020-10-15 14:03 UTC (permalink / raw)
To: Luca Boccassi; +Cc: dev, david.marchand, arybchenko, ferruh.yigit, thomas
On Thu, Oct 15, 2020 at 02:05:37PM +0100, Luca Boccassi wrote:
> On Thu, 2020-10-15 at 12:18 +0100, Bruce Richardson wrote:
> > On Thu, Oct 15, 2020 at 11:30:29AM +0100, Luca Boccassi wrote:
> > > On Wed, 2020-10-14 at 15:13 +0100, Bruce Richardson wrote:
> > > > As discussed on the dpdk-dev mailing list[1], we can make some easy
> > > > improvements in standardizing the naming of the various components in DPDK,
> > > > and their associated feature-enabled macros.
> > > >
> > > > Following this patch, each library will have the name in format,
> > > > 'librte_<name>.so', and the macro indicating that library is enabled in the
> > > > build will have the form 'RTE_LIB_<NAME>'.
> > > >
> > > > Similarly, for drivers, the equivalent name formats and macros are:
> > > > 'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
> > > > device type taken from the relevant driver subdirectory name, i.e. 'net',
> > > > 'crypto' etc.
> > > >
> > > > To avoid too many changes at once for end applications, the old macro names
> > > > will still be provided in the build in this release, but will be removed
> > > > subsequently.
> > > >
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > >
> > > > [1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u
> > > > ---
> > > > app/test-bbdev/meson.build | 4 ++--
> > > > app/test-crypto-perf/meson.build | 2 +-
> > > > app/test-pmd/meson.build | 12 ++++++------
> > > > app/test/meson.build | 8 ++++----
> > > > doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> > > > drivers/baseband/meson.build | 1 -
> > > > drivers/bus/meson.build | 1 -
> > > > drivers/common/meson.build | 1 -
> > > > drivers/common/mlx5/meson.build | 1 -
> > > > drivers/common/qat/meson.build | 1 -
> > > > drivers/compress/meson.build | 1 -
> > > > drivers/compress/octeontx/meson.build | 2 +-
> > > > drivers/crypto/meson.build | 1 -
> > > > drivers/crypto/null/meson.build | 2 +-
> > > > drivers/crypto/octeontx/meson.build | 2 +-
> > > > drivers/crypto/octeontx2/meson.build | 2 +-
> > > > drivers/crypto/scheduler/meson.build | 2 +-
> > > > drivers/crypto/virtio/meson.build | 2 +-
> > > > drivers/event/dpaa/meson.build | 2 +-
> > > > drivers/event/dpaa2/meson.build | 2 +-
> > > > drivers/event/meson.build | 1 -
> > > > drivers/event/octeontx/meson.build | 2 +-
> > > > drivers/event/octeontx2/meson.build | 2 +-
> > > > drivers/mempool/meson.build | 1 -
> > > > drivers/meson.build | 9 ++++-----
> > > > drivers/net/meson.build | 1 -
> > > > drivers/net/mlx4/meson.build | 2 +-
> > > > drivers/raw/ifpga/meson.build | 2 +-
> > > > drivers/raw/meson.build | 1 -
> > > > drivers/regex/meson.build | 1 -
> > > > drivers/vdpa/meson.build | 1 -
> > > > examples/bond/meson.build | 2 +-
> > > > examples/ethtool/meson.build | 2 +-
> > > > examples/ioat/meson.build | 2 +-
> > > > examples/l2fwd-crypto/meson.build | 2 +-
> > > > examples/ntb/meson.build | 2 +-
> > > > examples/vm_power_manager/meson.build | 6 +++---
> > > > lib/librte_ethdev/meson.build | 1 -
> > > > lib/librte_graph/meson.build | 2 --
> > > > lib/meson.build | 3 ++-
> > > > 40 files changed, 47 insertions(+), 55 deletions(-)
> > >
> > > Does this change the share object file names too, or only the macros?
> > >
> >
> > It does indeed change the object file names, which is a little bit
> > concerning. However, the consensus based on the RFC seemed to be that the
> > benefit is likely worth the change. If we want, we can look to use symlinks
> > to the old names on install, but I think that just delays the pain since I
> > would expect few to actually change their build to the new names until the
> > old ones and the symlinks completely go away.
> >
> > /Bruce
>
> It is a backward incompatible change, so we need to provide symlinks,
> right? On upgrade, programs linked to librte_old.so will fail to start.
> Or was this targeted at 20.11 thus piggy-backing on the ABI change
> which forces a re-link?
>
More of the latter, and the fact that changing the build system involved a
few library renames anyway for those using make. Since the ABI is changing
this release, and all the libs have a new major version number, there is no
requirement for libs linked against an older version to work; and since
pkg-config should now be used for linking, the actual names should not be
a concern.
That's the thinking anyway. :-)
/Bruce
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 13:07 0% ` Andrew Rybchenko
@ 2020-10-15 13:57 0% ` Slava Ovsiienko
2020-10-15 20:22 0% ` Slava Ovsiienko
1 sibling, 0 replies; 200+ results
From: Slava Ovsiienko @ 2020-10-15 13:57 UTC (permalink / raw)
To: Andrew Rybchenko, NBU-Contact-Thomas Monjalon, Ferruh Yigit,
Jerin Jacob, Andrew Rybchenko
Cc: dpdk-dev, Stephen Hemminger, Olivier Matz, Maxime Coquelin,
David Marchand
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, October 15, 2020 16:07
> To: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Jerin Jacob <jerinjacobk@gmail.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Cc: dpdk-dev <dev@dpdk.org>; Stephen Hemminger
> <stephen@networkplumber.org>; Olivier Matz <olivier.matz@6wind.com>;
> Maxime Coquelin <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>
> Subject: Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
>
> On 10/15/20 3:49 PM, Thomas Monjalon wrote:
> > 15/10/2020 13:49, Slava Ovsiienko:
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>> On 10/15/2020 12:26 PM, Jerin Jacob wrote:
> >>>
> >>> <...>
> >>>
> >>>>>>>> If we see some of the features of such kind or other PMDs
> >>>>>>>> adopt the split feature - we'll try to find the common root
> >>>>>>>> and consider the way how
> >>>>>> to report it.
> >>>>>>>
> >>>>>>> My only concern with that approach will be ABI break again if
> >>>>>>> something needs to be exposed over rte_eth_dev_info().
> >>>>>
> >>>>> Let's reserve the pointer to struct rte_eth_rxseg_limitations in
> >>>>> the rte_eth_dev_info to avoid ABI break?
> >>>>
> >>>> Works for me. If we add an additional reserved field.
> >>>>
> >>>> Due to RC1 time constraints, I am OK to leave it as a reserved field
> >>>> and fill in the meat when it is required, if other ethdev maintainers are OK.
> >>>> It will be required for feature completeness.
> >>>>
> >>>
> >>> Sounds good to me.
> >
> > OK for me.
>
> OK as well, but I dislike the idea of a pointer in dev_info.
> It sounds like it breaks existing practice.
Moreover, if we are going to have multiple features using Rx segmentation,
we should provide multiple limitation structures - at least one per feature.
> We should either reserve enough space or simply add a dedicated API call to
> report Rx seg capabilities.
>
It seems we are trying to embrace everything in a very generic way 😊
Just curious - how did we manage to survive without limitations in the Tx direction?
No one told us how many segments a PMD supports on Tx or what the limitations
are for offsets and alignments; it seems there are no limits on Tx segment size at all.
How could that happen? Tx limitations do not exist? Or did just no one care about them?
As for Rx limitations - there are no reasonable ones for now. We would invent
a way to report limitations (and it seems unbalanced - we should then provide
the same for Tx), the next step would be to provide at least one PMD using it,
and in this way make the mlx5 PMD report silly values - "I have no reasonable
limitations beyond a meaningful buffer size under pool_buf_size/UINT16_MAX".
IMO, if some HW does not support arbitrary splits (presumably not the common case -
most HW is very flexible about specifying Rx buffers), the BUFFER_SPLIT feature
should not be advertised at all, because it would not be very useful. An application
is intended to work over a specific protocol; it knows where it wants to set the
split points, as defined by the packet format (see the sketch at the end of this
message). Hence, the application is not so interested in offsets, alignments,
etc. - it just checks whether the PMD provides the requested split points or not.
That's why simple documenting was initially intended; not many limitations are
expected, as the Tx direction shows.
Yes, generally speaking, there is no doubt it would be nice to report the limitations, but:
- not many are expected (documentation can cover the few exceptions)
- no nice way to report them has been found - a pointer? an API?
- it is complicated to present for various features (variable-size array, multiple features)
- it is not known which limitations are actually needed, just some theoretical ones
So, we see a large gray area; should we invent something not well-defined to cover it,
or wait for an actual request to check limitations that can't be handled by documentation
and internal PMD checking/validation?
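To make the "application knows its split points" view above concrete, a hedged
sketch of configuring fixed split points with the series under review; struct
and field names follow the v6 patches and may change before merge, and
hdr_pool, pay_pool and the queue parameters are illustrative only:

#include <rte_ethdev.h>
#include <rte_lcore.h>

static int
setup_split_queue(uint16_t port_id, struct rte_mempool *hdr_pool,
		  struct rte_mempool *pay_pool)
{
	/* Fixed split: headers land in hdr_pool, the rest in pay_pool. */
	struct rte_eth_rxseg_split segs[2] = {
		{ .mp = hdr_pool, .length = 42, .offset = 0 }, /* Eth+IPv4+UDP */
		{ .mp = pay_pool, .length = 0, .offset = 0 },  /* remainder */
	};
	struct rte_eth_rxconf rxconf = {
		.offloads = DEV_RX_OFFLOAD_BUFFER_SPLIT,
		.rx_seg = (union rte_eth_rxseg *)segs,
		.rx_nseg = 2,
	};

	/* The mempool argument is NULL; pools come from the descriptions. */
	return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      &rxconf, NULL);
}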
With best regards, Slava
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int
@ 2020-10-15 13:30 4% ` Andrew Rybchenko
2020-10-16 9:22 0% ` Ferruh Yigit
2020-10-16 11:20 3% ` Kinsella, Ray
0 siblings, 2 replies; 200+ results
From: Andrew Rybchenko @ 2020-10-15 13:30 UTC (permalink / raw)
To: Ray Kinsella, Neil Horman, Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko
Cc: dev, Ivan Ilchenko
From: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Change rte_eth_dev_stop() return value from void to int
and return negative errno values in case of error conditions.
Also update the usage of the function in ethdev according to
the new return type.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/rel_notes/deprecation.rst | 1 -
doc/guides/rel_notes/release_20_11.rst | 3 +++
lib/librte_ethdev/rte_ethdev.c | 27 +++++++++++++++++++-------
lib/librte_ethdev/rte_ethdev.h | 5 ++++-
4 files changed, 27 insertions(+), 9 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index d1f5ed39db..2e04e24374 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -127,7 +127,6 @@ Deprecation Notices
negative errno values to indicate various error conditions (e.g.
invalid port ID, unsupported operation, failed operation):
- - ``rte_eth_dev_stop``
- ``rte_eth_dev_close``
* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index f8686a50db..c8c30937fa 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -355,6 +355,9 @@ API Changes
* vhost: Add a new function ``rte_vhost_crypto_driver_start`` to be called
instead of ``rte_vhost_driver_start`` by crypto applications.
+* ethdev: changed ``rte_eth_dev_stop`` return value from ``void`` to
+ ``int`` to provide a way to report various error conditions.
+
ABI Changes
-----------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index d9b82df073..b8cf04ef4d 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1661,7 +1661,7 @@ rte_eth_dev_start(uint16_t port_id)
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
int diag;
- int ret;
+ int ret, ret_stop;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1695,7 +1695,13 @@ rte_eth_dev_start(uint16_t port_id)
RTE_ETHDEV_LOG(ERR,
"Error during restoring configuration for device (port %u): %s\n",
port_id, rte_strerror(-ret));
- rte_eth_dev_stop(port_id);
+ ret_stop = rte_eth_dev_stop(port_id);
+ if (ret_stop != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Failed to stop device (port %u): %s\n",
+ port_id, rte_strerror(-ret_stop));
+ }
+
return ret;
}
@@ -1708,26 +1714,28 @@ rte_eth_dev_start(uint16_t port_id)
return 0;
}
-void
+int
rte_eth_dev_stop(uint16_t port_id)
{
struct rte_eth_dev *dev;
- RTE_ETH_VALID_PORTID_OR_RET(port_id);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
- RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_stop, -ENOTSUP);
if (dev->data->dev_started == 0) {
RTE_ETHDEV_LOG(INFO,
"Device with port_id=%"PRIu16" already stopped\n",
port_id);
- return;
+ return 0;
}
dev->data->dev_started = 0;
(*dev->dev_ops->dev_stop)(dev);
rte_ethdev_trace_stop(port_id);
+
+ return 0;
}
int
@@ -1783,7 +1791,12 @@ rte_eth_dev_reset(uint16_t port_id)
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
- rte_eth_dev_stop(port_id);
+ ret = rte_eth_dev_stop(port_id);
+ if (ret != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Failed to stop device (port %u) before reset: %s - ignore\n",
+ port_id, rte_strerror(-ret));
+ }
ret = dev->dev_ops->dev_reset(dev);
return eth_err(port_id, ret);
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index a61ca115a0..b85861cf2b 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -2277,8 +2277,11 @@ int rte_eth_dev_start(uint16_t port_id);
*
* @param port_id
* The port identifier of the Ethernet device.
+ * @return
+ * - 0: Success, Ethernet device stopped.
+ * - <0: Error code of the driver device stop function.
*/
-void rte_eth_dev_stop(uint16_t port_id);
+int rte_eth_dev_stop(uint16_t port_id);
/**
* Link up an Ethernet device.
--
2.17.1
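A minimal sketch of the application-side change implied by this patch; the
helper name is illustrative:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_errno.h>

static void
shutdown_port(uint16_t port_id)
{
	/* rte_eth_dev_stop() previously returned void; callers can now
	 * detect and report a failed stop. */
	int ret = rte_eth_dev_stop(port_id);

	if (ret != 0)
		printf("Failed to stop port %u: %s\n",
		       port_id, rte_strerror(-ret));
}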
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v6 2/5] ethdev: add new attributes to hairpin config
@ 2020-10-15 13:08 4% ` Bing Zhao
0 siblings, 0 replies; 200+ results
From: Bing Zhao @ 2020-10-15 13:08 UTC (permalink / raw)
To: thomas, orika, ferruh.yigit, arybchenko, mdr, nhorman,
bernard.iremonger, beilei.xing, wenzhuo.lu
Cc: dev
To support two-port hairpin mode and keep backward compatibility for
applications, two new attribute members will be added to the hairpin
queue configuration structure.
`tx_explicit` indicates whether the application itself will insert the
Tx part flow rules. If not set, the PMD will insert the rules implicitly.
`manual_bind` indicates whether the hairpin Tx queue and its peer Rx
queue will be bound manually by the application; if not set, they are
bound automatically during the device start stage.
Different Tx and Rx queue pairs could have different values, but it is
highly recommended that all paired queues between one egress port and
its peer ingress port have the same values, in order not to bring any
chaos to the system. The actual support of these attribute parameters
will be checked and decided by the PMD drivers.
In single-port hairpin, if both attributes are left at zero, the
behavior remains the same as before: no bind API needs to be called and
no Tx flow rules need to be inserted manually by the application.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
v6: removed unnecessary comment and used "Rx" & "Tx" consistently
v4: squash document update and more info for the two new attributes
v2: optimize the structure and remove unused macros
---
doc/guides/prog_guide/rte_flow.rst | 3 +++
doc/guides/rel_notes/release_20_11.rst | 7 +++++++
lib/librte_ethdev/rte_ethdev.c | 8 ++++----
lib/librte_ethdev/rte_ethdev.h | 27 ++++++++++++++++++++++++++-
4 files changed, 40 insertions(+), 5 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 55497c9..3df005a 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2618,6 +2618,9 @@ set, unpredictable value will be seen depending on driver implementation. For
loopback/hairpin packet, metadata set on Rx/Tx may or may not be propagated to
the other path depending on HW capability.
+In hairpin case with Tx explicit flow mode, metadata could (not mandatory) be
+used to connect the Rx and Tx flows if it can be propagated from Rx to Tx path.
+
.. _table_rte_flow_action_set_meta:
.. table:: SET_META
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 02bf7ca..2f23e6f 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -92,6 +92,7 @@ New Features
* **Updated the ethdev library to support hairpin between two ports.**
New APIs are introduced to support binding / unbinding 2 ports hairpin.
+ Hairpin Tx part flow rules can be inserted explicitly.
* **Updated Broadcom bnxt driver.**
@@ -396,6 +397,12 @@ ABI Changes
Applications should use the new values for identification of existing
extensions in the packet header.
+ * ``struct rte_eth_hairpin_conf`` has two new members:
+
+ * ``uint32_t tx_explicit:1;``
+ * ``uint32_t manual_bind:1;``
+
+
Known Issues
------------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 57cf4a7..bcbee30 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -2003,13 +2003,13 @@ struct rte_eth_dev *
}
if (conf->peer_count > cap.max_rx_2_tx) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Rx queue(=%hu), should be: <= %hu",
+ "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu",
conf->peer_count, cap.max_rx_2_tx);
return -EINVAL;
}
if (conf->peer_count == 0) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Rx queue(=%hu), should be: > 0",
+ "Invalid value for number of peers for Rx queue(=%u), should be: > 0",
conf->peer_count);
return -EINVAL;
}
@@ -2174,13 +2174,13 @@ struct rte_eth_dev *
}
if (conf->peer_count > cap.max_tx_2_rx) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Tx queue(=%hu), should be: <= %hu",
+ "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu",
conf->peer_count, cap.max_tx_2_rx);
return -EINVAL;
}
if (conf->peer_count == 0) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Tx queue(=%hu), should be: > 0",
+ "Invalid value for number of peers for Tx queue(=%u), should be: > 0",
conf->peer_count);
return -EINVAL;
}
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 10eb626..a8e5cdc 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1045,7 +1045,32 @@ struct rte_eth_hairpin_peer {
* A structure used to configure hairpin binding.
*/
struct rte_eth_hairpin_conf {
- uint16_t peer_count; /**< The number of peers. */
+ uint32_t peer_count:16; /**< The number of peers. */
+
+ /**
+ * Explicit Tx flow rule mode.
+ * One hairpin pair of queues should have the same attribute.
+ *
+ * - When set, the user should be responsible for inserting the hairpin
+ * Tx part flows and removing them.
+ * - When clear, the PMD will try to handle the Tx part of the flows,
+ * e.g., by splitting one flow into two parts.
+ */
+ uint32_t tx_explicit:1;
+
+ /**
+ * Manually bind hairpin queues.
+ * One hairpin pair of queues should have the same attribute.
+ *
+ * - When set, to enable hairpin, the user should call the hairpin bind
+ * function after all the queues are set up properly and the ports are
+ * started. Also, the hairpin unbind function should be called
+ * accordingly before stopping a port that with hairpin configured.
+ * - When clear, the PMD will try to enable the hairpin with the queues
+ * configured automatically during port start.
+ */
+ uint32_t manual_bind:1;
+ uint32_t reserved:14; /**< Reserved bits. */
struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS];
};
--
1.8.3.1
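A hedged sketch of the two-port usage these attributes enable; the peer
port/queue values and the descriptor count are illustrative:

#include <rte_ethdev.h>

static int
setup_hairpin_rxq(uint16_t port_id, uint16_t rxq_id,
		  uint16_t peer_port, uint16_t peer_txq)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.tx_explicit = 1, /* app inserts the Tx part flows itself */
		.manual_bind = 1, /* app calls the bind API after start */
	};

	conf.peers[0].port = peer_port;
	conf.peers[0].queue = peer_txq;

	return rte_eth_rx_hairpin_queue_setup(port_id, rxq_id, 512, &conf);
}

With manual_bind set, the application is expected to call the hairpin bind
function introduced earlier in this series once both ports are started, and
to unbind accordingly before stopping them.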
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 12:49 0% ` Thomas Monjalon
@ 2020-10-15 13:07 0% ` Andrew Rybchenko
2020-10-15 13:57 0% ` Slava Ovsiienko
2020-10-15 20:22 0% ` Slava Ovsiienko
0 siblings, 2 replies; 200+ results
From: Andrew Rybchenko @ 2020-10-15 13:07 UTC (permalink / raw)
To: Thomas Monjalon, Ferruh Yigit, Jerin Jacob, Slava Ovsiienko,
Andrew Rybchenko
Cc: dpdk-dev, Stephen Hemminger, Olivier Matz, Maxime Coquelin,
David Marchand
On 10/15/20 3:49 PM, Thomas Monjalon wrote:
> 15/10/2020 13:49, Slava Ovsiienko:
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>> On 10/15/2020 12:26 PM, Jerin Jacob wrote:
>>>
>>> <...>
>>>
>>>>>>>> If we see some of the features of such kind or other PMDs adopt
>>>>>>>> the split feature - we'll try to find the common root and consider
>>>>>>>> the way how
>>>>>> to report it.
>>>>>>>
>>>>>>> My only concern with that approach will be ABI break again if
>>>>>>> something needs to be exposed over rte_eth_dev_info().
>>>>>
>>>>> Let's reserve the pointer to struct rte_eth_rxseg_limitations in the
>>>>> rte_eth_dev_info to avoid ABI break?
>>>>
>>>> Works for me. If we add an additional reserved field.
>>>>
>>>> Due to RC1 time constraints, I am OK to leave it as a reserved field
>>>> and fill in the meat when it is required, if other ethdev maintainers are OK.
>>>> It will be required for feature completeness.
>>>>
>>>
>>> Sounds good to me.
>
> OK for me.
OK as well, but I dislike the idea of a pointer in dev_info.
It sounds like it breaks existing practice.
We should either reserve enough space or simply add
a dedicated API call to report Rx seg capabilities.
>
>> OK, let's introduce the pointer in the rte_eth_dev_info and
>> define struct rte_eth_rxseg_limitations as experimental.
>> Will it be allowed to update this one later (after 20.11)?
>> Is an ABI break allowed in this case?
>
> If it is experimental, you can change it at any time.
>
> Ideally, we could try to have a first version of the limitations
> during 20.11-rc2.
Yes, please.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines
@ 2020-10-15 13:05 3% ` Luca Boccassi
2020-10-15 14:03 3% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Luca Boccassi @ 2020-10-15 13:05 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, david.marchand, arybchenko, ferruh.yigit, thomas
On Thu, 2020-10-15 at 12:18 +0100, Bruce Richardson wrote:
> On Thu, Oct 15, 2020 at 11:30:29AM +0100, Luca Boccassi wrote:
> > On Wed, 2020-10-14 at 15:13 +0100, Bruce Richardson wrote:
> > > As discussed on the dpdk-dev mailing list[1], we can make some easy
> > > improvements in standardizing the naming of the various components in DPDK,
> > > and their associated feature-enabled macros.
> > >
> > > Following this patch, each library will have the name in format,
> > > 'librte_<name>.so', and the macro indicating that library is enabled in the
> > > build will have the form 'RTE_LIB_<NAME>'.
> > >
> > > Similarly, for drivers, the equivalent name formats and macros are:
> > > 'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
> > > device type taken from the relevant driver subdirectory name, i.e. 'net',
> > > 'crypto' etc.
> > >
> > > To avoid too many changes at once for end applications, the old macro names
> > > will still be provided in the build in this release, but will be removed
> > > subsequently.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > >
> > > [1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u
> > > ---
> > > app/test-bbdev/meson.build | 4 ++--
> > > app/test-crypto-perf/meson.build | 2 +-
> > > app/test-pmd/meson.build | 12 ++++++------
> > > app/test/meson.build | 8 ++++----
> > > doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> > > drivers/baseband/meson.build | 1 -
> > > drivers/bus/meson.build | 1 -
> > > drivers/common/meson.build | 1 -
> > > drivers/common/mlx5/meson.build | 1 -
> > > drivers/common/qat/meson.build | 1 -
> > > drivers/compress/meson.build | 1 -
> > > drivers/compress/octeontx/meson.build | 2 +-
> > > drivers/crypto/meson.build | 1 -
> > > drivers/crypto/null/meson.build | 2 +-
> > > drivers/crypto/octeontx/meson.build | 2 +-
> > > drivers/crypto/octeontx2/meson.build | 2 +-
> > > drivers/crypto/scheduler/meson.build | 2 +-
> > > drivers/crypto/virtio/meson.build | 2 +-
> > > drivers/event/dpaa/meson.build | 2 +-
> > > drivers/event/dpaa2/meson.build | 2 +-
> > > drivers/event/meson.build | 1 -
> > > drivers/event/octeontx/meson.build | 2 +-
> > > drivers/event/octeontx2/meson.build | 2 +-
> > > drivers/mempool/meson.build | 1 -
> > > drivers/meson.build | 9 ++++-----
> > > drivers/net/meson.build | 1 -
> > > drivers/net/mlx4/meson.build | 2 +-
> > > drivers/raw/ifpga/meson.build | 2 +-
> > > drivers/raw/meson.build | 1 -
> > > drivers/regex/meson.build | 1 -
> > > drivers/vdpa/meson.build | 1 -
> > > examples/bond/meson.build | 2 +-
> > > examples/ethtool/meson.build | 2 +-
> > > examples/ioat/meson.build | 2 +-
> > > examples/l2fwd-crypto/meson.build | 2 +-
> > > examples/ntb/meson.build | 2 +-
> > > examples/vm_power_manager/meson.build | 6 +++---
> > > lib/librte_ethdev/meson.build | 1 -
> > > lib/librte_graph/meson.build | 2 --
> > > lib/meson.build | 3 ++-
> > > 40 files changed, 47 insertions(+), 55 deletions(-)
> >
> > Does this change the share object file names too, or only the macros?
> >
>
> It does indeed change the object file names, which is a little bit
> concerning. However, the consensus based on the RFC seemed to be that the
> benefit is likely worth the change. If we want, we can look to use symlinks
> to the old names on install, but I think that just delays the pain since I
> would expect few to actually change their build to the new names until the
> old ones and the symlinks completely go away.
>
> /Bruce
It is a backward incompatible change, so we need to provide symlinks,
right? On upgrade, programs linked to librte_old.so will fail to start.
Or was this targeted at 20.11 thus piggy-backing on the ABI change
which forces a re-link?
--
Kind regards,
Luca Boccassi
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 11:49 3% ` Slava Ovsiienko
@ 2020-10-15 12:49 0% ` Thomas Monjalon
2020-10-15 13:07 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-10-15 12:49 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Slava Ovsiienko, Andrew Rybchenko
Cc: dpdk-dev, Stephen Hemminger, Olivier Matz, Maxime Coquelin,
David Marchand
15/10/2020 13:49, Slava Ovsiienko:
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> > On 10/15/2020 12:26 PM, Jerin Jacob wrote:
> >
> > <...>
> >
> > >>>>> If we see some of the features of such kind or other PMDs adopt
> > >>>>> the split feature - we'll try to find the common root and consider
> > >>>>> the way how
> > >>> to report it.
> > >>>>
> > >>>> My only concern with that approach will be ABI break again if
> > >>>> something needs to be exposed over rte_eth_dev_info().
> > >>
> > >> Let's reserve the pointer to struct rte_eth_rxseg_limitations in the
> > >> rte_eth_dev_info to avoid ABI break?
> > >
> > > Works for me. If we add an additional reserved field.
> > >
> > > Due to RC1 time constraints, I am OK to leave it as a reserved field
> > > and fill in the meat when it is required, if other ethdev maintainers are OK.
> > > It will be required for feature completeness.
> > >
> >
> > Sounds good to me.
OK for me.
> OK, let's introduce the pointer in the rte_eth_dev_info and
> define struct rte_eth_rxseg_limitations as experimental.
> Will it be allowed to update this one later (after 20.11)?
> Is an ABI break allowed in this case?
If it is experimental, you can change it at any time.
Ideally, we could try to have a first version of the limitations
during 20.11-rc2.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 11:36 0% ` Ferruh Yigit
@ 2020-10-15 11:49 3% ` Slava Ovsiienko
2020-10-15 12:49 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Slava Ovsiienko @ 2020-10-15 11:49 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob
Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
Olivier Matz, Maxime Coquelin, David Marchand, Andrew Rybchenko
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, October 15, 2020 14:37
> To: Jerin Jacob <jerinjacobk@gmail.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>
> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Stephen Hemminger
> <stephen@networkplumber.org>; Olivier Matz <olivier.matz@6wind.com>;
> Maxime Coquelin <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
>
> On 10/15/2020 12:26 PM, Jerin Jacob wrote:
>
> <...>
>
> >>>>> If we see some of the features of such kind or other PMDs adopts
> >>>>> the split feature - we'll try to find the common root and consider
> >>>>> the way how
> >>> to report it.
> >>>>
> >>>> My only concern with that approach will be an ABI break again if
> >>>> something needs to be exposed over rte_eth_dev_info().
> >>
> >> Let's reserve the pointer to struct rte_eth_rxseg_limitations in the
> >> rte_eth_dev_info to avoid ABI break?
> >
> > Works for me. If we add an additional reserved field.
> >
> > Due to the RC1 time constraint, I am OK to leave it as a reserved field
> > and fill in the details when required, if other ethdev maintainers are OK.
> > It will be required for feature completeness.
> >
>
> Sounds good to me.
OK, let's introduce the pointer in the rte_eth_dev_info and
define struct rte_eth_rxseg_limitations as experimental.
Will it be allowed to update this one later (after 20.11)?
Is an ABI break allowed in that case?
With best regards, Slava
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 11:26 0% ` Jerin Jacob
@ 2020-10-15 11:36 0% ` Ferruh Yigit
2020-10-15 11:49 3% ` Slava Ovsiienko
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-10-15 11:36 UTC (permalink / raw)
To: Jerin Jacob, Slava Ovsiienko
Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
Olivier Matz, Maxime Coquelin, David Marchand, Andrew Rybchenko
On 10/15/2020 12:26 PM, Jerin Jacob wrote:
<...>
>>>>> If we see some of the features of such kind or other PMDs adopts the
>>>>> split feature - we'll try to find the common root and consider the way how
>>> to report it.
>>>>
>>>> My only concern with that approach will be an ABI break again if
>>>> something needs to be exposed over rte_eth_dev_info().
>>
>> Let's reserve the pointer to struct rte_eth_rxseg_limitations
>> in the rte_eth_dev_info to avoid ABI break?
>
> Works for me. If we add an additional reserved field.
>
> Due to the RC1 time constraint, I am OK to leave it as a reserved field and fill
> in the details when required, if other ethdev maintainers are OK.
> It will be required for feature completeness.
>
Sounds good to me.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 10:51 3% ` Slava Ovsiienko
@ 2020-10-15 11:26 0% ` Jerin Jacob
2020-10-15 11:36 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-15 11:26 UTC (permalink / raw)
To: Slava Ovsiienko
Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
Andrew Rybchenko
On Thu, Oct 15, 2020 at 4:21 PM Slava Ovsiienko <viacheslavo@nvidia.com> wrote:
>
> Hi, Jerin
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, October 15, 2020 13:28
> > To: Slava Ovsiienko <viacheslavo@nvidia.com>
> > Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> > <thomas@monjalon.net>; Stephen Hemminger
> > <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> > <maxime.coquelin@redhat.com>; David Marchand
> > <david.marchand@redhat.com>; Andrew Rybchenko
> > <arybchenko@solarflare.com>
> > Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> >
> [..snip..]
> >
> struct rte_eth_rxseg {
>     enum rte_eth_rxseg_mode mode;
>     union {
>         struct {
>             struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
>             uint16_t length;   /**< Segment data length, configures split point. */
>             uint16_t offset;   /**< Data offset from beginning of mbuf data buffer. */
>             uint32_t reserved; /**< Reserved field. */
>         } xxx; /* per-mode variant */
>     };
> };
There is an array of rte_eth_rxseg. It would introduce multiple "enum rte_eth_rxseg_mode mode"
fields and would cause some ambiguity. About mode selection - please, see below.
A union seems to be a good idea, let's adopt it.
Ack. Let's take only the union concept.
>
> >
> > Another mode, Marvell PMD has it(I believe Intel also) i.e When we say:
> >
> > seg0 - pool0, len0=2000B, off0=0
> > seg1 - pool1, len1=2001B, off1=0
> >
> > packet size up to, 2000B goes to pool 0 and if is >=2001 goes to pool1.
> > I think, it is better to have mode param in rte_eth_rxseg for avoiding ABI
> > changes.(Just like clean rte_flow APIs)
>
> The mode is supposed to be chosen with the RTE_ETH_RX_OFFLOAD_xxx flags.
> For packet sorting it should be something like RTE_ETH_RX_OFFLOAD_SORT.
> The PMD reports that it supports the feature, the flag is set in rx_conf->offloads,
> and the rxseg structure is interpreted according to these flags.
Works for me.
>
> Please, note, there is intentionally no check for RTE_ETH_RX_OFFLOAD_xxx
> in rte_eth_dev_rx_queue_setup() - it should be done on PMD side.
>
> >
> > > > If we see some of the features of such kind or other PMDs adopts the
> > > > split feature - we'll try to find the common root and consider the way how
> > to report it.
> > >
> > > My only concern with that approach will be an ABI break again if
> > > something needs to be exposed over rte_eth_dev_info().
>
> Let's reserve the pointer to struct rte_eth_rxseg_limitations
> in the rte_eth_dev_info to avoid ABI break?
Works for me. If we add an additional reserved field.
Due to the RC1 time constraint, I am OK to leave it as a reserved field and fill
in the details when required, if other ethdev maintainers are OK.
It will be required for feature completeness.
>
> With best regards, Slava
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 10:34 3% ` Slava Ovsiienko
@ 2020-10-15 11:09 0% ` Andrew Rybchenko
2020-10-15 14:39 0% ` Slava Ovsiienko
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-10-15 11:09 UTC (permalink / raw)
To: Slava Ovsiienko, Jerin Jacob
Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
Andrew Rybchenko
On 10/15/20 1:34 PM, Slava Ovsiienko wrote:
> Hi, Andrew
>
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Thursday, October 15, 2020 12:49
>> To: Slava Ovsiienko <viacheslavo@nvidia.com>; Jerin Jacob
>> <jerinjacobk@gmail.com>
>> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
>> <thomas@monjalon.net>; Stephen Hemminger
>> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
>> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
>> <maxime.coquelin@redhat.com>; David Marchand
>> <david.marchand@redhat.com>; Andrew Rybchenko
>> <arybchenko@solarflare.com>
>> Subject: Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
>>
>> On 10/15/20 10:43 AM, Slava Ovsiienko wrote:
>>> Hi, Jerin
>>>
>>>> -----Original Message-----
>>>> From: Jerin Jacob <jerinjacobk@gmail.com>
>>>> Sent: Wednesday, October 14, 2020 21:57
>>>> To: Slava Ovsiienko <viacheslavo@nvidia.com>
>>>> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
>>>> <thomas@monjalon.net>; Stephen Hemminger
>>>> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
>>>> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
>>>> <maxime.coquelin@redhat.com>; David Marchand
>>>> <david.marchand@redhat.com>; Andrew Rybchenko
>>>> <arybchenko@solarflare.com>
>>>> Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
>>>>
>>>> On Wed, Oct 14, 2020 at 11:42 PM Viacheslav Ovsiienko
>>>> <viacheslavo@nvidia.com> wrote:
>>>>>
>>>>> The DPDK datapath in the transmit direction is very flexible.
> >>> An application can build the multi-segment packet and manage almost
>>>>> all data aspects - the memory pools where segments are allocated
>>>>> from, the segment lengths, the memory attributes like external
>>>>> buffers, registered for DMA, etc.
>>>>>
>>>
>>> [..snip..]
>>>
>>>>> For example, let's suppose we configured the Rx queue with the
>>>>> following segments:
>>>>> seg0 - pool0, len0=14B, off0=2
>>>>> seg1 - pool1, len1=20B, off1=128B
>>>>> seg2 - pool2, len2=20B, off2=0B
>>>>> seg3 - pool3, len3=512B, off3=0B
>>>>
>>>>
> >> Sorry for chiming in late. This API layout looks good to me.
>>>> But, I am wondering how the application can know the capability or
>>>> "limits" of struct rte_eth_rxseg structure for the specific PMD. The
>>>> other descriptor limit, it's being exposed with struct
>>>> rte_eth_dev_info::rx_desc_lim; If PMD can support a specific pattern
>>>> rather than returning the blanket error, the application should know the
>> limit.
>>>> IMO, it is better to add
>>>> struct rte_eth_rxseg *rxsegs;
> >> uint16_t nb_max_rxsegs;
> >> in the rte_eth_dev_info structure to express the capability,
> >> where the len and offset can define the max offset.
>>>>
>>>> Thoughts?
>>>
> > Moreover, there might be a lot of various implied limitations -
> > offsets might not be supported at all or might have alignment
> > requirements; similar requirements might apply to segment size
> > (say, requiring some granularity). Currently it is not obvious how to
> > report all the nuances, and it is supposed that limitations of this kind must be
> documented in the PMD chapter. As for mlx5 - it has no special limitations besides
> common requirements to the regular segments.
>>>
>>> One more point - the split feature might be considered as just one of
>>> possible cases of using these segment descriptions, other features might
>> impose other (unknown for now) limitations.
>>> If we see some of the features of such kind or other PMDs adopts the
>>> split feature - we'll try to find the common root and consider the way how to
>> report it.
>>
>> At least there are few simple limitations which are easy to
>> express:
>> 1. Maximum number of segments
> We have the scatter capability and we do not report the maximal number of segments,
> it is up to the PMD. We could add the field to the rte_eth_dev_info, but I am not sure
> whether we have something special to report there even for the mlx5 case.
There is always a limitation in programming and HW. Nothing is
unlimited. Limits could be high, but still exist.
Number of descriptors? Width of field in HW interface?
Maximum length of the config message to HW?
All above could limit it directly or indirectly.
>> 2. Possibility to use the last segment many times if required
>> (I was suggesting to use scatter for it, but you rejected
>> the idea - may be time to reconsider :) )
>
> Mmm, sorry, I do not follow; I might have misunderstood or missed your idea.
> Some of the last segment attributes are used multiple times to scatter the rest
> of the data in a fashion very close to the existing scattering approach - at least,
> pool and buffer size from this pool are used. The beginning of the packet
> scattered according to the new descriptions, the rest of the packet -
> according to the existing regular scattering with pool settings from
> the last segment description.
I believe that the possibility to split into fixed segments
(BUFFER_SPLIT) and the possibility to use a mempool (just mp or
the last segment) many times if a packet does not fit (SCATTER)
are *different* features.
I can easily imagine HW which could do BUFFER_SPLIT to
fixed segments, but cannot use the last segment many times
(i.e. no classical SCATTER).
>
>> 3. Maximum offset
>> Frankly speaking I'm not sure why it cannot be handled on
>> PMD level (i.e. provide descriptors with offset taken into
>> account or guarantee that HW mempool objects initialized
>> correctly with required headroom). May be in some corner
>> cases when the same HW mempool is shared by various
>> segments with different offset requirements.
>
> HW offsets are beyond the feature scope, the offsets in the segment
> description is supposed to be added to the native pool offsets (if any).
Are you saying that offsets are not passed to HW and just
handled by PMD to provide correct IOVA addresses to put
data to? If so, it is an implementation detail which is
specific to mlx5. If so, no specific limitations
except data room, size and offset consistency.
But it could be passed to a HW and it could be, for example,
just 8 bits for the value.
>
>> 4. Offset alignment
>> 5. Maximum/minimum length of a segment
>> 6. Length alignment
> In which form? A mask of LSBs? 0 means no limitations?
log2, i.e. 0 => 1 (no limitations) 1 => 2 (even only),
6 => 64 (64-byte cache line aligned) etc.
>
>>
>> I realize that 3, 4 and 5 could be per segment number.
>> If it is really that complex, report common denominator which is guaranteed to
>> work. If we have no checks on ethdev layer, application can ignore it if it knows
>> better.
>
> Currently it is not clear at all what kind of limitations should be reported;
> we could include all of the mentioned/proposed ones, and no one will report there -
> mlx5 has no reasonable limitations to report for now.
>
> Should we reserve some pointer field in the rte_eth_dev_info to report
> the limitations? (The limitation description should contain a variable size array,
> depending on the number of segments, so a pointer seems to be appropriate.)
> It would allow us to avoid an ABI break, and present the limitation structure once it is defined.
I will let other ethdev maintainers make a decision here.
My vote would be to report the limitations mentioned above.
It looks like Jerin is also interested in limitations
reporting. Not sure if my form looks OK or not.
^ permalink raw reply [relevance 0%]
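Collecting the six candidate limitations enumerated above in one place, a
hypothetical reporting structure could look like the sketch below; the struct
name, the field names and the log2 encoding of the alignments are assumptions
drawn only from this thread:

/* Hypothetical sketch, not a committed DPDK API. */
struct rte_eth_rxseg_limitations {
	uint16_t max_nb_segs;       /* 1. maximum number of segments */
	uint8_t  multi_last_seg;    /* 2. last segment reusable (SCATTER-like) */
	uint8_t  offset_align_log2; /* 4. 0 => 1 (none), 6 => 64B aligned */
	uint16_t max_offset;        /* 3. maximum data offset */
	uint16_t min_seg_len;       /* 5. minimum segment length */
	uint16_t max_seg_len;       /* 5. maximum segment length */
	uint8_t  len_align_log2;    /* 6. same log2 encoding as offsets */
};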
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 10:27 3% ` Jerin Jacob
@ 2020-10-15 10:51 3% ` Slava Ovsiienko
2020-10-15 11:26 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Slava Ovsiienko @ 2020-10-15 10:51 UTC (permalink / raw)
To: Jerin Jacob
Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
Andrew Rybchenko
Hi, Jerin
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, October 15, 2020 13:28
> To: Slava Ovsiienko <viacheslavo@nvidia.com>
> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Stephen Hemminger
> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
>
[..snip..]
>
> struct rte_eth_rxseg {
>     enum rte_eth_rxseg_mode mode;
>     union {
>         struct {
>             struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
>             uint16_t length;   /**< Segment data length, configures split point. */
>             uint16_t offset;   /**< Data offset from beginning of mbuf data buffer. */
>             uint32_t reserved; /**< Reserved field. */
>         } xxx; /* per-mode variant */
>     };
> };
There is an array of rte_eth_rxseg. It would introduce multiple "enum rte_eth_rxseg_mode mode"
fields and would cause some ambiguity. About mode selection - please, see below.
A union seems to be a good idea, let's adopt it.
>
> Another mode, which the Marvell PMD has (I believe Intel does too), i.e. when we say:
>
> seg0 - pool0, len0=2000B, off0=0
> seg1 - pool1, len1=2001B, off1=0
>
> a packet of size up to 2000B goes to pool0, and if it is >=2001B it goes to pool1.
> I think it is better to have a mode param in rte_eth_rxseg to avoid ABI
> changes (just like the clean rte_flow APIs).
The mode is supposed to be chosen with the RTE_ETH_RX_OFFLOAD_xxx flags.
For packet sorting it should be something like RTE_ETH_RX_OFFLOAD_SORT.
The PMD reports that it supports the feature, the flag is set in rx_conf->offloads,
and the rxseg structure is interpreted according to these flags.
Please, note, there is intentionally no check for RTE_ETH_RX_OFFLOAD_xxx
in rte_eth_dev_rx_queue_setup() - it should be done on PMD side.
>
> > > If we see some of the features of such kind or other PMDs adopts the
> > > split feature - we'll try to find the common root and consider the way how
> to report it.
> >
> > My only concern with that approach will be an ABI break again if
> > something needs to be exposed over rte_eth_dev_info().
Let's reserve the pointer to struct rte_eth_rxseg_limitations
in the rte_eth_dev_info to avoid ABI break?
With best regards, Slava
^ permalink raw reply [relevance 3%]
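As a rough illustration of the flag-based mode selection described above,
using the v6 struct rte_eth_rxseg shape from this series; the
RTE_ETH_RX_OFFLOAD_SORT flag is only a name floated in this thread, so it is
defined locally here:

/* Hypothetical flag value, for illustration only. */
#define RTE_ETH_RX_OFFLOAD_SORT (1ULL << 63)

static void
request_sort_mode(struct rte_eth_rxseg seg[2], struct rte_eth_rxconf *rxconf,
		  struct rte_mempool *pool0, struct rte_mempool *pool1)
{
	/* Packets up to 2000B land in pool0, larger ones in pool1; the
	 * PMD decides from rxconf->offloads how to interpret seg[]. */
	seg[0] = (struct rte_eth_rxseg){ .mp = pool0, .length = 2000, .offset = 0 };
	seg[1] = (struct rte_eth_rxseg){ .mp = pool1, .length = 2001, .offset = 0 };
	rxconf->offloads |= RTE_ETH_RX_OFFLOAD_SORT;
}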
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
@ 2020-10-15 10:34 3% ` Slava Ovsiienko
2020-10-15 11:09 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Slava Ovsiienko @ 2020-10-15 10:34 UTC (permalink / raw)
To: Andrew Rybchenko, Jerin Jacob
Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
Andrew Rybchenko
Hi, Andrew
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, October 15, 2020 12:49
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; Jerin Jacob
> <jerinjacobk@gmail.com>
> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Stephen Hemminger
> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>; David Marchand
> <david.marchand@redhat.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> Subject: Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
>
> On 10/15/20 10:43 AM, Slava Ovsiienko wrote:
> > Hi, Jerin
> >
> >> -----Original Message-----
> >> From: Jerin Jacob <jerinjacobk@gmail.com>
> >> Sent: Wednesday, October 14, 2020 21:57
> >> To: Slava Ovsiienko <viacheslavo@nvidia.com>
> >> Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> >> <thomas@monjalon.net>; Stephen Hemminger
> >> <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> >> Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> >> <maxime.coquelin@redhat.com>; David Marchand
> >> <david.marchand@redhat.com>; Andrew Rybchenko
> >> <arybchenko@solarflare.com>
> >> Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> >>
> >> On Wed, Oct 14, 2020 at 11:42 PM Viacheslav Ovsiienko
> >> <viacheslavo@nvidia.com> wrote:
> >>>
> >>> The DPDK datapath in the transmit direction is very flexible.
> >>> An application can build the multi-segment packet and manage almost
> >>> all data aspects - the memory pools where segments are allocated
> >>> from, the segment lengths, the memory attributes like external
> >>> buffers, registered for DMA, etc.
> >>>
> >
> > [..snip..]
> >
> >>> For example, let's suppose we configured the Rx queue with the
> >>> following segments:
> >>> seg0 - pool0, len0=14B, off0=2
> >>> seg1 - pool1, len1=20B, off1=128B
> >>> seg2 - pool2, len2=20B, off2=0B
> >>> seg3 - pool3, len3=512B, off3=0B
> >>
> >>
> >> Sorry for chiming in late. This API layout looks good to me.
> >> But, I am wondering how the application can know the capability or
> >> "limits" of struct rte_eth_rxseg structure for the specific PMD. The
> >> other descriptor limit, it's being exposed with struct
> >> rte_eth_dev_info::rx_desc_lim; If PMD can support a specific pattern
> >> rather than returning the blanket error, the application should know the
> limit.
> >> IMO, it is better to add
> >> struct rte_eth_rxseg *rxsegs;
> >> uint16_t nb_max_rxsegs;
> >> in the rte_eth_dev_info structure to express the capability,
> >> where the len and offset can define the max offset.
> >>
> >> Thoughts?
> >
> > Moreover, there might be a lot of various implied limitations -
> > offsets might not be supported at all or might have alignment
> > requirements; similar requirements might apply to segment size
> > (say, requiring some granularity). Currently it is not obvious how to
> > report all the nuances, and it is supposed that limitations of this kind must be
> documented in the PMD chapter. As for mlx5 - it has no special limitations besides
> common requirements to the regular segments.
> >
> > One more point - the split feature might be considered as just one of
> > possible cases of using these segment descriptions, other features might
> impose other (unknown for now) limitations.
> > If we see some of the features of such kind or other PMDs adopts the
> > split feature - we'll try to find the common root and consider the way how to
> report it.
>
> At least there are few simple limitations which are easy to
> express:
> 1. Maximum number of segments
We have the scatter capability and we do not report the maximal number of segments,
it is up to the PMD. We could add the field to the rte_eth_dev_info, but I am not sure
whether we have something special to report there even for the mlx5 case.
> 2. Possibility to use the last segment many times if required
> (I was suggesting to use scatter for it, but you rejected
> the idea - may be time to reconsider :) )
Mmm, sorry, I do not follow; I might have misunderstood or missed your idea.
Some of the last segment attributes are used multiple times to scatter the rest
of the data in a fashion very close to the existing scattering approach - at least,
pool and buffer size from this pool are used. The beginning of the packet
scattered according to the new descriptions, the rest of the packet -
according to the existing regular scattering with pool settings from
the last segment description.
> 3. Maximum offset
> Frankly speaking I'm not sure why it cannot be handled on
> PMD level (i.e. provide descriptors with offset taken into
> account or guarantee that HW mempool objects initialized
> correctly with required headroom). May be in some corner
> cases when the same HW mempool is shared by various
> segments with different offset requirements.
HW offsets are beyond the feature scope; the offsets in the segment
description are supposed to be added to the native pool offsets (if any).
> 4. Offset alignment
> 5. Maximum/minimum length of a segment
> 6. Length alignment
In which form? A mask of LSBs? 0 means no limitations?
>
> I realize that 3, 4 and 5 could be per segment number.
> If it is really that complex, report common denominator which is guaranteed to
> work. If we have no checks on ethdev layer, application can ignore it if it knows
> better.
Currently it is not clear at all what kind of limitations should be reported;
we could include all of the mentioned/proposed ones, and no one will report there -
mlx5 has no reasonable limitations to report for now.
Should we reserve some pointer field in the rte_eth_dev_info to report
the limitations? (The limitation description should contain a variable size array,
depending on the number of segments, so a pointer seems to be appropriate.)
It would allow us to avoid an ABI break, and present the limitation structure once it is defined.
With best regards, Slava
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
2020-10-15 9:27 3% ` Jerin Jacob
@ 2020-10-15 10:27 3% ` Jerin Jacob
2020-10-15 10:51 3% ` Slava Ovsiienko
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-15 10:27 UTC (permalink / raw)
To: Slava Ovsiienko
Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
Andrew Rybchenko
On Thu, Oct 15, 2020 at 2:57 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Thu, Oct 15, 2020 at 1:13 PM Slava Ovsiienko <viacheslavo@nvidia.com> wrote:
> >
> > Hi, Jerin
>
> Hi Slava,
>
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Wednesday, October 14, 2020 21:57
> > > To: Slava Ovsiienko <viacheslavo@nvidia.com>
> > > Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> > > <thomas@monjalon.net>; Stephen Hemminger
> > > <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> > > <maxime.coquelin@redhat.com>; David Marchand
> > > <david.marchand@redhat.com>; Andrew Rybchenko
> > > <arybchenko@solarflare.com>
> > > Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> > >
> > > On Wed, Oct 14, 2020 at 11:42 PM Viacheslav Ovsiienko
> > > <viacheslavo@nvidia.com> wrote:
> > > >
> > > > The DPDK datapath in the transmit direction is very flexible.
> > > An application can build the multi-segment packet and manage almost
> > > > all data aspects - the memory pools where segments are allocated from,
> > > > the segment lengths, the memory attributes like external buffers,
> > > > registered for DMA, etc.
> > > >
> >
> > [..snip..]
> >
> > > > For example, let's suppose we configured the Rx queue with the
> > > > following segments:
> > > > seg0 - pool0, len0=14B, off0=2
> > > > seg1 - pool1, len1=20B, off1=128B
> > > > seg2 - pool2, len2=20B, off2=0B
> > > > seg3 - pool3, len3=512B, off3=0B
> > >
> > >
> > > Sorry for chiming in late. This API layout looks good to me.
> > > But, I am wondering how the application can know the capability or "limits" of
> > > struct rte_eth_rxseg structure for the specific PMD. The other descriptor limit,
> > > it's being exposed with struct rte_eth_dev_info::rx_desc_lim; If PMD can
> > > support a specific pattern rather than returning the blanket error, the
> > > application should know the limit.
> > > IMO, it is better to add
> > > struct rte_eth_rxseg *rxsegs;
> > > uint16_t nb_max_rxsegs;
> > > in the rte_eth_dev_info structure to express the capability,
> > > where the len and offset can define the max offset.
> > >
> > > Thoughts?
> >
> > Moreover, there might be a lot of various implied limitations - offsets might not be supported at all or
> > might have alignment requirements; similar requirements might apply to segment size
> > (say, requiring some granularity). Currently it is not obvious how to report all the nuances, and it is supposed
> > that limitations of this kind must be documented in the PMD chapter. As for mlx5 - it has no special
> > limitations besides common requirements to the regular segments.
>
> Reporting the limitation in the documentation will not help
> generic applications.
>
> >
> > One more point - the split feature might be considered as just one of possible cases of using
> > these segment descriptions, other features might impose other (unknown for now) limitations.
Also, I agree that we will have multiple use cases with segment descriptors.
In order to make it future proof, it is better to change the API definition
from:
struct rte_eth_rxseg {
struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
uint16_t length; /**< Segment data length, configures split point. */
uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
uint32_t reserved; /**< Reserved field. */
};
to something like below:
struct rte_eth_rxseg {
	enum rte_eth_rxseg_mode mode;
	union {
		struct {
			struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
			uint16_t length;   /**< Segment data length, configures split point. */
			uint16_t offset;   /**< Data offset from beginning of mbuf data buffer. */
			uint32_t reserved; /**< Reserved field. */
		} xxx; /* per-mode variant */
	};
};
Another mode, which the Marvell PMD has (I believe Intel does too), i.e.
when we say:
seg0 - pool0, len0=2000B, off0=0
seg1 - pool1, len1=2001B, off1=0
a packet of size up to 2000B goes to pool0, and if it is >=2001B it goes to pool1.
I think it is better to have a mode param in rte_eth_rxseg to avoid
ABI changes (just like the clean rte_flow APIs).
> > If we see some features of such kind, or other PMDs adopt the split feature - we'll try to find
> > the common root and consider the way to report it.
>
> My only concern with that approach will be an ABI break again if
> something needs to be exposed over rte_eth_dev_info().
> IMO, a feature should be considered complete only when its capabilities are
> exposed in a programmatic manner.
> As for mlx5, if there is no limitation, then report
> rte_eth_dev_info::rxsegs[x].len, offset, etc. as UINT16_MAX so
> that the application is aware of the state.
>
> >
> > With best regards, Slava
> >
^ permalink raw reply [relevance 3%]
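A minimal sketch of filling the union proposed above; the enum value names
are assumptions, and the explicit mode member was eventually dropped in
favour of the offload flags (see the follow-ups earlier in this thread):

/* Assumed enum values matching the proposal, illustration only. */
enum rte_eth_rxseg_mode {
	RTE_ETH_RXSEG_MODE_SPLIT, /* fixed-length buffer split */
	RTE_ETH_RXSEG_MODE_SORT,  /* size-based pool selection */
};

static void
fill_split_seg(struct rte_eth_rxseg *seg, struct rte_mempool *mp,
	       uint16_t length, uint16_t offset)
{
	seg->mode = RTE_ETH_RXSEG_MODE_SPLIT;
	seg->xxx.mp = mp;          /* pool for this split point */
	seg->xxx.length = length;  /* split length */
	seg->xxx.offset = offset;  /* data offset inside the mbuf buffer */
	seg->xxx.reserved = 0;
}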
* Re: [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh
2020-10-14 10:41 26% ` [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
@ 2020-10-15 10:16 4% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-15 10:16 UTC (permalink / raw)
To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev
On 14/10/2020 11:41, Conor Walsh wrote:
> The core reason for this patch is to reduce the amount of time needed to
> run abi checks. The number of abi checks being run has been reduced to
> only 2 (1 x86_64 and 1 arm). The script can now also take advantage of
> prebuilt abi references.
>
> Invoke using "./test-meson-builds.sh [-b <build directory>]
> [-a <dpdk tag or latest for abi check>] [-u <uri for abi references>]
> [-d <directory for abi references>]"
> - <build directory>: directory to store builds (relative or absolute)
> - <dpdk tag or latest for abi check>: dpdk tag e.g. "v20.11" or "latest"
> - <uri for abi references>: http location or directory to get prebuilt
> abi references from
> - <directory for abi references>: directory to store abi references
> (relative or absolute)
> e.g. "./test-meson-builds.sh -a latest"
> If no flags are specified test-meson-builds.sh will run the standard
> meson tests with default options unless environmental variables are
> specified.
>
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives
2020-10-14 10:41 21% ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
@ 2020-10-15 10:15 4% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-15 10:15 UTC (permalink / raw)
To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev
On 14/10/2020 11:41, Conor Walsh wrote:
> This patch adds a script that generates compressed archives
> containing .dump files which can be used to perform abi
> breakage checking in test-meson-builds.sh.
>
> Invoke using "./gen-abi-tarballs.sh [-v <dpdk tag>]"
> - <dpdk tag>: dpdk tag e.g. "v20.11" or "latest"
> e.g. "./gen-abi-tarballs.sh -v latest"
>
> If no tag is specified, the script will default to "latest"
> Using these parameters the script will produce several *.tar.gz
> archives containing .dump files required to do abi breakage checking
>
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node
2020-10-15 10:08 0% ` David Marchand
@ 2020-10-15 10:10 3% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-15 10:10 UTC (permalink / raw)
To: David Marchand
Cc: Declan Doherty, Neil Horman, Anoob Joseph, Fiona Trahe,
Akhil Goyal, Arek Kusztal, Thomas Monjalon, dev
It is 100% ... my mistake, I was checking for ABI snafus.
Ray K
On 15/10/2020 11:08, David Marchand wrote:
> On Thu, Oct 15, 2020 at 11:59 AM Ray Kinsella <mdr@ashroe.eu> wrote:
>>
>> Function versioning to preserve the ABI was added to cryptodev in
>> commit a0f0de06d457 ("cryptodev: fix ABI compatibility for
>> ChaCha20-Poly1305"). This is no longer required in the DPDK_21
>> version node.
>
> Is it a duplicate for [1]?
>
> 1: https://git.dpdk.org/next/dpdk-next-crypto/commit/lib/librte_cryptodev?id=e43f809f3a59a06f2bc80a2a6fe0c133f9e401fe
>
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node
2020-10-15 9:56 11% [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node Ray Kinsella
@ 2020-10-15 10:08 0% ` David Marchand
2020-10-15 10:10 3% ` Kinsella, Ray
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-10-15 10:08 UTC (permalink / raw)
To: Ray Kinsella
Cc: Declan Doherty, Neil Horman, Anoob Joseph, Fiona Trahe,
Akhil Goyal, Arek Kusztal, Thomas Monjalon, dev
On Thu, Oct 15, 2020 at 11:59 AM Ray Kinsella <mdr@ashroe.eu> wrote:
>
> Function versioning to preserve the ABI was added to crytodev in
> commit a0f0de06d457 ("cryptodev: fix ABI compatibility for
> ChaCha20-Poly1305"). This is no longer required in the DPDK_21
> version node.
Is it a duplicate for [1]?
1: https://git.dpdk.org/next/dpdk-next-crypto/commit/lib/librte_cryptodev?id=e43f809f3a59a06f2bc80a2a6fe0c133f9e401fe
--
David Marchand
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node
@ 2020-10-15 9:56 11% Ray Kinsella
2020-10-15 10:08 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2020-10-15 9:56 UTC (permalink / raw)
To: Declan Doherty, Ray Kinsella, Neil Horman, Anoob Joseph,
Fiona Trahe, Akhil Goyal, Arek Kusztal
Cc: thomas, david.marchand, dev
Function versioning to preserve the ABI was added to cryptodev in
commit a0f0de06d457 ("cryptodev: fix ABI compatibility for
ChaCha20-Poly1305"). This is no longer required in the DPDK_21
version node.
Fixes: b922dbd38ced ("cryptodev: add ChaCha20-Poly1305 AEAD algorithm")
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
lib/librte_cryptodev/rte_cryptodev.c | 139 +-----------------
lib/librte_cryptodev/rte_cryptodev.h | 33 -----
.../rte_cryptodev_version.map | 6 -
3 files changed, 4 insertions(+), 174 deletions(-)
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1dd795bcb..a74daee46 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -36,8 +36,6 @@
#include <rte_errno.h>
#include <rte_spinlock.h>
#include <rte_string_fns.h>
-#include <rte_compat.h>
-#include <rte_function_versioning.h>
#include "rte_crypto.h"
#include "rte_cryptodev.h"
@@ -59,11 +57,6 @@ static struct rte_cryptodev_global cryptodev_globals = {
/* spinlock for crypto device callbacks */
static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
-static const struct rte_cryptodev_capabilities
- cryptodev_undefined_capabilities[] = {
- RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
static struct rte_cryptodev_capabilities
*capability_copy[RTE_CRYPTO_MAX_DEVS];
static uint8_t is_capability_checked[RTE_CRYPTO_MAX_DEVS];
@@ -291,43 +284,8 @@ rte_crypto_auth_operation_strings[] = {
[RTE_CRYPTO_AUTH_OP_GENERATE] = "generate"
};
-const struct rte_cryptodev_symmetric_capability __vsym *
-rte_cryptodev_sym_capability_get_v20(uint8_t dev_id,
- const struct rte_cryptodev_sym_capability_idx *idx)
-{
- const struct rte_cryptodev_capabilities *capability;
- struct rte_cryptodev_info dev_info;
- int i = 0;
-
- rte_cryptodev_info_get_v20(dev_id, &dev_info);
-
- while ((capability = &dev_info.capabilities[i++])->op !=
- RTE_CRYPTO_OP_TYPE_UNDEFINED) {
- if (capability->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
- continue;
-
- if (capability->sym.xform_type != idx->type)
- continue;
-
- if (idx->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- capability->sym.auth.algo == idx->algo.auth)
- return &capability->sym;
-
- if (idx->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- capability->sym.cipher.algo == idx->algo.cipher)
- return &capability->sym;
-
- if (idx->type == RTE_CRYPTO_SYM_XFORM_AEAD &&
- capability->sym.aead.algo == idx->algo.aead)
- return &capability->sym;
- }
-
- return NULL;
-}
-VERSION_SYMBOL(rte_cryptodev_sym_capability_get, _v20, 20.0);
-
-const struct rte_cryptodev_symmetric_capability __vsym *
-rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
+const struct rte_cryptodev_symmetric_capability *
+rte_cryptodev_sym_capability_get(uint8_t dev_id,
const struct rte_cryptodev_sym_capability_idx *idx)
{
const struct rte_cryptodev_capabilities *capability;
@@ -359,11 +317,6 @@ rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
return NULL;
}
-MAP_STATIC_SYMBOL(const struct rte_cryptodev_symmetric_capability *
- rte_cryptodev_sym_capability_get(uint8_t dev_id,
- const struct rte_cryptodev_sym_capability_idx *idx),
- rte_cryptodev_sym_capability_get_v21);
-BIND_DEFAULT_SYMBOL(rte_cryptodev_sym_capability_get, _v21, 21);
static int
param_range_check(uint16_t size, const struct rte_crypto_param_range *range)
@@ -1233,89 +1186,8 @@ rte_cryptodev_stats_reset(uint8_t dev_id)
(*dev->dev_ops->stats_reset)(dev);
}
-static void
-get_v20_capabilities(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
-{
- const struct rte_cryptodev_capabilities *capability;
- uint8_t found_invalid_capa = 0;
- uint8_t counter = 0;
-
- for (capability = dev_info->capabilities;
- capability->op != RTE_CRYPTO_OP_TYPE_UNDEFINED;
- ++capability, ++counter) {
- if (capability->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&
- capability->sym.xform_type ==
- RTE_CRYPTO_SYM_XFORM_AEAD
- && capability->sym.aead.algo >=
- RTE_CRYPTO_AEAD_CHACHA20_POLY1305) {
- found_invalid_capa = 1;
- counter--;
- }
- }
- is_capability_checked[dev_id] = 1;
- if (!found_invalid_capa)
- return;
- capability_copy[dev_id] = malloc(counter *
- sizeof(struct rte_cryptodev_capabilities));
- if (capability_copy[dev_id] == NULL) {
- /*
- * error case - no memory to store the trimmed
- * list, so have to return an empty list
- */
- dev_info->capabilities =
- cryptodev_undefined_capabilities;
- is_capability_checked[dev_id] = 0;
- } else {
- counter = 0;
- for (capability = dev_info->capabilities;
- capability->op !=
- RTE_CRYPTO_OP_TYPE_UNDEFINED;
- capability++) {
- if (!(capability->op ==
- RTE_CRYPTO_OP_TYPE_SYMMETRIC
- && capability->sym.xform_type ==
- RTE_CRYPTO_SYM_XFORM_AEAD
- && capability->sym.aead.algo >=
- RTE_CRYPTO_AEAD_CHACHA20_POLY1305)) {
- capability_copy[dev_id][counter++] =
- *capability;
- }
- }
- dev_info->capabilities =
- capability_copy[dev_id];
- }
-}
-
-void __vsym
-rte_cryptodev_info_get_v20(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
-{
- struct rte_cryptodev *dev;
-
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
- return;
- }
-
- dev = &rte_crypto_devices[dev_id];
-
- memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
-
- RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
- (*dev->dev_ops->dev_infos_get)(dev, dev_info);
-
- if (capability_copy[dev_id] == NULL) {
- if (!is_capability_checked[dev_id])
- get_v20_capabilities(dev_id, dev_info);
- } else
- dev_info->capabilities = capability_copy[dev_id];
-
- dev_info->driver_name = dev->device->driver->name;
- dev_info->device = dev->device;
-}
-VERSION_SYMBOL(rte_cryptodev_info_get, _v20, 20.0);
-
-void __vsym
-rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
{
struct rte_cryptodev *dev;
@@ -1334,9 +1206,6 @@ rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
dev_info->driver_name = dev->device->driver->name;
dev_info->device = dev->device;
}
-MAP_STATIC_SYMBOL(void rte_cryptodev_info_get(uint8_t dev_id,
- struct rte_cryptodev_info *dev_info), rte_cryptodev_info_get_v21);
-BIND_DEFAULT_SYMBOL(rte_cryptodev_info_get, _v21, 21);
int
rte_cryptodev_callback_register(uint8_t dev_id,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7b3ebc20f..f4767b52c 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -219,14 +219,6 @@ struct rte_cryptodev_asym_capability_idx {
* - Return NULL if the capability not exist.
*/
const struct rte_cryptodev_symmetric_capability *
-rte_cryptodev_sym_capability_get_v20(uint8_t dev_id,
- const struct rte_cryptodev_sym_capability_idx *idx);
-
-const struct rte_cryptodev_symmetric_capability *
-rte_cryptodev_sym_capability_get_v21(uint8_t dev_id,
- const struct rte_cryptodev_sym_capability_idx *idx);
-
-const struct rte_cryptodev_symmetric_capability *
rte_cryptodev_sym_capability_get(uint8_t dev_id,
const struct rte_cryptodev_sym_capability_idx *idx);
@@ -789,34 +781,9 @@ rte_cryptodev_stats_reset(uint8_t dev_id);
* the last valid element has it's op field set to
* RTE_CRYPTO_OP_TYPE_UNDEFINED.
*/
-
void
rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
-/* An extra element RTE_CRYPTO_AEAD_CHACHA20_POLY1305 is added
- * to enum rte_crypto_aead_algorithm, also changing the value of
- * RTE_CRYPTO_AEAD_LIST_END. To maintain ABI compatibility with applications
- * which linked against earlier versions, preventing them, for example, from
- * picking up the new value and using it to index into an array sized too small
- * for it, it is necessary to have two versions of rte_cryptodev_info_get()
- * The latest version just returns directly the capabilities retrieved from
- * the device. The compatible version inspects the capabilities retrieved
- * from the device, but only returns them directly if the new value
- * is not included. If the new value is included, it allocates space
- * for a copy of the device capabilities, trims the new value from this
- * and returns this copy. It only needs to do this once per device.
- * For the corner case of a corner case when the alloc may fail,
- * an empty capability list is returned, as there is no mechanism to return
- * an error and adding such a mechanism would itself be an ABI breakage.
- * The compatible version can be removed after the next major ABI release.
- */
-
-void
-rte_cryptodev_info_get_v20(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
-
-void
-rte_cryptodev_info_get_v21(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
-
/**
* Register a callback function for specific device id.
*
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 02f6dcf72..7727286ac 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -58,12 +58,6 @@ DPDK_21 {
local: *;
};
-DPDK_20.0 {
- global:
- rte_cryptodev_info_get;
- rte_cryptodev_sym_capability_get;
-};
-
EXPERIMENTAL {
global:
--
2.23.0
^ permalink raw reply [relevance 11%]
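For context, the versioning machinery removed above works roughly as in the
generic sketch below; the function names are made up, the macros are the ones
from rte_function_versioning.h used in this patch, and matching DPDK_20.0 and
DPDK_21 nodes must exist in the library's version.map:

#include <rte_function_versioning.h>

int __vsym
my_func_v20(int arg) { return arg; }      /* legacy 20.0 semantics */
VERSION_SYMBOL(my_func, _v20, 20.0);

int __vsym
my_func_v21(int arg) { return arg + 1; }  /* current default semantics */
BIND_DEFAULT_SYMBOL(my_func, _v21, 21);
MAP_STATIC_SYMBOL(int my_func(int arg), my_func_v21);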
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: introduce Rx buffer split
@ 2020-10-15 9:27 3% ` Jerin Jacob
2020-10-15 10:27 3% ` Jerin Jacob
1 sibling, 1 reply; 200+ results
From: Jerin Jacob @ 2020-10-15 9:27 UTC (permalink / raw)
To: Slava Ovsiienko
Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Stephen Hemminger,
Ferruh Yigit, Olivier Matz, Maxime Coquelin, David Marchand,
Andrew Rybchenko
On Thu, Oct 15, 2020 at 1:13 PM Slava Ovsiienko <viacheslavo@nvidia.com> wrote:
>
> Hi, Jerin
Hi Slava,
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Wednesday, October 14, 2020 21:57
> > To: Slava Ovsiienko <viacheslavo@nvidia.com>
> > Cc: dpdk-dev <dev@dpdk.org>; NBU-Contact-Thomas Monjalon
> > <thomas@monjalon.net>; Stephen Hemminger
> > <stephen@networkplumber.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > Olivier Matz <olivier.matz@6wind.com>; Maxime Coquelin
> > <maxime.coquelin@redhat.com>; David Marchand
> > <david.marchand@redhat.com>; Andrew Rybchenko
> > <arybchenko@solarflare.com>
> > Subject: Re: [PATCH v6 1/6] ethdev: introduce Rx buffer split
> >
> > On Wed, Oct 14, 2020 at 11:42 PM Viacheslav Ovsiienko
> > <viacheslavo@nvidia.com> wrote:
> > >
> > > The DPDK datapath in the transmit direction is very flexible.
> > > An application can build the multi-segment packet and manage almost
> > > all data aspects - the memory pools where segments are allocated from,
> > > the segment lengths, the memory attributes like external buffers,
> > > registered for DMA, etc.
> > >
>
> [..snip..]
>
> > > For example, let's suppose we configured the Rx queue with the
> > > following segments:
> > > seg0 - pool0, len0=14B, off0=2
> > > seg1 - pool1, len1=20B, off1=128B
> > > seg2 - pool2, len2=20B, off2=0B
> > > seg3 - pool3, len3=512B, off3=0B
> >
> >
> > Sorry for chiming in late. This API layout looks good to me.
> > But, I am wondering how the application can know the capability or "limits" of
> > struct rte_eth_rxseg structure for the specific PMD. The other descriptor limit,
> > it's being exposed with struct rte_eth_dev_info::rx_desc_lim; If PMD can
> > support a specific pattern rather than returning the blanket error, the
> > application should know the limit.
> > IMO, it is better to add
> > struct rte_eth_rxseg *rxsegs;
> > uint16_t nb_max_rxsegs;
> > in the rte_eth_dev_info structure to express the capability,
> > where the len and offset can define the max offset.
> >
> > Thoughts?
>
> Moreover, there might be a lot of various implied limitations - offsets might not be supported at all or
> might have alignment requirements; similar requirements might apply to segment size
> (say, requiring some granularity). Currently it is not obvious how to report all the nuances, and it is supposed
> that limitations of this kind must be documented in the PMD chapter. As for mlx5 - it has no special
> limitations besides common requirements to the regular segments.
Reporting the limitation in the documentation will not help
generic applications.
>
> One more point - the split feature might be considered as just one of possible cases of using
> these segment descriptions, other features might impose other (unknown for now) limitations.
> If we see some features of such kind, or other PMDs adopt the split feature - we'll try to find
> the common root and consider the way to report it.
My only concern with that approach will be an ABI break again if
something needs to be exposed over rte_eth_dev_info().
IMO, a feature should be considered complete only when its capabilities are
exposed in a programmatic manner.
As for mlx5, if there is no limitation, then report
rte_eth_dev_info::rxsegs[x].len, offset, etc. as UINT16_MAX so
that the application is aware of the state.
>
> With best regards, Slava
>
^ permalink raw reply [relevance 3%]
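For reference, the four-segment configuration quoted throughout this thread
maps onto the v6 struct rte_eth_rxseg roughly as below; the pool pointers are
placeholders, and how the array is handed to queue setup was still under
discussion at this point:

struct rte_eth_rxseg segs[4] = {
	{ .mp = pool0, .length = 14,  .offset = 2   }, /* seg0 */
	{ .mp = pool1, .length = 20,  .offset = 128 }, /* seg1 */
	{ .mp = pool2, .length = 20,  .offset = 0   }, /* seg2 */
	{ .mp = pool3, .length = 512, .offset = 0   }, /* seg3 */
};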
* [dpdk-dev] [PATCH v5 2/5] ethdev: add new attributes to hairpin config
@ 2020-10-15 5:35 4% ` Bing Zhao
0 siblings, 0 replies; 200+ results
From: Bing Zhao @ 2020-10-15 5:35 UTC (permalink / raw)
To: thomas, orika, ferruh.yigit, arybchenko, mdr, nhorman,
bernard.iremonger, beilei.xing, wenzhuo.lu
Cc: dev
To support the two-port hairpin mode and keep backward compatibility
for the application, two new attribute members will be added to the
hairpin queue configuration structure.
`tx_explicit` indicates whether the application itself will insert the TX
part of the flow rules. If not set, the PMD will insert the rules implicitly.
`manual_bind` indicates whether the hairpin TX queue and its peer RX queue
will be bound manually by the application rather than automatically
during the device start stage.
Different TX and RX queue pairs could have different values, but it
is highly recommended that all paired queues between one egress and
its peer ingress ports have the same values, in order not to bring
any chaos to the system. The actual support of these attribute
parameters will be checked and decided by the PMD drivers.
In the single-port hairpin case, if both are zero without any setting, the
behavior will remain the same as before. It means that no bind API
needs to be called and no TX flow rules need to be inserted manually
by the application.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
v4: squash document update and more info for the two new attributes
v2: optimize the structure and remove unused macros
---
doc/guides/prog_guide/rte_flow.rst | 3 +++
doc/guides/rel_notes/release_20_11.rst | 6 ++++++
lib/librte_ethdev/rte_ethdev.c | 8 ++++----
lib/librte_ethdev/rte_ethdev.h | 27 ++++++++++++++++++++++++++-
4 files changed, 39 insertions(+), 5 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index f26a6c2..c6f828a 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2592,6 +2592,9 @@ set, unpredictable value will be seen depending on driver implementation. For
loopback/hairpin packet, metadata set on Rx/Tx may or may not be propagated to
the other path depending on HW capability.
+In hairpin case with TX explicit flow mode, metadata could (not mandatory) be
+used to connect the RX and TX flows if it can be propagated from RX to TX path.
+
.. _table_rte_flow_action_set_meta:
.. table:: SET_META
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0a9ae54..2e7dc2d 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -70,6 +70,7 @@ New Features
* **Updated the ethdev library to support hairpin between two ports.**
New APIs are introduced to support binding / unbinding 2 ports hairpin.
+ Hairpin TX part flow rules can be inserted explicitly.
* **Updated Broadcom bnxt driver.**
@@ -355,6 +356,11 @@ ABI Changes
* ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
+ * ``struct rte_eth_hairpin_conf`` has two new members:
+
+ * ``uint32_t tx_explicit:1;``
+ * ``uint32_t manual_bind:1;``
+
Known Issues
------------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 150c555..3cde7a7 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -2004,13 +2004,13 @@ struct rte_eth_dev *
}
if (conf->peer_count > cap.max_rx_2_tx) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Rx queue(=%hu), should be: <= %hu",
+ "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu",
conf->peer_count, cap.max_rx_2_tx);
return -EINVAL;
}
if (conf->peer_count == 0) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Rx queue(=%hu), should be: > 0",
+ "Invalid value for number of peers for Rx queue(=%u), should be: > 0",
conf->peer_count);
return -EINVAL;
}
@@ -2175,13 +2175,13 @@ struct rte_eth_dev *
}
if (conf->peer_count > cap.max_tx_2_rx) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Tx queue(=%hu), should be: <= %hu",
+ "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu",
conf->peer_count, cap.max_tx_2_rx);
return -EINVAL;
}
if (conf->peer_count == 0) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Tx queue(=%hu), should be: > 0",
+ "Invalid value for number of peers for Tx queue(=%u), should be: > 0",
conf->peer_count);
return -EINVAL;
}
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 3bdb189..dabbbd4 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1045,7 +1045,32 @@ struct rte_eth_hairpin_peer {
* A structure used to configure hairpin binding.
*/
struct rte_eth_hairpin_conf {
- uint16_t peer_count; /**< The number of peers. */
+ uint32_t peer_count:16; /**< The number of peers. */
+
+ /**
+ * Explicit TX flow rule mode. One hairpin pair of queues should have
+ * the same attribute. The actual support depends on the PMD.
+ *
+ * - When set, the user should be responsible for inserting the hairpin
+ * TX part flows and removing them.
+ * - When clear, the PMD will try to handle the TX part of the flows,
+ * e.g., by splitting one flow into two parts.
+ */
+ uint32_t tx_explicit:1;
+
+ /**
+ * Manually bind hairpin queues. One hairpin pair of queues should have
+ * the same attribute. The actual support depends on the PMD.
+ *
+ * - When set, to enable hairpin, the user should call the hairpin bind
+ * API after all the queues are set up properly and the ports are
+ * started. Also, the hairpin unbind API should be called accordingly
+ * before stopping a port that with hairpin configured.
+ * - When clear, the PMD will try to enable the hairpin with the queues
+ * configured automatically during port start.
+ */
+ uint32_t manual_bind:1;
+ uint32_t reserved:14; /**< Reserved bits. */
struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS];
};
--
1.8.3.1
^ permalink raw reply [relevance 4%]
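A minimal usage sketch for the extended structure in the two-port case,
assuming both ports are otherwise configured; error handling is trimmed, and
the manual bind itself is done afterwards with the bind API added elsewhere
in this series:

static int
setup_two_port_hairpin(uint16_t tx_port, uint16_t tx_queue,
		       uint16_t rx_port, uint16_t rx_queue, uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.tx_explicit = 1, /* application inserts the Tx flow rules */
		.manual_bind = 1, /* bind explicitly after both ports start */
	};

	conf.peers[0].port = rx_port;
	conf.peers[0].queue = rx_queue;

	return rte_eth_tx_hairpin_queue_setup(tx_port, tx_queue,
					      nb_desc, &conf);
}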
* Re: [dpdk-dev] [PATCH v3] security: update session create API
2020-10-14 18:56 2% ` [dpdk-dev] [PATCH v3] " Akhil Goyal
@ 2020-10-15 1:11 0% ` Lukasz Wojciechowski
0 siblings, 0 replies; 200+ results
From: Lukasz Wojciechowski @ 2020-10-15 1:11 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, mdr, anoobj, hemant.agrawal, konstantin.ananyev,
declan.doherty, radu.nicolau, david.coyle,
"'Lukasz Wojciechowski'",
Hi Akhil,
thank you for responding to the review and for v3.
Your patch currently does not apply:
dpdk$ git apply v3-security-update-session-create-API.patch
error: patch failed: doc/guides/rel_notes/deprecation.rst:164
error: doc/guides/rel_notes/deprecation.rst: patch does not apply
error: patch failed: doc/guides/rel_notes/release_20_11.rst:344
error: doc/guides/rel_notes/release_20_11.rst: patch does not apply
and I'm sorry, but there are still a few things - see inline comments
W dniu 14.10.2020 o 20:56, Akhil Goyal pisze:
> The API ``rte_security_session_create`` takes only a single
> mempool for the session and session private data. So the
> application needs to create a mempool for twice the number of
> sessions needed, and this also wastes memory, as the
> session private data needs more memory compared to the session.
> Hence the API is modified to take two mempool pointers
> - one for the session and one for the private data.
> This is very similar to crypto based session create APIs.
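(For illustration, the new calling convention amounts to the sketch below;
the pool names and sizes are placeholders, not taken from the patch.)

unsigned int priv_size = rte_security_session_get_size(sec_ctx);
struct rte_mempool *sess_mp = rte_mempool_create("sec_sess", nb_sess,
	sizeof(struct rte_security_session), 0, 0,
	NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
struct rte_mempool *priv_mp = rte_mempool_create("sec_sess_priv", nb_sess,
	priv_size, 0, 0, NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
struct rte_security_session *sess =
	rte_security_session_create(sec_ctx, &sess_conf, sess_mp, priv_mp);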
>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---
> Changes in v3:
> fixed checkpatch issues.
> Added new test in test_security.c for priv_mempool
>
> Changes in V2:
> incorporated comments from Lukasz and David.
>
> app/test-crypto-perf/cperf_ops.c | 4 +-
> app/test-crypto-perf/main.c | 12 +-
> app/test/test_cryptodev.c | 18 ++-
> app/test/test_ipsec.c | 3 +-
> app/test/test_security.c | 160 ++++++++++++++++++++++---
> doc/guides/prog_guide/rte_security.rst | 8 +-
> doc/guides/rel_notes/deprecation.rst | 7 --
> doc/guides/rel_notes/release_20_11.rst | 6 +
> examples/ipsec-secgw/ipsec-secgw.c | 12 +-
> examples/ipsec-secgw/ipsec.c | 9 +-
> lib/librte_security/rte_security.c | 7 +-
> lib/librte_security/rte_security.h | 4 +-
> 12 files changed, 196 insertions(+), 54 deletions(-)
>
> diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
> index 3da835a9c..3a64a2c34 100644
> --- a/app/test-crypto-perf/cperf_ops.c
> +++ b/app/test-crypto-perf/cperf_ops.c
> @@ -621,7 +621,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
>
> /* Create security session */
> return (void *)rte_security_session_create(ctx,
> - &sess_conf, sess_mp);
> + &sess_conf, sess_mp, priv_mp);
> }
> if (options->op_type == CPERF_DOCSIS) {
> enum rte_security_docsis_direction direction;
> @@ -664,7 +664,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
>
> /* Create security session */
> return (void *)rte_security_session_create(ctx,
> - &sess_conf, priv_mp);
> + &sess_conf, sess_mp, priv_mp);
> }
> #endif
> sess = rte_cryptodev_sym_session_create(sess_mp);
> diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
> index 62ae6048b..53864ffdd 100644
> --- a/app/test-crypto-perf/main.c
> +++ b/app/test-crypto-perf/main.c
> @@ -156,7 +156,14 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
> if (sess_size > max_sess_size)
> max_sess_size = sess_size;
> }
> -
> +#ifdef RTE_LIBRTE_SECURITY
> + for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) {
> + sess_size = rte_security_session_get_size(
> + rte_cryptodev_get_sec_ctx(cdev_id));
> + if (sess_size > max_sess_size)
> + max_sess_size = sess_size;
> + }
> +#endif
> /*
> * Calculate number of needed queue pairs, based on the amount
> * of available number of logical cores and crypto devices.
> @@ -247,8 +254,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
> opts->nb_qps * nb_slaves;
> #endif
> } else
> - sessions_needed = enabled_cdev_count *
> - opts->nb_qps * 2;
> + sessions_needed = enabled_cdev_count * opts->nb_qps;
>
> /*
> * A single session is required per queue pair
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> index c7975ed01..9f1b92c51 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -773,9 +773,15 @@ testsuite_setup(void)
> unsigned int session_size =
> rte_cryptodev_sym_get_private_session_size(dev_id);
>
> +#ifdef RTE_LIBRTE_SECURITY
> + unsigned int security_session_size = rte_security_session_get_size(
> + rte_cryptodev_get_sec_ctx(dev_id));
> +
> + if (session_size < security_session_size)
> + session_size = security_session_size;
> +#endif
> /*
> - * Create mempool with maximum number of sessions * 2,
> - * to include the session headers
> + * Create mempool with maximum number of sessions.
> */
> if (info.sym.max_nb_sessions != 0 &&
> info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
> @@ -7751,7 +7757,8 @@ test_pdcp_proto(int i, int oop,
>
> /* Create security session */
> ut_params->sec_session = rte_security_session_create(ctx,
> - &sess_conf, ts_params->session_priv_mpool);
> + &sess_conf, ts_params->session_mpool,
> + ts_params->session_priv_mpool);
>
> if (!ut_params->sec_session) {
> printf("TestCase %s()-%d line %d failed %s: ",
> @@ -8011,7 +8018,8 @@ test_pdcp_proto_SGL(int i, int oop,
>
> /* Create security session */
> ut_params->sec_session = rte_security_session_create(ctx,
> - &sess_conf, ts_params->session_priv_mpool);
> + &sess_conf, ts_params->session_mpool,
> + ts_params->session_priv_mpool);
>
> if (!ut_params->sec_session) {
> printf("TestCase %s()-%d line %d failed %s: ",
> @@ -8368,6 +8376,7 @@ test_docsis_proto_uplink(int i, struct docsis_test_data *d_td)
>
> /* Create security session */
> ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
> + ts_params->session_mpool,
> ts_params->session_priv_mpool);
>
> if (!ut_params->sec_session) {
> @@ -8543,6 +8552,7 @@ test_docsis_proto_downlink(int i, struct docsis_test_data *d_td)
>
> /* Create security session */
> ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
> + ts_params->session_mpool,
> ts_params->session_priv_mpool);
>
> if (!ut_params->sec_session) {
> diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
> index 79d00d7e0..9ad07a179 100644
> --- a/app/test/test_ipsec.c
> +++ b/app/test/test_ipsec.c
> @@ -632,7 +632,8 @@ create_dummy_sec_session(struct ipsec_unitest_params *ut,
> static struct rte_security_session_conf conf;
>
> ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
> - &conf, qp->mp_session_private);
> + &conf, qp->mp_session,
> + qp->mp_session_private);
>
> if (ut->ss[j].security.ses == NULL)
> return -ENOMEM;
> diff --git a/app/test/test_security.c b/app/test/test_security.c
> index 77fd5adc6..35ed6ff10 100644
> --- a/app/test/test_security.c
> +++ b/app/test/test_security.c
> @@ -200,6 +200,24 @@
> expected_mempool_usage, mempool_usage); \
> } while (0)
>
> +/**
> + * Verify usage of mempool by checking if number of allocated objects matches
> + * expectations. The mempool is used to manage objects for sessions priv data.
> + * A single object is acquired from mempool during session_create
> + * and put back in session_destroy.
> + *
> + * @param expected_priv_mp_usage expected number of used priv mp objects
> + */
> +#define TEST_ASSERT_PRIV_MP_USAGE(expected_priv_mp_usage) do { \
> + struct security_testsuite_params *ts_params = &testsuite_params;\
> + unsigned int priv_mp_usage; \
> + priv_mp_usage = rte_mempool_in_use_count( \
> + ts_params->session_priv_mpool); \
> + TEST_ASSERT_EQUAL(expected_priv_mp_usage, priv_mp_usage, \
> + "Expecting %u priv mempool allocations, " \
one tab less
> + "but there are %u allocated objects", \
> + expected_priv_mp_usage, priv_mp_usage); \
> +} while (0)
>
> /**
> * Mockup structures and functions for rte_security_ops;
> @@ -237,27 +255,38 @@ static struct mock_session_create_data {
> struct rte_security_session_conf *conf;
> struct rte_security_session *sess;
> struct rte_mempool *mp;
> + struct rte_mempool *priv_mp;
>
> int ret;
>
> int called;
> int failed;
> -} mock_session_create_exp = {NULL, NULL, NULL, NULL, 0, 0, 0};
> +} mock_session_create_exp = {NULL, NULL, NULL, NULL, NULL, 0, 0, 0};
>
> static int
> mock_session_create(void *device,
> struct rte_security_session_conf *conf,
> struct rte_security_session *sess,
> - struct rte_mempool *mp)
> + struct rte_mempool *priv_mp)
> {
> + void *sess_priv;
> + int ret;
> +
> mock_session_create_exp.called++;
>
> MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, device);
> MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, conf);
> - MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, mp);
> + MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, priv_mp);
> + ret = rte_mempool_get(priv_mp, &sess_priv);
> + TEST_ASSERT_EQUAL(0, ret,
> + "priv mempool does not have enough objects");
>
> + set_sec_session_private_data(sess, sess_priv);
If the op function doesn't return 0, it also shouldn't leave sess_priv
set in sess.
Maybe put the code for getting sess_priv from the mempool and setting it
in the session inside:
if (mock_session_create_exp.ret == 0) {
...
}
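Something like this, reusing the patch's own lines (a sketch only; the
rte_mempool_put() on the failure path below then becomes unnecessary):

if (mock_session_create_exp.ret == 0) {
	ret = rte_mempool_get(priv_mp, &sess_priv);
	TEST_ASSERT_EQUAL(0, ret,
			"priv mempool does not have enough objects");
	set_sec_session_private_data(sess, sess_priv);
	mock_session_create_exp.sess = sess;
}

return mock_session_create_exp.ret;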
> mock_session_create_exp.sess = sess;
>
> + if (mock_session_create_exp.ret != 0)
> + rte_mempool_put(priv_mp, sess_priv);
> +
> return mock_session_create_exp.ret;
> }
>
> @@ -363,8 +392,10 @@ static struct mock_session_destroy_data {
> static int
> mock_session_destroy(void *device, struct rte_security_session *sess)
> {
> - mock_session_destroy_exp.called++;
> + void *sess_priv = get_sec_session_private_data(sess);
>
> + mock_session_destroy_exp.called++;
> + rte_mempool_put(rte_mempool_from_obj(sess_priv), sess_priv);
sess_priv should be released only if the op function is going to succeed.
You can check that in a similar way as you did in the create op, by
checking mock_session_destroy_exp.ret.
Otherwise the testcase test_session_destroy_ops_failure might cause a
problem, because you are putting the same object twice into the mempool
(once in mock_session_destroy and a 2nd time in ut_teardown when the
session is destroyed).
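A sketch of the destroy-side fix, mirroring the create op (names taken
from the patch; only the success path touches the mempool):

static int
mock_session_destroy(void *device, struct rte_security_session *sess)
{
	mock_session_destroy_exp.called++;

	if (mock_session_destroy_exp.ret == 0) {
		/* Release priv data only when the op will succeed, so a
		 * failed destroy cannot double-put the same object.
		 */
		void *sess_priv = get_sec_session_private_data(sess);

		rte_mempool_put(rte_mempool_from_obj(sess_priv), sess_priv);
	}

	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, device);
	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, sess);

	return mock_session_destroy_exp.ret;
}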
> MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, device);
> MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, sess);
>
> @@ -502,6 +533,7 @@ struct rte_security_ops mock_ops = {
> */
> static struct security_testsuite_params {
> struct rte_mempool *session_mpool;
> + struct rte_mempool *session_priv_mpool;
> } testsuite_params = { NULL };
>
> /**
> @@ -524,9 +556,11 @@ static struct security_unittest_params {
> .sess = NULL,
> };
>
> -#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestsMempoolName"
> +#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestMp"
> +#define SECURITY_TEST_PRIV_MEMPOOL_NAME "SecurityTestPrivMp"
> #define SECURITY_TEST_MEMPOOL_SIZE 15
> -#define SECURITY_TEST_SESSION_OBJECT_SIZE sizeof(struct rte_security_session)
> +#define SECURITY_TEST_SESSION_OBJ_SZ sizeof(struct rte_security_session)
> +#define SECURITY_TEST_SESSION_PRIV_OBJ_SZ 64
>
> /**
> * testsuite_setup initializes whole test suite parameters.
> @@ -540,11 +574,27 @@ testsuite_setup(void)
> ts_params->session_mpool = rte_mempool_create(
> SECURITY_TEST_MEMPOOL_NAME,
> SECURITY_TEST_MEMPOOL_SIZE,
> - SECURITY_TEST_SESSION_OBJECT_SIZE,
> + SECURITY_TEST_SESSION_OBJ_SZ,
> 0, 0, NULL, NULL, NULL, NULL,
> SOCKET_ID_ANY, 0);
> TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
> "Cannot create mempool %s\n", rte_strerror(rte_errno));
> +
> + ts_params->session_priv_mpool = rte_mempool_create(
> + SECURITY_TEST_PRIV_MEMPOOL_NAME,
> + SECURITY_TEST_MEMPOOL_SIZE,
> + SECURITY_TEST_SESSION_PRIV_OBJ_SZ,
> + 0, 0, NULL, NULL, NULL, NULL,
> + SOCKET_ID_ANY, 0);
> + if (ts_params->session_priv_mpool == NULL) {
> + RTE_LOG(ERR, USER1, "TestCase %s() line %d failed (null): "
> + "Cannot create priv mempool %s\n",
> + __func__, __LINE__, rte_strerror(rte_errno));
> + rte_mempool_free(ts_params->session_mpool);
> + ts_params->session_mpool = NULL;
> + return TEST_FAILED;
> + }
> +
> return TEST_SUCCESS;
> }
>
> @@ -559,6 +609,10 @@ testsuite_teardown(void)
> rte_mempool_free(ts_params->session_mpool);
> ts_params->session_mpool = NULL;
> }
> + if (ts_params->session_priv_mpool) {
> + rte_mempool_free(ts_params->session_priv_mpool);
> + ts_params->session_priv_mpool = NULL;
> + }
> }
>
> /**
> @@ -656,10 +710,12 @@ ut_setup_with_session(void)
> mock_session_create_exp.device = NULL;
> mock_session_create_exp.conf = &ut_params->conf;
> mock_session_create_exp.mp = ts_params->session_mpool;
> + mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
> mock_session_create_exp.ret = 0;
>
> sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> - ts_params->session_mpool);
> + ts_params->session_mpool,
> + ts_params->session_priv_mpool);
> TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
> sess);
> TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
> @@ -701,11 +757,13 @@ test_session_create_inv_context(void)
> struct rte_security_session *sess;
>
> sess = rte_security_session_create(NULL, &ut_params->conf,
> - ts_params->session_mpool);
> + ts_params->session_mpool,
> + ts_params->session_priv_mpool);
> TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
> sess, NULL, "%p");
> TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(0);
> + TEST_ASSERT_PRIV_MP_USAGE(0);
> TEST_ASSERT_SESSION_COUNT(0);
>
> return TEST_SUCCESS;
> @@ -725,11 +783,13 @@ test_session_create_inv_context_ops(void)
> ut_params->ctx.ops = NULL;
>
> sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> - ts_params->session_mpool);
> + ts_params->session_mpool,
> + ts_params->session_priv_mpool);
> TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
> sess, NULL, "%p");
> TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(0);
> + TEST_ASSERT_PRIV_MP_USAGE(0);
> TEST_ASSERT_SESSION_COUNT(0);
>
> return TEST_SUCCESS;
> @@ -749,11 +809,13 @@ test_session_create_inv_context_ops_fun(void)
> ut_params->ctx.ops = &empty_ops;
>
> sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> - ts_params->session_mpool);
> + ts_params->session_mpool,
> + ts_params->session_priv_mpool);
> TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
> sess, NULL, "%p");
> TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(0);
> + TEST_ASSERT_PRIV_MP_USAGE(0);
> TEST_ASSERT_SESSION_COUNT(0);
>
> return TEST_SUCCESS;
> @@ -770,18 +832,21 @@ test_session_create_inv_configuration(void)
> struct rte_security_session *sess;
>
> sess = rte_security_session_create(&ut_params->ctx, NULL,
> - ts_params->session_mpool);
> + ts_params->session_mpool,
> + ts_params->session_priv_mpool);
> TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
> sess, NULL, "%p");
> TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(0);
> + TEST_ASSERT_PRIV_MP_USAGE(0);
> TEST_ASSERT_SESSION_COUNT(0);
>
> return TEST_SUCCESS;
> }
>
> /**
> - * Test execution of rte_security_session_create with NULL mp parameter
> + * Test execution of rte_security_session_create with NULL session
> + * mempool
> */
> static int
> test_session_create_inv_mempool(void)
> @@ -790,11 +855,35 @@ test_session_create_inv_mempool(void)
> struct rte_security_session *sess;
>
> sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> - NULL);
> + NULL, NULL);
...NULL, ts_params->session_priv_mpool); would be better, as it would
verify that the NULL primary mempool alone causes the session_create failure.
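i.e. a sketch of the suggested call:

sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
		NULL, ts_params->session_priv_mpool);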
> TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
> sess, NULL, "%p");
> TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(0);
> + TEST_ASSERT_PRIV_MP_USAGE(0);
> + TEST_ASSERT_SESSION_COUNT(0);
> +
> + return TEST_SUCCESS;
> +}
> +
> +/**
> + * Test execution of rte_security_session_create with NULL session
> + * priv mempool
> + */
> +static int
> +test_session_create_inv_sess_priv_mempool(void)
> +{
> + struct security_unittest_params *ut_params = &unittest_params;
> + struct security_testsuite_params *ts_params = &testsuite_params;
> + struct rte_security_session *sess;
> +
> + sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> + ts_params->session_mpool, NULL);
> + TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
> + sess, NULL, "%p");
> + TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> + TEST_ASSERT_MEMPOOL_USAGE(0);
> + TEST_ASSERT_PRIV_MP_USAGE(0);
> TEST_ASSERT_SESSION_COUNT(0);
>
> return TEST_SUCCESS;
> @@ -810,6 +899,7 @@ test_session_create_mempool_empty(void)
> struct security_testsuite_params *ts_params = &testsuite_params;
> struct security_unittest_params *ut_params = &unittest_params;
> struct rte_security_session *tmp[SECURITY_TEST_MEMPOOL_SIZE];
> + void *tmp1[SECURITY_TEST_MEMPOOL_SIZE];
> struct rte_security_session *sess;
>
> /* Get all available objects from mempool. */
> @@ -820,21 +910,34 @@ test_session_create_mempool_empty(void)
> TEST_ASSERT_EQUAL(0, ret,
> "Expect getting %d object from mempool"
> " to succeed", i);
> + ret = rte_mempool_get(ts_params->session_priv_mpool,
> + (void **)(&tmp1[i]));
> + TEST_ASSERT_EQUAL(0, ret,
> + "Expect getting %d object from priv mempool"
> + " to succeed", i);
> }
> TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
> + TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
>
> sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> - ts_params->session_mpool);
> + ts_params->session_mpool,
> + ts_params->session_priv_mpool);
> TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
> sess, NULL, "%p");
> TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
> + TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
> TEST_ASSERT_SESSION_COUNT(0);
>
> /* Put objects back to the pool. */
> - for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i)
> - rte_mempool_put(ts_params->session_mpool, (void *)(tmp[i]));
> + for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i) {
> + rte_mempool_put(ts_params->session_mpool,
> + (void *)(tmp[i]));
> + rte_mempool_put(ts_params->session_priv_mpool,
> + (tmp1[i]));
> + }
> TEST_ASSERT_MEMPOOL_USAGE(0);
> + TEST_ASSERT_PRIV_MP_USAGE(0);
>
> return TEST_SUCCESS;
> }
> @@ -853,14 +956,17 @@ test_session_create_ops_failure(void)
> mock_session_create_exp.device = NULL;
> mock_session_create_exp.conf = &ut_params->conf;
> mock_session_create_exp.mp = ts_params->session_mpool;
> + mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
> mock_session_create_exp.ret = -1; /* Return failure status. */
>
> sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> - ts_params->session_mpool);
> + ts_params->session_mpool,
> + ts_params->session_priv_mpool);
> TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
> sess, NULL, "%p");
> TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
> TEST_ASSERT_MEMPOOL_USAGE(0);
> + TEST_ASSERT_PRIV_MP_USAGE(0);
> TEST_ASSERT_SESSION_COUNT(0);
>
> return TEST_SUCCESS;
> @@ -879,10 +985,12 @@ test_session_create_success(void)
> mock_session_create_exp.device = NULL;
> mock_session_create_exp.conf = &ut_params->conf;
> mock_session_create_exp.mp = ts_params->session_mpool;
> + mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
> mock_session_create_exp.ret = 0; /* Return success status. */
>
> sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
> - ts_params->session_mpool);
> + ts_params->session_mpool,
> + ts_params->session_priv_mpool);
> TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
> sess);
> TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
> @@ -891,6 +999,7 @@ test_session_create_success(void)
> sess, mock_session_create_exp.sess);
> TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> /*
> @@ -1276,6 +1385,7 @@ test_session_destroy_inv_context(void)
> struct security_unittest_params *ut_params = &unittest_params;
>
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> int ret = rte_security_session_destroy(NULL, ut_params->sess);
> @@ -1283,6 +1393,7 @@ test_session_destroy_inv_context(void)
> ret, -EINVAL, "%d");
> TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> return TEST_SUCCESS;
> @@ -1299,6 +1410,7 @@ test_session_destroy_inv_context_ops(void)
> ut_params->ctx.ops = NULL;
>
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> int ret = rte_security_session_destroy(&ut_params->ctx,
> @@ -1307,6 +1419,7 @@ test_session_destroy_inv_context_ops(void)
> ret, -EINVAL, "%d");
> TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> return TEST_SUCCESS;
> @@ -1323,6 +1436,7 @@ test_session_destroy_inv_context_ops_fun(void)
> ut_params->ctx.ops = &empty_ops;
>
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> int ret = rte_security_session_destroy(&ut_params->ctx,
> @@ -1331,6 +1445,7 @@ test_session_destroy_inv_context_ops_fun(void)
> ret, -ENOTSUP, "%d");
> TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> return TEST_SUCCESS;
> @@ -1345,6 +1460,7 @@ test_session_destroy_inv_session(void)
> struct security_unittest_params *ut_params = &unittest_params;
>
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> int ret = rte_security_session_destroy(&ut_params->ctx, NULL);
> @@ -1352,6 +1468,7 @@ test_session_destroy_inv_session(void)
> ret, -EINVAL, "%d");
> TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> return TEST_SUCCESS;
> @@ -1371,6 +1488,7 @@ test_session_destroy_ops_failure(void)
> mock_session_destroy_exp.ret = -1;
>
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> int ret = rte_security_session_destroy(&ut_params->ctx,
You can also add:
TEST_ASSERT_PRIV_MP_USAGE(1);
at line 1500, after rte_security_session_destroy() returns, to verify
that private mempool usage stays at the same level after a failure of the
destroy op.
Currently, adding it without fixing the mock of the session_destroy op
will cause a test failure:
EAL: Test assert test_session_destroy_ops_failure line 1500 failed:
Expecting 1 priv mempool allocations, but there are 0 allocated objects
EAL: in ../app/test/test_security.c:1500 test_session_destroy_ops_failure
> @@ -1396,6 +1514,7 @@ test_session_destroy_success(void)
> mock_session_destroy_exp.sess = ut_params->sess;
> mock_session_destroy_exp.ret = 0;
> TEST_ASSERT_MEMPOOL_USAGE(1);
> + TEST_ASSERT_PRIV_MP_USAGE(1);
> TEST_ASSERT_SESSION_COUNT(1);
>
> int ret = rte_security_session_destroy(&ut_params->ctx,
> @@ -1404,6 +1523,7 @@ test_session_destroy_success(void)
> ret, 0, "%d");
> TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 1);
> TEST_ASSERT_MEMPOOL_USAGE(0);
> + TEST_ASSERT_PRIV_MP_USAGE(0);
> TEST_ASSERT_SESSION_COUNT(0);
>
> /*
> @@ -2370,6 +2490,8 @@ static struct unit_test_suite security_testsuite = {
> test_session_create_inv_configuration),
> TEST_CASE_ST(ut_setup, ut_teardown,
> test_session_create_inv_mempool),
> + TEST_CASE_ST(ut_setup, ut_teardown,
> + test_session_create_inv_sess_priv_mempool),
> TEST_CASE_ST(ut_setup, ut_teardown,
> test_session_create_mempool_empty),
> TEST_CASE_ST(ut_setup, ut_teardown,
> diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
> index 127da2e4f..d30a79576 100644
> --- a/doc/guides/prog_guide/rte_security.rst
> +++ b/doc/guides/prog_guide/rte_security.rst
> @@ -533,8 +533,12 @@ and this allows further acceleration of the offload of Crypto workloads.
>
> The Security framework provides APIs to create and free sessions for crypto/ethernet
> devices, where sessions are mempool objects. It is the application's responsibility
> -to create and manage the session mempools. The mempool object size should be able to
> -accommodate the driver's private data of security session.
> +to create and manage two session mempools - one for session and other for session
> +private data. The private session data mempool object size should be able to
> +accommodate the driver's private data of security session. The application can get
> +the size of session private data using API ``rte_security_session_get_size``.
> +And the session mempool object size should be enough to accommodate
> +``rte_security_session``.
>
> Once the session mempools have been created, ``rte_security_session_create()``
> is used to allocate and initialize a session for the required crypto/ethernet device.
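To make the two-mempool contract concrete, a usage sketch (pool size and
names are illustrative, and sec_ctx/conf are assumed to be already set up;
none of this is part of the patch):

unsigned int priv_sz = rte_security_session_get_size(sec_ctx);
struct rte_mempool *sess_mp = rte_mempool_create("sess_mp", 1024,
		sizeof(struct rte_security_session), 0, 0,
		NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
struct rte_mempool *priv_mp = rte_mempool_create("sess_priv_mp", 1024,
		priv_sz, 0, 0, NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
struct rte_security_session *sess = rte_security_session_create(sec_ctx,
		&conf, sess_mp, priv_mp);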
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 43cdd3c58..26be1b3de 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -164,13 +164,6 @@ Deprecation Notices
> following the IPv6 header, as proposed in RFC
> https://mails.dpdk.org/archives/dev/2020-August/177257.html.
>
> -* security: The API ``rte_security_session_create`` takes only single mempool
> - for session and session private data. So the application need to create
> - mempool for twice the number of sessions needed and will also lead to
> - wastage of memory as session private data need more memory compared to session.
> - Hence the API will be modified to take two mempool pointers - one for session
> - and one for private data.
> -
> * cryptodev: support for using IV with all sizes is added, J0 still can
> be used but only when IV length in following structs ``rte_crypto_auth_xform``,
> ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index f1b9b4dfe..0fb1b20cb 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -344,6 +344,12 @@ API Changes
> * The structure ``rte_crypto_sym_vec`` is updated to support both
> cpu_crypto synchrounous operation and asynchronous raw data-path APIs.
>
> +* security: The API ``rte_security_session_create`` is updated to take two
> + mempool objects one for session and other for session private data.
> + So the application need to create two mempools and get the size of session
> + private data using API ``rte_security_session_get_size`` for private session
> + mempool.
> +
>
> ABI Changes
> -----------
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> index 60132c4bd..2326089bb 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -2348,12 +2348,8 @@ session_pool_init(struct socket_ctx *ctx, int32_t socket_id, size_t sess_sz)
>
> snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> "sess_mp_%u", socket_id);
> - /*
> - * Doubled due to rte_security_session_create() uses one mempool for
> - * session and for session private data.
> - */
> nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> - rte_lcore_count()) * 2;
> + rte_lcore_count());
> sess_mp = rte_cryptodev_sym_session_pool_create(
> mp_name, nb_sess, sess_sz, CDEV_MP_CACHE_SZ, 0,
> socket_id);
> @@ -2376,12 +2372,8 @@ session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
>
> snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> "sess_mp_priv_%u", socket_id);
> - /*
> - * Doubled due to rte_security_session_create() uses one mempool for
> - * session and for session private data.
> - */
> nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> - rte_lcore_count()) * 2;
> + rte_lcore_count());
> sess_mp = rte_mempool_create(mp_name,
> nb_sess,
> sess_sz,
> diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> index 01faa7ac7..6baeeb342 100644
> --- a/examples/ipsec-secgw/ipsec.c
> +++ b/examples/ipsec-secgw/ipsec.c
> @@ -117,7 +117,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
> set_ipsec_conf(sa, &(sess_conf.ipsec));
>
> ips->security.ses = rte_security_session_create(ctx,
> - &sess_conf, ipsec_ctx->session_priv_pool);
> + &sess_conf, ipsec_ctx->session_pool,
> + ipsec_ctx->session_priv_pool);
> if (ips->security.ses == NULL) {
> RTE_LOG(ERR, IPSEC,
> "SEC Session init failed: err: %d\n", ret);
> @@ -198,7 +199,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
> }
>
> ips->security.ses = rte_security_session_create(sec_ctx,
> - &sess_conf, skt_ctx->session_pool);
> + &sess_conf, skt_ctx->session_pool,
> + skt_ctx->session_priv_pool);
> if (ips->security.ses == NULL) {
> RTE_LOG(ERR, IPSEC,
> "SEC Session init failed: err: %d\n", ret);
> @@ -378,7 +380,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
> sess_conf.userdata = (void *) sa;
>
> ips->security.ses = rte_security_session_create(sec_ctx,
> - &sess_conf, skt_ctx->session_pool);
> + &sess_conf, skt_ctx->session_pool,
> + skt_ctx->session_priv_pool);
> if (ips->security.ses == NULL) {
> RTE_LOG(ERR, IPSEC,
> "SEC Session init failed: err: %d\n", ret);
> diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
> index 515c29e04..ee4666026 100644
> --- a/lib/librte_security/rte_security.c
> +++ b/lib/librte_security/rte_security.c
> @@ -26,18 +26,21 @@
> struct rte_security_session *
> rte_security_session_create(struct rte_security_ctx *instance,
> struct rte_security_session_conf *conf,
> - struct rte_mempool *mp)
> + struct rte_mempool *mp,
> + struct rte_mempool *priv_mp)
> {
> struct rte_security_session *sess = NULL;
>
> RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_create, NULL, NULL);
> RTE_PTR_OR_ERR_RET(conf, NULL);
> RTE_PTR_OR_ERR_RET(mp, NULL);
> + RTE_PTR_OR_ERR_RET(priv_mp, NULL);
>
> if (rte_mempool_get(mp, (void **)&sess))
> return NULL;
>
> - if (instance->ops->session_create(instance->device, conf, sess, mp)) {
> + if (instance->ops->session_create(instance->device, conf,
> + sess, priv_mp)) {
> rte_mempool_put(mp, (void *)sess);
> return NULL;
> }
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index 16839e539..1710cdd6a 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -386,6 +386,7 @@ struct rte_security_session {
> * @param instance security instance
> * @param conf session configuration parameters
> * @param mp mempool to allocate session objects from
> + * @param priv_mp mempool to allocate session private data objects from
> * @return
> * - On success, pointer to session
> * - On failure, NULL
> @@ -393,7 +394,8 @@ struct rte_security_session {
> struct rte_security_session *
> rte_security_session_create(struct rte_security_ctx *instance,
> struct rte_security_session_conf *conf,
> - struct rte_mempool *mp);
> + struct rte_mempool *mp,
> + struct rte_mempool *priv_mp);
>
> /**
> * Update security session as specified by the session configuration
--
Lukasz Wojciechowski
Principal Software Engineer
Samsung R&D Institute Poland
Samsung Electronics
Office +48 22 377 88 25
l.wojciechow@partner.samsung.com
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver
@ 2020-10-15 0:55 3% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-10-15 0:55 UTC (permalink / raw)
To: akhil.goyal, Raveendra Padasalagi, Vikas Gupta, ajit.khaparde
Cc: dev, vikram.prakash, mdr
07/10/2020 19:18, Vikas Gupta:
> --- /dev/null
> +++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
> @@ -0,0 +1,3 @@
> +DPDK_21.0 {
> + local: *;
> +};
No!
Please be careful, all other libs use ABI DPDK_21.
Will fix
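For clarity, the corrected version map block would then read (sketch):

DPDK_21 {
	local: *;
};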
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
2020-10-14 13:10 0% ` Medvedkin, Vladimir
@ 2020-10-14 23:57 0% ` Honnappa Nagarahalli
0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2020-10-14 23:57 UTC (permalink / raw)
To: Medvedkin, Vladimir, Michel Machado, Kevin Traynor, Ruifeng Wang,
Bruce Richardson, Cody Doucette, Andre Nathan, Qiaobin Fu
Cc: dev, nd, Honnappa Nagarahalli, nd
<snip>
> >>
> >>
> >> On 13/10/2020 18:46, Michel Machado wrote:
> >>> On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
> >>>> Hi Michel,
> >>>>
> >>>> Could you please describe a condition when LPM gets inconsistent?
> >>>> As I can see if there is no free tbl8 it will return -ENOSPC.
> >>>
> >>> Consider this simple example, we need to add the following two
> >>> prefixes with different next hops: 10.99.0.0/16, 18.99.99.128/25. If
> >>> the LPM table is out of tbl8s, the second prefix is not added and
> >>> Gatekeeper will make decisions in violation of the policy. The data
> >>> structure of the LPM table is consistent, but its content
> >>> is inconsistent with the policy.
max_rules and number_tbl8s in 'struct rte_lpm' contain the config information. These 2 fields do not change based on the routes added and do not indicate the amount of space left. So, you cannot use this information to decide if there is enough space to add more routes.
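In code terms (a sketch; the next-hop values are arbitrary and
handle_inconsistency() is a hypothetical application hook, not a DPDK API):

/* 18.99.99.128/25 needs a tbl8 group; with none left, the second add
 * returns -ENOSPC no matter what the configured number_tbl8s says.
 */
int ret = rte_lpm_add(lpm, RTE_IPV4(10, 99, 0, 0), 16, 1);

if (ret == 0)
	ret = rte_lpm_add(lpm, RTE_IPV4(18, 99, 99, 128), 25, 2);
if (ret < 0)
	handle_inconsistency(ret); /* policy only partially installed */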
> >>
> >> Aha, thanks. So do I understand correctly that you need to add a set
> >> of routes atomically (either the entire set is installed or nothing)?
> >
> > Yes.
> >
> >> If so, then I would suggest having 2 lpm and switching them
> >> atomically after a successful addition. As for now, even if you have
> >> enough tbl8's, routes are installed non atomically, i.e. there will
> >> be a time gap between adding two routes, so in this time interval the
> >> table will be inconsistent with the policy.
> >> Also, if new lpm algorithms are added to the DPDK, they won't have
> >> such a thing as tbl8.
> >
> > Our code already deals with synchronization.
If the application code already deals with synchronization, is it possible to revert (i.e. delete the routes that were added so far) when the addition of the route-set fails?
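For instance (a sketch; struct route is a hypothetical application type,
the application is assumed to track how many routes were added, and the
case of a pre-existing route with a different next hop is ignored):

static void
revert_partial_add(struct rte_lpm *lpm, const struct route *set,
		unsigned int added)
{
	while (added-- > 0)
		rte_lpm_delete(lpm, set[added].ip, set[added].depth);
}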
>
> OK, so my suggestion here would be to add new routes to the shadow copy
> of the lpm, and if it returns -ENOSPC, then create a new LPM with double
> the amount of tbl8's and add all the routes to it. Then switch the active-shadow
> LPM pointers. In this case you'll always add a bulk of routes atomically.
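A rough sketch of that shadow/active switch (reader synchronization, e.g.
an RCU grace period before freeing the old table, is omitted; struct route
is a hypothetical application type):

static int
add_route_set(struct rte_lpm **active, struct rte_lpm *shadow,
		const struct route *set, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		if (rte_lpm_add(shadow, set[i].ip, set[i].depth,
				set[i].next_hop) < 0)
			return -1; /* e.g. -ENOSPC: rebuild a bigger shadow */
	}
	/* Publish only the fully updated table to readers. */
	__atomic_store_n(active, shadow, __ATOMIC_RELEASE);
	return 0;
}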
>
> >
> >>> We minimize the need of replacing a LPM table by allocating LPM
> >>> tables with the double of what we need (see example here
> >>>
> https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698
> >>> bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183),
> >>> but the code must be ready for unexpected needs that may arise in
> >>> production.
> >>>
> >>
> >> Usually, the table is initialized with a large enough number of
> >> entries, enough to add a possible number of routes. One tbl8 group
> >> takes up 1Kb of memory which is nothing comparing to the size of
> >> tbl24 which is 64Mb.
> >
> > When the prefixes come from BGP, initializing a large enough table
> > is fine. But when prefixes come from threat intelligence, the number
> > of prefixes can vary wildly and the number of prefixes above 24 bits
> > are way more common.
> >
> >> P.S. consider using rte_fib library, it has a number of advantages
> >> over LPM. You can replace the loop in __lookup_fib_bulk() with a bulk
> >> lookup call and this will probably increase the speed.
> >
> > I'm not aware of the rte_fib library. The only documentation that
> > I found on Google was https://doc.dpdk.org/api/rte__fib_8h.html and it
> > just says "FIB (Forwarding information base) implementation for IPv4
> > Longest Prefix Match".
>
> That's true, I'm going to add programmer's guide soon.
> Although the fib API is very similar to existing LPM.
>
> >
> >>>>
> >>>> On 13/10/2020 15:58, Michel Machado wrote:
> >>>>> Hi Kevin,
> >>>>>
> >>>>> We do need fields max_rules and number_tbl8s of struct
> >>>>> rte_lpm, so the removal would force us to have another patch to
> >>>>> our local copy of DPDK. We'd rather avoid this new local patch
> >>>>> because we wish to eventually be in sync with the stock DPDK.
> >>>>>
> >>>>> Those fields are needed in Gatekeeper because we found a
> >>>>> condition in an ongoing deployment in which the entries of some
> >>>>> LPM tables may suddenly change a lot to reflect policy changes. To
> >>>>> avoid getting into a state in which the LPM table is inconsistent
> >>>>> because it cannot fit all the new entries, we compute the needed
> >>>>> parameters to support the new entries, and compare with the
> >>>>> current parameters. If the current table doesn't fit everything,
> >>>>> we have to replace it with a new LPM table.
> >>>>>
> >>>>> If there were a way to obtain the struct rte_lpm_config of a
> >>>>> given LPM table, it would cleanly address our need. We have the
> >>>>> same need in IPv6 and have a local patch to work around it (see
> >>>>>
> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db
> 26a78115cb8c8f).
I do not see why such an API is not possible; we could add an API that returns max_rules and number_tbl8s (essentially, the config that was passed to the rte_lpm_create API).
But is it possible to store that info in the application, since that data was passed to rte_lpm by the application in the first place?
> >>>>> Thus, an IPv4 and IPv6 solution would be best.
> >>>>>
> >>>>> PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to
> >>>>> this discussion.
> >>>>>
> >>>>> [ ]'s
> >>>>> Michel Machado
> >>>>>
> >>>>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
> >>>>>> Hi Gatekeeper maintainers (I think),
> >>>>>>
> >>>>>> fyi - there is a proposal to remove some members of a struct in
> >>>>>> DPDK LPM API that Gatekeeper is using [1]. It would be only from
> >>>>>> DPDK 20.11 but as it's an LTS I guess it would probably hit
> >>>>>> Debian in a few months.
> >>>>>>
> >>>>>> The full thread is here:
> >>>>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-
> ruifeng.wang@arm
> >>>>>> .com/
> >>>>>>
> >>>>>>
> >>>>>> Maybe you can take a look and tell us if they are needed in
> >>>>>> Gatekeeper or you can workaround it?
> >>>>>>
> >>>>>> thanks,
> >>>>>> Kevin.
> >>>>>>
> >>>>>> [1]
> >>>>>>
> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c
> >>>>>> #L235-L248
> >>>>>>
> >>>>>>
> >>>>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: Kevin Traynor <ktraynor@redhat.com>
> >>>>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
> >>>>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin,
> Vladimir
> >>>>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
> >>>>>>>> <bruce.richardson@intel.com>
> >>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
> >>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> >>>>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
> >>>>>>>>
> >>>>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
> >>>>>>>>>
> >>>>>>>>>> -----Original Message-----
> >>>>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> >>>>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
> >>>>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng
> >>>>>>>>>> Wang <Ruifeng.Wang@arm.com>
> >>>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
> >>>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> >>>>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
> >>>>>>>>>>
> >>>>>>>>>> Hi Ruifeng,
> >>>>>>>>>>
> >>>>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
> >>>>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang
> wrote:
> >>>>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no
> >>>>>>>>>>>> need to be exposed to the user.
> >>>>>>>>>>>> Hide the unneeded exposure of structure fields for better
> >>>>>>>>>>>> ABI maintainability.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Suggested-by: David Marchand
> <david.marchand@redhat.com>
> >>>>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> >>>>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
> >>>>>>>>>>>> ---
> >>>>>>>>>>>> lib/librte_lpm/rte_lpm.c | 152
> >>>>>>>>>>>> +++++++++++++++++++++++---------------
> >>>>>>>>>> -
> >>>>>>>>>>>> lib/librte_lpm/rte_lpm.h | 7 --
> >>>>>>>>>>>> 2 files changed, 91 insertions(+), 68 deletions(-)
> >>>>>>>>>>>>
> >>>>>>>>>>> <snip>
> >>>>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h
> >>>>>>>>>>>> b/lib/librte_lpm/rte_lpm.h index 03da2d37e..112d96f37
> >>>>>>>>>>>> 100644
> >>>>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
> >>>>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
> >>>>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
> >>>>>>>>>>>>
> >>>>>>>>>>>> /** @internal LPM structure. */
> >>>>>>>>>>>> struct rte_lpm {
> >>>>>>>>>>>> - /* LPM metadata. */
> >>>>>>>>>>>> - char name[RTE_LPM_NAMESIZE]; /**< Name of the
> >>>>>>>>>>>> lpm. */
> >>>>>>>>>>>> - uint32_t max_rules; /**< Max. balanced rules per lpm.
> >>>>>>>>>>>> */
> >>>>>>>>>>>> - uint32_t number_tbl8s; /**< Number of tbl8s. */
> >>>>>>>>>>>> - struct rte_lpm_rule_info
> rule_info[RTE_LPM_MAX_DEPTH];
> >>>>>>>>>>>> /**<
> >>>>>>>>>> Rule info table. */
> >>>>>>>>>>>> -
> >>>>>>>>>>>> /* LPM Tables. */
> >>>>>>>>>>>> struct rte_lpm_tbl_entry
> >>>>>>>>>>>> tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
> >>>>>>>>>>>> __rte_cache_aligned; /**< LPM tbl24 table.
> >>>>>>>>>>>> */
> >>>>>>>>>>>> struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table.
> >>>>>>>>>>>> */
> >>>>>>>>>>>> - struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> >>>>>>>>>>>> };
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> Since this changes the ABI, does it not need advance notice?
> >>>>>>>>>>>
> >>>>>>>>>>> [Basically the return value point from rte_lpm_create() will
> >>>>>>>>>>> be different, and that return value could be used by
> >>>>>>>>>>> rte_lpm_lookup()
> >>>>>>>>>>> which as a static inline function will be in the binary and
> >>>>>>>>>>> using the old structure offsets.]
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be
> >>>>>>>>>> accepted without prior notice.
> >>>>>>>>>>
> >>>>>>>>> So if the change wants to happen in 20.11, a deprecation
> >>>>>>>>> notice should have been added in 20.08.
> >>>>>>>>> I should have added a deprecation notice. This change will
> >>>>>>>>> have to wait for
> >>>>>>>> next ABI update window.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> Do you plan to extend? or is this just speculative?
> >>>>>>> It is speculative.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> A quick scan and there seems to be several projects using some
> >>>>>>>> of these
> >>>>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
> >>>>>>>> gatekeeper. I didn't look at the details to see if they are
> >>>>>>>> really needed.
> >>>>>>>>
> >>>>>>>> Not sure how much notice they'd need or if they update DPDK
> >>>>>>>> much, but I
> >>>>>>>> think it's worth having a closer look as to how they use lpm and
> >>>>>>>> what the
> >>>>>>>> impact to them is.
> >>>>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't
> >>>>>>> access the members to be hided.
> >>>>>>> They will not be impacted by this patch.
> >>>>>>> But Gatekeeper accesses the rte_lpm internal members that to be
> >>>>>>> hided. Its compilation will be broken with this patch.
> >>>>>>>
> >>>>>>>>
> >>>>>>>>> Thanks.
> >>>>>>>>> Ruifeng
> >>>>>>>>>>>> /** LPM RCU QSBR configuration structure. */
> >>>>>>>>>>>> --
> >>>>>>>>>>>> 2.17.1
> >>>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> --
> >>>>>>>>>> Regards,
> >>>>>>>>>> Vladimir
> >>>>>>>
> >>>>>>
> >>>>
> >>
>
> --
> Regards,
> Vladimir
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints
2020-10-14 21:36 9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
@ 2020-10-14 21:36 2% ` Timothy McDaniel
2020-10-14 21:36 6% ` [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
2020-10-15 14:26 7% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Jerin Jacob
2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-14 21:36 UTC (permalink / raw)
To: Hemant Agrawal, Nipun Gupta, Mattias Rönnblom, Jerin Jacob,
Pavan Nikhilesh, Liang Ma, Peter Mccarthy, Harry van Haaren,
Nikhil Rao, Ray Kinsella, Neil Horman
Cc: dev, erik.g.carrillo, gage.eads
This commit implements the eventdev ABI changes required by
the DLB PMD.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
drivers/event/dpaa/dpaa_eventdev.c | 3 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 5 +-
drivers/event/dsw/dsw_evdev.c | 3 +-
drivers/event/octeontx/ssovf_evdev.c | 5 +-
drivers/event/octeontx2/otx2_evdev.c | 3 +-
drivers/event/opdl/opdl_evdev.c | 3 +-
drivers/event/skeleton/skeleton_eventdev.c | 5 +-
drivers/event/sw/sw_evdev.c | 8 ++--
drivers/event/sw/sw_evdev_selftest.c | 6 +--
lib/librte_eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/librte_eventdev/rte_eventdev.c | 66 +++++++++++++++++++++++---
lib/librte_eventdev/rte_eventdev.h | 51 ++++++++++++++++----
lib/librte_eventdev/rte_eventdev_pmd_pci.h | 1 -
lib/librte_eventdev/rte_eventdev_trace.h | 7 +--
lib/librte_eventdev/rte_eventdev_version.map | 4 +-
15 files changed, 134 insertions(+), 38 deletions(-)
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index b5ae87a..07cd079 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -355,7 +355,8 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
RTE_EVENT_DEV_CAP_BURST_MODE |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 3ae4441..712db6c 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -406,7 +406,8 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
- RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES;
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
@@ -536,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
port_conf->enqueue_depth =
DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index e796975..933a5a5 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -224,7 +224,8 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
RTE_EVENT_DEV_CAP_NONSEQ_MODE|
- RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
};
}
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 4fc4e8f..1c6bcca 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -152,7 +152,8 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES|
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
@@ -218,7 +219,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
port_conf->new_event_threshold = edev->max_num_events;
port_conf->dequeue_depth = 1;
port_conf->enqueue_depth = 1;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static void
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index b8b57c3..ae35bb5 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -501,7 +501,8 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
static void
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9b2f75f..3050578 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -374,7 +374,8 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
.max_event_port_dequeue_depth = MAX_OPDL_CONS_Q_DEPTH,
.max_event_port_enqueue_depth = MAX_OPDL_CONS_Q_DEPTH,
.max_num_events = OPDL_INFLIGHT_EVENTS_TOTAL,
- .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE,
+ .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID,
};
*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index c889220..6fd1102 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -101,7 +101,8 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
dev_info->max_num_events = (1ULL << 20);
dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
RTE_EVENT_DEV_CAP_BURST_MODE |
- RTE_EVENT_DEV_CAP_EVENT_QOS;
+ RTE_EVENT_DEV_CAP_EVENT_QOS |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
static int
@@ -209,7 +210,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
port_conf->new_event_threshold = 32 * 1024;
port_conf->dequeue_depth = 16;
port_conf->enqueue_depth = 16;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static void
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 98dae71..058f568 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
}
p->inflight_max = conf->new_event_threshold;
- p->implicit_release = !conf->disable_implicit_release;
+ p->implicit_release = !(conf->event_port_cfg &
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
/* check if ring exists, same as rx_worker above */
snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
@@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
port_conf->new_event_threshold = 1024;
port_conf->dequeue_depth = 16;
port_conf->enqueue_depth = 16;
- port_conf->disable_implicit_release = 0;
+ port_conf->event_port_cfg = 0;
}
static int
@@ -615,7 +616,8 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE),
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
};
*info = evdev_sw_info;
diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
index 38c21fa..4a7d823 100644
--- a/drivers/event/sw/sw_evdev_selftest.c
+++ b/drivers/event/sw/sw_evdev_selftest.c
@@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
.new_event_threshold = 1024,
.dequeue_depth = 32,
.enqueue_depth = 64,
- .disable_implicit_release = 0,
};
if (num_ports > MAX_PORTS)
return -1;
@@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
.new_event_threshold = 128,
.dequeue_depth = 32,
.enqueue_depth = 64,
- .disable_implicit_release = 0,
};
if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
printf("%d Error setting up port\n", __LINE__);
@@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
.new_event_threshold = 128,
.dequeue_depth = 32,
.enqueue_depth = 64,
- .disable_implicit_release = 0,
};
if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
printf("%d Error setting up port\n", __LINE__);
@@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
* only be initialized once - and this needs to be set for multiple runs
*/
conf.new_event_threshold = 512;
- conf.disable_implicit_release = disable_implicit_release;
+ conf.event_port_cfg = disable_implicit_release ?
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
if (rte_event_port_setup(evdev, 0, &conf) < 0) {
printf("Error setting up RX port\n");
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
index bb21dc4..8a72256 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
return ret;
}
- pc->disable_implicit_release = 0;
+ pc->event_port_cfg = 0;
ret = rte_event_port_setup(dev_id, port_id, pc);
if (ret) {
RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 82c177c..3a5b738 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -32,6 +32,7 @@
#include <rte_ethdev.h>
#include <rte_cryptodev.h>
#include <rte_cryptodev_pmd.h>
+#include <rte_compat.h>
#include "rte_eventdev.h"
#include "rte_eventdev_pmd.h"
@@ -437,9 +438,29 @@ rte_event_dev_configure(uint8_t dev_id,
dev_id);
return -EINVAL;
}
- if (dev_conf->nb_event_queues > info.max_event_queues) {
- RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
- dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+ if (dev_conf->nb_event_queues > info.max_event_queues +
+ info.max_single_link_event_port_queue_pairs) {
+ RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
+ dev_id, dev_conf->nb_event_queues,
+ info.max_event_queues,
+ info.max_single_link_event_port_queue_pairs);
+ return -EINVAL;
+ }
+ if (dev_conf->nb_event_queues -
+ dev_conf->nb_single_link_event_port_queues >
+ info.max_event_queues) {
+ RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
+ dev_id, dev_conf->nb_event_queues,
+ dev_conf->nb_single_link_event_port_queues,
+ info.max_event_queues);
+ return -EINVAL;
+ }
+ if (dev_conf->nb_single_link_event_port_queues >
+ dev_conf->nb_event_queues) {
+ RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
+ dev_id,
+ dev_conf->nb_single_link_event_port_queues,
+ dev_conf->nb_event_queues);
return -EINVAL;
}
@@ -448,9 +469,31 @@ rte_event_dev_configure(uint8_t dev_id,
RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
return -EINVAL;
}
- if (dev_conf->nb_event_ports > info.max_event_ports) {
- RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
- dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+ if (dev_conf->nb_event_ports > info.max_event_ports +
+ info.max_single_link_event_port_queue_pairs) {
+ RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
+ dev_id, dev_conf->nb_event_ports,
+ info.max_event_ports,
+ info.max_single_link_event_port_queue_pairs);
+ return -EINVAL;
+ }
+ if (dev_conf->nb_event_ports -
+ dev_conf->nb_single_link_event_port_queues
+ > info.max_event_ports) {
+ RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
+ dev_id, dev_conf->nb_event_ports,
+ dev_conf->nb_single_link_event_port_queues,
+ info.max_event_ports);
+ return -EINVAL;
+ }
+
+ if (dev_conf->nb_single_link_event_port_queues >
+ dev_conf->nb_event_ports) {
+ RTE_EDEV_LOG_ERR(
+ "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
+ dev_id,
+ dev_conf->nb_single_link_event_port_queues,
+ dev_conf->nb_event_ports);
return -EINVAL;
}
@@ -737,7 +780,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- if (port_conf && port_conf->disable_implicit_release &&
+ if (port_conf &&
+ (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
!(dev->data->event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
RTE_EDEV_LOG_ERR(
@@ -830,6 +874,14 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
*attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
break;
+ case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
+ {
+ uint32_t config;
+
+ config = dev->data->ports_cfg[port_id].event_port_cfg;
+ *attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
+ break;
+ }
default:
return -EINVAL;
};
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 7dc8323..ce1fc2c 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -291,6 +291,12 @@ struct rte_event;
* single queue to each port or map a single queue to many port.
*/
+#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
+/**< Event device preserves the flow ID from the enqueued
+ * event to the dequeued event if the flag is set. Otherwise,
+ * the content of this field is implementation dependent.
+ */
+
/* Event device priority levels */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
/**< Highest priority expressed across eventdev subsystem
@@ -380,6 +386,10 @@ struct rte_event_dev_info {
* event port by this device.
* A device that does not support bulk enqueue will set this as 1.
*/
+ uint8_t max_event_port_links;
+ /**< Maximum number of queues that can be linked to a single event
+ * port by this device.
+ */
int32_t max_num_events;
/**< A *closed system* event dev has a limit on the number of events it
* can manage at a time. An *open system* event dev does not have a
@@ -387,6 +397,12 @@ struct rte_event_dev_info {
*/
uint32_t event_dev_cap;
/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+ uint8_t max_single_link_event_port_queue_pairs;
+ /**< Maximum number of event ports and queues that are optimized for
+ * (and only capable of) single-link configurations supported by this
+ * device. These ports and queues are not accounted for in
+ * max_event_ports or max_event_queues.
+ */
};
/**
@@ -494,6 +510,14 @@ struct rte_event_dev_config {
*/
uint32_t event_dev_cfg;
/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+ uint8_t nb_single_link_event_port_queues;
+ /**< Number of event ports and queues that will be singly-linked to
+ * each other. These are a subset of the overall event ports and
+ * queues; this value cannot exceed *nb_event_ports* or
+ * *nb_event_queues*. If the device has ports and queues that are
+ * optimized for single-link usage, this field is a hint for how many
+ * to allocate; otherwise, regular event ports and queues can be used.
+ */
};
/**
@@ -519,7 +543,6 @@ int
rte_event_dev_configure(uint8_t dev_id,
const struct rte_event_dev_config *dev_conf);
-
/* Event queue specific APIs */
/* Event queue configuration bitmap flags */
@@ -671,6 +694,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
/* Event port specific APIs */
+/* Event port configuration bitmap flags */
+#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL (1ULL << 0)
+/**< Configure the port not to release outstanding events in
+ * rte_event_dev_dequeue_burst(). If set, all events received through
+ * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
+ * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
+ * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
+ */
+#define RTE_EVENT_PORT_CFG_SINGLE_LINK (1ULL << 1)
+/**< This event port links only to a single event queue.
+ *
+ * @see rte_event_port_setup(), rte_event_port_link()
+ */
+
/** Event port configuration structure */
struct rte_event_port_conf {
int32_t new_event_threshold;
@@ -698,13 +735,7 @@ struct rte_event_port_conf {
* which previously supplied to rte_event_dev_configure().
* Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
*/
- uint8_t disable_implicit_release;
- /**< Configure the port not to release outstanding events in
- * rte_event_dev_dequeue_burst(). If true, all events received through
- * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
- * RTE_EVENT_OP_FORWARD. Must be false when the device is not
- * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
- */
+ uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
};
/**
@@ -769,6 +800,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
* The new event threshold of the port
*/
#define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
+/**
+ * The implicit release disable attribute of the port
+ */
+#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
/**
* Get an attribute from a port.
diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
index 443cd38..a3f9244 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
@@ -88,7 +88,6 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
return -ENXIO;
}
-
/**
* @internal
* Wrapper for use by pci drivers as a .remove function to detach a event
diff --git a/lib/librte_eventdev/rte_eventdev_trace.h b/lib/librte_eventdev/rte_eventdev_trace.h
index 4de6341..5ec43d8 100644
--- a/lib/librte_eventdev/rte_eventdev_trace.h
+++ b/lib/librte_eventdev/rte_eventdev_trace.h
@@ -34,6 +34,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u32(dev_conf->nb_event_port_dequeue_depth);
rte_trace_point_emit_u32(dev_conf->nb_event_port_enqueue_depth);
rte_trace_point_emit_u32(dev_conf->event_dev_cfg);
+ rte_trace_point_emit_u8(dev_conf->nb_single_link_event_port_queues);
rte_trace_point_emit_int(rc);
)
@@ -59,7 +60,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_i32(port_conf->new_event_threshold);
rte_trace_point_emit_u16(port_conf->dequeue_depth);
rte_trace_point_emit_u16(port_conf->enqueue_depth);
- rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+ rte_trace_point_emit_u32(port_conf->event_port_cfg);
rte_trace_point_emit_int(rc);
)
@@ -165,7 +166,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_i32(port_conf->new_event_threshold);
rte_trace_point_emit_u16(port_conf->dequeue_depth);
rte_trace_point_emit_u16(port_conf->enqueue_depth);
- rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+ rte_trace_point_emit_u32(port_conf->event_port_cfg);
rte_trace_point_emit_ptr(conf_cb);
rte_trace_point_emit_int(rc);
)
@@ -257,7 +258,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_i32(port_conf->new_event_threshold);
rte_trace_point_emit_u16(port_conf->dequeue_depth);
rte_trace_point_emit_u16(port_conf->enqueue_depth);
- rte_trace_point_emit_u8(port_conf->disable_implicit_release);
+ rte_trace_point_emit_u32(port_conf->event_port_cfg);
)
RTE_TRACE_POINT(
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 3d9d0ca..2846d04 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -100,7 +100,6 @@ EXPERIMENTAL {
# added in 20.05
__rte_eventdev_trace_configure;
__rte_eventdev_trace_queue_setup;
- __rte_eventdev_trace_port_setup;
__rte_eventdev_trace_port_link;
__rte_eventdev_trace_port_unlink;
__rte_eventdev_trace_start;
@@ -134,4 +133,7 @@ EXPERIMENTAL {
__rte_eventdev_trace_crypto_adapter_queue_pair_del;
__rte_eventdev_trace_crypto_adapter_start;
__rte_eventdev_trace_crypto_adapter_stop;
+
+ # changed in 20.11
+ __rte_eventdev_trace_port_setup;
};
--
2.6.4
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI
2020-10-14 21:36 9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-14 21:36 2% ` [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
@ 2020-10-14 21:36 6% ` Timothy McDaniel
2020-10-15 14:26 7% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Jerin Jacob
2 siblings, 0 replies; 200+ results
From: Timothy McDaniel @ 2020-10-14 21:36 UTC (permalink / raw)
To: Jerin Jacob, Harry van Haaren, Marko Kovacevic, Ori Kam,
Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev, erik.g.carrillo, gage.eads, hemant.agrawal
Several data structures and constants changed, or were added,
in the previous patch. This commit updates the dependent
apps and examples to use the new ABI.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
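Before the diff, a note on the recurring conversion it applies: the
removed boolean disable_implicit_release field becomes a flag bit in
event_port_cfg, and on devices lacking RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
the test apps restore ev.flow_id from mbuf->udata64 after dequeue. A
minimal sketch of the port-config side (hypothetical helper, not part
of the patch):

#include <stdbool.h>
#include <rte_eventdev.h>

static void
set_implicit_release(struct rte_event_port_conf *port_conf, bool disable)
{
	/* Old ABI: port_conf->disable_implicit_release = disable;
	 * New ABI: the same request expressed as a flag bit. */
	port_conf->event_port_cfg = disable ?
			RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
}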
app/test-eventdev/evt_common.h | 11 ++++++++
app/test-eventdev/test_order_atq.c | 28 +++++++++++++++------
app/test-eventdev/test_order_common.c | 1 +
app/test-eventdev/test_order_queue.c | 29 ++++++++++++++++------
app/test/test_eventdev.c | 4 +--
.../eventdev_pipeline/pipeline_worker_generic.c | 6 +++--
examples/eventdev_pipeline/pipeline_worker_tx.c | 1 +
examples/l2fwd-event/l2fwd_event_generic.c | 7 ++++--
examples/l2fwd-event/l2fwd_event_internal_port.c | 6 +++--
examples/l3fwd/l3fwd_event_generic.c | 7 ++++--
examples/l3fwd/l3fwd_event_internal_port.c | 6 +++--
11 files changed, 80 insertions(+), 26 deletions(-)
diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index f9d7378..a1da1cf 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
true : false;
}
+static inline bool
+evt_has_flow_id(uint8_t dev_id)
+{
+ struct rte_event_dev_info dev_info;
+
+ rte_event_dev_info_get(dev_id, &dev_info);
+ return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
+ true : false;
+}
+
static inline int
evt_service_setup(uint32_t service_id)
{
@@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
.dequeue_timeout_ns = opt->deq_tmo_nsec,
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 0,
.nb_events_limit = info.max_num_events,
.nb_event_queue_flows = opt->nb_flows,
.nb_event_port_dequeue_depth =
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 3366cfc..cfcb1dc 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
}
static int
-order_atq_worker(void *arg)
+order_atq_worker(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev;
@@ -34,6 +34,9 @@ order_atq_worker(void *arg)
continue;
}
+ if (!flow_id_cap)
+ ev.flow_id = ev.mbuf->udata64;
+
if (ev.sub_event_type == 0) { /* stage 0 from producer */
order_atq_process_stage_0(&ev);
while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_atq_worker(void *arg)
}
static int
-order_atq_worker_burst(void *arg)
+order_atq_worker_burst(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
}
for (i = 0; i < nb_rx; i++) {
+ if (!flow_id_cap)
+ ev[i].flow_id = ev[i].mbuf->udata64;
+
if (ev[i].sub_event_type == 0) { /*stage 0 */
order_atq_process_stage_0(&ev[i]);
} else if (ev[i].sub_event_type == 1) { /* stage 1 */
@@ -95,11 +101,19 @@ worker_wrapper(void *arg)
{
struct worker_data *w = arg;
const bool burst = evt_has_burst_mode(w->dev_id);
-
- if (burst)
- return order_atq_worker_burst(arg);
- else
- return order_atq_worker(arg);
+ const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+ if (burst) {
+ if (flow_id_cap)
+ return order_atq_worker_burst(arg, true);
+ else
+ return order_atq_worker_burst(arg, false);
+ } else {
+ if (flow_id_cap)
+ return order_atq_worker(arg, true);
+ else
+ return order_atq_worker(arg, false);
+ }
}
static int
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 4190f9a..7942390 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -49,6 +49,7 @@ order_producer(void *arg)
const uint32_t flow = (uintptr_t)m % nb_flows;
/* Maintain seq number per flow */
m->seqn = producer_flow_seq[flow]++;
+ m->udata64 = flow;
ev.flow_id = flow;
ev.mbuf = m;
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 495efd9..1511c00 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
}
static int
-order_queue_worker(void *arg)
+order_queue_worker(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev;
@@ -34,6 +34,9 @@ order_queue_worker(void *arg)
continue;
}
+ if (!flow_id_cap)
+ ev.flow_id = ev.mbuf->udata64;
+
if (ev.queue_id == 0) { /* from ordered queue */
order_queue_process_stage_0(&ev);
while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_queue_worker(void *arg)
}
static int
-order_queue_worker_burst(void *arg)
+order_queue_worker_burst(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
}
for (i = 0; i < nb_rx; i++) {
+
+ if (!flow_id_cap)
+ ev[i].flow_id = ev[i].mbuf->udata64;
+
if (ev[i].queue_id == 0) { /* from ordered queue */
order_queue_process_stage_0(&ev[i]);
} else if (ev[i].queue_id == 1) {/* from atomic queue */
@@ -95,11 +102,19 @@ worker_wrapper(void *arg)
{
struct worker_data *w = arg;
const bool burst = evt_has_burst_mode(w->dev_id);
-
- if (burst)
- return order_queue_worker_burst(arg);
- else
- return order_queue_worker(arg);
+ const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+ if (burst) {
+ if (flow_id_cap)
+ return order_queue_worker_burst(arg, true);
+ else
+ return order_queue_worker_burst(arg, false);
+ } else {
+ if (flow_id_cap)
+ return order_queue_worker(arg, true);
+ else
+ return order_queue_worker(arg, false);
+ }
}
static int
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 43ccb1c..62019c1 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
if (!(info.event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
pconf.enqueue_depth = info.max_event_port_enqueue_depth;
- pconf.disable_implicit_release = 1;
+ pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
- pconf.disable_implicit_release = 0;
+ pconf.event_port_cfg = 0;
}
ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 42ff4ee..f70ab0c 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
struct rte_event_dev_config config = {
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 1,
.nb_events_limit = 4096,
.nb_event_queue_flows = 1024,
.nb_event_port_dequeue_depth = 128,
@@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
.schedule_type = cdata.queue_type,
.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
.nb_atomic_flows = 1024,
- .nb_atomic_order_sequences = 1024,
+ .nb_atomic_order_sequences = 1024,
};
struct rte_event_queue_conf tx_q_conf = {
.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
@@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
disable_implicit_release = (dev_info.event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
- wkr_p_conf.disable_implicit_release = disable_implicit_release;
+ wkr_p_conf.event_port_cfg = disable_implicit_release ?
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
if (dev_info.max_num_events < config.nb_events_limit)
config.nb_events_limit = dev_info.max_num_events;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 55bb2f7..ca6cd20 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
struct rte_event_dev_config config = {
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 0,
.nb_events_limit = 4096,
.nb_event_queue_flows = 1024,
.nb_event_port_dequeue_depth = 128,
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2dc95e5..9a3167c 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 63d57b4..203a14c 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
event_p_id++) {
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index f8c9843..c80573f 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 03ac581..9916a7f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
event_p_id++) {
--
2.6.4
^ permalink raw reply [relevance 6%]
* [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2
@ 2020-10-14 21:36 9% ` Timothy McDaniel
2020-10-14 21:36 2% ` [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
` (2 more replies)
2020-10-15 17:31 9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
2020-10-15 18:07 9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2 siblings, 3 replies; 200+ results
From: Timothy McDaniel @ 2020-10-14 21:36 UTC (permalink / raw)
Cc: dev, erik.g.carrillo, gage.eads, harry.van.haaren, hemant.agrawal
This series implements the eventdev ABI changes required by
the DLB and DLB2 PMDs. This ABI change was announced in the
20.08 release notes [1]. This patch was initially part of
the V1 DLB PMD patchset.
The DLB hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) It does not (currently) have the ability to carry the flow_id as part
of the event (QE) payload.
Due to the above, we would like to propose the following enhancements;
a brief usage sketch follows the list.
1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.
2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.
3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future
assignment.
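The sketch below is illustrative only (not part of this series); the
queue/port counts are made-up placeholders. It shows how an application
could consume the proposed fields once this ABI lands:

#include <rte_eventdev.h>

static int
configure_with_single_link(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_dev_config cfg = {0};
	struct rte_event_port_conf pc = {0};

	rte_event_dev_info_get(dev_id, &info);

	/* Two ports/queues total; if the device advertises dedicated
	 * single-link resources, reserve one pair from those totals. */
	cfg.nb_event_queues = 2;
	cfg.nb_event_ports = 2;
	cfg.nb_single_link_event_port_queues =
		info.max_single_link_event_port_queue_pairs ? 1 : 0;
	cfg.nb_events_limit = info.max_num_events;
	cfg.nb_event_queue_flows = info.max_event_queue_flows;
	cfg.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
	cfg.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
	if (rte_event_dev_configure(dev_id, &cfg) < 0)
		return -1;

	/* Mark the last port as single-link and, where the capability
	 * exists, disable implicit release via the new flag field. */
	pc.new_event_threshold = info.max_num_events;
	pc.dequeue_depth = info.max_event_port_dequeue_depth;
	pc.enqueue_depth = info.max_event_port_enqueue_depth;
	pc.event_port_cfg = RTE_EVENT_PORT_CFG_SINGLE_LINK;
	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
		pc.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
	return rte_event_port_setup(dev_id, cfg.nb_event_ports - 1, &pc);
}

Note the single-link pair is carved out of nb_event_ports/nb_event_queues
rather than added on top, matching the "subset" semantics in item 2.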
Note that it was requested that we split these app/test
changes out from the eventdev ABI patch. As a result,
neither of these patches will build without the other
also being applied.
Major changes since V1:
Reworded commit message, as requested
Fixed errors reported by clang
Testing showed no performance impact from the flow_id template code
added to the test app.
[1] http://mails.dpdk.org/archives/dev/2020-August/177261.html
Timothy McDaniel (2):
eventdev: eventdev: express DLB/DLB2 PMD constraints
eventdev: update app and examples for new eventdev ABI
app/test-eventdev/evt_common.h | 11 ++++
app/test-eventdev/test_order_atq.c | 28 ++++++---
app/test-eventdev/test_order_common.c | 1 +
app/test-eventdev/test_order_queue.c | 29 +++++++---
app/test/test_eventdev.c | 4 +-
drivers/event/dpaa/dpaa_eventdev.c | 3 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 5 +-
drivers/event/dsw/dsw_evdev.c | 3 +-
drivers/event/octeontx/ssovf_evdev.c | 5 +-
drivers/event/octeontx2/otx2_evdev.c | 3 +-
drivers/event/opdl/opdl_evdev.c | 3 +-
drivers/event/skeleton/skeleton_eventdev.c | 5 +-
drivers/event/sw/sw_evdev.c | 8 ++-
drivers/event/sw/sw_evdev_selftest.c | 6 +-
.../eventdev_pipeline/pipeline_worker_generic.c | 6 +-
examples/eventdev_pipeline/pipeline_worker_tx.c | 1 +
examples/l2fwd-event/l2fwd_event_generic.c | 7 ++-
examples/l2fwd-event/l2fwd_event_internal_port.c | 6 +-
examples/l3fwd/l3fwd_event_generic.c | 7 ++-
examples/l3fwd/l3fwd_event_internal_port.c | 6 +-
lib/librte_eventdev/rte_event_eth_tx_adapter.c | 2 +-
lib/librte_eventdev/rte_eventdev.c | 66 +++++++++++++++++++---
lib/librte_eventdev/rte_eventdev.h | 51 ++++++++++++++---
lib/librte_eventdev/rte_eventdev_pmd_pci.h | 1 -
lib/librte_eventdev/rte_eventdev_trace.h | 7 ++-
lib/librte_eventdev/rte_eventdev_version.map | 4 +-
26 files changed, 214 insertions(+), 64 deletions(-)
--
2.6.4
^ permalink raw reply [relevance 9%]
* Re: [dpdk-dev] [PATCH v3] eventdev: update app and examples for new eventdev ABI
2020-10-14 17:33 6% ` [dpdk-dev] [PATCH v3] " Timothy McDaniel
@ 2020-10-14 20:01 4% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2020-10-14 20:01 UTC (permalink / raw)
To: Timothy McDaniel
Cc: Jerin Jacob, Harry van Haaren, Marko Kovacevic, Ori Kam,
Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh, dpdk-dev,
Erik Gabriel Carrillo, Gage Eads
On Wed, Oct 14, 2020 at 11:01 PM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> Several data structures and constants changed, or were added,
> in the previous patch. This commit updates the dependent
> apps and examples to use the new ABI.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Please send both the spec patch and this patch as a series, not this
http://patches.dpdk.org/patch/80782/ alone.
Reason: the spec patch[1] still has an apply issue[2]. Please rebase both
patches on top of next-eventdev and send them as a series.
I am changing the patchwork status for the following patches to
"Changes requested"
http://patches.dpdk.org/patch/79715/
http://patches.dpdk.org/patch/79716/
http://patches.dpdk.org/patch/80782/
[1]
http://patches.dpdk.org/patch/79715/
[2]
[for-main]dell[dpdk-next-eventdev] $ date &&
/home/jerin/config/scripts/build_each_patch.sh /tmp/r/ && date
Thu Oct 15 01:23:43 AM IST 2020
HEAD is now at 1d41eebe8 event/sw: performance improvements
meson build test
Applying: eventdev: eventdev: express DLB/DLB2 PMD constraints
Using index info to reconstruct a base tree...
M drivers/event/dpaa2/dpaa2_eventdev.c
M drivers/event/octeontx/ssovf_evdev.c
M drivers/event/octeontx2/otx2_evdev.c
M drivers/event/sw/sw_evdev.c
M lib/librte_eventdev/rte_event_eth_tx_adapter.c
M lib/librte_eventdev/rte_eventdev.c
Falling back to patching base and 3-way merge...
Auto-merging lib/librte_eventdev/rte_eventdev.c
CONFLICT (content): Merge conflict in lib/librte_eventdev/rte_eventdev.c
Auto-merging lib/librte_eventdev/rte_event_eth_tx_adapter.c
Auto-merging drivers/event/sw/sw_evdev.c
Auto-merging drivers/event/octeontx2/otx2_evdev.c
Auto-merging drivers/event/octeontx/ssovf_evdev.c
Auto-merging drivers/event/dpaa2/dpaa2_eventdev.c
Recorded preimage for 'lib/librte_eventdev/rte_eventdev.c'
error: Failed to merge in the changes.
Patch failed at 0001 eventdev: eventdev: express DLB/DLB2 PMD constraints
hint: Use 'git am --show-current-patch=diff' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
git am failed /tmp/r//v2-1-2-eventdev-eventdev-express-DLB-DLB2-PMD-constraints
HEAD is now at 1d41eebe8 event/sw: performance improvements
Thu Oct 15 01:23:43 AM IST 2020
> ---
> app/test-eventdev/evt_common.h | 11 ++++++++
> app/test-eventdev/test_order_atq.c | 28 +++++++++++++++------
> app/test-eventdev/test_order_common.c | 1 +
> app/test-eventdev/test_order_queue.c | 29 ++++++++++++++++------
> app/test/test_eventdev.c | 4 +--
> .../eventdev_pipeline/pipeline_worker_generic.c | 6 +++--
> examples/eventdev_pipeline/pipeline_worker_tx.c | 1 +
> examples/l2fwd-event/l2fwd_event_generic.c | 7 ++++--
> examples/l2fwd-event/l2fwd_event_internal_port.c | 6 +++--
> examples/l3fwd/l3fwd_event_generic.c | 7 ++++--
> examples/l3fwd/l3fwd_event_internal_port.c | 6 +++--
> 11 files changed, 80 insertions(+), 26 deletions(-)
>
> diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
> index f9d7378..a1da1cf 100644
> --- a/app/test-eventdev/evt_common.h
> +++ b/app/test-eventdev/evt_common.h
> @@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
> true : false;
> }
>
> +static inline bool
> +evt_has_flow_id(uint8_t dev_id)
> +{
> + struct rte_event_dev_info dev_info;
> +
> + rte_event_dev_info_get(dev_id, &dev_info);
> + return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
> + true : false;
> +}
> +
> static inline int
> evt_service_setup(uint32_t service_id)
> {
> @@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
> .dequeue_timeout_ns = opt->deq_tmo_nsec,
> .nb_event_queues = nb_queues,
> .nb_event_ports = nb_ports,
> + .nb_single_link_event_port_queues = 0,
> .nb_events_limit = info.max_num_events,
> .nb_event_queue_flows = opt->nb_flows,
> .nb_event_port_dequeue_depth =
> diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
> index 3366cfc..cfcb1dc 100644
> --- a/app/test-eventdev/test_order_atq.c
> +++ b/app/test-eventdev/test_order_atq.c
> @@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
> }
>
> static int
> -order_atq_worker(void *arg)
> +order_atq_worker(void *arg, const bool flow_id_cap)
> {
> ORDER_WORKER_INIT;
> struct rte_event ev;
> @@ -34,6 +34,9 @@ order_atq_worker(void *arg)
> continue;
> }
>
> + if (!flow_id_cap)
> + ev.flow_id = ev.mbuf->udata64;
> +
> if (ev.sub_event_type == 0) { /* stage 0 from producer */
> order_atq_process_stage_0(&ev);
> while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -50,7 +53,7 @@ order_atq_worker(void *arg)
> }
>
> static int
> -order_atq_worker_burst(void *arg)
> +order_atq_worker_burst(void *arg, const bool flow_id_cap)
> {
> ORDER_WORKER_INIT;
> struct rte_event ev[BURST_SIZE];
> @@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
> }
>
> for (i = 0; i < nb_rx; i++) {
> + if (!flow_id_cap)
> + ev[i].flow_id = ev[i].mbuf->udata64;
> +
> if (ev[i].sub_event_type == 0) { /*stage 0 */
> order_atq_process_stage_0(&ev[i]);
> } else if (ev[i].sub_event_type == 1) { /* stage 1 */
> @@ -95,11 +101,19 @@ worker_wrapper(void *arg)
> {
> struct worker_data *w = arg;
> const bool burst = evt_has_burst_mode(w->dev_id);
> -
> - if (burst)
> - return order_atq_worker_burst(arg);
> - else
> - return order_atq_worker(arg);
> + const bool flow_id_cap = evt_has_flow_id(w->dev_id);
> +
> + if (burst) {
> + if (flow_id_cap)
> + return order_atq_worker_burst(arg, true);
> + else
> + return order_atq_worker_burst(arg, false);
> + } else {
> + if (flow_id_cap)
> + return order_atq_worker(arg, true);
> + else
> + return order_atq_worker(arg, false);
> + }
> }
>
> static int
> diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
> index 4190f9a..7942390 100644
> --- a/app/test-eventdev/test_order_common.c
> +++ b/app/test-eventdev/test_order_common.c
> @@ -49,6 +49,7 @@ order_producer(void *arg)
> const uint32_t flow = (uintptr_t)m % nb_flows;
> /* Maintain seq number per flow */
> m->seqn = producer_flow_seq[flow]++;
> + m->udata64 = flow;
>
> ev.flow_id = flow;
> ev.mbuf = m;
> diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
> index 495efd9..1511c00 100644
> --- a/app/test-eventdev/test_order_queue.c
> +++ b/app/test-eventdev/test_order_queue.c
> @@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
> }
>
> static int
> -order_queue_worker(void *arg)
> +order_queue_worker(void *arg, const bool flow_id_cap)
> {
> ORDER_WORKER_INIT;
> struct rte_event ev;
> @@ -34,6 +34,9 @@ order_queue_worker(void *arg)
> continue;
> }
>
> + if (!flow_id_cap)
> + ev.flow_id = ev.mbuf->udata64;
> +
> if (ev.queue_id == 0) { /* from ordered queue */
> order_queue_process_stage_0(&ev);
> while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -50,7 +53,7 @@ order_queue_worker(void *arg)
> }
>
> static int
> -order_queue_worker_burst(void *arg)
> +order_queue_worker_burst(void *arg, const bool flow_id_cap)
> {
> ORDER_WORKER_INIT;
> struct rte_event ev[BURST_SIZE];
> @@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
> }
>
> for (i = 0; i < nb_rx; i++) {
> +
> + if (!flow_id_cap)
> + ev[i].flow_id = ev[i].mbuf->udata64;
> +
> if (ev[i].queue_id == 0) { /* from ordered queue */
> order_queue_process_stage_0(&ev[i]);
> } else if (ev[i].queue_id == 1) {/* from atomic queue */
> @@ -95,11 +102,19 @@ worker_wrapper(void *arg)
> {
> struct worker_data *w = arg;
> const bool burst = evt_has_burst_mode(w->dev_id);
> -
> - if (burst)
> - return order_queue_worker_burst(arg);
> - else
> - return order_queue_worker(arg);
> + const bool flow_id_cap = evt_has_flow_id(w->dev_id);
> +
> + if (burst) {
> + if (flow_id_cap)
> + return order_queue_worker_burst(arg, true);
> + else
> + return order_queue_worker_burst(arg, false);
> + } else {
> + if (flow_id_cap)
> + return order_queue_worker(arg, true);
> + else
> + return order_queue_worker(arg, false);
> + }
> }
>
> static int
> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
> index 43ccb1c..62019c1 100644
> --- a/app/test/test_eventdev.c
> +++ b/app/test/test_eventdev.c
> @@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
> if (!(info.event_dev_cap &
> RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
> pconf.enqueue_depth = info.max_event_port_enqueue_depth;
> - pconf.disable_implicit_release = 1;
> + pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
> ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
> TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
> - pconf.disable_implicit_release = 0;
> + pconf.event_port_cfg = 0;
> }
>
> ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
> index 42ff4ee..f70ab0c 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> @@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
> struct rte_event_dev_config config = {
> .nb_event_queues = nb_queues,
> .nb_event_ports = nb_ports,
> + .nb_single_link_event_port_queues = 1,
> .nb_events_limit = 4096,
> .nb_event_queue_flows = 1024,
> .nb_event_port_dequeue_depth = 128,
> @@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
> .schedule_type = cdata.queue_type,
> .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
> .nb_atomic_flows = 1024,
> - .nb_atomic_order_sequences = 1024,
> + .nb_atomic_order_sequences = 1024,
> };
> struct rte_event_queue_conf tx_q_conf = {
> .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
> @@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
> disable_implicit_release = (dev_info.event_dev_cap &
> RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
>
> - wkr_p_conf.disable_implicit_release = disable_implicit_release;
> + wkr_p_conf.event_port_cfg = disable_implicit_release ?
> + RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>
> if (dev_info.max_num_events < config.nb_events_limit)
> config.nb_events_limit = dev_info.max_num_events;
> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
> index 55bb2f7..ca6cd20 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> @@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
> struct rte_event_dev_config config = {
> .nb_event_queues = nb_queues,
> .nb_event_ports = nb_ports,
> + .nb_single_link_event_port_queues = 0,
> .nb_events_limit = 4096,
> .nb_event_queue_flows = 1024,
> .nb_event_port_dequeue_depth = 128,
> diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
> index 2dc95e5..9a3167c 100644
> --- a/examples/l2fwd-event/l2fwd_event_generic.c
> +++ b/examples/l2fwd-event/l2fwd_event_generic.c
> @@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
> if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
> event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> - event_p_conf.disable_implicit_release =
> - evt_rsrc->disable_implicit_release;
> + event_p_conf.event_port_cfg = 0;
> + if (evt_rsrc->disable_implicit_release)
> + event_p_conf.event_port_cfg |=
> + RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
> +
> evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>
> for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
> index 63d57b4..203a14c 100644
> --- a/examples/l2fwd-event/l2fwd_event_internal_port.c
> +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
> @@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
> if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
> event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> - event_p_conf.disable_implicit_release =
> - evt_rsrc->disable_implicit_release;
> + event_p_conf.event_port_cfg = 0;
> + if (evt_rsrc->disable_implicit_release)
> + event_p_conf.event_port_cfg |=
> + RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>
> for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> event_p_id++) {
> diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
> index f8c9843..c80573f 100644
> --- a/examples/l3fwd/l3fwd_event_generic.c
> +++ b/examples/l3fwd/l3fwd_event_generic.c
> @@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
> if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
> event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> - event_p_conf.disable_implicit_release =
> - evt_rsrc->disable_implicit_release;
> + event_p_conf.event_port_cfg = 0;
> + if (evt_rsrc->disable_implicit_release)
> + event_p_conf.event_port_cfg |=
> + RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
> +
> evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>
> for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
> index 03ac581..9916a7f 100644
> --- a/examples/l3fwd/l3fwd_event_internal_port.c
> +++ b/examples/l3fwd/l3fwd_event_internal_port.c
> @@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
> if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
> event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> - event_p_conf.disable_implicit_release =
> - evt_rsrc->disable_implicit_release;
> + event_p_conf.event_port_cfg = 0;
> + if (evt_rsrc->disable_implicit_release)
> + event_p_conf.event_port_cfg |=
> + RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>
> for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> event_p_id++) {
> --
> 2.6.4
>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3] security: update session create API
@ 2020-10-14 18:56 2% ` Akhil Goyal
2020-10-15 1:11 0% ` Lukasz Wojciechowski
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2020-10-14 18:56 UTC (permalink / raw)
To: dev
Cc: thomas, mdr, anoobj, hemant.agrawal, konstantin.ananyev,
declan.doherty, radu.nicolau, david.coyle, l.wojciechow,
Akhil Goyal
The API ``rte_security_session_create`` takes only a single
mempool for the session and the session private data. So the
application needs to create a mempool for twice the number of
sessions needed, which also wastes memory, as session
private data needs more memory than the session itself.
Hence the API is modified to take two mempool pointers
- one for the session and one for the private data.
This is very similar to the crypto-based session create APIs.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
Changes in v3:
Fixed checkpatch issues.
Added a new test in test_security.c for the private session mempool.
Changes in v2:
Incorporated comments from Lukasz and David.
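For reference, a minimal sketch of the new calling convention (pool
name and size are illustrative placeholders; assumes an initialized
crypto device exposing a security context, error handling trimmed):

#include <rte_cryptodev.h>
#include <rte_mempool.h>
#include <rte_security.h>

static struct rte_security_session *
create_sec_session(uint8_t dev_id, struct rte_security_session_conf *conf)
{
	struct rte_security_ctx *ctx = rte_cryptodev_get_sec_ctx(dev_id);
	struct rte_mempool *sess_mp, *priv_mp;

	if (ctx == NULL)
		return NULL;
	/* One pool holds the generic session objects... */
	sess_mp = rte_mempool_create("sec_sess_mp", 128,
			sizeof(struct rte_security_session), 0, 0,
			NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
	/* ...a second pool is sized for the driver's private data. */
	priv_mp = rte_mempool_create("sec_sess_priv_mp", 128,
			rte_security_session_get_size(ctx), 0, 0,
			NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
	if (sess_mp == NULL || priv_mp == NULL)
		return NULL;
	/* Both pools are now passed explicitly. */
	return rte_security_session_create(ctx, conf, sess_mp, priv_mp);
}

Compared to the old API, only the extra mempool argument changes at the
call sites, as the diff below shows.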
app/test-crypto-perf/cperf_ops.c | 4 +-
app/test-crypto-perf/main.c | 12 +-
app/test/test_cryptodev.c | 18 ++-
app/test/test_ipsec.c | 3 +-
app/test/test_security.c | 160 ++++++++++++++++++++++---
doc/guides/prog_guide/rte_security.rst | 8 +-
doc/guides/rel_notes/deprecation.rst | 7 --
doc/guides/rel_notes/release_20_11.rst | 6 +
examples/ipsec-secgw/ipsec-secgw.c | 12 +-
examples/ipsec-secgw/ipsec.c | 9 +-
lib/librte_security/rte_security.c | 7 +-
lib/librte_security/rte_security.h | 4 +-
12 files changed, 196 insertions(+), 54 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 3da835a9c..3a64a2c34 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -621,7 +621,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
/* Create security session */
return (void *)rte_security_session_create(ctx,
- &sess_conf, sess_mp);
+ &sess_conf, sess_mp, priv_mp);
}
if (options->op_type == CPERF_DOCSIS) {
enum rte_security_docsis_direction direction;
@@ -664,7 +664,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
/* Create security session */
return (void *)rte_security_session_create(ctx,
- &sess_conf, priv_mp);
+ &sess_conf, sess_mp, priv_mp);
}
#endif
sess = rte_cryptodev_sym_session_create(sess_mp);
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 62ae6048b..53864ffdd 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -156,7 +156,14 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
if (sess_size > max_sess_size)
max_sess_size = sess_size;
}
-
+#ifdef RTE_LIBRTE_SECURITY
+ for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) {
+ sess_size = rte_security_session_get_size(
+ rte_cryptodev_get_sec_ctx(cdev_id));
+ if (sess_size > max_sess_size)
+ max_sess_size = sess_size;
+ }
+#endif
/*
* Calculate number of needed queue pairs, based on the amount
* of available number of logical cores and crypto devices.
@@ -247,8 +254,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
opts->nb_qps * nb_slaves;
#endif
} else
- sessions_needed = enabled_cdev_count *
- opts->nb_qps * 2;
+ sessions_needed = enabled_cdev_count * opts->nb_qps;
/*
* A single session is required per queue pair
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index c7975ed01..9f1b92c51 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -773,9 +773,15 @@ testsuite_setup(void)
unsigned int session_size =
rte_cryptodev_sym_get_private_session_size(dev_id);
+#ifdef RTE_LIBRTE_SECURITY
+ unsigned int security_session_size = rte_security_session_get_size(
+ rte_cryptodev_get_sec_ctx(dev_id));
+
+ if (session_size < security_session_size)
+ session_size = security_session_size;
+#endif
/*
- * Create mempool with maximum number of sessions * 2,
- * to include the session headers
+ * Create mempool with maximum number of sessions.
*/
if (info.sym.max_nb_sessions != 0 &&
info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
@@ -7751,7 +7757,8 @@ test_pdcp_proto(int i, int oop,
/* Create security session */
ut_params->sec_session = rte_security_session_create(ctx,
- &sess_conf, ts_params->session_priv_mpool);
+ &sess_conf, ts_params->session_mpool,
+ ts_params->session_priv_mpool);
if (!ut_params->sec_session) {
printf("TestCase %s()-%d line %d failed %s: ",
@@ -8011,7 +8018,8 @@ test_pdcp_proto_SGL(int i, int oop,
/* Create security session */
ut_params->sec_session = rte_security_session_create(ctx,
- &sess_conf, ts_params->session_priv_mpool);
+ &sess_conf, ts_params->session_mpool,
+ ts_params->session_priv_mpool);
if (!ut_params->sec_session) {
printf("TestCase %s()-%d line %d failed %s: ",
@@ -8368,6 +8376,7 @@ test_docsis_proto_uplink(int i, struct docsis_test_data *d_td)
/* Create security session */
ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
+ ts_params->session_mpool,
ts_params->session_priv_mpool);
if (!ut_params->sec_session) {
@@ -8543,6 +8552,7 @@ test_docsis_proto_downlink(int i, struct docsis_test_data *d_td)
/* Create security session */
ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
+ ts_params->session_mpool,
ts_params->session_priv_mpool);
if (!ut_params->sec_session) {
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index 79d00d7e0..9ad07a179 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -632,7 +632,8 @@ create_dummy_sec_session(struct ipsec_unitest_params *ut,
static struct rte_security_session_conf conf;
ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
- &conf, qp->mp_session_private);
+ &conf, qp->mp_session,
+ qp->mp_session_private);
if (ut->ss[j].security.ses == NULL)
return -ENOMEM;
diff --git a/app/test/test_security.c b/app/test/test_security.c
index 77fd5adc6..35ed6ff10 100644
--- a/app/test/test_security.c
+++ b/app/test/test_security.c
@@ -200,6 +200,24 @@
expected_mempool_usage, mempool_usage); \
} while (0)
+/**
+ * Verify usage of mempool by checking if number of allocated objects matches
+ * expectations. The mempool is used to manage objects for sessions priv data.
+ * A single object is acquired from mempool during session_create
+ * and put back in session_destroy.
+ *
+ * @param expected_priv_mp_usage expected number of used priv mp objects
+ */
+#define TEST_ASSERT_PRIV_MP_USAGE(expected_priv_mp_usage) do { \
+ struct security_testsuite_params *ts_params = &testsuite_params;\
+ unsigned int priv_mp_usage; \
+ priv_mp_usage = rte_mempool_in_use_count( \
+ ts_params->session_priv_mpool); \
+ TEST_ASSERT_EQUAL(expected_priv_mp_usage, priv_mp_usage, \
+ "Expecting %u priv mempool allocations, " \
+ "but there are %u allocated objects", \
+ expected_priv_mp_usage, priv_mp_usage); \
+} while (0)
/**
* Mockup structures and functions for rte_security_ops;
@@ -237,27 +255,38 @@ static struct mock_session_create_data {
struct rte_security_session_conf *conf;
struct rte_security_session *sess;
struct rte_mempool *mp;
+ struct rte_mempool *priv_mp;
int ret;
int called;
int failed;
-} mock_session_create_exp = {NULL, NULL, NULL, NULL, 0, 0, 0};
+} mock_session_create_exp = {NULL, NULL, NULL, NULL, NULL, 0, 0, 0};
static int
mock_session_create(void *device,
struct rte_security_session_conf *conf,
struct rte_security_session *sess,
- struct rte_mempool *mp)
+ struct rte_mempool *priv_mp)
{
+ void *sess_priv;
+ int ret;
+
mock_session_create_exp.called++;
MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, device);
MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, conf);
- MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, mp);
+ MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, priv_mp);
+ ret = rte_mempool_get(priv_mp, &sess_priv);
+ TEST_ASSERT_EQUAL(0, ret,
+ "priv mempool does not have enough objects");
+ set_sec_session_private_data(sess, sess_priv);
mock_session_create_exp.sess = sess;
+ if (mock_session_create_exp.ret != 0)
+ rte_mempool_put(priv_mp, sess_priv);
+
return mock_session_create_exp.ret;
}
@@ -363,8 +392,10 @@ static struct mock_session_destroy_data {
static int
mock_session_destroy(void *device, struct rte_security_session *sess)
{
- mock_session_destroy_exp.called++;
+ void *sess_priv = get_sec_session_private_data(sess);
+ mock_session_destroy_exp.called++;
+ rte_mempool_put(rte_mempool_from_obj(sess_priv), sess_priv);
MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, device);
MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, sess);
@@ -502,6 +533,7 @@ struct rte_security_ops mock_ops = {
*/
static struct security_testsuite_params {
struct rte_mempool *session_mpool;
+ struct rte_mempool *session_priv_mpool;
} testsuite_params = { NULL };
/**
@@ -524,9 +556,11 @@ static struct security_unittest_params {
.sess = NULL,
};
-#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestsMempoolName"
+#define SECURITY_TEST_MEMPOOL_NAME "SecurityTestMp"
+#define SECURITY_TEST_PRIV_MEMPOOL_NAME "SecurityTestPrivMp"
#define SECURITY_TEST_MEMPOOL_SIZE 15
-#define SECURITY_TEST_SESSION_OBJECT_SIZE sizeof(struct rte_security_session)
+#define SECURITY_TEST_SESSION_OBJ_SZ sizeof(struct rte_security_session)
+#define SECURITY_TEST_SESSION_PRIV_OBJ_SZ 64
/**
* testsuite_setup initializes whole test suite parameters.
@@ -540,11 +574,27 @@ testsuite_setup(void)
ts_params->session_mpool = rte_mempool_create(
SECURITY_TEST_MEMPOOL_NAME,
SECURITY_TEST_MEMPOOL_SIZE,
- SECURITY_TEST_SESSION_OBJECT_SIZE,
+ SECURITY_TEST_SESSION_OBJ_SZ,
0, 0, NULL, NULL, NULL, NULL,
SOCKET_ID_ANY, 0);
TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
"Cannot create mempool %s\n", rte_strerror(rte_errno));
+
+ ts_params->session_priv_mpool = rte_mempool_create(
+ SECURITY_TEST_PRIV_MEMPOOL_NAME,
+ SECURITY_TEST_MEMPOOL_SIZE,
+ SECURITY_TEST_SESSION_PRIV_OBJ_SZ,
+ 0, 0, NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (ts_params->session_priv_mpool == NULL) {
+ RTE_LOG(ERR, USER1, "TestCase %s() line %d failed (null): "
+ "Cannot create priv mempool %s\n",
+ __func__, __LINE__, rte_strerror(rte_errno));
+ rte_mempool_free(ts_params->session_mpool);
+ ts_params->session_mpool = NULL;
+ return TEST_FAILED;
+ }
+
return TEST_SUCCESS;
}
@@ -559,6 +609,10 @@ testsuite_teardown(void)
rte_mempool_free(ts_params->session_mpool);
ts_params->session_mpool = NULL;
}
+ if (ts_params->session_priv_mpool) {
+ rte_mempool_free(ts_params->session_priv_mpool);
+ ts_params->session_priv_mpool = NULL;
+ }
}
/**
@@ -656,10 +710,12 @@ ut_setup_with_session(void)
mock_session_create_exp.device = NULL;
mock_session_create_exp.conf = &ut_params->conf;
mock_session_create_exp.mp = ts_params->session_mpool;
+ mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
mock_session_create_exp.ret = 0;
sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
- ts_params->session_mpool);
+ ts_params->session_mpool,
+ ts_params->session_priv_mpool);
TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
sess);
TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
@@ -701,11 +757,13 @@ test_session_create_inv_context(void)
struct rte_security_session *sess;
sess = rte_security_session_create(NULL, &ut_params->conf,
- ts_params->session_mpool);
+ ts_params->session_mpool,
+ ts_params->session_priv_mpool);
TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
sess, NULL, "%p");
TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(0);
+ TEST_ASSERT_PRIV_MP_USAGE(0);
TEST_ASSERT_SESSION_COUNT(0);
return TEST_SUCCESS;
@@ -725,11 +783,13 @@ test_session_create_inv_context_ops(void)
ut_params->ctx.ops = NULL;
sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
- ts_params->session_mpool);
+ ts_params->session_mpool,
+ ts_params->session_priv_mpool);
TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
sess, NULL, "%p");
TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(0);
+ TEST_ASSERT_PRIV_MP_USAGE(0);
TEST_ASSERT_SESSION_COUNT(0);
return TEST_SUCCESS;
@@ -749,11 +809,13 @@ test_session_create_inv_context_ops_fun(void)
ut_params->ctx.ops = &empty_ops;
sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
- ts_params->session_mpool);
+ ts_params->session_mpool,
+ ts_params->session_priv_mpool);
TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
sess, NULL, "%p");
TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(0);
+ TEST_ASSERT_PRIV_MP_USAGE(0);
TEST_ASSERT_SESSION_COUNT(0);
return TEST_SUCCESS;
@@ -770,18 +832,21 @@ test_session_create_inv_configuration(void)
struct rte_security_session *sess;
sess = rte_security_session_create(&ut_params->ctx, NULL,
- ts_params->session_mpool);
+ ts_params->session_mpool,
+ ts_params->session_priv_mpool);
TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
sess, NULL, "%p");
TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(0);
+ TEST_ASSERT_PRIV_MP_USAGE(0);
TEST_ASSERT_SESSION_COUNT(0);
return TEST_SUCCESS;
}
/**
- * Test execution of rte_security_session_create with NULL mp parameter
+ * Test execution of rte_security_session_create with NULL session
+ * mempool
*/
static int
test_session_create_inv_mempool(void)
@@ -790,11 +855,35 @@ test_session_create_inv_mempool(void)
struct rte_security_session *sess;
sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
- NULL);
+ NULL, NULL);
TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
sess, NULL, "%p");
TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(0);
+ TEST_ASSERT_PRIV_MP_USAGE(0);
+ TEST_ASSERT_SESSION_COUNT(0);
+
+ return TEST_SUCCESS;
+}
+
+/**
+ * Test execution of rte_security_session_create with NULL session
+ * priv mempool
+ */
+static int
+test_session_create_inv_sess_priv_mempool(void)
+{
+ struct security_unittest_params *ut_params = &unittest_params;
+ struct security_testsuite_params *ts_params = &testsuite_params;
+ struct rte_security_session *sess;
+
+ sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
+ ts_params->session_mpool, NULL);
+ TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
+ sess, NULL, "%p");
+ TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
+ TEST_ASSERT_MEMPOOL_USAGE(0);
+ TEST_ASSERT_PRIV_MP_USAGE(0);
TEST_ASSERT_SESSION_COUNT(0);
return TEST_SUCCESS;
@@ -810,6 +899,7 @@ test_session_create_mempool_empty(void)
struct security_testsuite_params *ts_params = &testsuite_params;
struct security_unittest_params *ut_params = &unittest_params;
struct rte_security_session *tmp[SECURITY_TEST_MEMPOOL_SIZE];
+ void *tmp1[SECURITY_TEST_MEMPOOL_SIZE];
struct rte_security_session *sess;
/* Get all available objects from mempool. */
@@ -820,21 +910,34 @@ test_session_create_mempool_empty(void)
TEST_ASSERT_EQUAL(0, ret,
"Expect getting %d object from mempool"
" to succeed", i);
+ ret = rte_mempool_get(ts_params->session_priv_mpool,
+ (void **)(&tmp1[i]));
+ TEST_ASSERT_EQUAL(0, ret,
+ "Expect getting %d object from priv mempool"
+ " to succeed", i);
}
TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
+ TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
- ts_params->session_mpool);
+ ts_params->session_mpool,
+ ts_params->session_priv_mpool);
TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
sess, NULL, "%p");
TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
+ TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
TEST_ASSERT_SESSION_COUNT(0);
/* Put objects back to the pool. */
- for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i)
- rte_mempool_put(ts_params->session_mpool, (void *)(tmp[i]));
+ for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i) {
+ rte_mempool_put(ts_params->session_mpool,
+ (void *)(tmp[i]));
+ rte_mempool_put(ts_params->session_priv_mpool,
+ (tmp1[i]));
+ }
TEST_ASSERT_MEMPOOL_USAGE(0);
+ TEST_ASSERT_PRIV_MP_USAGE(0);
return TEST_SUCCESS;
}
@@ -853,14 +956,17 @@ test_session_create_ops_failure(void)
mock_session_create_exp.device = NULL;
mock_session_create_exp.conf = &ut_params->conf;
mock_session_create_exp.mp = ts_params->session_mpool;
+ mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
mock_session_create_exp.ret = -1; /* Return failure status. */
sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
- ts_params->session_mpool);
+ ts_params->session_mpool,
+ ts_params->session_priv_mpool);
TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
sess, NULL, "%p");
TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
TEST_ASSERT_MEMPOOL_USAGE(0);
+ TEST_ASSERT_PRIV_MP_USAGE(0);
TEST_ASSERT_SESSION_COUNT(0);
return TEST_SUCCESS;
@@ -879,10 +985,12 @@ test_session_create_success(void)
mock_session_create_exp.device = NULL;
mock_session_create_exp.conf = &ut_params->conf;
mock_session_create_exp.mp = ts_params->session_mpool;
+ mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
mock_session_create_exp.ret = 0; /* Return success status. */
sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
- ts_params->session_mpool);
+ ts_params->session_mpool,
+ ts_params->session_priv_mpool);
TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
sess);
TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
@@ -891,6 +999,7 @@ test_session_create_success(void)
sess, mock_session_create_exp.sess);
TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
/*
@@ -1276,6 +1385,7 @@ test_session_destroy_inv_context(void)
struct security_unittest_params *ut_params = &unittest_params;
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
int ret = rte_security_session_destroy(NULL, ut_params->sess);
@@ -1283,6 +1393,7 @@ test_session_destroy_inv_context(void)
ret, -EINVAL, "%d");
TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
return TEST_SUCCESS;
@@ -1299,6 +1410,7 @@ test_session_destroy_inv_context_ops(void)
ut_params->ctx.ops = NULL;
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1307,6 +1419,7 @@ test_session_destroy_inv_context_ops(void)
ret, -EINVAL, "%d");
TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
return TEST_SUCCESS;
@@ -1323,6 +1436,7 @@ test_session_destroy_inv_context_ops_fun(void)
ut_params->ctx.ops = &empty_ops;
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1331,6 +1445,7 @@ test_session_destroy_inv_context_ops_fun(void)
ret, -ENOTSUP, "%d");
TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
return TEST_SUCCESS;
@@ -1345,6 +1460,7 @@ test_session_destroy_inv_session(void)
struct security_unittest_params *ut_params = &unittest_params;
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
int ret = rte_security_session_destroy(&ut_params->ctx, NULL);
@@ -1352,6 +1468,7 @@ test_session_destroy_inv_session(void)
ret, -EINVAL, "%d");
TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
return TEST_SUCCESS;
@@ -1371,6 +1488,7 @@ test_session_destroy_ops_failure(void)
mock_session_destroy_exp.ret = -1;
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1396,6 +1514,7 @@ test_session_destroy_success(void)
mock_session_destroy_exp.sess = ut_params->sess;
mock_session_destroy_exp.ret = 0;
TEST_ASSERT_MEMPOOL_USAGE(1);
+ TEST_ASSERT_PRIV_MP_USAGE(1);
TEST_ASSERT_SESSION_COUNT(1);
int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1404,6 +1523,7 @@ test_session_destroy_success(void)
ret, 0, "%d");
TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 1);
TEST_ASSERT_MEMPOOL_USAGE(0);
+ TEST_ASSERT_PRIV_MP_USAGE(0);
TEST_ASSERT_SESSION_COUNT(0);
/*
@@ -2370,6 +2490,8 @@ static struct unit_test_suite security_testsuite = {
test_session_create_inv_configuration),
TEST_CASE_ST(ut_setup, ut_teardown,
test_session_create_inv_mempool),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_session_create_inv_sess_priv_mempool),
TEST_CASE_ST(ut_setup, ut_teardown,
test_session_create_mempool_empty),
TEST_CASE_ST(ut_setup, ut_teardown,
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index 127da2e4f..d30a79576 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -533,8 +533,12 @@ and this allows further acceleration of the offload of Crypto workloads.
The Security framework provides APIs to create and free sessions for crypto/ethernet
devices, where sessions are mempool objects. It is the application's responsibility
-to create and manage the session mempools. The mempool object size should be able to
-accommodate the driver's private data of security session.
+to create and manage two session mempools - one for session and other for session
+private data. The private session data mempool object size should be able to
+accommodate the driver's private data of security session. The application can get
+the size of session private data using API ``rte_security_session_get_size``.
+And the session mempool object size should be enough to accommodate
+``rte_security_session``.
Once the session mempools have been created, ``rte_security_session_create()``
is used to allocate and initialize a session for the required crypto/ethernet device.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 43cdd3c58..26be1b3de 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -164,13 +164,6 @@ Deprecation Notices
following the IPv6 header, as proposed in RFC
https://mails.dpdk.org/archives/dev/2020-August/177257.html.
-* security: The API ``rte_security_session_create`` takes only single mempool
- for session and session private data. So the application need to create
- mempool for twice the number of sessions needed and will also lead to
- wastage of memory as session private data need more memory compared to session.
- Hence the API will be modified to take two mempool pointers - one for session
- and one for private data.
-
* cryptodev: support for using IV with all sizes is added, J0 still can
be used but only when IV length in following structs ``rte_crypto_auth_xform``,
``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index f1b9b4dfe..0fb1b20cb 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -344,6 +344,12 @@ API Changes
* The structure ``rte_crypto_sym_vec`` is updated to support both
cpu_crypto synchrounous operation and asynchronous raw data-path APIs.
+* security: The API ``rte_security_session_create`` is updated to take two
+ mempool objects: one for the session and the other for the session private
+ data. The application therefore needs to create two mempools, using the
+ ``rte_security_session_get_size`` API to obtain the object size for the
+ private data mempool.
+
ABI Changes
-----------
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 60132c4bd..2326089bb 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2348,12 +2348,8 @@ session_pool_init(struct socket_ctx *ctx, int32_t socket_id, size_t sess_sz)
snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
"sess_mp_%u", socket_id);
- /*
- * Doubled due to rte_security_session_create() uses one mempool for
- * session and for session private data.
- */
nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
- rte_lcore_count()) * 2;
+ rte_lcore_count());
sess_mp = rte_cryptodev_sym_session_pool_create(
mp_name, nb_sess, sess_sz, CDEV_MP_CACHE_SZ, 0,
socket_id);
@@ -2376,12 +2372,8 @@ session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
"sess_mp_priv_%u", socket_id);
- /*
- * Doubled due to rte_security_session_create() uses one mempool for
- * session and for session private data.
- */
nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
- rte_lcore_count()) * 2;
+ rte_lcore_count());
sess_mp = rte_mempool_create(mp_name,
nb_sess,
sess_sz,
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 01faa7ac7..6baeeb342 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -117,7 +117,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
set_ipsec_conf(sa, &(sess_conf.ipsec));
ips->security.ses = rte_security_session_create(ctx,
- &sess_conf, ipsec_ctx->session_priv_pool);
+ &sess_conf, ipsec_ctx->session_pool,
+ ipsec_ctx->session_priv_pool);
if (ips->security.ses == NULL) {
RTE_LOG(ERR, IPSEC,
"SEC Session init failed: err: %d\n", ret);
@@ -198,7 +199,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
}
ips->security.ses = rte_security_session_create(sec_ctx,
- &sess_conf, skt_ctx->session_pool);
+ &sess_conf, skt_ctx->session_pool,
+ skt_ctx->session_priv_pool);
if (ips->security.ses == NULL) {
RTE_LOG(ERR, IPSEC,
"SEC Session init failed: err: %d\n", ret);
@@ -378,7 +380,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
sess_conf.userdata = (void *) sa;
ips->security.ses = rte_security_session_create(sec_ctx,
- &sess_conf, skt_ctx->session_pool);
+ &sess_conf, skt_ctx->session_pool,
+ skt_ctx->session_priv_pool);
if (ips->security.ses == NULL) {
RTE_LOG(ERR, IPSEC,
"SEC Session init failed: err: %d\n", ret);
diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c
index 515c29e04..ee4666026 100644
--- a/lib/librte_security/rte_security.c
+++ b/lib/librte_security/rte_security.c
@@ -26,18 +26,21 @@
struct rte_security_session *
rte_security_session_create(struct rte_security_ctx *instance,
struct rte_security_session_conf *conf,
- struct rte_mempool *mp)
+ struct rte_mempool *mp,
+ struct rte_mempool *priv_mp)
{
struct rte_security_session *sess = NULL;
RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_create, NULL, NULL);
RTE_PTR_OR_ERR_RET(conf, NULL);
RTE_PTR_OR_ERR_RET(mp, NULL);
+ RTE_PTR_OR_ERR_RET(priv_mp, NULL);
if (rte_mempool_get(mp, (void **)&sess))
return NULL;
- if (instance->ops->session_create(instance->device, conf, sess, mp)) {
+ if (instance->ops->session_create(instance->device, conf,
+ sess, priv_mp)) {
rte_mempool_put(mp, (void *)sess);
return NULL;
}
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 16839e539..1710cdd6a 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -386,6 +386,7 @@ struct rte_security_session {
* @param instance security instance
* @param conf session configuration parameters
* @param mp mempool to allocate session objects from
+ * @param priv_mp mempool to allocate session private data objects from
* @return
* - On success, pointer to session
* - On failure, NULL
@@ -393,7 +394,8 @@ struct rte_security_session {
struct rte_security_session *
rte_security_session_create(struct rte_security_ctx *instance,
struct rte_security_session_conf *conf,
- struct rte_mempool *mp);
+ struct rte_mempool *mp,
+ struct rte_mempool *priv_mp);
/**
* Update security session as specified by the session configuration
--
2.17.1
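
For reference, a minimal sketch of the new two-mempool call sequence
(illustrative only, not taken from the patch; pool names, sizing and
error handling are assumptions):

#include <rte_mempool.h>
#include <rte_security.h>

static struct rte_security_session *
create_session(struct rte_security_ctx *ctx,
               struct rte_security_session_conf *conf,
               unsigned int nb_sess, int socket_id)
{
        /* Pool of session handles; objects hold rte_security_session. */
        struct rte_mempool *mp = rte_mempool_create("sess_mp", nb_sess,
                        sizeof(struct rte_security_session), 0, 0,
                        NULL, NULL, NULL, NULL, socket_id, 0);
        /* Pool of driver private data, sized via the query API. */
        struct rte_mempool *priv_mp = rte_mempool_create("sess_priv_mp",
                        nb_sess, rte_security_session_get_size(ctx), 0, 0,
                        NULL, NULL, NULL, NULL, socket_id, 0);

        if (mp == NULL || priv_mp == NULL)
                return NULL;
        /* The updated API takes both pools. */
        return rte_security_session_create(ctx, conf, mp, priv_mp);
}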
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v3] eventdev: update app and examples for new eventdev ABI
@ 2020-10-14 17:33 6% ` Timothy McDaniel
2020-10-14 20:01 4% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Timothy McDaniel @ 2020-10-14 17:33 UTC (permalink / raw)
To: Jerin Jacob, Harry van Haaren, Marko Kovacevic, Ori Kam,
Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev, erik.g.carrillo, gage.eads
Several data structures and constants changed, or were added,
in the previous patch. This commit updates the dependent
apps and examples to use the new ABI.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
app/test-eventdev/evt_common.h | 11 ++++++++
app/test-eventdev/test_order_atq.c | 28 +++++++++++++++------
app/test-eventdev/test_order_common.c | 1 +
app/test-eventdev/test_order_queue.c | 29 ++++++++++++++++------
app/test/test_eventdev.c | 4 +--
.../eventdev_pipeline/pipeline_worker_generic.c | 6 +++--
examples/eventdev_pipeline/pipeline_worker_tx.c | 1 +
examples/l2fwd-event/l2fwd_event_generic.c | 7 ++++--
examples/l2fwd-event/l2fwd_event_internal_port.c | 6 +++--
examples/l3fwd/l3fwd_event_generic.c | 7 ++++--
examples/l3fwd/l3fwd_event_internal_port.c | 6 +++--
11 files changed, 80 insertions(+), 26 deletions(-)
diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index f9d7378..a1da1cf 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -104,6 +104,16 @@ evt_has_all_types_queue(uint8_t dev_id)
true : false;
}
+static inline bool
+evt_has_flow_id(uint8_t dev_id)
+{
+ struct rte_event_dev_info dev_info;
+
+ rte_event_dev_info_get(dev_id, &dev_info);
+ return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID) ?
+ true : false;
+}
+
static inline int
evt_service_setup(uint32_t service_id)
{
@@ -169,6 +179,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
.dequeue_timeout_ns = opt->deq_tmo_nsec,
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 0,
.nb_events_limit = info.max_num_events,
.nb_event_queue_flows = opt->nb_flows,
.nb_event_port_dequeue_depth =
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 3366cfc..cfcb1dc 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -19,7 +19,7 @@ order_atq_process_stage_0(struct rte_event *const ev)
}
static int
-order_atq_worker(void *arg)
+order_atq_worker(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev;
@@ -34,6 +34,9 @@ order_atq_worker(void *arg)
continue;
}
+ if (!flow_id_cap)
+ ev.flow_id = ev.mbuf->udata64;
+
if (ev.sub_event_type == 0) { /* stage 0 from producer */
order_atq_process_stage_0(&ev);
while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_atq_worker(void *arg)
}
static int
-order_atq_worker_burst(void *arg)
+order_atq_worker_burst(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,9 @@ order_atq_worker_burst(void *arg)
}
for (i = 0; i < nb_rx; i++) {
+ if (!flow_id_cap)
+ ev[i].flow_id = ev[i].mbuf->udata64;
+
if (ev[i].sub_event_type == 0) { /*stage 0 */
order_atq_process_stage_0(&ev[i]);
} else if (ev[i].sub_event_type == 1) { /* stage 1 */
@@ -95,11 +101,19 @@ worker_wrapper(void *arg)
{
struct worker_data *w = arg;
const bool burst = evt_has_burst_mode(w->dev_id);
-
- if (burst)
- return order_atq_worker_burst(arg);
- else
- return order_atq_worker(arg);
+ const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+ if (burst) {
+ if (flow_id_cap)
+ return order_atq_worker_burst(arg, true);
+ else
+ return order_atq_worker_burst(arg, false);
+ } else {
+ if (flow_id_cap)
+ return order_atq_worker(arg, true);
+ else
+ return order_atq_worker(arg, false);
+ }
}
static int
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 4190f9a..7942390 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -49,6 +49,7 @@ order_producer(void *arg)
const uint32_t flow = (uintptr_t)m % nb_flows;
/* Maintain seq number per flow */
m->seqn = producer_flow_seq[flow]++;
+ m->udata64 = flow;
ev.flow_id = flow;
ev.mbuf = m;
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 495efd9..1511c00 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -19,7 +19,7 @@ order_queue_process_stage_0(struct rte_event *const ev)
}
static int
-order_queue_worker(void *arg)
+order_queue_worker(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev;
@@ -34,6 +34,9 @@ order_queue_worker(void *arg)
continue;
}
+ if (!flow_id_cap)
+ ev.flow_id = ev.mbuf->udata64;
+
if (ev.queue_id == 0) { /* from ordered queue */
order_queue_process_stage_0(&ev);
while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
@@ -50,7 +53,7 @@ order_queue_worker(void *arg)
}
static int
-order_queue_worker_burst(void *arg)
+order_queue_worker_burst(void *arg, const bool flow_id_cap)
{
ORDER_WORKER_INIT;
struct rte_event ev[BURST_SIZE];
@@ -68,6 +71,10 @@ order_queue_worker_burst(void *arg)
}
for (i = 0; i < nb_rx; i++) {
+
+ if (!flow_id_cap)
+ ev[i].flow_id = ev[i].mbuf->udata64;
+
if (ev[i].queue_id == 0) { /* from ordered queue */
order_queue_process_stage_0(&ev[i]);
} else if (ev[i].queue_id == 1) {/* from atomic queue */
@@ -95,11 +102,19 @@ worker_wrapper(void *arg)
{
struct worker_data *w = arg;
const bool burst = evt_has_burst_mode(w->dev_id);
-
- if (burst)
- return order_queue_worker_burst(arg);
- else
- return order_queue_worker(arg);
+ const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+ if (burst) {
+ if (flow_id_cap)
+ return order_queue_worker_burst(arg, true);
+ else
+ return order_queue_worker_burst(arg, false);
+ } else {
+ if (flow_id_cap)
+ return order_queue_worker(arg, true);
+ else
+ return order_queue_worker(arg, false);
+ }
}
static int
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 43ccb1c..62019c1 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
if (!(info.event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
pconf.enqueue_depth = info.max_event_port_enqueue_depth;
- pconf.disable_implicit_release = 1;
+ pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
- pconf.disable_implicit_release = 0;
+ pconf.event_port_cfg = 0;
}
ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 42ff4ee..f70ab0c 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
struct rte_event_dev_config config = {
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 1,
.nb_events_limit = 4096,
.nb_event_queue_flows = 1024,
.nb_event_port_dequeue_depth = 128,
@@ -143,7 +144,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
.schedule_type = cdata.queue_type,
.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
.nb_atomic_flows = 1024,
- .nb_atomic_order_sequences = 1024,
+ .nb_atomic_order_sequences = 1024,
};
struct rte_event_queue_conf tx_q_conf = {
.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
@@ -167,7 +168,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
disable_implicit_release = (dev_info.event_dev_cap &
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
- wkr_p_conf.disable_implicit_release = disable_implicit_release;
+ wkr_p_conf.event_port_cfg = disable_implicit_release ?
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
if (dev_info.max_num_events < config.nb_events_limit)
config.nb_events_limit = dev_info.max_num_events;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 55bb2f7..ca6cd20 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
struct rte_event_dev_config config = {
.nb_event_queues = nb_queues,
.nb_event_ports = nb_ports,
+ .nb_single_link_event_port_queues = 0,
.nb_events_limit = 4096,
.nb_event_queue_flows = 1024,
.nb_event_port_dequeue_depth = 128,
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2dc95e5..9a3167c 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -126,8 +126,11 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 63d57b4..203a14c 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -123,8 +123,10 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
event_p_id++) {
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index f8c9843..c80573f 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -115,8 +115,11 @@ l3fwd_event_port_setup_generic(void)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
+
evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 03ac581..9916a7f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -113,8 +113,10 @@ l3fwd_event_port_setup_internal_port(void)
if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
- event_p_conf.disable_implicit_release =
- evt_rsrc->disable_implicit_release;
+ event_p_conf.event_port_cfg = 0;
+ if (evt_rsrc->disable_implicit_release)
+ event_p_conf.event_port_cfg |=
+ RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
event_p_id++) {
--
2.6.4
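
The port-configuration idiom applied throughout the examples above can be
summarized in a short sketch (illustrative only; the device and port ids
are assumptions): implicit release is no longer a bit-field but a flag in
event_port_cfg, set only when the device reports the capability.

#include <rte_eventdev.h>

static int
setup_port(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event_dev_info info;
        struct rte_event_port_conf p_conf;

        rte_event_dev_info_get(dev_id, &info);
        rte_event_port_default_conf_get(dev_id, port_id, &p_conf);

        /* Replacement for the removed disable_implicit_release field. */
        p_conf.event_port_cfg = 0;
        if (info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
                p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;

        return rte_event_port_setup(dev_id, port_id, &p_conf);
}

The same capability-query pattern drives the new
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID check in the order tests: when the
capability is absent, the producer stashes the flow id in mbuf->udata64
and the workers restore it after dequeue.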
^ permalink raw reply [relevance 6%]
* Re: [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets
2020-10-14 16:35 3% ` [dpdk-dev] [PATCH v7 0/5] " Dekel Peled
2020-10-14 16:35 4% ` [dpdk-dev] [PATCH v7 1/5] ethdev: add extensions attributes to IPv6 item Dekel Peled
@ 2020-10-14 17:18 0% ` Ferruh Yigit
1 sibling, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-10-14 17:18 UTC (permalink / raw)
To: Dekel Peled, orika, thomas, arybchenko, konstantin.ananyev,
olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
shahafs, viacheslavo
Cc: dev
On 10/14/2020 5:35 PM, Dekel Peled wrote:
> This series implements support for matching on packets based on the
> fragmentation attribute of the packet, i.e. whether the packet is a
> fragment of a larger packet or, conversely, not a fragment.
>
> In ethdev, add API to support IPv6 extension headers, and specifically
> the IPv6 fragment extension header item.
> Testpmd CLI is updated accordingly.
> Documentation is updated accordingly.
>
> ---
> v2: add patch 'net/mlx5: enforce limitation on IPv6 next proto'
> v3: update patch 'ethdev: add IPv6 fragment extension header item' to avoid ABI breakage.
> v4: update rte_flow documentation to clarify use of IPv6 extension header flags.
> v5: update following rebase on recent ICMP changes.
> v6: - move MLX5 PMD patches to separate series.
> - rename IPv6 extension flags for clarity (e.g. frag_ext_exist renamed to has_frag_ext).
> v7: remove the announcement from deprecation file.
> ---
>
> Dekel Peled (5):
> ethdev: add extensions attributes to IPv6 item
> ethdev: add IPv6 fragment extension header item
> app/testpmd: support IPv4 fragments
> app/testpmd: support IPv6 fragments
> app/testpmd: support IPv6 fragment extension item
>
Series applied to dpdk-next-net/main, thanks.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets
@ 2020-10-14 16:35 3% ` Dekel Peled
2020-10-14 16:35 4% ` [dpdk-dev] [PATCH v7 1/5] ethdev: add extensions attributes to IPv6 item Dekel Peled
2020-10-14 17:18 0% ` [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets Ferruh Yigit
0 siblings, 2 replies; 200+ results
From: Dekel Peled @ 2020-10-14 16:35 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
shahafs, viacheslavo
Cc: dev
This series implements support for matching on packets based on the
fragmentation attribute of the packet, i.e. whether the packet is a
fragment of a larger packet or, conversely, not a fragment.
In ethdev, add API to support IPv6 extension headers, and specifically
the IPv6 fragment extension header item.
Testpmd CLI is updated accordingly.
Documentation is updated accordingly.
---
v2: add patch 'net/mlx5: enforce limitation on IPv6 next proto'
v3: update patch 'ethdev: add IPv6 fragment extension header item' to avoid ABI breakage.
v4: update rte_flow documentation to clarify use of IPv6 extension header flags.
v5: update following rebase on recent ICMP changes.
v6: - move MLX5 PMD patches to separate series.
- rename IPv6 extension flags for clarity (e.g. frag_ext_exist renamed to has_frag_ext).
v7: remove the announcement from deprecation file.
---
Dekel Peled (5):
ethdev: add extensions attributes to IPv6 item
ethdev: add IPv6 fragment extension header item
app/testpmd: support IPv4 fragments
app/testpmd: support IPv6 fragments
app/testpmd: support IPv6 fragment extension item
app/test-pmd/cmdline_flow.c | 53 ++++++++++++++++++++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 32 ++++++++++++++++++--
doc/guides/rel_notes/deprecation.rst | 5 ----
doc/guides/rel_notes/release_20_11.rst | 5 ++++
lib/librte_ethdev/rte_flow.c | 1 +
lib/librte_ethdev/rte_flow.h | 43 +++++++++++++++++++++++++--
lib/librte_ip_frag/rte_ip_frag.h | 26 ++---------------
lib/librte_net/rte_ip.h | 26 +++++++++++++++--
8 files changed, 155 insertions(+), 36 deletions(-)
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v7 1/5] ethdev: add extensions attributes to IPv6 item
2020-10-14 16:35 3% ` [dpdk-dev] [PATCH v7 0/5] " Dekel Peled
@ 2020-10-14 16:35 4% ` Dekel Peled
2020-10-14 17:18 0% ` [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets Ferruh Yigit
1 sibling, 0 replies; 200+ results
From: Dekel Peled @ 2020-10-14 16:35 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
shahafs, viacheslavo
Cc: dev
With the current implementation of DPDK, an application has no simple way
to match on IPv6 packets based on the existing extension headers.
The 'Next Header' field in the IPv6 header indicates the type of the first
extension header only. Subsequent extension headers can't be identified by
inspecting the IPv6 header.
As a result, the existence or absence of specific extension headers
can't be used for packet matching.
For example, fragmented IPv6 packets contain a dedicated extension header
(which is implemented in a later patch of this series).
Non-fragmented packets don't contain the fragment extension header.
The current implementation doesn't provide a suitable way for an
application to match on non-fragmented IPv6 packets:
matching on the Next Header field is not sufficient, since additional
extension headers might be present in the same packet.
The same difficulty exists for matching on fragmented IPv6 packets.
This patch implements the update as detailed in RFC [1].
A set of additional values is added to the IPv6 flow item struct.
These values indicate the existence of every defined extension
header type, providing a simple means of identifying the
extensions present in the packet header.
Continuing the above example, fragmented packets can be identified using
the specific value indicating the existence of the fragment extension header.
To match on non-fragmented IPv6 packets, use has_frag_ext 0.
To match on fragmented IPv6 packets, use has_frag_ext 1.
To match on any IPv6 packet, the has_frag_ext field should
not be specified for match.
[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html
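
As an illustration (not part of the patch), a pattern matching only
fragmented IPv6 packets could look as follows; setting the bit in both
spec and mask, with the rest of the mask left empty, restricts matching
to the new flag:

/* Hypothetical pattern: match only fragmented IPv6 packets. */
struct rte_flow_item_ipv6 ipv6_spec = { .has_frag_ext = 1 };
struct rte_flow_item_ipv6 ipv6_mask = { .has_frag_ext = 1 };
struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6,
          .spec = &ipv6_spec, .mask = &ipv6_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};

Matching non-fragmented packets uses the same mask with has_frag_ext left
at 0 in the spec.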
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++++++++---
doc/guides/rel_notes/deprecation.rst | 5 -----
doc/guides/rel_notes/release_20_11.rst | 5 +++++
lib/librte_ethdev/rte_flow.h | 23 +++++++++++++++++++++--
4 files changed, 43 insertions(+), 10 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index f26a6c2..97fdf2a 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -946,11 +946,25 @@ Item: ``IPV6``
Matches an IPv6 header.
-Note: IPv6 options are handled by dedicated pattern items, see `Item:
-IPV6_EXT`_.
+Dedicated flags indicate if the header contains specific extension headers.
+To match on packets containing a specific extension header, an application
+should match on the dedicated flag set to 1.
+To match on packets not containing a specific extension header, an application
+should match on the dedicated flag cleared to 0.
+If the application doesn't care about the existence of a specific extension
+header, it should not specify the dedicated flag for matching.
- ``hdr``: IPv6 header definition (``rte_ip.h``).
-- Default ``mask`` matches source and destination addresses only.
+- ``has_hop_ext``: header contains Hop-by-Hop Options extension header.
+- ``has_route_ext``: header contains Routing extension header.
+- ``has_frag_ext``: header contains Fragment extension header.
+- ``has_auth_ext``: header contains Authentication extension header.
+- ``has_esp_ext``: header contains Encapsulation Security Payload extension header.
+- ``has_dest_ext``: header contains Destination Options extension header.
+- ``has_mobil_ext``: header contains Mobility extension header.
+- ``has_hip_ext``: header contains Host Identity Protocol extension header.
+- ``has_shim6_ext``: header contains Shim6 Protocol extension header.
+- Default ``mask`` matches ``hdr`` source and destination addresses only.
Item: ``ICMP``
^^^^^^^^^^^^^^
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 584e720..87a7c44 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -159,11 +159,6 @@ Deprecation Notices
or absence of a VLAN header following the current header, as proposed in RFC
https://mails.dpdk.org/archives/dev/2020-August/177536.html.
-* ethdev: The ``struct rte_flow_item_ipv6`` struct will be modified to include
- additional values, indicating existence or absence of IPv6 extension headers
- following the IPv6 header, as proposed in RFC
- https://mails.dpdk.org/archives/dev/2020-August/177257.html.
-
* security: The API ``rte_security_session_create`` takes only single mempool
for session and session private data. So the application need to create
mempool for twice the number of sessions needed and will also lead to
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 30db8f2..730e9df 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -351,6 +351,11 @@ ABI Changes
* ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
+ * Added extensions' attributes to struct ``rte_flow_item_ipv6``.
+ A set of additional values was added to the struct, indicating the
+ existence of every defined extension header type.
+ Applications should use the new values for identification of existing
+ extensions in the packet header.
Known Issues
------------
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 3d5fb09..aa18925 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -792,11 +792,30 @@ struct rte_flow_item_ipv4 {
*
* Matches an IPv6 header.
*
- * Note: IPv6 options are handled by dedicated pattern items, see
- * RTE_FLOW_ITEM_TYPE_IPV6_EXT.
+ * Dedicated flags indicate if the header contains specific extension headers.
*/
struct rte_flow_item_ipv6 {
struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
+ uint32_t has_hop_ext:1;
+ /**< Header contains Hop-by-Hop Options extension header. */
+ uint32_t has_route_ext:1;
+ /**< Header contains Routing extension header. */
+ uint32_t has_frag_ext:1;
+ /**< Header contains Fragment extension header. */
+ uint32_t has_auth_ext:1;
+ /**< Header contains Authentication extension header. */
+ uint32_t has_esp_ext:1;
+ /**< Header contains Encapsulation Security Payload extension header. */
+ uint32_t has_dest_ext:1;
+ /**< Header contains Destination Options extension header. */
+ uint32_t has_mobil_ext:1;
+ /**< Header contains Mobility extension header. */
+ uint32_t has_hip_ext:1;
+ /**< Header contains Host Identity Protocol extension header. */
+ uint32_t has_shim6_ext:1;
+ /**< Header contains Shim6 Protocol extension header. */
+ uint32_t reserved:23;
+ /**< Reserved for future extension headers, must be zero. */
};
/** Default mask for RTE_FLOW_ITEM_TYPE_IPV6. */
--
1.8.3.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v6 03/18] eal: rename lcore word choices
@ 2020-10-14 15:27 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-14 15:27 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Anatoly Burakov
Replace "master lcore" with "main lcore" and
"slave lcore" with "worker lcore".
Keep the old functions and macros but mark them as deprecated
for this release.
The "--master-lcore" command line option is also deprecated;
any usage will print a warning and use "--main-lcore"
as the replacement.
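
A minimal sketch of the renamed launch idiom (illustrative, not from the
patch):

#include <stdio.h>
#include <rte_common.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static int
hello(void *arg __rte_unused)
{
        printf("hello from lcore %u\n", rte_lcore_id());
        return 0;
}

static void
run_everywhere(void)
{
        unsigned int lcore_id;

        /* CALL_MAIN replaces CALL_MASTER; the old name still compiles
         * but now emits a deprecation warning. */
        rte_eal_mp_remote_launch(hello, NULL, CALL_MAIN);

        /* RTE_LCORE_FOREACH_WORKER replaces RTE_LCORE_FOREACH_SLAVE. */
        RTE_LCORE_FOREACH_WORKER(lcore_id)
                rte_eal_wait_lcore(lcore_id);
}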
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/rel_notes/deprecation.rst | 19 -------
doc/guides/rel_notes/release_20_11.rst | 11 ++++
lib/librte_eal/common/eal_common_dynmem.c | 10 ++--
lib/librte_eal/common/eal_common_launch.c | 36 ++++++------
lib/librte_eal/common/eal_common_lcore.c | 8 +--
lib/librte_eal/common/eal_common_options.c | 64 ++++++++++++----------
lib/librte_eal/common/eal_options.h | 2 +
lib/librte_eal/common/eal_private.h | 6 +-
lib/librte_eal/common/rte_random.c | 2 +-
lib/librte_eal/common/rte_service.c | 2 +-
lib/librte_eal/freebsd/eal.c | 28 +++++-----
lib/librte_eal/freebsd/eal_thread.c | 32 +++++------
lib/librte_eal/include/rte_eal.h | 4 +-
lib/librte_eal/include/rte_eal_trace.h | 4 +-
lib/librte_eal/include/rte_launch.h | 60 ++++++++++----------
lib/librte_eal/include/rte_lcore.h | 35 ++++++++----
lib/librte_eal/linux/eal.c | 28 +++++-----
lib/librte_eal/linux/eal_memory.c | 10 ++--
lib/librte_eal/linux/eal_thread.c | 32 +++++------
lib/librte_eal/rte_eal_version.map | 2 +-
lib/librte_eal/windows/eal.c | 16 +++---
lib/librte_eal/windows/eal_thread.c | 30 +++++-----
22 files changed, 230 insertions(+), 211 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 584e72087934..7271e9ca4d39 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -20,25 +20,6 @@ Deprecation Notices
* kvargs: The function ``rte_kvargs_process`` will get a new parameter
for returning key match count. It will ease handling of no-match case.
-* eal: To be more inclusive in choice of naming, the DPDK project
- will replace uses of master/slave in the API's and command line arguments.
-
- References to master/slave in relation to lcore will be renamed
- to initial/worker. The function ``rte_get_master_lcore()``
- will be renamed to ``rte_get_initial_lcore()``.
- For the 20.11 release, both names will be present and the
- old function will be marked with the deprecated tag.
- The old function will be removed in a future version.
-
- The iterator for worker lcores will also change:
- ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
- ``RTE_LCORE_FOREACH_WORKER``.
-
- The ``master-lcore`` argument to testpmd will be replaced
- with ``initial-lcore``. The old ``master-lcore`` argument
- will produce a runtime notification in 20.11 release, and
- be removed completely in a future release.
-
* eal: The terms blacklist and whitelist to describe devices used
by DPDK will be replaced in the 20.11 relase.
This will apply to command line arguments as well as macros.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 71665c1de65f..bbc64ea2e3a6 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -298,6 +298,17 @@ API Changes
* bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
+* eal: The function ``rte_get_master_lcore()`` has been replaced by
+ ``rte_get_main_lcore()``. The old function is deprecated.
+
+ The iterator for worker lcores has also changed:
+ ``RTE_LCORE_FOREACH_SLAVE`` is replaced with
+ ``RTE_LCORE_FOREACH_WORKER``.
+
+ The ``master-lcore`` argument to testpmd is replaced
+ with ``main-lcore``. The old ``master-lcore`` argument
+ produces a runtime notification in the 20.11 release, and
+ will be removed completely in a future release.
ABI Changes
-----------
diff --git a/lib/librte_eal/common/eal_common_dynmem.c b/lib/librte_eal/common/eal_common_dynmem.c
index 614648d8a4de..1cefe52443c4 100644
--- a/lib/librte_eal/common/eal_common_dynmem.c
+++ b/lib/librte_eal/common/eal_common_dynmem.c
@@ -427,19 +427,19 @@ eal_dynmem_calc_num_pages_per_socket(
total_size -= default_size;
}
#else
- /* in 32-bit mode, allocate all of the memory only on master
+ /* in 32-bit mode, allocate all of the memory only on main
* lcore socket
*/
total_size = internal_conf->memory;
for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
socket++) {
struct rte_config *cfg = rte_eal_get_configuration();
- unsigned int master_lcore_socket;
+ unsigned int main_lcore_socket;
- master_lcore_socket =
- rte_lcore_to_socket_id(cfg->master_lcore);
+ main_lcore_socket =
+ rte_lcore_to_socket_id(cfg->main_lcore);
- if (master_lcore_socket != socket)
+ if (main_lcore_socket != socket)
continue;
/* Update sizes */
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index cf52d717f68e..34f854ad80c8 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -21,55 +21,55 @@
* Wait until a lcore finished its job.
*/
int
-rte_eal_wait_lcore(unsigned slave_id)
+rte_eal_wait_lcore(unsigned worker_id)
{
- if (lcore_config[slave_id].state == WAIT)
+ if (lcore_config[worker_id].state == WAIT)
return 0;
- while (lcore_config[slave_id].state != WAIT &&
- lcore_config[slave_id].state != FINISHED)
+ while (lcore_config[worker_id].state != WAIT &&
+ lcore_config[worker_id].state != FINISHED)
rte_pause();
rte_rmb();
/* we are in finished state, go to wait state */
- lcore_config[slave_id].state = WAIT;
- return lcore_config[slave_id].ret;
+ lcore_config[worker_id].state = WAIT;
+ return lcore_config[worker_id].ret;
}
/*
- * Check that every SLAVE lcores are in WAIT state, then call
- * rte_eal_remote_launch() for all of them. If call_master is true
- * (set to CALL_MASTER), also call the function on the master lcore.
+ * Check that every WORKER lcores are in WAIT state, then call
+ * rte_eal_remote_launch() for all of them. If call_main is true
+ * (set to CALL_MAIN), also call the function on the main lcore.
*/
int
rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
- enum rte_rmt_call_master_t call_master)
+ enum rte_rmt_call_main_t call_main)
{
int lcore_id;
- int master = rte_get_master_lcore();
+ int main_lcore = rte_get_main_lcore();
/* check state of lcores */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (lcore_config[lcore_id].state != WAIT)
return -EBUSY;
}
/* send messages to cores */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
rte_eal_remote_launch(f, arg, lcore_id);
}
- if (call_master == CALL_MASTER) {
- lcore_config[master].ret = f(arg);
- lcore_config[master].state = FINISHED;
+ if (call_main == CALL_MAIN) {
+ lcore_config[main_lcore].ret = f(arg);
+ lcore_config[main_lcore].state = FINISHED;
}
return 0;
}
/*
- * Return the state of the lcore identified by slave_id.
+ * Return the state of the lcore identified by worker_id.
*/
enum rte_lcore_state_t
rte_eal_get_lcore_state(unsigned lcore_id)
@@ -86,7 +86,7 @@ rte_eal_mp_wait_lcore(void)
{
unsigned lcore_id;
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
rte_eal_wait_lcore(lcore_id);
}
}
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index d64569b3c758..66d6bad1a7d7 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -18,9 +18,9 @@
#include "eal_private.h"
#include "eal_thread.h"
-unsigned int rte_get_master_lcore(void)
+unsigned int rte_get_main_lcore(void)
{
- return rte_eal_get_configuration()->master_lcore;
+ return rte_eal_get_configuration()->main_lcore;
}
unsigned int rte_lcore_count(void)
@@ -93,7 +93,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
return cfg->lcore_role[lcore_id] == ROLE_RTE;
}
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
{
i++;
if (wrap)
@@ -101,7 +101,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
while (i < RTE_MAX_LCORE) {
if (!rte_lcore_is_enabled(i) ||
- (skip_master && (i == rte_get_master_lcore()))) {
+ (skip_main && (i == rte_get_main_lcore()))) {
i++;
if (wrap)
i %= RTE_MAX_LCORE;
diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index a5426e12346a..d221886eb22c 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -81,6 +81,7 @@ eal_long_options[] = {
{OPT_TRACE_BUF_SIZE, 1, NULL, OPT_TRACE_BUF_SIZE_NUM },
{OPT_TRACE_MODE, 1, NULL, OPT_TRACE_MODE_NUM },
{OPT_MASTER_LCORE, 1, NULL, OPT_MASTER_LCORE_NUM },
+ {OPT_MAIN_LCORE, 1, NULL, OPT_MAIN_LCORE_NUM },
{OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM},
{OPT_NO_HPET, 0, NULL, OPT_NO_HPET_NUM },
{OPT_NO_HUGE, 0, NULL, OPT_NO_HUGE_NUM },
@@ -144,7 +145,7 @@ struct device_option {
static struct device_option_list devopt_list =
TAILQ_HEAD_INITIALIZER(devopt_list);
-static int master_lcore_parsed;
+static int main_lcore_parsed;
static int mem_parsed;
static int core_parsed;
@@ -575,12 +576,12 @@ eal_parse_service_coremask(const char *coremask)
for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE;
j++, idx++) {
if ((1 << j) & val) {
- /* handle master lcore already parsed */
+ /* handle main lcore already parsed */
uint32_t lcore = idx;
- if (master_lcore_parsed &&
- cfg->master_lcore == lcore) {
+ if (main_lcore_parsed &&
+ cfg->main_lcore == lcore) {
RTE_LOG(ERR, EAL,
- "lcore %u is master lcore, cannot use as service core\n",
+ "lcore %u is main lcore, cannot use as service core\n",
idx);
return -1;
}
@@ -748,12 +749,12 @@ eal_parse_service_corelist(const char *corelist)
min = idx;
for (idx = min; idx <= max; idx++) {
if (cfg->lcore_role[idx] != ROLE_SERVICE) {
- /* handle master lcore already parsed */
+ /* handle main lcore already parsed */
uint32_t lcore = idx;
- if (cfg->master_lcore == lcore &&
- master_lcore_parsed) {
+ if (cfg->main_lcore == lcore &&
+ main_lcore_parsed) {
RTE_LOG(ERR, EAL,
- "Error: lcore %u is master lcore, cannot use as service core\n",
+ "Error: lcore %u is main lcore, cannot use as service core\n",
idx);
return -1;
}
@@ -836,25 +837,25 @@ eal_parse_corelist(const char *corelist, int *cores)
return 0;
}
-/* Changes the lcore id of the master thread */
+/* Changes the lcore id of the main thread */
static int
-eal_parse_master_lcore(const char *arg)
+eal_parse_main_lcore(const char *arg)
{
char *parsing_end;
struct rte_config *cfg = rte_eal_get_configuration();
errno = 0;
- cfg->master_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
+ cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
if (errno || parsing_end[0] != 0)
return -1;
- if (cfg->master_lcore >= RTE_MAX_LCORE)
+ if (cfg->main_lcore >= RTE_MAX_LCORE)
return -1;
- master_lcore_parsed = 1;
+ main_lcore_parsed = 1;
- /* ensure master core is not used as service core */
- if (lcore_config[cfg->master_lcore].core_role == ROLE_SERVICE) {
+ /* ensure main core is not used as service core */
+ if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) {
RTE_LOG(ERR, EAL,
- "Error: Master lcore is used as a service core\n");
+ "Error: Main lcore is used as a service core\n");
return -1;
}
@@ -1593,9 +1594,14 @@ eal_parse_common_option(int opt, const char *optarg,
break;
case OPT_MASTER_LCORE_NUM:
- if (eal_parse_master_lcore(optarg) < 0) {
+ fprintf(stderr,
+ "Option --" OPT_MASTER_LCORE
+ " is deprecated use " OPT_MAIN_LCORE "\n");
+ /* fallthrough */
+ case OPT_MAIN_LCORE_NUM:
+ if (eal_parse_main_lcore(optarg) < 0) {
RTE_LOG(ERR, EAL, "invalid parameter for --"
- OPT_MASTER_LCORE "\n");
+ OPT_MAIN_LCORE "\n");
return -1;
}
break;
@@ -1763,9 +1769,9 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
RTE_CPU_AND(cpuset, cpuset, &default_set);
- /* if no remaining cpu, use master lcore cpu affinity */
+ /* if no remaining cpu, use main lcore cpu affinity */
if (!CPU_COUNT(cpuset)) {
- memcpy(cpuset, &lcore_config[rte_get_master_lcore()].cpuset,
+ memcpy(cpuset, &lcore_config[rte_get_main_lcore()].cpuset,
sizeof(*cpuset));
}
}
@@ -1797,12 +1803,12 @@ eal_adjust_config(struct internal_config *internal_cfg)
if (internal_conf->process_type == RTE_PROC_AUTO)
internal_conf->process_type = eal_proc_type_detect();
- /* default master lcore is the first one */
- if (!master_lcore_parsed) {
- cfg->master_lcore = rte_get_next_lcore(-1, 0, 0);
- if (cfg->master_lcore >= RTE_MAX_LCORE)
+ /* default main lcore is the first one */
+ if (!main_lcore_parsed) {
+ cfg->main_lcore = rte_get_next_lcore(-1, 0, 0);
+ if (cfg->main_lcore >= RTE_MAX_LCORE)
return -1;
- lcore_config[cfg->master_lcore].core_role = ROLE_RTE;
+ lcore_config[cfg->main_lcore].core_role = ROLE_RTE;
}
compute_ctrl_threads_cpuset(internal_cfg);
@@ -1822,8 +1828,8 @@ eal_check_common_options(struct internal_config *internal_cfg)
const struct internal_config *internal_conf =
eal_get_internal_configuration();
- if (cfg->lcore_role[cfg->master_lcore] != ROLE_RTE) {
- RTE_LOG(ERR, EAL, "Master lcore is not enabled for DPDK\n");
+ if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) {
+ RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n");
return -1;
}
@@ -1921,7 +1927,7 @@ eal_common_usage(void)
" '( )' can be omitted for single element group,\n"
" '@' can be omitted if cpus and lcores have the same value\n"
" -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores\n"
- " --"OPT_MASTER_LCORE" ID Core ID that is used as master\n"
+ " --"OPT_MAIN_LCORE" ID Core ID that is used as main\n"
" --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n"
" -n CHANNELS Number of memory channels\n"
" -m MB Memory to allocate (see also --"OPT_SOCKET_MEM")\n"
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index 89769d48b487..d363228a7a25 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -43,6 +43,8 @@ enum {
OPT_TRACE_BUF_SIZE_NUM,
#define OPT_TRACE_MODE "trace-mode"
OPT_TRACE_MODE_NUM,
+#define OPT_MAIN_LCORE "main-lcore"
+ OPT_MAIN_LCORE_NUM,
#define OPT_MASTER_LCORE "master-lcore"
OPT_MASTER_LCORE_NUM,
#define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name"
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index a6a6381567f4..4684c4c7df19 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -20,8 +20,8 @@
*/
struct lcore_config {
pthread_t thread_id; /**< pthread identifier */
- int pipe_master2slave[2]; /**< communication pipe with master */
- int pipe_slave2master[2]; /**< communication pipe with master */
+ int pipe_main2worker[2]; /**< communication pipe with main */
+ int pipe_worker2main[2]; /**< communication pipe with main */
lcore_function_t * volatile f; /**< function to call */
void * volatile arg; /**< argument of function */
@@ -42,7 +42,7 @@ extern struct lcore_config lcore_config[RTE_MAX_LCORE];
* The global RTE configuration structure.
*/
struct rte_config {
- uint32_t master_lcore; /**< Id of the master lcore */
+ uint32_t main_lcore; /**< Id of the main lcore */
uint32_t lcore_count; /**< Number of available logical cores. */
uint32_t numa_node_count; /**< Number of detected NUMA nodes. */
uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
diff --git a/lib/librte_eal/common/rte_random.c b/lib/librte_eal/common/rte_random.c
index b2c5416b331d..ce21c2242a22 100644
--- a/lib/librte_eal/common/rte_random.c
+++ b/lib/librte_eal/common/rte_random.c
@@ -122,7 +122,7 @@ struct rte_rand_state *__rte_rand_get_state(void)
lcore_id = rte_lcore_id();
if (unlikely(lcore_id == LCORE_ID_ANY))
- lcore_id = rte_get_master_lcore();
+ lcore_id = rte_get_main_lcore();
return &rand_states[lcore_id];
}
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index 98565bbef340..6c955d319ad4 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -107,7 +107,7 @@ rte_service_init(void)
struct rte_config *cfg = rte_eal_get_configuration();
for (i = 0; i < RTE_MAX_LCORE; i++) {
if (lcore_config[i].core_role == ROLE_SERVICE) {
- if ((unsigned int)i == cfg->master_lcore)
+ if ((unsigned int)i == cfg->main_lcore)
continue;
rte_service_lcore_add(i);
count++;
diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
index ccea60afe77b..d6ea02375025 100644
--- a/lib/librte_eal/freebsd/eal.c
+++ b/lib/librte_eal/freebsd/eal.c
@@ -625,10 +625,10 @@ eal_check_mem_on_local_socket(void)
int socket_id;
const struct rte_config *config = rte_eal_get_configuration();
- socket_id = rte_lcore_to_socket_id(config->master_lcore);
+ socket_id = rte_lcore_to_socket_id(config->main_lcore);
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
- RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+ RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
}
@@ -851,29 +851,29 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
- &lcore_config[config->master_lcore].cpuset) != 0) {
+ &lcore_config[config->main_lcore].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
return -1;
}
- __rte_thread_init(config->master_lcore,
- &lcore_config[config->master_lcore].cpuset);
+ __rte_thread_init(config->main_lcore,
+ &lcore_config[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
- config->master_lcore, thread_id, cpuset,
+ RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
+ config->main_lcore, thread_id, cpuset,
ret == 0 ? "" : "...");
- RTE_LCORE_FOREACH_SLAVE(i) {
+ RTE_LCORE_FOREACH_WORKER(i) {
/*
- * create communication pipes between master thread
+ * create communication pipes between main thread
* and children
*/
- if (pipe(lcore_config[i].pipe_master2slave) < 0)
+ if (pipe(lcore_config[i].pipe_main2worker) < 0)
rte_panic("Cannot create pipe\n");
- if (pipe(lcore_config[i].pipe_slave2master) < 0)
+ if (pipe(lcore_config[i].pipe_worker2main) < 0)
rte_panic("Cannot create pipe\n");
lcore_config[i].state = WAIT;
@@ -886,7 +886,7 @@ rte_eal_init(int argc, char **argv)
/* Set thread_name for aid in debugging. */
snprintf(thread_name, sizeof(thread_name),
- "lcore-slave-%d", i);
+ "lcore-worker-%d", i);
rte_thread_setname(lcore_config[i].thread_id, thread_name);
ret = pthread_setaffinity_np(lcore_config[i].thread_id,
@@ -896,10 +896,10 @@ rte_eal_init(int argc, char **argv)
}
/*
- * Launch a dummy function on all slave lcores, so that master lcore
+ * Launch a dummy function on all worker lcores, so that main lcore
* knows they are all ready when this function returns.
*/
- rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+ rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();
/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/freebsd/eal_thread.c b/lib/librte_eal/freebsd/eal_thread.c
index 99b5fefc4c5b..1dce9b04f24a 100644
--- a/lib/librte_eal/freebsd/eal_thread.c
+++ b/lib/librte_eal/freebsd/eal_thread.c
@@ -26,35 +26,35 @@
#include "eal_thread.h"
/*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
* function f with argument arg. Once the execution is done, the
* remote lcore switch in FINISHED state.
*/
int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned worker_id)
{
int n;
char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
+ int m2w = lcore_config[worker_id].pipe_main2worker[1];
+ int w2m = lcore_config[worker_id].pipe_worker2main[0];
int rc = -EBUSY;
- if (lcore_config[slave_id].state != WAIT)
+ if (lcore_config[worker_id].state != WAIT)
goto finish;
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
+ lcore_config[worker_id].f = f;
+ lcore_config[worker_id].arg = arg;
/* send message */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(m2s, &c, 1);
+ n = write(m2w, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
/* wait ack */
do {
- n = read(s2m, &c, 1);
+ n = read(w2m, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
rc = 0;
finish:
- rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+ rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
return rc;
}
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
int n, ret;
unsigned lcore_id;
pthread_t thread_id;
- int m2s, s2m;
+ int m2w, w2m;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
thread_id = pthread_self();
/* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (thread_id == lcore_config[lcore_id].thread_id)
break;
}
if (lcore_id == RTE_MAX_LCORE)
rte_panic("cannot retrieve lcore id\n");
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
+ m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ w2m = lcore_config[lcore_id].pipe_worker2main[1];
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
/* wait command */
do {
- n = read(m2s, &c, 1);
+ n = read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
/* send ack */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(s2m, &c, 1);
+ n = write(w2m, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
index e3c2ef185eed..0ae12cf4fbac 100644
--- a/lib/librte_eal/include/rte_eal.h
+++ b/lib/librte_eal/include/rte_eal.h
@@ -65,11 +65,11 @@ int rte_eal_iopl_init(void);
/**
* Initialize the Environment Abstraction Layer (EAL).
*
- * This function is to be executed on the MASTER lcore only, as soon
+ * This function is to be executed on the MAIN lcore only, as soon
* as possible in the application's main() function.
*
* The function finishes the initialization process before main() is called.
- * It puts the SLAVE lcores in the WAIT state.
+ * It puts the WORKER lcores in the WAIT state.
*
* When the multi-partition feature is supported, depending on the
* configuration (if CONFIG_RTE_EAL_MAIN_PARTITION is disabled), this
diff --git a/lib/librte_eal/include/rte_eal_trace.h b/lib/librte_eal/include/rte_eal_trace.h
index 19df549d29be..495ae1ee1d61 100644
--- a/lib/librte_eal/include/rte_eal_trace.h
+++ b/lib/librte_eal/include/rte_eal_trace.h
@@ -264,10 +264,10 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_eal_trace_thread_remote_launch,
RTE_TRACE_POINT_ARGS(int (*f)(void *), void *arg,
- unsigned int slave_id, int rc),
+ unsigned int worker_id, int rc),
rte_trace_point_emit_ptr(f);
rte_trace_point_emit_ptr(arg);
- rte_trace_point_emit_u32(slave_id);
+ rte_trace_point_emit_u32(worker_id);
rte_trace_point_emit_int(rc);
)
RTE_TRACE_POINT(
diff --git a/lib/librte_eal/include/rte_launch.h b/lib/librte_eal/include/rte_launch.h
index 06a671752ace..22a901ce62f6 100644
--- a/lib/librte_eal/include/rte_launch.h
+++ b/lib/librte_eal/include/rte_launch.h
@@ -32,12 +32,12 @@ typedef int (lcore_function_t)(void *);
/**
* Launch a function on another lcore.
*
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
*
- * Sends a message to a slave lcore (identified by the slave_id) that
+ * Sends a message to a worker lcore (identified by the worker_id) that
* is in the WAIT state (this is true after the first call to
* rte_eal_init()). This can be checked by first calling
- * rte_eal_wait_lcore(slave_id).
+ * rte_eal_wait_lcore(worker_id).
*
* When the remote lcore receives the message, it switches to
* the RUNNING state, then calls the function f with argument arg. Once the
@@ -45,7 +45,7 @@ typedef int (lcore_function_t)(void *);
* the return value of f is stored in a local variable to be read using
* rte_eal_wait_lcore().
*
- * The MASTER lcore returns as soon as the message is sent and knows
+ * The MAIN lcore returns as soon as the message is sent and knows
* nothing about the completion of f.
*
* Note: This function is not designed to offer optimum
@@ -56,37 +56,41 @@ typedef int (lcore_function_t)(void *);
* The function to be called.
* @param arg
* The argument for the function.
- * @param slave_id
+ * @param worker_id
* The identifier of the lcore on which the function should be executed.
* @return
* - 0: Success. Execution of function f started on the remote lcore.
* - (-EBUSY): The remote lcore is not in a WAIT state.
*/
-int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id);
+int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned worker_id);
/**
- * This enum indicates whether the master core must execute the handler
+ * This enum indicates whether the main core must execute the handler
* launched on all logical cores.
*/
-enum rte_rmt_call_master_t {
- SKIP_MASTER = 0, /**< lcore handler not executed by master core. */
- CALL_MASTER, /**< lcore handler executed by master core. */
+enum rte_rmt_call_main_t {
+ SKIP_MAIN = 0, /**< lcore handler not executed by main core. */
+ CALL_MAIN, /**< lcore handler executed by main core. */
};
+/* These legacy definitions will be removed in future release */
+#define SKIP_MASTER RTE_DEPRECATED(SKIP_MASTER) SKIP_MAIN
+#define CALL_MASTER RTE_DEPRECATED(CALL_MASTER) CALL_MAIN
+
/**
* Launch a function on all lcores.
*
- * Check that each SLAVE lcore is in a WAIT state, then call
+ * Check that each WORKER lcore is in a WAIT state, then call
* rte_eal_remote_launch() for each lcore.
*
* @param f
* The function to be called.
* @param arg
* The argument for the function.
- * @param call_master
- * If call_master set to SKIP_MASTER, the MASTER lcore does not call
- * the function. If call_master is set to CALL_MASTER, the function
- * is also called on master before returning. In any case, the master
+ * @param call_main
+ * If call_main set to SKIP_MAIN, the MAIN lcore does not call
+ * the function. If call_main is set to CALL_MAIN, the function
+ * is also called on main before returning. In any case, the main
* lcore returns as soon as it finished its job and knows nothing
* about the completion of f on the other lcores.
* @return
@@ -95,49 +99,49 @@ enum rte_rmt_call_master_t {
* case, no message is sent to any of the lcores.
*/
int rte_eal_mp_remote_launch(lcore_function_t *f, void *arg,
- enum rte_rmt_call_master_t call_master);
+ enum rte_rmt_call_main_t call_main);
/**
- * Get the state of the lcore identified by slave_id.
+ * Get the state of the lcore identified by worker_id.
*
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
*
- * @param slave_id
+ * @param worker_id
* The identifier of the lcore.
* @return
* The state of the lcore.
*/
-enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned slave_id);
+enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned int worker_id);
/**
* Wait until an lcore finishes its job.
*
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
*
- * If the slave lcore identified by the slave_id is in a FINISHED state,
+ * If the worker lcore identified by the worker_id is in a FINISHED state,
* switch to the WAIT state. If the lcore is in RUNNING state, wait until
* the lcore finishes its job and moves to the FINISHED state.
*
- * @param slave_id
+ * @param worker_id
* The identifier of the lcore.
* @return
- * - 0: If the lcore identified by the slave_id is in a WAIT state.
+ * - 0: If the lcore identified by the worker_id is in a WAIT state.
* - The value that was returned by the previous remote launch
- * function call if the lcore identified by the slave_id was in a
+ * function call if the lcore identified by the worker_id was in a
* FINISHED or RUNNING state. In this case, it changes the state
* of the lcore to WAIT.
*/
-int rte_eal_wait_lcore(unsigned slave_id);
+int rte_eal_wait_lcore(unsigned worker_id);
/**
* Wait until all lcores finish their jobs.
*
- * To be executed on the MASTER lcore only. Issue an
+ * To be executed on the MAIN lcore only. Issue an
* rte_eal_wait_lcore() for every lcore. The return values are
* ignored.
*
* After a call to rte_eal_mp_wait_lcore(), the caller can assume
- * that all slave lcores are in a WAIT state.
+ * that all worker lcores are in a WAIT state.
*/
void rte_eal_mp_wait_lcore(void);
diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h
index b8b64a625200..48b87e253afa 100644
--- a/lib/librte_eal/include/rte_lcore.h
+++ b/lib/librte_eal/include/rte_lcore.h
@@ -78,12 +78,24 @@ rte_lcore_id(void)
}
/**
- * Get the id of the master lcore
+ * Get the id of the main lcore
*
* @return
- * the id of the master lcore
+ * the id of the main lcore
*/
-unsigned int rte_get_master_lcore(void);
+unsigned int rte_get_main_lcore(void);
+
+/**
+ * Deprecated function returning the id of the main lcore
+ *
+ * @return
+ * the id of the main lcore
+ */
+__rte_deprecated
+static inline unsigned int rte_get_master_lcore(void)
+{
+ return rte_get_main_lcore();
+}
/**
* Return the number of execution units (lcores) on the system.
@@ -203,32 +215,35 @@ int rte_lcore_is_enabled(unsigned int lcore_id);
*
* @param i
* The current lcore (reference).
- * @param skip_master
- * If true, do not return the ID of the master lcore.
+ * @param skip_main
+ * If true, do not return the ID of the main lcore.
* @param wrap
* If true, go back to 0 when RTE_MAX_LCORE is reached; otherwise,
* return RTE_MAX_LCORE.
* @return
* The next lcore_id or RTE_MAX_LCORE if not found.
*/
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap);
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap);
/**
* Macro to browse all running lcores.
*/
#define RTE_LCORE_FOREACH(i) \
for (i = rte_get_next_lcore(-1, 0, 0); \
- i<RTE_MAX_LCORE; \
+ i < RTE_MAX_LCORE; \
i = rte_get_next_lcore(i, 0, 0))
/**
- * Macro to browse all running lcores except the master lcore.
+ * Macro to browse all running lcores except the main lcore.
*/
-#define RTE_LCORE_FOREACH_SLAVE(i) \
+#define RTE_LCORE_FOREACH_WORKER(i) \
for (i = rte_get_next_lcore(-1, 1, 0); \
- i<RTE_MAX_LCORE; \
+ i < RTE_MAX_LCORE; \
i = rte_get_next_lcore(i, 1, 0))
+#define RTE_LCORE_FOREACH_SLAVE(l) \
+ RTE_DEPRECATED(RTE_LCORE_FOREACH_SLAVE) RTE_LCORE_FOREACH_WORKER(l)
+
/**
* Callback prototype for initializing lcores.
*
diff --git a/lib/librte_eal/linux/eal.c b/lib/librte_eal/linux/eal.c
index 9cf0e2ec0137..1c9dd8db1e6a 100644
--- a/lib/librte_eal/linux/eal.c
+++ b/lib/librte_eal/linux/eal.c
@@ -883,10 +883,10 @@ eal_check_mem_on_local_socket(void)
int socket_id;
const struct rte_config *config = rte_eal_get_configuration();
- socket_id = rte_lcore_to_socket_id(config->master_lcore);
+ socket_id = rte_lcore_to_socket_id(config->main_lcore);
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
- RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+ RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
}
static int
@@ -1215,28 +1215,28 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
- &lcore_config[config->master_lcore].cpuset) != 0) {
+ &lcore_config[config->main_lcore].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
return -1;
}
- __rte_thread_init(config->master_lcore,
- &lcore_config[config->master_lcore].cpuset);
+ __rte_thread_init(config->main_lcore,
+ &lcore_config[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
- config->master_lcore, (uintptr_t)thread_id, cpuset,
+ RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+ config->main_lcore, (uintptr_t)thread_id, cpuset,
ret == 0 ? "" : "...");
- RTE_LCORE_FOREACH_SLAVE(i) {
+ RTE_LCORE_FOREACH_WORKER(i) {
/*
- * create communication pipes between master thread
+ * create communication pipes between main thread
* and children
*/
- if (pipe(lcore_config[i].pipe_master2slave) < 0)
+ if (pipe(lcore_config[i].pipe_main2worker) < 0)
rte_panic("Cannot create pipe\n");
- if (pipe(lcore_config[i].pipe_slave2master) < 0)
+ if (pipe(lcore_config[i].pipe_worker2main) < 0)
rte_panic("Cannot create pipe\n");
lcore_config[i].state = WAIT;
@@ -1249,7 +1249,7 @@ rte_eal_init(int argc, char **argv)
/* Set thread_name for aid in debugging. */
snprintf(thread_name, sizeof(thread_name),
- "lcore-slave-%d", i);
+ "lcore-worker-%d", i);
ret = rte_thread_setname(lcore_config[i].thread_id,
thread_name);
if (ret != 0)
@@ -1263,10 +1263,10 @@ rte_eal_init(int argc, char **argv)
}
/*
- * Launch a dummy function on all slave lcores, so that master lcore
+ * Launch a dummy function on all worker lcores, so that main lcore
* knows they are all ready when this function returns.
*/
- rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+ rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();
/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 89725291b0ce..3e47efe58212 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -1737,7 +1737,7 @@ memseg_primary_init_32(void)
/* the allocation logic is a little bit convoluted, but here's how it
* works, in a nutshell:
* - if user hasn't specified on which sockets to allocate memory via
- * --socket-mem, we allocate all of our memory on master core socket.
+ * --socket-mem, we allocate all of our memory on main core socket.
* - if user has specified sockets to allocate memory on, there may be
* some "unused" memory left (e.g. if user has specified --socket-mem
* such that not all memory adds up to 2 gigabytes), so add it to all
@@ -1751,7 +1751,7 @@ memseg_primary_init_32(void)
for (i = 0; i < rte_socket_count(); i++) {
int hp_sizes = (int) internal_conf->num_hugepage_sizes;
uint64_t max_socket_mem, cur_socket_mem;
- unsigned int master_lcore_socket;
+ unsigned int main_lcore_socket;
struct rte_config *cfg = rte_eal_get_configuration();
bool skip;
@@ -1767,10 +1767,10 @@ memseg_primary_init_32(void)
skip = active_sockets != 0 &&
internal_conf->socket_mem[socket_id] == 0;
/* ...or if we didn't specifically request memory on *any*
- * socket, and this is not master lcore
+ * socket, and this is not main lcore
*/
- master_lcore_socket = rte_lcore_to_socket_id(cfg->master_lcore);
- skip |= active_sockets == 0 && socket_id != master_lcore_socket;
+ main_lcore_socket = rte_lcore_to_socket_id(cfg->main_lcore);
+ skip |= active_sockets == 0 && socket_id != main_lcore_socket;
if (skip) {
RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n",
diff --git a/lib/librte_eal/linux/eal_thread.c b/lib/librte_eal/linux/eal_thread.c
index 068de2559555..83c2034b93d5 100644
--- a/lib/librte_eal/linux/eal_thread.c
+++ b/lib/librte_eal/linux/eal_thread.c
@@ -26,35 +26,35 @@
#include "eal_thread.h"
/*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
* function f with argument arg. Once the execution is done, the
* remote lcore switch in FINISHED state.
*/
int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned int worker_id)
{
int n;
char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
+ int m2w = lcore_config[worker_id].pipe_main2worker[1];
+ int w2m = lcore_config[worker_id].pipe_worker2main[0];
int rc = -EBUSY;
- if (lcore_config[slave_id].state != WAIT)
+ if (lcore_config[worker_id].state != WAIT)
goto finish;
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
+ lcore_config[worker_id].f = f;
+ lcore_config[worker_id].arg = arg;
/* send message */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(m2s, &c, 1);
+ n = write(m2w, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
/* wait ack */
do {
- n = read(s2m, &c, 1);
+ n = read(w2m, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
rc = 0;
finish:
- rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+ rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
return rc;
}
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
int n, ret;
unsigned lcore_id;
pthread_t thread_id;
- int m2s, s2m;
+ int m2w, w2m;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
thread_id = pthread_self();
/* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (thread_id == lcore_config[lcore_id].thread_id)
break;
}
if (lcore_id == RTE_MAX_LCORE)
rte_panic("cannot retrieve lcore id\n");
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
+ m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ w2m = lcore_config[lcore_id].pipe_worker2main[1];
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
/* wait command */
do {
- n = read(m2s, &c, 1);
+ n = read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
/* send ack */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(s2m, &c, 1);
+ n = write(w2m, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index a93dea9fe616..33ee2748ede0 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -74,7 +74,7 @@ DPDK_21 {
rte_free;
rte_get_hpet_cycles;
rte_get_hpet_hz;
- rte_get_master_lcore;
+ rte_get_main_lcore;
rte_get_next_lcore;
rte_get_tsc_hz;
rte_hexdump;
diff --git a/lib/librte_eal/windows/eal.c b/lib/librte_eal/windows/eal.c
index bc48f27ab39a..cbca20956210 100644
--- a/lib/librte_eal/windows/eal.c
+++ b/lib/librte_eal/windows/eal.c
@@ -350,8 +350,8 @@ rte_eal_init(int argc, char **argv)
return -1;
}
- __rte_thread_init(config->master_lcore,
- &lcore_config[config->master_lcore].cpuset);
+ __rte_thread_init(config->main_lcore,
+ &lcore_config[config->main_lcore].cpuset);
bscan = rte_bus_scan();
if (bscan < 0) {
@@ -360,16 +360,16 @@ rte_eal_init(int argc, char **argv)
return -1;
}
- RTE_LCORE_FOREACH_SLAVE(i) {
+ RTE_LCORE_FOREACH_WORKER(i) {
/*
- * create communication pipes between master thread
+ * create communication pipes between main thread
* and children
*/
- if (_pipe(lcore_config[i].pipe_master2slave,
+ if (_pipe(lcore_config[i].pipe_main2worker,
sizeof(char), _O_BINARY) < 0)
rte_panic("Cannot create pipe\n");
- if (_pipe(lcore_config[i].pipe_slave2master,
+ if (_pipe(lcore_config[i].pipe_worker2main,
sizeof(char), _O_BINARY) < 0)
rte_panic("Cannot create pipe\n");
@@ -394,10 +394,10 @@ rte_eal_init(int argc, char **argv)
}
/*
- * Launch a dummy function on all slave lcores, so that master lcore
+ * Launch a dummy function on all worker lcores, so that main lcore
* knows they are all ready when this function returns.
*/
- rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+ rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();
return fctret;
}
diff --git a/lib/librte_eal/windows/eal_thread.c b/lib/librte_eal/windows/eal_thread.c
index 20889b6196c9..908e726d16cc 100644
--- a/lib/librte_eal/windows/eal_thread.c
+++ b/lib/librte_eal/windows/eal_thread.c
@@ -17,34 +17,34 @@
#include "eal_windows.h"
/*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
* function f with argument arg. Once the execution is done, the
* remote lcore switch in FINISHED state.
*/
int
-rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int slave_id)
+rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
{
int n;
char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
+ int m2w = lcore_config[worker_id].pipe_main2worker[1];
+ int w2m = lcore_config[worker_id].pipe_worker2main[0];
- if (lcore_config[slave_id].state != WAIT)
+ if (lcore_config[worker_id].state != WAIT)
return -EBUSY;
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
+ lcore_config[worker_id].f = f;
+ lcore_config[worker_id].arg = arg;
/* send message */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = _write(m2s, &c, 1);
+ n = _write(m2w, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
/* wait ack */
do {
- n = _read(s2m, &c, 1);
+ n = _read(w2m, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -61,21 +61,21 @@ eal_thread_loop(void *arg __rte_unused)
int n, ret;
unsigned int lcore_id;
pthread_t thread_id;
- int m2s, s2m;
+ int m2w, w2m;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
thread_id = pthread_self();
/* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (thread_id == lcore_config[lcore_id].thread_id)
break;
}
if (lcore_id == RTE_MAX_LCORE)
rte_panic("cannot retrieve lcore id\n");
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
+ m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ w2m = lcore_config[lcore_id].pipe_worker2main[1];
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
@@ -88,7 +88,7 @@ eal_thread_loop(void *arg __rte_unused)
/* wait command */
do {
- n = _read(m2s, &c, 1);
+ n = _read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -99,7 +99,7 @@ eal_thread_loop(void *arg __rte_unused)
/* send ack */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = _write(s2m, &c, 1);
+ n = _write(w2m, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
--
2.27.0
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
2020-10-13 19:48 0% ` Michel Machado
@ 2020-10-14 13:10 0% ` Medvedkin, Vladimir
2020-10-14 23:57 0% ` Honnappa Nagarahalli
0 siblings, 1 reply; 200+ results
From: Medvedkin, Vladimir @ 2020-10-14 13:10 UTC (permalink / raw)
To: Michel Machado, Kevin Traynor, Ruifeng Wang, Bruce Richardson,
Cody Doucette, Andre Nathan, Qiaobin Fu
Cc: dev, Honnappa Nagarahalli, nd
On 13/10/2020 20:48, Michel Machado wrote:
> On 10/13/20 3:06 PM, Medvedkin, Vladimir wrote:
>>
>>
>> On 13/10/2020 18:46, Michel Machado wrote:
>>> On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
>>>> Hi Michel,
>>>>
>>>> Could you please describe a condition when LPM gets inconsistent? As
>>>> I can see if there is no free tbl8 it will return -ENOSPC.
>>>
>>> Consider this simple example: we need to add the following two
>>> prefixes with different next hops: 10.99.0.0/16, 18.99.99.128/25. If
>>> the LPM table is out of tbl8s, the second prefix is not added and
>>> Gatekeeper will make decisions in violation of the policy. The data
>>> structure of the LPM table is consistent, but its content is
>>> inconsistent with the policy.
>>
>> Aha, thanks. So do I understand correctly that you need to add a set
>> of routes atomically (either the entire set is installed or nothing)?
>
> Yes.
>
>> If so, then I would suggest having 2 lpm and switching them atomically
>> after a successful addition. As for now, even if you have enough
>> tbl8's, routes are installed non atomically, i.e. there will be a time
>> gap between adding two routes, so in this time interval the table will
>> be inconsistent with the policy.
>> Also, if new lpm algorithms are added to the DPDK, they won't have
>> such a thing as tbl8.
>
> Our code already deals with synchronization.
OK, so my suggestion here would be to add new routes to the shadow copy
of the lpm, and if it returns -ENOSPC, then create a new LPM with double
the number of tbl8s and add all the routes to it. Then switch the
active-shadow LPM pointers. In this case you'll always add a bulk of
routes atomically.
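A minimal sketch of that flow (names such as active_lpm, shadow,
lpm_conf, routes and the doubling policy are illustrative here, not
Gatekeeper's actual code):

#include <rte_lpm.h>

struct route {
	uint32_t ip;		/* host byte order, as rte_lpm_add() expects */
	uint8_t depth;
	uint32_t next_hop;
};

static int
bulk_add(struct rte_lpm *lpm, const struct route *r, unsigned int n)
{
	unsigned int i;
	int ret;

	for (i = 0; i < n; i++) {
		ret = rte_lpm_add(lpm, r[i].ip, r[i].depth, r[i].next_hop);
		if (ret < 0)
			return ret;	/* -ENOSPC once the tbl8s run out */
	}
	return 0;
}

/* fill the shadow copy; on -ENOSPC rebuild it with twice the tbl8s */
if (bulk_add(shadow, routes, n) == -ENOSPC) {
	lpm_conf.number_tbl8s *= 2;
	rte_lpm_free(shadow);
	shadow = rte_lpm_create("lpm-shadow", socket_id, &lpm_conf);
	if (shadow == NULL || bulk_add(shadow, routes, n) < 0)
		return -1;
}
/* publish: switch the active-shadow pointers atomically */
old = __atomic_exchange_n(&active_lpm, shadow, __ATOMIC_ACQ_REL);
/* "old" can only be freed after all readers have moved past it */

The old table must then be reclaimed with whatever synchronization the
application already has (RCU, per-lcore quiescence, etc.).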
>
>>> We minimize the need of replacing a LPM table by allocating LPM
>>> tables with the double of what we need (see example here
>>> https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183),
>>> but the code must be ready for unexpected needs that may arise in
>>> production.
>>>
>>
>> Usually, the table is initialized with a large enough number of
>> entries, enough to add a possible number of routes. One tbl8 group
>> takes up 1Kb of memory which is nothing comparing to the size of tbl24
>> which is 64Mb.
>
> When the prefixes come from BGP, initializing a large enough table
> is fine. But when prefixes come from threat intelligence, the number of
> prefixes can vary wildly and prefixes longer than 24 bits are
> way more common.
>
>> P.S. consider using rte_fib library, it has a number of advantages
>> over LPM. You can replace the loop in __lookup_fib_bulk() with a bulk
>> lookup call and this will probably increase the speed.
>
> I'm not aware of the rte_fib library. The only documentation that I
> found on Google was https://doc.dpdk.org/api/rte__fib_8h.html and it
> just says "FIB (Forwarding information base) implementation for IPv4
> Longest Prefix Match".
That's true; I'm going to add a programmer's guide soon.
Although the fib API is very similar to the existing LPM one.
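For illustration, a minimal sketch of that bulk usage (field names are
from the 20.11-era rte_fib API; socket_id, BURST_SIZE and the next hops
nh1/nh2 are placeholders):

#include <rte_ip.h>
#include <rte_fib.h>

struct rte_fib_conf conf = {
	.type = RTE_FIB_DIR24_8,
	.default_nh = 0,		/* returned on lookup miss */
	.max_routes = 1 << 16,
	.dir24_8 = {
		.nh_sz = RTE_FIB_DIR24_8_4B,
		.num_tbl8 = 1 << 12,
	},
};
struct rte_fib *fib = rte_fib_create("fib4", socket_id, &conf);

rte_fib_add(fib, RTE_IPV4(10, 99, 0, 0), 16, nh1);
rte_fib_add(fib, RTE_IPV4(18, 99, 99, 128), 25, nh2);

/* one call per burst instead of a lookup per packet */
uint32_t ips[BURST_SIZE];	/* destination addresses, host order */
uint64_t nhs[BURST_SIZE];
/* ... fill ips[] from the burst ... */
rte_fib_lookup_bulk(fib, ips, nhs, BURST_SIZE);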
>
>>>>
>>>> On 13/10/2020 15:58, Michel Machado wrote:
>>>>> Hi Kevin,
>>>>>
>>>>> We do need fields max_rules and number_tbl8s of struct rte_lpm,
>>>>> so the removal would force us to have another patch to our local
>>>>> copy of DPDK. We'd rather avoid this new local patch because we
>>>>> wish to eventually be in sync with the stock DPDK.
>>>>>
>>>>> Those fields are needed in Gatekeeper because we found a
>>>>> condition in an ongoing deployment in which the entries of some LPM
>>>>> tables may suddenly change a lot to reflect policy changes. To
>>>>> avoid getting into a state in which the LPM table is inconsistent
>>>>> because it cannot fit all the new entries, we compute the needed
>>>>> parameters to support the new entries, and compare with the current
>>>>> parameters. If the current table doesn't fit everything, we have to
>>>>> replace it with a new LPM table.
>>>>>
>>>>> If there were a way to obtain the struct rte_lpm_config of a
>>>>> given LPM table, it would cleanly address our need. We have the
>>>>> same need in IPv6 and have a local patch to work around it (see
>>>>> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f).
>>>>> Thus, an IPv4 and IPv6 solution would be best.
>>>>>
>>>>> PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to
>>>>> this discussion.
>>>>>
>>>>> [ ]'s
>>>>> Michel Machado
>>>>>
>>>>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>>>>>> Hi Gatekeeper maintainers (I think),
>>>>>>
>>>>>> fyi - there is a proposal to remove some members of a struct in
>>>>>> DPDK LPM
>>>>>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11
>>>>>> but
>>>>>> as it's an LTS I guess it would probably hit Debian in a few months.
>>>>>>
>>>>>> The full thread is here:
>>>>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>>>>>>
>>>>>>
>>>>>> Maybe you can take a look and tell us if they are needed in
>>>>>> Gatekeeper
>>>>>> or you can workaround it?
>>>>>>
>>>>>> thanks,
>>>>>> Kevin.
>>>>>>
>>>>>> [1]
>>>>>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248
>>>>>>
>>>>>>
>>>>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>>>>>> <bruce.richardson@intel.com>
>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>>>>>
>>>>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>>>>>
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>>>>>> <Ruifeng.Wang@arm.com>
>>>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>>>>>
>>>>>>>>>> Hi Ruifeng,
>>>>>>>>>>
>>>>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no
>>>>>>>>>>>> need to
>>>>>>>>>>>> be exposed to the user.
>>>>>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>>>>>> maintainability.
>>>>>>>>>>>>
>>>>>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>>>>>> ---
>>>>>>>>>>>> lib/librte_lpm/rte_lpm.c | 152
>>>>>>>>>>>> +++++++++++++++++++++++---------------
>>>>>>>>>> -
>>>>>>>>>>>> lib/librte_lpm/rte_lpm.h | 7 --
>>>>>>>>>>>> 2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>>>>>
>>>>>>>>>>> <snip>
>>>>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>>> b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>>>>>
>>>>>>>>>>>> /** @internal LPM structure. */
>>>>>>>>>>>> struct rte_lpm {
>>>>>>>>>>>> - /* LPM metadata. */
>>>>>>>>>>>> - char name[RTE_LPM_NAMESIZE]; /**< Name of the
>>>>>>>>>>>> lpm. */
>>>>>>>>>>>> - uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>>>>>> - uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>>>>>> - struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH];
>>>>>>>>>>>> /**<
>>>>>>>>>> Rule info table. */
>>>>>>>>>>>> -
>>>>>>>>>>>> /* LPM Tables. */
>>>>>>>>>>>> struct rte_lpm_tbl_entry
>>>>>>>>>>>> tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>>>>>> __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>>>>>> struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>>>>>> - struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>>>>>> };
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>>>>>
>>>>>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>>>>>> different, and that return value could be used by
>>>>>>>>>>> rte_lpm_lookup()
>>>>>>>>>>> which as a static inline function will be in the binary and
>>>>>>>>>>> using
>>>>>>>>>>> the old structure offsets.]
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>>>>>> without prior notice.
>>>>>>>>>>
>>>>>>>>> So if the change wants to happen in 20.11, a deprecation notice
>>>>>>>>> should
>>>>>>>>> have been added in 20.08.
>>>>>>>>> I should have added a deprecation notice. This change will have
>>>>>>>>> to wait for
>>>>>>>> next ABI update window.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Do you plan to extend? or is this just speculative?
>>>>>>> It is speculative.
>>>>>>>
>>>>>>>>
>>>>>>>> A quick scan and there seems to be several projects using some
>>>>>>>> of these
>>>>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>>>>>> gatekeeper. I didn't look at the details to see if they are
>>>>>>>> really needed.
>>>>>>>>
>>>>>>>> Not sure how much notice they'd need or if they update DPDK
>>>>>>>> much, but I
>>>>>>>> think it's worth having a closer look as to how they use lpm and
>>>>>>>> what the
>>>>>>>> impact to them is.
>>>>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't
>>>>>>> access the members to be hided.
>>>>>>> They will not be impacted by this patch.
>>>>>>> But Gatekeeper accesses the rte_lpm internal members that to be
>>>>>>> hided. Its compilation will be broken with this patch.
>>>>>>>
>>>>>>>>
>>>>>>>>> Thanks.
>>>>>>>>> Ruifeng
>>>>>>>>>>>> /** LPM RCU QSBR configuration structure. */
>>>>>>>>>>>> --
>>>>>>>>>>>> 2.17.1
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Regards,
>>>>>>>>>> Vladimir
>>>>>>>
>>>>>>
>>>>
>>
--
Regards,
Vladimir
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v7 4/4] doc: test-meson-builds.sh doc updates
2020-10-14 10:41 9% ` [dpdk-dev] [PATCH v7 " Conor Walsh
` (2 preceding siblings ...)
2020-10-14 10:41 15% ` [dpdk-dev] [PATCH v7 3/4] devtools: change not found to warning check-abi.sh Conor Walsh
@ 2020-10-14 10:41 18% ` Conor Walsh
[not found] ` <7206c209-ed4a-2aeb-12d8-ee162ef92596@ashroe.eu>
4 siblings, 0 replies; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh
Updates to the Checking Compilation and Checking ABI compatibility
sections of the patch contribution guide.
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
doc/guides/contributing/patches.rst | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 9ff60944c..e11d63bb0 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -470,10 +470,9 @@ The script internally checks for dependencies, then builds for several
combinations of compilation configuration.
By default, each build will be put in a subfolder of the current working directory.
However, if it is preferred to place the builds in a different location,
-the environment variable ``DPDK_BUILD_TEST_DIR`` can be set to that desired location.
-For example, setting ``DPDK_BUILD_TEST_DIR=__builds`` will put all builds
-in a single subfolder called "__builds" created in the current directory.
-Setting ``DPDK_BUILD_TEST_DIR`` to an absolute directory path e.g. ``/tmp`` is also supported.
+the environment variable ``DPDK_BUILD_TEST_DIR`` or the command line argument ``-b``
+can be set to that desired location.
+Environmental variables can also be specified in ``.config/dpdk/devel.config``.
.. _integrated_abi_check:
@@ -483,14 +482,17 @@ Checking ABI compatibility
By default, ABI compatibility checks are disabled.
-To enable them, a reference version must be selected via the environment
-variable ``DPDK_ABI_REF_VERSION``.
-
-The ``devtools/test-build.sh`` and ``devtools/test-meson-builds.sh`` scripts
-then build this reference version in a temporary directory and store the
-results in a subfolder of the current working directory.
-The environment variable ``DPDK_ABI_REF_DIR`` can be set so that the results go
-to a different location.
+To enable ABI checks the required reference version must be set using either the
+environment variable ``DPDK_ABI_REF_VERSION`` or the command line argument ``-a``.
+The tag ``latest`` is supported, which will select the latest quarterly release.
+e.g. ``./devtools/test-meson-builds.sh -a latest``.
+
+The ``devtools/test-meson-builds.sh`` script will then either build this reference version
+or download a cached version when available in a temporary directory and store the results
+in a subfolder of the current working directory.
+The environment variable ``DPDK_ABI_REF_DIR`` or the argument ``-d`` can be set so that
+the results go to a different location.
+Environmental variables can also be specified in ``.config/dpdk/devel.config``.
Sending Patches
--
2.25.1
^ permalink raw reply [relevance 18%]
* [dpdk-dev] [PATCH v7 3/4] devtools: change not found to warning check-abi.sh
2020-10-14 10:41 9% ` [dpdk-dev] [PATCH v7 " Conor Walsh
2020-10-14 10:41 21% ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
2020-10-14 10:41 26% ` [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
@ 2020-10-14 10:41 15% ` Conor Walsh
2020-10-14 10:41 18% ` [dpdk-dev] [PATCH v7 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
[not found] ` <7206c209-ed4a-2aeb-12d8-ee162ef92596@ashroe.eu>
4 siblings, 0 replies; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh
Change dump file not found from an error to a warning to make check-abi.sh
compatible with the changes to test-meson-builds.sh needed to use
prebuilt references.
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
devtools/check-abi.sh | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ab6748cfb..60d88777e 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -46,8 +46,7 @@ for dump in $(find $refdir -name "*.dump"); do
fi
dump2=$(find $newdir -name $name)
if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
- echo "Error: can't find $name in $newdir"
- error=1
+ echo "WARNING: can't find $name in $newdir, are you building with all dependencies?"
continue
fi
abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
--
2.25.1
^ permalink raw reply [relevance 15%]
* [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh
2020-10-14 10:41 9% ` [dpdk-dev] [PATCH v7 " Conor Walsh
2020-10-14 10:41 21% ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
@ 2020-10-14 10:41 26% ` Conor Walsh
2020-10-15 10:16 4% ` Kinsella, Ray
2020-10-14 10:41 15% ` [dpdk-dev] [PATCH v7 3/4] devtools: change not found to warning check-abi.sh Conor Walsh
` (2 subsequent siblings)
4 siblings, 1 reply; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh
The core reason for this patch is to reduce the amount of time needed to
run abi checks. The number of abi checks being run has been reduced to
only 2 (1 x86_64 and 1 arm). The script can now also take advantage of
prebuilt abi references.
Invoke using "./test-meson-builds.sh [-b <build directory>]
[-a <dpdk tag or latest for abi check>] [-u <uri for abi references>]
[-d <directory for abi references>]"
- <build directory>: directory to store builds (relative or absolute)
- <dpdk tag or latest for abi check>: dpdk tag e.g. "v20.11" or "latest"
- <uri for abi references>: http location or directory to get prebuilt
abi references from
- <directory for abi references>: directory to store abi references
(relative or absolute)
e.g. "./test-meson-builds.sh -a latest"
If no flags are specified test-meson-builds.sh will run the standard
meson tests with default options unless environmental variables are
specified.
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
---
devtools/test-meson-builds.sh | 171 +++++++++++++++++++++++++++-------
1 file changed, 139 insertions(+), 32 deletions(-)
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index a87de635a..6b959eb63 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -1,12 +1,74 @@
#! /bin/sh -e
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2018-2020 Intel Corporation
# Run meson to auto-configure the various builds.
# * all builds get put in a directory whose name starts with "build-"
# * if a build-directory already exists we assume it was properly configured
# Run ninja after configuration is done.
+# Get arguments
+usage()
+{
+ echo "Usage: $0
+ [-b <build directory>]
+ [-a <dpdk tag or latest for abi check>]
+ [-u <uri for abi references>]
+ [-d <directory for abi references>]" 1>&2; exit 1;
+}
+
+# Placeholder default uri
+DPDK_ABI_DEFAULT_URI="http://abi-ref.dpdk.org"
+
+while getopts "a:u:d:b:h" arg; do
+ case $arg in
+ a)
+ if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+ echo "DPDK_ABI_REF_VERSION and -a cannot both be set"
+ exit 1
+ fi
+ DPDK_ABI_REF_VERSION=${OPTARG} ;;
+ u)
+ if [ -n "$DPDK_ABI_TAR_URI" ]; then
+ echo "DPDK_ABI_TAR_URI and -u cannot both be set"
+ exit 1
+ fi
+ DPDK_ABI_TAR_URI=${OPTARG} ;;
+ d)
+ if [ -n "$DPDK_ABI_REF_DIR" ]; then
+ echo "DPDK_ABI_REF_DIR and -d cannot both be set"
+ exit 1
+ fi
+ DPDK_ABI_REF_DIR=${OPTARG} ;;
+ b)
+ if [ -n "$DPDK_BUILD_TEST_DIR" ]; then
+ echo "DPDK_BUILD_TEST_DIR and -a cannot both be set"
+ exit 1
+ fi
+ DPDK_BUILD_TEST_DIR=${OPTARG} ;;
+ h)
+ usage ;;
+ *)
+ usage ;;
+ esac
+done
+
+if [ -n "$DPDK_ABI_REF_VERSION" ] ; then
+ if [ "$DPDK_ABI_REF_VERSION" = "latest" ] ; then
+ DPDK_ABI_REF_VERSION=$(git ls-remote --tags http://dpdk.org/git/dpdk |
+ sed "s/.*\///" | grep -v "r\|{}" |
+ grep '^[^.]*.[^.]*$' | tail -n 1)
+ elif [ -z "$(git ls-remote http://dpdk.org/git/dpdk refs/tags/$DPDK_ABI_REF_VERSION)" ] ; then
+ echo "$DPDK_ABI_REF_VERSION is not a valid DPDK tag"
+ exit 1
+ fi
+fi
+if [ -z $DPDK_ABI_TAR_URI ] ; then
+ DPDK_ABI_TAR_URI=$DPDK_ABI_DEFAULT_URI
+fi
+# allow the generation script to override value with env var
+abi_checks_done=${DPDK_ABI_GEN_REF:-0}
+
# set pipefail option if possible
PIPEFAIL=""
set -o | grep -q pipefail && set -o pipefail && PIPEFAIL=1
@@ -16,7 +78,11 @@ srcdir=$(dirname $(readlink -f $0))/..
MESON=${MESON:-meson}
use_shared="--default-library=shared"
-builds_dir=${DPDK_BUILD_TEST_DIR:-.}
+builds_dir=${DPDK_BUILD_TEST_DIR:-$srcdir/builds}
+# ensure path is absolute meson returns error when some paths are relative
+if echo "$builds_dir" | grep -qv '^/'; then
+ builds_dir=$srcdir/$builds_dir
+fi
if command -v gmake >/dev/null 2>&1 ; then
MAKE=gmake
@@ -123,39 +189,49 @@ install_target () # <builddir> <installdir>
fi
}
-build () # <directory> <target compiler | cross file> <meson options>
+abi_gen_check () # no options
{
- targetdir=$1
- shift
- crossfile=
- [ -r $1 ] && crossfile=$1 || targetcc=$1
- shift
- # skip build if compiler not available
- command -v ${CC##* } >/dev/null 2>&1 || return 0
- if [ -n "$crossfile" ] ; then
- cross="--cross-file $crossfile"
- targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
- $crossfile | tr -d "'" | tr -d '"')
- else
- cross=
+ abirefdir=${DPDK_ABI_REF_DIR:-$builds_dir/__reference}/$DPDK_ABI_REF_VERSION
+ mkdir -p $abirefdir
+ # ensure path is absolute meson returns error when some are relative
+ if echo "$abirefdir" | grep -qv '^/'; then
+ abirefdir=$srcdir/$abirefdir
fi
- load_env $targetcc || return 0
- config $srcdir $builds_dir/$targetdir $cross --werror $*
- compile $builds_dir/$targetdir
- if [ -n "$DPDK_ABI_REF_VERSION" ]; then
- abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
- if [ ! -d $abirefdir/$targetdir ]; then
+ if [ ! -d $abirefdir/$targetdir ]; then
+
+ # try to get abi reference
+ if echo "$DPDK_ABI_TAR_URI" | grep -q '^http'; then
+ if [ $abi_checks_done -gt -1 ]; then
+ if curl --head --fail --silent \
+ "$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz" \
+ >/dev/null; then
+ curl -o $abirefdir/$targetdir.tar.gz \
+ $DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz
+ fi
+ fi
+ elif [ $abi_checks_done -gt -1 ]; then
+ if [ -f "$DPDK_ABI_TAR_URI/$targetdir.tar.gz" ]; then
+ cp $DPDK_ABI_TAR_URI/$targetdir.tar.gz \
+ $abirefdir/
+ fi
+ fi
+ if [ -f "$abirefdir/$targetdir.tar.gz" ]; then
+ tar -xf $abirefdir/$targetdir.tar.gz \
+ -C $abirefdir >/dev/null
+ rm -rf $abirefdir/$targetdir.tar.gz
+ # if no reference can be found then generate one
+ else
# clone current sources
if [ ! -d $abirefdir/src ]; then
git clone --local --no-hardlinks \
- --single-branch \
- -b $DPDK_ABI_REF_VERSION \
- $srcdir $abirefdir/src
+ --single-branch \
+ -b $DPDK_ABI_REF_VERSION \
+ $srcdir $abirefdir/src
fi
rm -rf $abirefdir/build
config $abirefdir/src $abirefdir/build $cross \
- -Dexamples= $*
+ -Dexamples= $*
compile $abirefdir/build
install_target $abirefdir/build $abirefdir/$targetdir
$srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
@@ -164,17 +240,46 @@ build () # <directory> <target compiler | cross file> <meson options>
find $abirefdir/$targetdir/usr/local -name '*.a' -delete
rm -rf $abirefdir/$targetdir/usr/local/bin
rm -rf $abirefdir/$targetdir/usr/local/share
+ rm -rf $abirefdir/$targetdir/usr/local/lib
fi
+ fi
- install_target $builds_dir/$targetdir \
- $(readlink -f $builds_dir/$targetdir/install)
- $srcdir/devtools/gen-abi.sh \
- $(readlink -f $builds_dir/$targetdir/install)
+ install_target $builds_dir/$targetdir \
+ $(readlink -f $builds_dir/$targetdir/install)
+ $srcdir/devtools/gen-abi.sh \
+ $(readlink -f $builds_dir/$targetdir/install)
+ # check abi if not generating references
+ if [ -z $DPDK_ABI_GEN_REF ] ; then
$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
$(readlink -f $builds_dir/$targetdir/install)
fi
}
+build () # <directory> <target compiler | cross file> <meson options>
+{
+ targetdir=$1
+ shift
+ crossfile=
+ [ -r $1 ] && crossfile=$1 || targetcc=$1
+ shift
+ # skip build if compiler not available
+ command -v ${CC##* } >/dev/null 2>&1 || return 0
+ if [ -n "$crossfile" ] ; then
+ cross="--cross-file $crossfile"
+ targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
+ $crossfile | tr -d "'" | tr -d '"')
+ else
+ cross=
+ fi
+ load_env $targetcc || return 0
+ config $srcdir $builds_dir/$targetdir $cross --werror $*
+ compile $builds_dir/$targetdir
+ if [ -n "$DPDK_ABI_REF_VERSION" ] && [ $abi_checks_done -lt 1 ] ; then
+ abi_gen_check
+ abi_checks_done=$((abi_checks_done+1))
+ fi
+}
+
if [ "$1" = "-vv" ] ; then
TEST_MESON_BUILD_VERY_VERBOSE=1
elif [ "$1" = "-v" ] ; then
@@ -189,7 +294,7 @@ fi
# shared and static linked builds with gcc and clang
for c in gcc clang ; do
command -v $c >/dev/null 2>&1 || continue
- for s in static shared ; do
+ for s in shared static ; do
export CC="$CCACHE $c"
build build-$c-$s $c --default-library=$s
unset CC
@@ -211,6 +316,8 @@ build build-x86-mingw $srcdir/config/x86/cross-mingw -Dexamples=helloworld
# generic armv8a with clang as host compiler
f=$srcdir/config/arm/arm64_armv8_linux_gcc
+# run abi checks with 1 arm build
+abi_checks_done=$((abi_checks_done-1))
export CC="clang"
build build-arm64-host-clang $f $use_shared
unset CC
@@ -231,7 +338,7 @@ done
build_path=$(readlink -f $builds_dir/build-x86-default)
export DESTDIR=$build_path/install
# No need to reinstall if ABI checks are enabled
-if [ -z "$DPDK_ABI_REF_VERSION" ]; then
+if [ -z "$DPDK_ABI_REF_VERSION" ] ; then
install_target $build_path $DESTDIR
fi
--
2.25.1
^ permalink raw reply [relevance 26%]
* [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives
2020-10-14 10:41 9% ` [dpdk-dev] [PATCH v7 " Conor Walsh
@ 2020-10-14 10:41 21% ` Conor Walsh
2020-10-15 10:15 4% ` Kinsella, Ray
2020-10-14 10:41 26% ` [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
` (3 subsequent siblings)
4 siblings, 1 reply; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh
This patch adds a script that generates compressed archives
containing .dump files which can be used to perform abi
breakage checking in test-meson-builds.sh.
Invoke using "./gen-abi-tarballs.sh [-v <dpdk tag>]"
- <dpdk tag>: dpdk tag e.g. "v20.11" or "latest"
e.g. "./gen-abi-tarballs.sh -v latest"
If no tag is specified, the script will default to "latest"
Using these parameters the script will produce several *.tar.gz
archives containing .dump files required to do abi breakage checking
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
---
devtools/gen-abi-tarballs.sh | 48 ++++++++++++++++++++++++++++++++++++
1 file changed, 48 insertions(+)
create mode 100755 devtools/gen-abi-tarballs.sh
diff --git a/devtools/gen-abi-tarballs.sh b/devtools/gen-abi-tarballs.sh
new file mode 100755
index 000000000..bcc1beac5
--- /dev/null
+++ b/devtools/gen-abi-tarballs.sh
@@ -0,0 +1,48 @@
+#! /bin/sh -e
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+# Generate the required prebuilt ABI references for test-meson-builds.sh
+
+# Get arguments
+usage() { echo "Usage: $0 [-v <dpdk tag or latest>]" 1>&2; exit 1; }
+abi_tag=
+while getopts "v:h" arg; do
+ case $arg in
+ v)
+ if [ -n "$DPDK_ABI_REF_VERSION" ]; then
+ echo "DPDK_ABI_REF_VERSION and -v cannot both be set"
+ exit 1
+ fi
+ DPDK_ABI_REF_VERSION=${OPTARG} ;;
+ h)
+ usage ;;
+ *)
+ usage ;;
+ esac
+done
+
+if [ -z $DPDK_ABI_REF_VERSION ] ; then
+ DPDK_ABI_REF_VERSION="latest"
+fi
+
+srcdir=$(dirname $(readlink -f $0))/..
+
+DPDK_ABI_GEN_REF=-20
+DPDK_ABI_REF_DIR=$srcdir/__abitarballs
+
+. $srcdir/devtools/test-meson-builds.sh
+
+abirefdir=$DPDK_ABI_REF_DIR/$DPDK_ABI_REF_VERSION
+
+rm -rf $abirefdir/build-*.tar.gz
+cd $abirefdir
+for f in build-* ; do
+ tar -czf $f.tar.gz $f
+done
+cp *.tar.gz ../
+rm -rf *
+mv ../*.tar.gz .
+rm -rf build-x86-default.tar.gz
+
+echo "The references for $DPDK_ABI_REF_VERSION are now available in $abirefdir"
--
2.25.1
^ permalink raw reply [relevance 21%]
* [dpdk-dev] [PATCH v7 0/4] devtools: abi breakage checks
` (4 preceding siblings ...)
2020-10-14 9:37 4% ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Kinsella, Ray
@ 2020-10-14 10:41 9% ` Conor Walsh
2020-10-14 10:41 21% ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
` (4 more replies)
5 siblings, 5 replies; 200+ results
From: Conor Walsh @ 2020-10-14 10:41 UTC (permalink / raw)
To: mdr, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev, Conor Walsh
This patchset introduces changes to test-meson-builds.sh, check-abi.sh and
adds a new script gen-abi-tarballs.sh. The changes to test-meson-builds.sh
include UX improvements such as adding command line arguments and allowing
the use of relative paths. The number of abi checks has been reduced to
just two, one each for x86_64 and ARM; the references for these checks
can now be prebuilt and downloaded by test-meson-builds.sh, which allows
the checks to run much faster. check-abi.sh is updated to use the prebuilt
references. gen-abi-tarballs.sh is a new script to generate the prebuilt
abi references used by test-meson-builds.sh, these compressed archives can
be retrieved from either a local directory or a remote http location.
---
v7: Changes resulting from list feedback
v6: Corrected a mistake in the doc patch
v5:
- Patchset has been completely reworked following feedback
- Patchset is now part of test-meson-builds.sh not the meson build
system
v4:
- Reworked both Python scripts to use more native Python functions
and modules.
- Python scripts are now in line with how other Python scripts in
DPDK are structured.
v3:
- Fix for bug which now allows meson < 0.48.0 to be used
- Various coding style changes throughout
- Minor bug fixes to the various meson.build files
v2: Spelling mistake, corrected spelling of environmental
Conor Walsh (4):
devtools: add generation of compressed abi dump archives
devtools: abi and UX changes for test-meson-builds.sh
devtools: change dump file not found to warning in check-abi.sh
doc: test-meson-builds.sh doc updates
devtools/check-abi.sh | 3 +-
devtools/gen-abi-tarballs.sh | 48 ++++++++
devtools/test-meson-builds.sh | 171 ++++++++++++++++++++++------
doc/guides/contributing/patches.rst | 26 +++--
4 files changed, 202 insertions(+), 46 deletions(-)
create mode 100755 devtools/gen-abi-tarballs.sh
--
2.25.1
^ permalink raw reply [relevance 9%]
* Re: [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks
2020-10-14 9:37 4% ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Kinsella, Ray
@ 2020-10-14 10:33 4% ` Walsh, Conor
0 siblings, 0 replies; 200+ results
From: Walsh, Conor @ 2020-10-14 10:33 UTC (permalink / raw)
To: Kinsella, Ray, nhorman, Richardson, Bruce, thomas, david.marchand; +Cc: dev
Thanks for your feedback, Ray.
V7 with your suggested changes for the patchset is on its way.
/Conor
> -----Original Message-----
> From: Kinsella, Ray <mdr@ashroe.eu>
> Sent: Wednesday 14 October 2020 10:37
> To: Walsh, Conor <conor.walsh@intel.com>; nhorman@tuxdriver.com;
> Richardson, Bruce <bruce.richardson@intel.com>; thomas@monjalon.net;
> david.marchand@redhat.com
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v6 0/4] devtools: abi breakage checks
>
>
>
> On 12/10/2020 14:03, Conor Walsh wrote:
> > This patchset will help developers discover abi breakages more easily
> > before upstreaming their code. Currently checking that the DPDK ABI
> > has not changed before up-streaming code is not intuitive and the
> > process is time consuming. Currently contributors must use the
> > test-meson-builds.sh tool, alongside some environmental variables to
> > test their changes. Contributors in many cases are either unaware or
> > unable to do this themselves, leading to a potentially serious situation
> > where they are unknowingly up-streaming code that breaks the ABI. These
> > breakages are caught by Travis, but it would be more efficient if they
> > were caught locally before up-streaming.
>
> I would remove everything in the git log text before this line...
>
> > This patchset introduces changes
> > to test-meson-builds.sh, check-abi.sh and adds a new script
> > gen-abi-tarballs.sh. The changes to test-meson-builds.sh include UX
>
> UX changes = improvements
>
> > changes such as adding command line arguments and allowing the use of
> > relative paths. Reduced the number of abi checks to just two, one for both
> > x86_64 and ARM, the references for these tests can now be prebuilt and
> > downloaded by test-meson-builds.sh, these changes will allow the tests to
> > run much faster. check-abi.sh is updated to use the prebuilt references.
> > gen-abi-tarballs.sh is a new script to generate the prebuilt abi
> > references used by test-meson-builds.sh, these compressed archives can
> be
> > retrieved from either a local directory or a remote http location.
> >
> > ---
> > v6: Corrected a mistake in the doc patch
> >
> > v5:
> > - Patchset has been completely reworked following feedback
> > - Patchset is now part of test-meson-builds.sh not the meson build system
> >
> > v4:
> > - Reworked both Python scripts to use more native Python functions
> > and modules.
> > - Python scripts are now in line with how other Python scripts in
> > DPDK are structured.
> >
> > v3:
> > - Fix for bug which now allows meson < 0.48.0 to be used
> > - Various coding style changes throughout
> > - Minor bug fixes to the various meson.build files
> >
> > v2: Spelling mistake, corrected spelling of environmental
> >
> > Conor Walsh (4):
> > devtools: add generation of compressed abi dump archives
> > devtools: abi and UX changes for test-meson-builds.sh
> > devtools: change dump file not found to warning in check-abi.sh
> > doc: test-meson-builds.sh doc updates
> >
> > devtools/check-abi.sh | 3 +-
> > devtools/gen-abi-tarballs.sh | 48 ++++++++
> > devtools/test-meson-builds.sh | 170 ++++++++++++++++++++++------
> > doc/guides/contributing/patches.rst | 26 +++--
> > 4 files changed, 201 insertions(+), 46 deletions(-)
> > create mode 100755 devtools/gen-abi-tarballs.sh
> >
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v6 4/4] doc: test-meson-builds.sh doc updates
@ 2020-10-14 9:46 0% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14 9:46 UTC (permalink / raw)
To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev
On 12/10/2020 14:03, Conor Walsh wrote:
> Updates to the Checking Compilation and Checking ABI compatibility
> sections of the patches part of the contribution guide
>
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
>
> ---
> doc/guides/contributing/patches.rst | 26 ++++++++++++++------------
> 1 file changed, 14 insertions(+), 12 deletions(-)
>
> diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
> index 9ff60944c..e11d63bb0 100644
> --- a/doc/guides/contributing/patches.rst
> +++ b/doc/guides/contributing/patches.rst
> @@ -470,10 +470,9 @@ The script internally checks for dependencies, then builds for several
> combinations of compilation configuration.
> By default, each build will be put in a subfolder of the current working directory.
> However, if it is preferred to place the builds in a different location,
> -the environment variable ``DPDK_BUILD_TEST_DIR`` can be set to that desired location.
> -For example, setting ``DPDK_BUILD_TEST_DIR=__builds`` will put all builds
> -in a single subfolder called "__builds" created in the current directory.
> -Setting ``DPDK_BUILD_TEST_DIR`` to an absolute directory path e.g. ``/tmp`` is also supported.
> +the environment variable ``DPDK_BUILD_TEST_DIR`` or the command line argument ``-b``
> +can be set to that desired location.
> +Environmental variables can also be specified in ``.config/dpdk/devel.config``.
>
>
> .. _integrated_abi_check:
> @@ -483,14 +482,17 @@ Checking ABI compatibility
>
> By default, ABI compatibility checks are disabled.
>
> -To enable them, a reference version must be selected via the environment
> -variable ``DPDK_ABI_REF_VERSION``.
> -
> -The ``devtools/test-build.sh`` and ``devtools/test-meson-builds.sh`` scripts
> -then build this reference version in a temporary directory and store the
> -results in a subfolder of the current working directory.
> -The environment variable ``DPDK_ABI_REF_DIR`` can be set so that the results go
> -to a different location.
> +To enable ABI checks the required reference version must be set using either the
> +environment variable ``DPDK_ABI_REF_VERSION`` or the command line argument ``-a``.
> +The tag ``latest`` is supported, which will select the latest quarterly release.
> +e.g. ``./devtools/test-meson-builds.sh -a latest``.
> +
> +The ``devtools/test-meson-builds.sh`` script will then either build this reference version
> +or download a cached version when available in a temporary directory and store the results
> +in a subfolder of the current working directory.
> +The environment variable ``DPDK_ABI_REF_DIR`` or the argument ``-d`` can be set so that
> +the results go to a different location.
> +Environmental variables can also be specified in ``.config/dpdk/devel.config``.
>
>
> Sending Patches
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 3/4] devtools: change dump file not found to warning in check-abi.sh
@ 2020-10-14 9:44 4% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14 9:44 UTC (permalink / raw)
To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev
On 12/10/2020 14:03, Conor Walsh wrote:
> Change dump file not found from an error to a warning to make check-abi.sh
> compatible with the changes to test-meson-builds.sh needed to use
> prebuilt references.
>
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
>
> ---
> devtools/check-abi.sh | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
> index ab6748cfb..60d88777e 100755
> --- a/devtools/check-abi.sh
> +++ b/devtools/check-abi.sh
> @@ -46,8 +46,7 @@ for dump in $(find $refdir -name "*.dump"); do
> fi
> dump2=$(find $newdir -name $name)
> if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
> - echo "Error: can't find $name in $newdir"
> - error=1
> + echo "WARNING: can't find $name in $newdir, are you building with all dependencies?"
> continue
> fi
> abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v6 2/4] devtools: abi and UX changes for test-meson-builds.sh
@ 2020-10-14 9:43 4% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14 9:43 UTC (permalink / raw)
To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev
On 12/10/2020 14:03, Conor Walsh wrote:
> This patch adds new features to test-meson-builds.sh that help to make
> the process of using the script easier, the patch also includes
> changes to make the abi breakage checks more performant.
Avoid commentary such as the above.
I reduce the following list of bullets to a single paragraph describing the change.
The core of this change is to improve build times.
So describe reducing the number of build to 2 and using the pre-build references, and thats it.
> Changes/Additions:
> - Command line arguments added, the changes are fully backwards
> compatible and all previous environmental variables are still supported
> - All paths supplied by user are converted to absolute paths if they
> are relative as meson has a bug that can sometimes error if a
> relative path is supplied to it.
> - abi check/generation code moved to function to improve readability
> - Only 2 abi checks will now be completed:
> - 1 x86_64 gcc or clang check
> - 1 ARM gcc or clang check
> It is not necessary to check abi breakages in every build
> - abi checks can now make use of prebuilt abi references from a http
> or local source, it is hoped these would be hosted on dpdk.org in
> the future.
<new line to aid reading>
> Invoke using "./test-meson-builds.sh [-b <build directory>]
> [-a <dpdk tag or latest for abi check>] [-u <uri for abi references>]
> [-d <directory for abi references>]"
> - <build directory>: directory to store builds (relative or absolute)
> - <dpdk tag or latest for abi check>: dpdk tag e.g. "v20.11" or "latest"
> - <uri for abi references>: http location or directory to get prebuilt
> abi references from
> - <directory for abi references>: directory to store abi references
> (relative or absolute)
> e.g. "./test-meson-builds.sh -a latest"
> If no flags are specified test-meson-builds.sh will run the standard
> meson tests with default options unless environmental variables are
> specified.
>
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
>
> ---
> devtools/test-meson-builds.sh | 170 +++++++++++++++++++++++++++-------
> 1 file changed, 138 insertions(+), 32 deletions(-)
>
> diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
> index a87de635a..b45506fb0 100755
> --- a/devtools/test-meson-builds.sh
> +++ b/devtools/test-meson-builds.sh
> @@ -1,12 +1,73 @@
> #! /bin/sh -e
> # SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2018 Intel Corporation
> +# Copyright(c) 2018-2020 Intel Corporation
>
> # Run meson to auto-configure the various builds.
> # * all builds get put in a directory whose name starts with "build-"
> # * if a build-directory already exists we assume it was properly configured
> # Run ninja after configuration is done.
>
> +# Get arguments
> +usage()
> +{
> + echo "Usage: $0
> + [-b <build directory>]
> + [-a <dpdk tag or latest for abi check>]
> + [-u <uri for abi references>]
> + [-d <directory for abi references>]" 1>&2; exit 1;
> +}
> +
> +DPDK_ABI_DEFAULT_URI="http://dpdk.org/abi-refs"
> +
> +while getopts "a:u:d:b:h" arg; do
> + case $arg in
> + a)
> + if [ -n "$DPDK_ABI_REF_VERSION" ]; then
> + echo "DPDK_ABI_REF_VERSION and -a cannot both be set"
> + exit 1
> + fi
> + DPDK_ABI_REF_VERSION=${OPTARG} ;;
> + u)
> + if [ -n "$DPDK_ABI_TAR_URI" ]; then
> + echo "DPDK_ABI_TAR_URI and -u cannot both be set"
> + exit 1
> + fi
> + DPDK_ABI_TAR_URI=${OPTARG} ;;
> + d)
> + if [ -n "$DPDK_ABI_REF_DIR" ]; then
> + echo "DPDK_ABI_REF_DIR and -d cannot both be set"
> + exit 1
> + fi
> + DPDK_ABI_REF_DIR=${OPTARG} ;;
> + b)
> + if [ -n "$DPDK_BUILD_TEST_DIR" ]; then
> + echo "DPDK_BUILD_TEST_DIR and -a cannot both be set"
> + exit 1
> + fi
> + DPDK_BUILD_TEST_DIR=${OPTARG} ;;
> + h)
> + usage ;;
> + *)
> + usage ;;
> + esac
> +done
> +
> +if [ -n "$DPDK_ABI_REF_VERSION" ] ; then
> + if [ "$DPDK_ABI_REF_VERSION" = "latest" ] ; then
> + DPDK_ABI_REF_VERSION=$(git ls-remote --tags http://dpdk.org/git/dpdk |
> + sed "s/.*\///" | grep -v "r\|{}" |
> + grep '^[^.]*.[^.]*$' | tail -n 1)
> + elif [ -z "$(git ls-remote http://dpdk.org/git/dpdk refs/tags/$DPDK_ABI_REF_VERSION)" ] ; then
> + echo "$DPDK_ABI_REF_VERSION is not a valid DPDK tag"
> + exit 1
> + fi
> +fi
> +if [ -z $DPDK_ABI_TAR_URI ] ; then
> + DPDK_ABI_TAR_URI=$DPDK_ABI_DEFAULT_URI
> +fi
> +# allow the generation script to override value with env var
> +abi_checks_done=${DPDK_ABI_GEN_REF:-0}
> +
> # set pipefail option if possible
> PIPEFAIL=""
> set -o | grep -q pipefail && set -o pipefail && PIPEFAIL=1
> @@ -16,7 +77,11 @@ srcdir=$(dirname $(readlink -f $0))/..
>
> MESON=${MESON:-meson}
> use_shared="--default-library=shared"
> -builds_dir=${DPDK_BUILD_TEST_DIR:-.}
> +builds_dir=${DPDK_BUILD_TEST_DIR:-$srcdir/builds}
> +# ensure path is absolute meson returns error when some paths are relative
> +if echo "$builds_dir" | grep -qv '^/'; then
> + builds_dir=$srcdir/$builds_dir
> +fi
>
> if command -v gmake >/dev/null 2>&1 ; then
> MAKE=gmake
> @@ -123,39 +188,49 @@ install_target () # <builddir> <installdir>
> fi
> }
>
> -build () # <directory> <target compiler | cross file> <meson options>
> +abi_gen_check () # no options
> {
> - targetdir=$1
> - shift
> - crossfile=
> - [ -r $1 ] && crossfile=$1 || targetcc=$1
> - shift
> - # skip build if compiler not available
> - command -v ${CC##* } >/dev/null 2>&1 || return 0
> - if [ -n "$crossfile" ] ; then
> - cross="--cross-file $crossfile"
> - targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
> - $crossfile | tr -d "'" | tr -d '"')
> - else
> - cross=
> + abirefdir=${DPDK_ABI_REF_DIR:-$builds_dir/__reference}/$DPDK_ABI_REF_VERSION
> + mkdir -p $abirefdir
> + # ensure path is absolute meson returns error when some are relative
> + if echo "$abirefdir" | grep -qv '^/'; then
> + abirefdir=$srcdir/$abirefdir
> fi
> - load_env $targetcc || return 0
> - config $srcdir $builds_dir/$targetdir $cross --werror $*
> - compile $builds_dir/$targetdir
> - if [ -n "$DPDK_ABI_REF_VERSION" ]; then
> - abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
> - if [ ! -d $abirefdir/$targetdir ]; then
> + if [ ! -d $abirefdir/$targetdir ]; then
> +
> + # try to get abi reference
> + if echo "$DPDK_ABI_TAR_URI" | grep -q '^http'; then
> + if [ $abi_checks_done -gt -1 ]; then
> + if curl --head --fail --silent \
> + "$DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz" \
> + >/dev/null; then
> + curl -o $abirefdir/$targetdir.tar.gz \
> + $DPDK_ABI_TAR_URI/$DPDK_ABI_REF_VERSION/$targetdir.tar.gz
> + fi
> + fi
> + elif [ $abi_checks_done -gt -1 ]; then
> + if [ -f "$DPDK_ABI_TAR_URI/$targetdir.tar.gz" ]; then
> + cp $DPDK_ABI_TAR_URI/$targetdir.tar.gz \
> + $abirefdir/
> + fi
> + fi
> + if [ -f "$abirefdir/$targetdir.tar.gz" ]; then
> + tar -xf $abirefdir/$targetdir.tar.gz \
> + -C $abirefdir >/dev/null
> + rm -rf $abirefdir/$targetdir.tar.gz
> + # if no reference can be found then generate one
> + else
> # clone current sources
> if [ ! -d $abirefdir/src ]; then
> git clone --local --no-hardlinks \
> - --single-branch \
> - -b $DPDK_ABI_REF_VERSION \
> - $srcdir $abirefdir/src
> + --single-branch \
> + -b $DPDK_ABI_REF_VERSION \
> + $srcdir $abirefdir/src
> fi
>
> rm -rf $abirefdir/build
> config $abirefdir/src $abirefdir/build $cross \
> - -Dexamples= $*
> + -Dexamples= $*
> compile $abirefdir/build
> install_target $abirefdir/build $abirefdir/$targetdir
> $srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
> @@ -164,17 +239,46 @@ build () # <directory> <target compiler | cross file> <meson options>
> find $abirefdir/$targetdir/usr/local -name '*.a' -delete
> rm -rf $abirefdir/$targetdir/usr/local/bin
> rm -rf $abirefdir/$targetdir/usr/local/share
> + rm -rf $abirefdir/$targetdir/usr/local/lib
> fi
> + fi
>
> - install_target $builds_dir/$targetdir \
> - $(readlink -f $builds_dir/$targetdir/install)
> - $srcdir/devtools/gen-abi.sh \
> - $(readlink -f $builds_dir/$targetdir/install)
> + install_target $builds_dir/$targetdir \
> + $(readlink -f $builds_dir/$targetdir/install)
> + $srcdir/devtools/gen-abi.sh \
> + $(readlink -f $builds_dir/$targetdir/install)
> + # check abi if not generating references
> + if [ -z $DPDK_ABI_GEN_REF ] ; then
> $srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
> $(readlink -f $builds_dir/$targetdir/install)
> fi
> }
>
> +build () # <directory> <target compiler | cross file> <meson options>
> +{
> + targetdir=$1
> + shift
> + crossfile=
> + [ -r $1 ] && crossfile=$1 || targetcc=$1
> + shift
> + # skip build if compiler not available
> + command -v ${CC##* } >/dev/null 2>&1 || return 0
> + if [ -n "$crossfile" ] ; then
> + cross="--cross-file $crossfile"
> + targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
> + $crossfile | tr -d "'" | tr -d '"')
> + else
> + cross=
> + fi
> + load_env $targetcc || return 0
> + config $srcdir $builds_dir/$targetdir $cross --werror $*
> + compile $builds_dir/$targetdir
> + if [ -n "$DPDK_ABI_REF_VERSION" ] && [ $abi_checks_done -lt 1 ] ; then
> + abi_gen_check
> + abi_checks_done=$((abi_checks_done+1))
> + fi
> +}
> +
> if [ "$1" = "-vv" ] ; then
> TEST_MESON_BUILD_VERY_VERBOSE=1
> elif [ "$1" = "-v" ] ; then
> @@ -189,7 +293,7 @@ fi
> # shared and static linked builds with gcc and clang
> for c in gcc clang ; do
> command -v $c >/dev/null 2>&1 || continue
> - for s in static shared ; do
> + for s in shared static ; do
> export CC="$CCACHE $c"
> build build-$c-$s $c --default-library=$s
> unset CC
> @@ -211,6 +315,8 @@ build build-x86-mingw $srcdir/config/x86/cross-mingw -Dexamples=helloworld
>
> # generic armv8a with clang as host compiler
> f=$srcdir/config/arm/arm64_armv8_linux_gcc
> +# run abi checks with 1 arm build
> +abi_checks_done=$((abi_checks_done-1))
> export CC="clang"
> build build-arm64-host-clang $f $use_shared
> unset CC
> @@ -231,7 +337,7 @@ done
> build_path=$(readlink -f $builds_dir/build-x86-default)
> export DESTDIR=$build_path/install
> # No need to reinstall if ABI checks are enabled
> -if [ -z "$DPDK_ABI_REF_VERSION" ]; then
> +if [ -z "$DPDK_ABI_REF_VERSION" ] ; then
> install_target $build_path $DESTDIR
> fi
>
>
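For reference, the flags above map one-to-one onto environment
variables, so the same ABI check can be driven with no arguments at
all. A minimal sketch based on the getopts handling in this patch:

    # equivalent to "./devtools/test-meson-builds.sh -a v20.11 -d ./abi-refs",
    # with references fetched from the default http location
    DPDK_ABI_REF_VERSION=v20.11 \
    DPDK_ABI_REF_DIR=./abi-refs \
    ./devtools/test-meson-builds.sh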
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v6 1/4] devtools: add generation of compressed abi dump archives
@ 2020-10-14 9:38 4% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14 9:38 UTC (permalink / raw)
To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev
On 12/10/2020 14:03, Conor Walsh wrote:
> This patch adds a script that generates compressed archives
> containing .dump files which can be used to perform abi
> breakage checking in test-meson-build.sh.
<new line to aid reading>
> Invoke using "./gen-abi-tarballs.sh [-v <dpdk tag>]"
> - <dpdk tag>: dpdk tag e.g. "v20.11" or "latest"
> e.g. "./gen-abi-tarballs.sh -v latest"
<new line to aid reading>
> If no tag is specified, the script will default to "latest"
> Using these parameters the script will produce several *.tar.gz
> archives containing .dump files required to do abi breakage checking
>
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
>
> ---
> devtools/gen-abi-tarballs.sh | 48 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 48 insertions(+)
> create mode 100755 devtools/gen-abi-tarballs.sh
>
> diff --git a/devtools/gen-abi-tarballs.sh b/devtools/gen-abi-tarballs.sh
> new file mode 100755
> index 000000000..bcc1beac5
> --- /dev/null
> +++ b/devtools/gen-abi-tarballs.sh
> @@ -0,0 +1,48 @@
> +#! /bin/sh -e
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2020 Intel Corporation
> +
> +# Generate the required prebuilt ABI references for test-meson-build.sh
> +
> +# Get arguments
> +usage() { echo "Usage: $0 [-v <dpdk tag or latest>]" 1>&2; exit 1; }
> +abi_tag=
> +while getopts "v:h" arg; do
> + case $arg in
> + v)
> + if [ -n "$DPDK_ABI_REF_VERSION" ]; then
> + echo "DPDK_ABI_REF_VERSION and -v cannot both be set"
> + exit 1
> + fi
> + DPDK_ABI_REF_VERSION=${OPTARG} ;;
> + h)
> + usage ;;
> + *)
> + usage ;;
> + esac
> +done
> +
> +if [ -z $DPDK_ABI_REF_VERSION ] ; then
> + DPDK_ABI_REF_VERSION="latest"
> +fi
> +
> +srcdir=$(dirname $(readlink -f $0))/..
> +
> +DPDK_ABI_GEN_REF=-20
> +DPDK_ABI_REF_DIR=$srcdir/__abitarballs
> +
> +. $srcdir/devtools/test-meson-builds.sh
> +
> +abirefdir=$DPDK_ABI_REF_DIR/$DPDK_ABI_REF_VERSION
> +
> +rm -rf $abirefdir/build-*.tar.gz
> +cd $abirefdir
> +for f in build-* ; do
> + tar -czf $f.tar.gz $f
> +done
> +cp *.tar.gz ../
> +rm -rf *
> +mv ../*.tar.gz .
> +rm -rf build-x86-default.tar.gz
> +
> +echo "The references for $DPDK_ABI_REF_VERSION are now available in $abirefdir"
>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks
` (3 preceding siblings ...)
@ 2020-10-14 9:37 4% ` Kinsella, Ray
2020-10-14 10:33 4% ` Walsh, Conor
2020-10-14 10:41 9% ` [dpdk-dev] [PATCH v7 " Conor Walsh
5 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-10-14 9:37 UTC (permalink / raw)
To: Conor Walsh, nhorman, bruce.richardson, thomas, david.marchand; +Cc: dev
On 12/10/2020 14:03, Conor Walsh wrote:
> This patchset will help developers discover abi breakages more easily
> before upstreaming their code. Currently checking that the DPDK ABI
> has not changed before up-streaming code is not intuitive and the
> process is time consuming. Currently contributors must use the
> test-meson-builds.sh tool, alongside some environmental variables to
> test their changes. Contributors in many cases are either unaware or
> unable to do this themselves, leading to a potentially serious situation
> where they are unknowingly up-streaming code that breaks the ABI. These
> breakages are caught by Travis, but it would be more efficient if they
> were caught locally before up-streaming.
I would remove everything in the git log text before this line...
> This patchset introduces changes
> to test-meson-builds.sh, check-abi.sh and adds a new script
> gen-abi-tarballs.sh. The changes to test-meson-builds.sh include UX
UX changes = improvements
> changes such as adding command line arguments and allowing the use of
> relative paths. Reduced the number of abi checks to just two, one for both
> x86_64 and ARM, the references for these tests can now be prebuilt and
> downloaded by test-meson-builds.sh, these changes will allow the tests to
> run much faster. check-abi.sh is updated to use the prebuilt references.
> gen-abi-tarballs.sh is a new script to generate the prebuilt abi
> references used by test-meson-builds.sh, these compressed archives can be
> retrieved from either a local directory or a remote http location.
>
> ---
> v6: Corrected a mistake in the doc patch
>
> v5:
> - Patchset has been completely reworked following feedback
> - Patchset is now part of test-meson-builds.sh not the meson build system
>
> v4:
> - Reworked both Python scripts to use more native Python functions
> and modules.
> - Python scripts are now in line with how other Python scripts in
> DPDK are structured.
>
> v3:
> - Fix for bug which now allows meson < 0.48.0 to be used
> - Various coding style changes throughout
> - Minor bug fixes to the various meson.build files
>
> v2: Spelling mistake, corrected spelling of environmental
>
> Conor Walsh (4):
> devtools: add generation of compressed abi dump archives
> devtools: abi and UX changes for test-meson-builds.sh
> devtools: change dump file not found to warning in check-abi.sh
> doc: test-meson-builds.sh doc updates
>
> devtools/check-abi.sh | 3 +-
> devtools/gen-abi-tarballs.sh | 48 ++++++++
> devtools/test-meson-builds.sh | 170 ++++++++++++++++++++++------
> doc/guides/contributing/patches.rst | 26 +++--
> 4 files changed, 201 insertions(+), 46 deletions(-)
> create mode 100755 devtools/gen-abi-tarballs.sh
>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods
@ 2020-10-14 9:23 4% ` Kinsella, Ray
0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-10-14 9:23 UTC (permalink / raw)
To: Ananyev, Konstantin, David Marchand
Cc: dev, Jerin Jacob Kollanukkaran,
Ruifeng Wang (Arm Technology China),
Medvedkin, Vladimir, Thomas Monjalon, Richardson, Bruce
On 06/10/2020 17:07, Ananyev, Konstantin wrote:
>
>>
>> On Mon, Oct 5, 2020 at 9:44 PM Konstantin Ananyev
>> <konstantin.ananyev@intel.com> wrote:
>>>
>>> These patch series introduce support of AVX512 specific classify
>>> implementation for ACL library.
>>> It adds two new algorithms:
>>> - RTE_ACL_CLASSIFY_AVX512X16 - can process up to 16 flows in parallel.
>>> It uses 256-bit width instructions/registers only
>>> (to avoid frequency level change).
>>> On my SKX box test-acl shows ~15-30% improvement
>>> (depending on rule-set and input burst size)
>>> when switching from AVX2 to AVX512X16 classify algorithms.
>>> - RTE_ACL_CLASSIFY_AVX512X32 - can process up to 32 flows in parallel.
>>> It uses 512-bit width instructions/registers and provides higher
>>> performance then AVX512X16, but can cause frequency level change.
>>> On my SKX box test-acl shows ~50-70% improvement
>>> (depending on rule-set and input burst size)
>>> when switching from AVX2 to AVX512X32 classify algorithms.
>>> ICX and CLX testing showed similar level of speedup.
>>>
>>> Current AVX512 classify implementation is only supported on x86_64.
>>> Note that this series introduce a formal ABI incompatibility
>>
>> The only API change I can see is in rte_acl_classify_alg() new error
>> code but I don't think we need an announcement for this.
>> As for ABI, we are breaking it in this release, so I see no pb.
>
> Cool, I just wanted to underline that patch #3:
> https://patches.dpdk.org/patch/79786/
> is a formal ABI breakage.
As David said, this is an ABI breaking release - so there is no requirement to maintain compatibility.
https://doc.dpdk.org/guides/contributing/abi_policy.html
However the following requirements remain:-
* The acknowledgment of the maintainer of the component is mandatory, or if no maintainer is available for the component, the tree/sub-tree maintainer for that component must acknowledge the ABI change instead.
* The acknowledgment of three members of the technical board, as delegates of the technical board acknowledging the need for the ABI change, is also mandatory.
I guess you are the maintainer in this case, so that requirement is satisfied.
>
>>
>>
>>> with previous versions of ACL library.
>>>
>>> v2 -> v3:
>>> Fix checkpatch warnings
>>> Split AVX512 algorithm into two and deduplicate common code
>>
>> Patch 7 still references a RTE_MACHINE_CPUFLAG flag.
>> Can you rework now that those flags have been dropped?
>>
>
> Should be fixed in v4:
> https://patches.dpdk.org/project/dpdk/list/?series=12721
>
> One more thing to mention - this series has a dependency on Vladimir's patch:
> https://patches.dpdk.org/patch/79310/ ("eal/x86: introduce AVX 512-bit type"),
> so CI/travis would still report an error.
>
> Thanks
> Konstantin
>
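For readers following along: once the series is applied, opting in to
the new method from an application should look roughly like this (a
sketch; RTE_ACL_CLASSIFY_AVX512X32 is the enum value this series
introduces, and the call is expected to fail if the CPU lacks the
required ISA):

    #include <rte_acl.h>

    /* ctx was built with rte_acl_create()/rte_acl_build() as usual */
    int ret = rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_AVX512X32);
    if (ret != 0) {
        /* ISA not available (or unknown alg): keep the default method */
    }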
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
2020-10-13 19:06 0% ` Medvedkin, Vladimir
@ 2020-10-13 19:48 0% ` Michel Machado
2020-10-14 13:10 0% ` Medvedkin, Vladimir
0 siblings, 1 reply; 200+ results
From: Michel Machado @ 2020-10-13 19:48 UTC (permalink / raw)
To: Medvedkin, Vladimir, Kevin Traynor, Ruifeng Wang,
Bruce Richardson, Cody Doucette, Andre Nathan, Qiaobin Fu
Cc: dev, Honnappa Nagarahalli, nd
On 10/13/20 3:06 PM, Medvedkin, Vladimir wrote:
>
>
> On 13/10/2020 18:46, Michel Machado wrote:
>> On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
>>> Hi Michel,
>>>
>>> Could you please describe a condition when LPM gets inconsistent? As
>>> I can see if there is no free tbl8 it will return -ENOSPC.
>>
>> Consider this simple example, we need to add the following two
>> prefixes with different next hops: 10.99.0.0/16, 18.99.99.128/25. If
>> the LPM table is out of tbl8s, the second prefix is not added and
>> Gatekeeper will make decisions in violation of the policy. The data
>> structure of the LPM table is consistent, but its content inconsistent
>> with the policy.
>
> Aha, thanks. So do I understand correctly that you need to add a set of
> routes atomically (either the entire set is installed or nothing)?
Yes.
> If so, then I would suggest having 2 lpm and switching them atomically
> after a successful addition. As for now, even if you have enough tbl8's,
> routes are installed non atomically, i.e. there will be a time gap
> between adding two routes, so in this time interval the table will be
> inconsistent with the policy.
> Also, if new lpm algorithms are added to the DPDK, they won't have such
> a thing as tbl8.
Our code already deals with synchronization.
>> We minimize the need of replacing a LPM table by allocating LPM
>> tables with the double of what we need (see example here
>> https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183),
>> but the code must be ready for unexpected needs that may arise in
>> production.
>>
>
> Usually, the table is initialized with a large enough number of entries,
> enough to add a possible number of routes. One tbl8 group takes up 1Kb
> of memory which is nothing comparing to the size of tbl24 which is 64Mb.
When the prefixes come from BGP, initializing a large enough table
is fine. But when prefixes come from threat intelligence, the number of
prefixes can vary wildly, and prefixes longer than 24 bits are far more
common.
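To make the sizing concrete: a route deeper than /24 fits entirely
inside a single /24, so the tbl8 groups needed are bounded by the number
of distinct /24s that contain at least one such route. A sketch of the
resulting configuration (the nb_* counters are illustrative):

    /* nb_routes and nb_deep_24s come from scanning the policy update;
     * doubling gives the headroom mentioned in policy.lua. */
    struct rte_lpm_config config = {
        .max_rules = nb_routes * 2,
        .number_tbl8s = nb_deep_24s * 2,
        .flags = 0,
    };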
> P.S. consider using rte_fib library, it has a number of advantages over
> LPM. You can replace the loop in __lookup_fib_bulk() with a bulk lookup
> call and this will probably increase the speed.
I'm not aware of the rte_fib library. The only documentation that I
found on Google was https://doc.dpdk.org/api/rte__fib_8h.html and it
just says "FIB (Forwarding information base) implementation for IPv4
Longest Prefix Match".
>>>
>>> On 13/10/2020 15:58, Michel Machado wrote:
>>>> Hi Kevin,
>>>>
>>>> We do need fields max_rules and number_tbl8s of struct rte_lpm,
>>>> so the removal would force us to have another patch to our local
>>>> copy of DPDK. We'd rather avoid this new local patch because we wish
>>>> to eventually be in sync with the stock DPDK.
>>>>
>>>> Those fields are needed in Gatekeeper because we found a
>>>> condition in an ongoing deployment in which the entries of some LPM
>>>> tables may suddenly change a lot to reflect policy changes. To avoid
>>>> getting into a state in which the LPM table is inconsistent because
>>>> it cannot fit all the new entries, we compute the needed parameters
>>>> to support the new entries, and compare with the current parameters.
>>>> If the current table doesn't fit everything, we have to replace it
>>>> with a new LPM table.
>>>>
>>>> If there were a way to obtain the struct rte_lpm_config of a
>>>> given LPM table, it would cleanly address our need. We have the same
>>>> need in IPv6 and have a local patch to work around it (see
>>>> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f).
>>>> Thus, an IPv4 and IPv6 solution would be best.
>>>>
>>>> PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to
>>>> this disscussion.
>>>>
>>>> [ ]'s
>>>> Michel Machado
>>>>
>>>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>>>>> Hi Gatekeeper maintainers (I think),
>>>>>
>>>>> fyi - there is a proposal to remove some members of a struct in
>>>>> DPDK LPM
>>>>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
>>>>> as it's an LTS I guess it would probably hit Debian in a few months.
>>>>>
>>>>> The full thread is here:
>>>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>>>>>
>>>>> Maybe you can take a look and tell us if they are needed in Gatekeeper
>>>>> or you can workaround it?
>>>>>
>>>>> thanks,
>>>>> Kevin.
>>>>>
>>>>> [1]
>>>>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248
>>>>>
>>>>>
>>>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>>>>> <bruce.richardson@intel.com>
>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>>>>
>>>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>>>>> <Ruifeng.Wang@arm.com>
>>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>>>>
>>>>>>>>> Hi Ruifeng,
>>>>>>>>>
>>>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no
>>>>>>>>>>> need to
>>>>>>>>>>> be exposed to the user.
>>>>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>>>>> maintainability.
>>>>>>>>>>>
>>>>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>>>>> ---
>>>>>>>>>>> lib/librte_lpm/rte_lpm.c | 152
>>>>>>>>>>> +++++++++++++++++++++++---------------
>>>>>>>>> -
>>>>>>>>>>> lib/librte_lpm/rte_lpm.h | 7 --
>>>>>>>>>>> 2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>>>>
>>>>>>>>>> <snip>
>>>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>>>>
>>>>>>>>>>> /** @internal LPM structure. */
>>>>>>>>>>> struct rte_lpm {
>>>>>>>>>>> - /* LPM metadata. */
>>>>>>>>>>> - char name[RTE_LPM_NAMESIZE]; /**< Name of the
>>>>>>>>>>> lpm. */
>>>>>>>>>>> - uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>>>>> - uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>>>>> - struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>>>>>> Rule info table. */
>>>>>>>>>>> -
>>>>>>>>>>> /* LPM Tables. */
>>>>>>>>>>> struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>>>>> __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>>>>> struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>>>>> - struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>>>>> };
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>>>>
>>>>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>>>>> different, and that return value could be used by
>>>>>>>>>> rte_lpm_lookup()
>>>>>>>>>> which as a static inline function will be in the binary and using
>>>>>>>>>> the old structure offsets.]
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>>>>> without prior notice.
>>>>>>>>>
>>>>>>>> So if the change wants to happen in 20.11, a deprecation notice
>>>>>>>> should
>>>>>>>> have been added in 20.08.
>>>>>>>> I should have added a deprecation notice. This change will have
>>>>>>>> to wait for
>>>>>>> next ABI update window.
>>>>>>>>
>>>>>>>
>>>>>>> Do you plan to extend? or is this just speculative?
>>>>>> It is speculative.
>>>>>>
>>>>>>>
>>>>>>> A quick scan and there seems to be several projects using some of
>>>>>>> these
>>>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>>>>> gatekeeper. I didn't look at the details to see if they are
>>>>>>> really needed.
>>>>>>>
>>>>>>> Not sure how much notice they'd need or if they update DPDK much,
>>>>>>> but I
>>>>>>> think it's worth having a closer look as to how they use lpm and
>>>>>>> what the
>>>>>>> impact to them is.
>>>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't
>>>>>> access the members to be hided.
>>>>>> They will not be impacted by this patch.
>>>>>> But Gatekeeper accesses the rte_lpm internal members that to be
>>>>>> hided. Its compilation will be broken with this patch.
>>>>>>
>>>>>>>
>>>>>>>> Thanks.
>>>>>>>> Ruifeng
>>>>>>>>>>> /** LPM RCU QSBR configuration structure. */
>>>>>>>>>>> --
>>>>>>>>>>> 2.17.1
>>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Regards,
>>>>>>>>> Vladimir
>>>>>>
>>>>>
>>>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
2020-10-13 15:41 0% ` Medvedkin, Vladimir
@ 2020-10-13 17:46 0% ` Michel Machado
2020-10-13 19:06 0% ` Medvedkin, Vladimir
0 siblings, 1 reply; 200+ results
From: Michel Machado @ 2020-10-13 17:46 UTC (permalink / raw)
To: Medvedkin, Vladimir, Kevin Traynor, Ruifeng Wang,
Bruce Richardson, Cody Doucette, Andre Nathan, Qiaobin Fu
Cc: dev, Honnappa Nagarahalli, nd
On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
> Hi Michel,
>
> Could you please describe a condition when LPM gets inconsistent? As I
> can see if there is no free tbl8 it will return -ENOSPC.
Consider this simple example: we need to add the following two
prefixes with different next hops: 10.99.0.0/16 and 18.99.99.128/25. If
the LPM table is out of tbl8s, the second prefix is not added, and
Gatekeeper will make decisions in violation of the policy. The data
structure of the LPM table is consistent, but its content is
inconsistent with the policy.
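In code, the failure mode is just a partially applied policy: the
second add below returns -ENOSPC and nothing rolls the first one back.
A minimal sketch (lpm, nh1 and nh2 are assumed to exist):

    int ret;

    ret = rte_lpm_add(lpm, RTE_IPV4(10, 99, 0, 0), 16, nh1);    /* fits in tbl24 */
    ret = rte_lpm_add(lpm, RTE_IPV4(18, 99, 99, 128), 25, nh2); /* needs a tbl8 */
    if (ret == -ENOSPC) {
        /* structurally valid table, but only half the policy installed */
    }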
We minimize the need to replace an LPM table by allocating LPM
tables with double the capacity we expect to need (see the example here
https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183),
but the code must be ready for unexpected needs that may arise in
production.
>
> On 13/10/2020 15:58, Michel Machado wrote:
>> Hi Kevin,
>>
>> We do need fields max_rules and number_tbl8s of struct rte_lpm, so
>> the removal would force us to have another patch to our local copy of
>> DPDK. We'd rather avoid this new local patch because we wish to
>> eventually be in sync with the stock DPDK.
>>
>> Those fields are needed in Gatekeeper because we found a condition
>> in an ongoing deployment in which the entries of some LPM tables may
>> suddenly change a lot to reflect policy changes. To avoid getting into
>> a state in which the LPM table is inconsistent because it cannot fit
>> all the new entries, we compute the needed parameters to support the
>> new entries, and compare with the current parameters. If the current
>> table doesn't fit everything, we have to replace it with a new LPM table.
>>
>> If there were a way to obtain the struct rte_lpm_config of a given
>> LPM table, it would cleanly address our need. We have the same need in
>> IPv6 and have a local patch to work around it (see
>> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f).
>> Thus, an IPv4 and IPv6 solution would be best.
>>
>> PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to this
>> disscussion.
>>
>> [ ]'s
>> Michel Machado
>>
>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>>> Hi Gatekeeper maintainers (I think),
>>>
>>> fyi - there is a proposal to remove some members of a struct in DPDK LPM
>>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
>>> as it's an LTS I guess it would probably hit Debian in a few months.
>>>
>>> The full thread is here:
>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>>>
>>> Maybe you can take a look and tell us if they are needed in Gatekeeper
>>> or you can workaround it?
>>>
>>> thanks,
>>> Kevin.
>>>
>>> [1]
>>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248
>>>
>>>
>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>>
>>>>> -----Original Message-----
>>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>>> <bruce.richardson@intel.com>
>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>>
>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>>> <Ruifeng.Wang@arm.com>
>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>>
>>>>>>> Hi Ruifeng,
>>>>>>>
>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
>>>>>>>>> be exposed to the user.
>>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>>> maintainability.
>>>>>>>>>
>>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>>> ---
>>>>>>>>> lib/librte_lpm/rte_lpm.c | 152
>>>>>>>>> +++++++++++++++++++++++---------------
>>>>>>> -
>>>>>>>>> lib/librte_lpm/rte_lpm.h | 7 --
>>>>>>>>> 2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>>
>>>>>>>> <snip>
>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>>
>>>>>>>>> /** @internal LPM structure. */
>>>>>>>>> struct rte_lpm {
>>>>>>>>> - /* LPM metadata. */
>>>>>>>>> - char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
>>>>>>>>> - uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>>> - uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>>> - struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>>>> Rule info table. */
>>>>>>>>> -
>>>>>>>>> /* LPM Tables. */
>>>>>>>>> struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>>> __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>>> struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>>> - struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>>> };
>>>>>>>>>
>>>>>>>>
>>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>>
>>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>>> different, and that return value could be used by rte_lpm_lookup()
>>>>>>>> which as a static inline function will be in the binary and using
>>>>>>>> the old structure offsets.]
>>>>>>>>
>>>>>>>
>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>>> without prior notice.
>>>>>>>
>>>>>> So if the change wants to happen in 20.11, a deprecation notice
>>>>>> should
>>>>>> have been added in 20.08.
>>>>>> I should have added a deprecation notice. This change will have to
>>>>>> wait for
>>>>> next ABI update window.
>>>>>>
>>>>>
>>>>> Do you plan to extend? or is this just speculative?
>>>> It is speculative.
>>>>
>>>>>
>>>>> A quick scan and there seems to be several projects using some of
>>>>> these
>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>>> gatekeeper. I didn't look at the details to see if they are really
>>>>> needed.
>>>>>
>>>>> Not sure how much notice they'd need or if they update DPDK much,
>>>>> but I
>>>>> think it's worth having a closer look as to how they use lpm and
>>>>> what the
>>>>> impact to them is.
>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't
>>>> access the members to be hided.
>>>> They will not be impacted by this patch.
>>>> But Gatekeeper accesses the rte_lpm internal members that to be
>>>> hided. Its compilation will be broken with this patch.
>>>>
>>>>>
>>>>>> Thanks.
>>>>>> Ruifeng
>>>>>>>>> /** LPM RCU QSBR configuration structure. */
>>>>>>>>> --
>>>>>>>>> 2.17.1
>>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Regards,
>>>>>>> Vladimir
>>>>
>>>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
@ 2020-10-13 14:58 0% ` Michel Machado
2020-10-13 15:41 0% ` Medvedkin, Vladimir
0 siblings, 1 reply; 200+ results
From: Michel Machado @ 2020-10-13 14:58 UTC (permalink / raw)
To: Kevin Traynor, Ruifeng Wang, Medvedkin, Vladimir,
Bruce Richardson, Cody Doucette, Andre Nathan, Qiaobin Fu
Cc: dev, Honnappa Nagarahalli, nd
Hi Kevin,
We do need fields max_rules and number_tbl8s of struct rte_lpm, so
the removal would force us to have another patch to our local copy of
DPDK. We'd rather avoid this new local patch because we wish to
eventually be in sync with the stock DPDK.
Those fields are needed in Gatekeeper because we found a condition
in an ongoing deployment in which the entries of some LPM tables may
suddenly change a lot to reflect policy changes. To avoid getting into a
state in which the LPM table is inconsistent because it cannot fit all
the new entries, we compute the parameters needed to support the new
entries and compare them with the current parameters. If the current
table doesn't fit everything, we have to replace it with a new LPM table.
If there were a way to obtain the struct rte_lpm_config of a given
LPM table, it would cleanly address our need. We have the same need in
IPv6 and have a local patch to work around it (see
https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f).
Thus, an IPv4 and IPv6 solution would be best.
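Until such a getter exists, one workaround is to keep a shadow copy of
the rte_lpm_config used at creation time and rebuild from it. A rough
sketch of the replacement path (error handling and re-population of the
rules are omitted):

    struct gk_lpm {
        struct rte_lpm *lpm;
        struct rte_lpm_config conf; /* shadow of the creation parameters */
    };

    static int
    gk_lpm_grow(struct gk_lpm *g, const char *name, int socket,
                uint32_t need_rules, uint32_t need_tbl8s)
    {
        if (need_rules <= g->conf.max_rules &&
            need_tbl8s <= g->conf.number_tbl8s)
            return 0; /* current table still fits */

        g->conf.max_rules = need_rules * 2;
        g->conf.number_tbl8s = need_tbl8s * 2;
        rte_lpm_free(g->lpm);
        g->lpm = rte_lpm_create(name, socket, &g->conf);
        return g->lpm == NULL ? -1 : 0;
    }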
PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to this
discussion.
[ ]'s
Michel Machado
On 10/13/20 9:53 AM, Kevin Traynor wrote:
> Hi Gatekeeper maintainers (I think),
>
> fyi - there is a proposal to remove some members of a struct in DPDK LPM
> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
> as it's an LTS I guess it would probably hit Debian in a few months.
>
> The full thread is here:
> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>
> Maybe you can take a look and tell us if they are needed in Gatekeeper
> or you can workaround it?
>
> thanks,
> Kevin.
>
> [1]
> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248
>
> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>
>>> -----Original Message-----
>>> From: Kevin Traynor <ktraynor@redhat.com>
>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>> <bruce.richardson@intel.com>
>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>
>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>
>>>>> -----Original Message-----
>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>> <Ruifeng.Wang@arm.com>
>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>
>>>>> Hi Ruifeng,
>>>>>
>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
>>>>>>> be exposed to the user.
>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>> maintainability.
>>>>>>>
>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>> ---
>>>>>>> lib/librte_lpm/rte_lpm.c | 152
>>>>>>> +++++++++++++++++++++++---------------
>>>>> -
>>>>>>> lib/librte_lpm/rte_lpm.h | 7 --
>>>>>>> 2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>
>>>>>> <snip>
>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>
>>>>>>> /** @internal LPM structure. */
>>>>>>> struct rte_lpm {
>>>>>>> - /* LPM metadata. */
>>>>>>> - char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
>>>>>>> - uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>> - uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>> - struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>> Rule info table. */
>>>>>>> -
>>>>>>> /* LPM Tables. */
>>>>>>> struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>> __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>> struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>> - struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>> };
>>>>>>>
>>>>>>
>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>
>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>> different, and that return value could be used by rte_lpm_lookup()
>>>>>> which as a static inline function will be in the binary and using
>>>>>> the old structure offsets.]
>>>>>>
>>>>>
>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>> without prior notice.
>>>>>
>>>> So if the change wants to happen in 20.11, a deprecation notice should
>>>> have been added in 20.08.
>>>> I should have added a deprecation notice. This change will have to wait for
>>> next ABI update window.
>>>>
>>>
>>> Do you plan to extend? or is this just speculative?
>> It is speculative.
>>
>>>
>>> A quick scan and there seems to be several projects using some of these
>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>> gatekeeper. I didn't look at the details to see if they are really needed.
>>>
>>> Not sure how much notice they'd need or if they update DPDK much, but I
>>> think it's worth having a closer look as to how they use lpm and what the
>>> impact to them is.
>> Checked the projects listed above. BESS, NFF-Go and DPVS don't access the members to be hided.
>> They will not be impacted by this patch.
>> But Gatekeeper accesses the rte_lpm internal members that to be hided. Its compilation will be broken with this patch.
>>
>>>
>>>> Thanks.
>>>> Ruifeng
>>>>>>> /** LPM RCU QSBR configuration structure. */
>>>>>>> --
>>>>>>> 2.17.1
>>>>>>>
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Vladimir
>>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI
@ 2020-10-13 19:20 4% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2020-10-13 19:20 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula
Cc: Van Haaren, Harry, McDaniel, Timothy, Jerin Jacob Kollanukkaran,
Kovacevic, Marko, Ori Kam, Richardson, Bruce, Nicolau, Radu,
Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori, dev, Carrillo,
Erik G, Eads, Gage, hemant.agrawal
On Tue, Oct 13, 2020 at 12:39 AM Pavan Nikhilesh Bhagavatula
<pbhagavatula@marvell.com> wrote:
>
> >> Subject: [PATCH v2 2/2] eventdev: update app and examples for new
> >eventdev ABI
> >>
> >> Several data structures and constants changed, or were added,
> >> in the previous patch. This commit updates the dependent
> >> apps and examples to use the new ABI.
> >>
> >> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
>
> With fixes to trace framework
> Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
@McDaniel, Timothy,
The series has apply issues[1]. Could you send the final version with
the Acks of Harry and Pavan?
I will merge this series for RC1, and let's move the DLB PMD driver
updates to RC2.
[1]
[for-main]dell[dpdk-next-eventdev] $ date &&
/home/jerin/config/scripts/build_each_patch.sh /tmp/r/ && date
Wed Oct 14 12:41:19 AM IST 2020
HEAD is now at b7a8eea2c app/eventdev: enable fast free offload
meson build test
Applying: eventdev: eventdev: express DLB/DLB2 PMD constraints
Using index info to reconstruct a base tree...
M drivers/event/dpaa2/dpaa2_eventdev.c
M drivers/event/octeontx/ssovf_evdev.c
M drivers/event/octeontx2/otx2_evdev.c
M lib/librte_eventdev/rte_event_eth_tx_adapter.c
M lib/librte_eventdev/rte_eventdev.c
Falling back to patching base and 3-way merge...
Auto-merging lib/librte_eventdev/rte_eventdev.c
CONFLICT (content): Merge conflict in lib/librte_eventdev/rte_eventdev.c
Auto-merging lib/librte_eventdev/rte_event_eth_tx_adapter.c
Auto-merging drivers/event/octeontx2/otx2_evdev.c
Auto-merging drivers/event/octeontx/ssovf_evdev.c
Auto-merging drivers/event/dpaa2/dpaa2_eventdev.c
Recorded preimage for 'lib/librte_eventdev/rte_eventdev.c'
error: Failed to merge in the changes.
Patch failed at 0001 eventdev: eventdev: express DLB/DLB2 PMD constraints
hint: Use 'git am --show-current-patch=diff' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
git am failed /tmp/r//v2-1-2-eventdev-eventdev-express-DLB-DLB2-PMD-constraints
HEAD is now at b7a8eea2c app/eventdev: enable fast free offload
Wed Oct 14 12:41:19 AM IST 2020
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
2020-10-13 17:46 0% ` Michel Machado
@ 2020-10-13 19:06 0% ` Medvedkin, Vladimir
2020-10-13 19:48 0% ` Michel Machado
0 siblings, 1 reply; 200+ results
From: Medvedkin, Vladimir @ 2020-10-13 19:06 UTC (permalink / raw)
To: Michel Machado, Kevin Traynor, Ruifeng Wang, Bruce Richardson,
Cody Doucette, Andre Nathan, Qiaobin Fu
Cc: dev, Honnappa Nagarahalli, nd
On 13/10/2020 18:46, Michel Machado wrote:
> On 10/13/20 11:41 AM, Medvedkin, Vladimir wrote:
>> Hi Michel,
>>
>> Could you please describe a condition when LPM gets inconsistent? As I
>> can see if there is no free tbl8 it will return -ENOSPC.
>
> Consider this simple example, we need to add the following two
> prefixes with different next hops: 10.99.0.0/16, 18.99.99.128/25. If the
> LPM table is out of tbl8s, the second prefix is not added and Gatekeeper
> will make decisions in violation of the policy. The data structure of
> the LPM table is consistent, but its content inconsistent with the policy.
Aha, thanks. So do I understand correctly that you need to add a set of
routes atomically (either the entire set is installed or nothing)?
If so, then I would suggest having two LPM tables and switching them
atomically after a successful addition. As of now, even if you have
enough tbl8s, routes are installed non-atomically, i.e. there will be a
time gap between adding two routes, so in this interval the table will
be inconsistent with the policy.
Also, if new LPM algorithms are added to DPDK, they won't have such a
thing as tbl8.
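The switch itself could be an atomic pointer swap once the new table is
fully populated. A sketch, assuming the application provides some
reader-quiescence step (e.g. rte_rcu_qsbr) before freeing the old
table, and with build_lpm_from_policy() as an illustrative helper:

    static struct rte_lpm *active_lpm; /* dereferenced by the lookup path */

    /* control path: build the replacement fully, then publish it */
    struct rte_lpm *new_lpm = build_lpm_from_policy(policy);
    struct rte_lpm *old_lpm = active_lpm;

    __atomic_store_n(&active_lpm, new_lpm, __ATOMIC_RELEASE);
    /* wait until no reader can still hold old_lpm, then reclaim it */
    rte_lpm_free(old_lpm);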
>
> We minimize the need of replacing a LPM table by allocating LPM
> tables with the double of what we need (see example here
> https://github.com/AltraMayor/gatekeeper/blob/95d1d6e8201861a0d0c698bfd06ad606674f1e07/lua/examples/policy.lua#L172-L183),
> but the code must be ready for unexpected needs that may arise in
> production.
>
Usually, the table is initialized with a number of entries large enough
to hold the expected number of routes. One tbl8 group takes up 1 KB of
memory, which is negligible compared to the 64 MB size of tbl24.
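For reference, both numbers follow from the 4-byte entry layout: tbl24
holds 2^24 entries (2^24 * 4 B = 64 MB), while one tbl8 group holds 256
entries (256 * 4 B = 1 KB).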
P.S. Consider using the rte_fib library; it has a number of advantages
over LPM. You can replace the loop in __lookup_fib_bulk() with a single
bulk lookup call, which will probably increase the speed.
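Since rte_fib is said elsewhere in the thread to be thinly documented,
here is a minimal sketch of creating a DIR24-8 FIB and doing the bulk
lookup. The field and enum names follow the rte_fib API of this cycle
and should be checked against rte_fib.h; socket_id, nh1, ips and
nb_pkts are illustrative:

    #include <rte_fib.h>

    struct rte_fib_conf conf = {
        .type = RTE_FIB_DIR24_8,
        .default_nh = 0,            /* next hop returned on lookup miss */
        .max_routes = 1 << 20,
        .dir24_8 = {
            .nh_sz = RTE_FIB_DIR24_8_4B,
            .num_tbl8 = 1 << 15,
        },
    };
    struct rte_fib *fib = rte_fib_create("gk_fib", socket_id, &conf);

    rte_fib_add(fib, RTE_IPV4(10, 99, 0, 0), 16, nh1);

    /* one call replaces the per-packet lookup loop */
    uint64_t next_hops[MAX_BURST];
    rte_fib_lookup_bulk(fib, ips, next_hops, nb_pkts);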
>>
>> On 13/10/2020 15:58, Michel Machado wrote:
>>> Hi Kevin,
>>>
>>> We do need fields max_rules and number_tbl8s of struct rte_lpm,
>>> so the removal would force us to have another patch to our local copy
>>> of DPDK. We'd rather avoid this new local patch because we wish to
>>> eventually be in sync with the stock DPDK.
>>>
>>> Those fields are needed in Gatekeeper because we found a
>>> condition in an ongoing deployment in which the entries of some LPM
>>> tables may suddenly change a lot to reflect policy changes. To avoid
>>> getting into a state in which the LPM table is inconsistent because
>>> it cannot fit all the new entries, we compute the needed parameters
>>> to support the new entries, and compare with the current parameters.
>>> If the current table doesn't fit everything, we have to replace it
>>> with a new LPM table.
>>>
>>> If there were a way to obtain the struct rte_lpm_config of a
>>> given LPM table, it would cleanly address our need. We have the same
>>> need in IPv6 and have a local patch to work around it (see
>>> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f).
>>> Thus, an IPv4 and IPv6 solution would be best.
>>>
>>> PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to this
>>> disscussion.
>>>
>>> [ ]'s
>>> Michel Machado
>>>
>>> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>>>> Hi Gatekeeper maintainers (I think),
>>>>
>>>> fyi - there is a proposal to remove some members of a struct in DPDK
>>>> LPM
>>>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
>>>> as it's an LTS I guess it would probably hit Debian in a few months.
>>>>
>>>> The full thread is here:
>>>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>>>>
>>>> Maybe you can take a look and tell us if they are needed in Gatekeeper
>>>> or you can workaround it?
>>>>
>>>> thanks,
>>>> Kevin.
>>>>
>>>> [1]
>>>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248
>>>>
>>>>
>>>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>>>> <bruce.richardson@intel.com>
>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>>>
>>>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>>>> <Ruifeng.Wang@arm.com>
>>>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>>>
>>>>>>>> Hi Ruifeng,
>>>>>>>>
>>>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
>>>>>>>>>> be exposed to the user.
>>>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>>>> maintainability.
>>>>>>>>>>
>>>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>>>> ---
>>>>>>>>>> lib/librte_lpm/rte_lpm.c | 152
>>>>>>>>>> +++++++++++++++++++++++---------------
>>>>>>>> -
>>>>>>>>>> lib/librte_lpm/rte_lpm.h | 7 --
>>>>>>>>>> 2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>>>
>>>>>>>>> <snip>
>>>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>>>
>>>>>>>>>> /** @internal LPM structure. */
>>>>>>>>>> struct rte_lpm {
>>>>>>>>>> - /* LPM metadata. */
>>>>>>>>>> - char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
>>>>>>>>>> - uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>>>> - uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>>>> - struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>>>>> Rule info table. */
>>>>>>>>>> -
>>>>>>>>>> /* LPM Tables. */
>>>>>>>>>> struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>>>> __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>>>> struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>>>> - struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>>>> };
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>>>
>>>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>>>> different, and that return value could be used by rte_lpm_lookup()
>>>>>>>>> which as a static inline function will be in the binary and using
>>>>>>>>> the old structure offsets.]
>>>>>>>>>
>>>>>>>>
>>>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>>>> without prior notice.
>>>>>>>>
>>>>>>> So if the change wants to happen in 20.11, a deprecation notice
>>>>>>> should
>>>>>>> have been added in 20.08.
>>>>>>> I should have added a deprecation notice. This change will have
>>>>>>> to wait for
>>>>>> next ABI update window.
>>>>>>>
>>>>>>
>>>>>> Do you plan to extend? or is this just speculative?
>>>>> It is speculative.
>>>>>
>>>>>>
>>>>>> A quick scan and there seems to be several projects using some of
>>>>>> these
>>>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>>>> gatekeeper. I didn't look at the details to see if they are really
>>>>>> needed.
>>>>>>
>>>>>> Not sure how much notice they'd need or if they update DPDK much,
>>>>>> but I
>>>>>> think it's worth having a closer look as to how they use lpm and
>>>>>> what the
>>>>>> impact to them is.
>>>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't
>>>>> access the members to be hided.
>>>>> They will not be impacted by this patch.
>>>>> But Gatekeeper accesses the rte_lpm internal members that to be
>>>>> hided. Its compilation will be broken with this patch.
>>>>>
>>>>>>
>>>>>>> Thanks.
>>>>>>> Ruifeng
>>>>>>>>>> /** LPM RCU QSBR configuration structure. */
>>>>>>>>>> --
>>>>>>>>>> 2.17.1
>>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Regards,
>>>>>>>> Vladimir
>>>>>
>>>>
>>
--
Regards,
Vladimir
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v4 2/5] ethdev: add new attributes to hairpin config
@ 2020-10-13 16:19 4% ` Bing Zhao
0 siblings, 0 replies; 200+ results
From: Bing Zhao @ 2020-10-13 16:19 UTC (permalink / raw)
To: thomas, orika, ferruh.yigit, arybchenko, mdr, nhorman,
bernard.iremonger, beilei.xing, wenzhuo.lu
Cc: dev
To support two-port hairpin mode and keep backward compatibility
for the application, two new attribute members are added to the hairpin
queue configuration structure.
`tx_explicit` indicates whether the application itself will insert the
TX part of the flow rules. If not set, the PMD will insert the rules
implicitly.
`manual_bind` indicates whether the hairpin TX queue and its peer RX
queue must be bound manually by the application; if not set, they are
bound automatically during the device start stage.
Different TX and RX queue pairs could have different values, but it
is highly recommended that all paired queues between one egress port and
its peer ingress port have the same values, to avoid inconsistent
behavior. Whether these attributes are actually supported is checked
and decided by the PMD drivers.
In single-port hairpin, if both attributes are left at zero, the
behavior remains the same as before: no bind API needs to be called and
no TX flow rules need to be inserted manually by the application.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
v4: squash document update and more info for the two new attributes
v2: optimize the structure and remove unused macros
---
doc/guides/prog_guide/rte_flow.rst | 3 +++
doc/guides/rel_notes/release_20_11.rst | 6 ++++++
lib/librte_ethdev/rte_ethdev.c | 8 ++++----
lib/librte_ethdev/rte_ethdev.h | 27 ++++++++++++++++++++++++++-
4 files changed, 39 insertions(+), 5 deletions(-)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 119b128..bb54d67 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2592,6 +2592,9 @@ set, unpredictable value will be seen depending on driver implementation. For
loopback/hairpin packet, metadata set on Rx/Tx may or may not be propagated to
the other path depending on HW capability.
+In hairpin case with TX explicit flow mode, metadata could (not mandatory) be
+used to connect the RX and TX flows if it can be propagated from RX to TX path.
+
.. _table_rte_flow_action_set_meta:
.. table:: SET_META
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 6b3d223..a1e20a6 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -63,6 +63,7 @@ New Features
* **Updated the ethdev library to support hairpin between two ports.**
New APIs are introduced to support binding / unbinding 2 ports hairpin.
+ Hairpin TX part flow rules can be inserted explicitly.
* **Updated Broadcom bnxt driver.**
@@ -318,6 +319,11 @@ ABI Changes
* ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
+ * ``struct rte_eth_hairpin_conf`` has two new members:
+
+ * ``uint32_t tx_explicit:1;``
+ * ``uint32_t manual_bind:1;``
+
Known Issues
------------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index b6371fb..14b9f3a 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1954,13 +1954,13 @@ struct rte_eth_dev *
}
if (conf->peer_count > cap.max_rx_2_tx) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Rx queue(=%hu), should be: <= %hu",
+ "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu",
conf->peer_count, cap.max_rx_2_tx);
return -EINVAL;
}
if (conf->peer_count == 0) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Rx queue(=%hu), should be: > 0",
+ "Invalid value for number of peers for Rx queue(=%u), should be: > 0",
conf->peer_count);
return -EINVAL;
}
@@ -2125,13 +2125,13 @@ struct rte_eth_dev *
}
if (conf->peer_count > cap.max_tx_2_rx) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Tx queue(=%hu), should be: <= %hu",
+ "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu",
conf->peer_count, cap.max_tx_2_rx);
return -EINVAL;
}
if (conf->peer_count == 0) {
RTE_ETHDEV_LOG(ERR,
- "Invalid value for number of peers for Tx queue(=%hu), should be: > 0",
+ "Invalid value for number of peers for Tx queue(=%u), should be: > 0",
conf->peer_count);
return -EINVAL;
}
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 5106098..938df08 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1045,7 +1045,32 @@ struct rte_eth_hairpin_peer {
* A structure used to configure hairpin binding.
*/
struct rte_eth_hairpin_conf {
- uint16_t peer_count; /**< The number of peers. */
+ uint32_t peer_count:16; /**< The number of peers. */
+
+ /**
+ * Explicit TX flow rule mode. One hairpin pair of queues should have
+ * the same attribute. The actual support depends on the PMD.
+ *
+ * - When set, the user should be responsible for inserting the hairpin
+ * TX part flows and removing them.
+ * - When clear, the PMD will try to handle the TX part of the flows,
+ * e.g., by splitting one flow into two parts.
+ */
+ uint32_t tx_explicit:1;
+
+ /**
+ * Manually bind hairpin queues. One hairpin pair of queues should have
+ * the same attribute. The actual support depends on the PMD.
+ *
+ * - When set, to enable hairpin, the user should call the hairpin bind
+ * API after all the queues are set up properly and the ports are
+ * started. Also, the hairpin unbind API should be called accordingly
+ * before stopping a port that with hairpin configured.
+ * - When clear, the PMD will try to enable the hairpin with the queues
+ * configured automatically during port start.
+ */
+ uint32_t manual_bind:1;
+ uint32_t reserved:14; /**< Reserved bits. */
struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS];
};
--
1.8.3.1
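For application writers following the series, a sketch of how the two
new attributes would be used for a two-port hairpin RX queue
(rte_eth_hairpin_bind() is the API added earlier in this series; the
port and queue ids are illustrative):

    struct rte_eth_hairpin_conf hairpin_conf = {
        .peer_count = 1,
        .tx_explicit = 1, /* the application inserts the TX flow rules */
        .manual_bind = 1, /* bind explicitly after both ports start */
    };
    hairpin_conf.peers[0].port = tx_port_id;  /* peer egress port */
    hairpin_conf.peers[0].queue = tx_queue_id;

    ret = rte_eth_rx_hairpin_queue_setup(rx_port_id, rx_queue_id,
                                         nb_desc, &hairpin_conf);
    /* ...configure the TX side similarly, start both ports, then: */
    ret = rte_eth_hairpin_bind(tx_port_id, rx_port_id);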
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
2020-10-13 14:58 0% ` Michel Machado
@ 2020-10-13 15:41 0% ` Medvedkin, Vladimir
2020-10-13 17:46 0% ` Michel Machado
0 siblings, 1 reply; 200+ results
From: Medvedkin, Vladimir @ 2020-10-13 15:41 UTC (permalink / raw)
To: Michel Machado, Kevin Traynor, Ruifeng Wang, Bruce Richardson,
Cody Doucette, Andre Nathan, Qiaobin Fu
Cc: dev, Honnappa Nagarahalli, nd
Hi Michel,
Could you please describe a condition under which the LPM gets
inconsistent? As far as I can see, if there is no free tbl8 it will
return -ENOSPC.
On 13/10/2020 15:58, Michel Machado wrote:
> Hi Kevin,
>
> We do need fields max_rules and number_tbl8s of struct rte_lpm, so
> the removal would force us to have another patch to our local copy of
> DPDK. We'd rather avoid this new local patch because we wish to
> eventually be in sync with the stock DPDK.
>
> Those fields are needed in Gatekeeper because we found a condition
> in an ongoing deployment in which the entries of some LPM tables may
> suddenly change a lot to reflect policy changes. To avoid getting into a
> state in which the LPM table is inconsistent because it cannot fit all
> the new entries, we compute the needed parameters to support the new
> entries, and compare with the current parameters. If the current table
> doesn't fit everything, we have to replace it with a new LPM table.
>
> If there were a way to obtain the struct rte_lpm_config of a given
> LPM table, it would cleanly address our need. We have the same need in
> IPv6 and have a local patch to work around it (see
> https://github.com/cjdoucette/dpdk/commit/3eaf124a781349b8ec8cd880db26a78115cb8c8f).
> Thus, an IPv4 and IPv6 solution would be best.
>
> PS: I've added Qiaobin Fu, another Gatekeeper maintainer, to this
> discussion.
>
> [ ]'s
> Michel Machado
>
> On 10/13/20 9:53 AM, Kevin Traynor wrote:
>> Hi Gatekeeper maintainers (I think),
>>
>> fyi - there is a proposal to remove some members of a struct in DPDK LPM
>> API that Gatekeeper is using [1]. It would be only from DPDK 20.11 but
>> as it's an LTS I guess it would probably hit Debian in a few months.
>>
>> The full thread is here:
>> http://inbox.dpdk.org/dev/20200907081518.46350-1-ruifeng.wang@arm.com/
>>
>> Maybe you can take a look and tell us if they are needed in Gatekeeper
>> or you can workaround it?
>>
>> thanks,
>> Kevin.
>>
>> [1]
>> https://github.com/AltraMayor/gatekeeper/blob/master/gt/lua_lpm.c#L235-L248
>>
>>
>> On 09/10/2020 07:54, Ruifeng Wang wrote:
>>>
>>>> -----Original Message-----
>>>> From: Kevin Traynor <ktraynor@redhat.com>
>>>> Sent: Wednesday, September 30, 2020 4:46 PM
>>>> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Medvedkin, Vladimir
>>>> <vladimir.medvedkin@intel.com>; Bruce Richardson
>>>> <bruce.richardson@intel.com>
>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>> Subject: Re: [dpdk-dev] [PATCH 2/2] lpm: hide internal data
>>>>
>>>> On 16/09/2020 04:17, Ruifeng Wang wrote:
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>>>>> Sent: Wednesday, September 16, 2020 12:28 AM
>>>>>> To: Bruce Richardson <bruce.richardson@intel.com>; Ruifeng Wang
>>>>>> <Ruifeng.Wang@arm.com>
>>>>>> Cc: dev@dpdk.org; Honnappa Nagarahalli
>>>>>> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
>>>>>> Subject: Re: [PATCH 2/2] lpm: hide internal data
>>>>>>
>>>>>> Hi Ruifeng,
>>>>>>
>>>>>> On 15/09/2020 17:02, Bruce Richardson wrote:
>>>>>>> On Mon, Sep 07, 2020 at 04:15:17PM +0800, Ruifeng Wang wrote:
>>>>>>>> Fields except tbl24 and tbl8 in rte_lpm structure have no need to
>>>>>>>> be exposed to the user.
>>>>>>>> Hide the unneeded exposure of structure fields for better ABI
>>>>>>>> maintainability.
>>>>>>>>
>>>>>>>> Suggested-by: David Marchand <david.marchand@redhat.com>
>>>>>>>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>>>>>> Reviewed-by: Phil Yang <phil.yang@arm.com>
>>>>>>>> ---
>>>>>>>> lib/librte_lpm/rte_lpm.c | 152
>>>>>>>> +++++++++++++++++++++++---------------
>>>>>> -
>>>>>>>> lib/librte_lpm/rte_lpm.h | 7 --
>>>>>>>> 2 files changed, 91 insertions(+), 68 deletions(-)
>>>>>>>>
>>>>>>> <snip>
>>>>>>>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>>>>>>>> index 03da2d37e..112d96f37 100644
>>>>>>>> --- a/lib/librte_lpm/rte_lpm.h
>>>>>>>> +++ b/lib/librte_lpm/rte_lpm.h
>>>>>>>> @@ -132,17 +132,10 @@ struct rte_lpm_rule_info {
>>>>>>>>
>>>>>>>> /** @internal LPM structure. */
>>>>>>>> struct rte_lpm {
>>>>>>>> - /* LPM metadata. */
>>>>>>>> - char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
>>>>>>>> - uint32_t max_rules; /**< Max. balanced rules per lpm. */
>>>>>>>> - uint32_t number_tbl8s; /**< Number of tbl8s. */
>>>>>>>> - struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
>>>>>> Rule info table. */
>>>>>>>> -
>>>>>>>> /* LPM Tables. */
>>>>>>>> struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
>>>>>>>> __rte_cache_aligned; /**< LPM tbl24 table. */
>>>>>>>> struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>>>>>>> - struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>>>>>>>> };
>>>>>>>>
>>>>>>>
>>>>>>> Since this changes the ABI, does it not need advance notice?
>>>>>>>
>>>>>>> [Basically the return value point from rte_lpm_create() will be
>>>>>>> different, and that return value could be used by rte_lpm_lookup()
>>>>>>> which as a static inline function will be in the binary and using
>>>>>>> the old structure offsets.]
>>>>>>>
>>>>>>
>>>>>> Agree with Bruce, this patch breaks ABI, so it can't be accepted
>>>>>> without prior notice.
>>>>>>
>>>>> So if the change wants to happen in 20.11, a deprecation notice should
>>>>> have been added in 20.08.
>>>>> I should have added a deprecation notice. This change will have to
>>>>> wait for
>>>> next ABI update window.
>>>>>
>>>>
>>>> Do you plan to extend? or is this just speculative?
>>> It is speculative.
>>>
>>>>
>>>> A quick scan and there seem to be several projects using some of these
>>>> members that you are proposing to hide. e.g. BESS, NFF-Go, DPVS,
>>>> gatekeeper. I didn't look at the details to see if they are really
>>>> needed.
>>>>
>>>> Not sure how much notice they'd need or if they update DPDK much, but I
>>>> think it's worth having a closer look as to how they use lpm and
>>>> what the
>>>> impact to them is.
>>> Checked the projects listed above. BESS, NFF-Go and DPVS don't access
>>> the members to be hidden.
>>> They will not be impacted by this patch.
>>> But Gatekeeper accesses the rte_lpm internal members that are to be
>>> hidden. Its compilation will be broken with this patch.
>>>
>>>>
>>>>> Thanks.
>>>>> Ruifeng
>>>>>>>> /** LPM RCU QSBR configuration structure. */
>>>>>>>> --
>>>>>>>> 2.17.1
>>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Regards,
>>>>>> Vladimir
>>>
>>
--
Regards,
Vladimir
^ permalink raw reply [relevance 0%]
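A rough sketch of the capacity check Michel describes — compare the
parameters needed for the new entry set with the current table and rebuild
when it cannot fit — assuming ``max_rules`` and ``number_tbl8s`` remain
readable (or an accessor for ``struct rte_lpm_config`` is added); the
helper name is hypothetical:

#include <rte_lpm.h>

/* Hypothetical helper: keep the current table if it is large enough
 * for the updated policy, otherwise free it and create a bigger one.
 * The caller re-adds all routes with rte_lpm_add() after a rebuild. */
static struct rte_lpm *
lpm_fit_or_rebuild(struct rte_lpm *cur, const char *name, int socket_id,
		uint32_t need_rules, uint32_t need_tbl8s)
{
	struct rte_lpm_config cfg = {
		.max_rules = need_rules,
		.number_tbl8s = need_tbl8s,
		.flags = 0,
	};

	if (cur != NULL && cur->max_rules >= need_rules &&
			cur->number_tbl8s >= need_tbl8s)
		return cur;	/* current table already fits */

	rte_lpm_free(cur);	/* NULL-safe in librte_lpm */
	return rte_lpm_create(name, socket_id, &cfg);
}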
* [dpdk-dev] [PATCH v5 03/18] eal: rename lcore word choices
@ 2020-10-13 15:25 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-10-13 15:25 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Anatoly Burakov, Ray Kinsella, Neil Horman,
Mattias Rönnblom, Harry van Haaren, Bruce Richardson,
Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy,
Pallavi Kadam
Replace master lcore with main lcore and
replace slave lcore with worker lcore.
Keep the old functions and macros but mark them as deprecated
for this release.
The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as a replacement.
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/rel_notes/deprecation.rst | 19 -------
doc/guides/rel_notes/release_20_11.rst | 11 ++++
lib/librte_eal/common/eal_common_dynmem.c | 10 ++--
lib/librte_eal/common/eal_common_launch.c | 36 ++++++------
lib/librte_eal/common/eal_common_lcore.c | 8 +--
lib/librte_eal/common/eal_common_options.c | 64 ++++++++++++----------
lib/librte_eal/common/eal_options.h | 2 +
lib/librte_eal/common/eal_private.h | 6 +-
lib/librte_eal/common/rte_random.c | 2 +-
lib/librte_eal/common/rte_service.c | 2 +-
lib/librte_eal/freebsd/eal.c | 28 +++++-----
lib/librte_eal/freebsd/eal_thread.c | 32 +++++------
lib/librte_eal/include/rte_eal.h | 4 +-
lib/librte_eal/include/rte_eal_trace.h | 4 +-
lib/librte_eal/include/rte_launch.h | 60 ++++++++++----------
lib/librte_eal/include/rte_lcore.h | 35 ++++++++----
lib/librte_eal/linux/eal.c | 28 +++++-----
lib/librte_eal/linux/eal_memory.c | 10 ++--
lib/librte_eal/linux/eal_thread.c | 32 +++++------
lib/librte_eal/rte_eal_version.map | 2 +-
lib/librte_eal/windows/eal.c | 16 +++---
lib/librte_eal/windows/eal_thread.c | 30 +++++-----
22 files changed, 230 insertions(+), 211 deletions(-)
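A minimal sketch of the launch pattern after this rename, using the APIs
introduced by this patch (the worker function and its output are
illustrative only; error handling is omitted):

#include <stdio.h>

#include <rte_common.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static int
worker_fn(void *arg __rte_unused)
{
	printf("worker lcore %u up, main lcore is %u\n",
	       rte_lcore_id(), rte_get_main_lcore());
	return 0;
}

int
main(int argc, char **argv)
{
	unsigned int lcore_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Replaces rte_eal_mp_remote_launch(..., SKIP_MASTER) and
	 * RTE_LCORE_FOREACH_SLAVE() from earlier releases. */
	rte_eal_mp_remote_launch(worker_fn, NULL, SKIP_MAIN);
	RTE_LCORE_FOREACH_WORKER(lcore_id)
		rte_eal_wait_lcore(lcore_id);

	rte_eal_cleanup();
	return 0;
}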
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 584e72087934..7271e9ca4d39 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -20,25 +20,6 @@ Deprecation Notices
* kvargs: The function ``rte_kvargs_process`` will get a new parameter
for returning key match count. It will ease handling of no-match case.
-* eal: To be more inclusive in choice of naming, the DPDK project
- will replace uses of master/slave in the API's and command line arguments.
-
- References to master/slave in relation to lcore will be renamed
- to initial/worker. The function ``rte_get_master_lcore()``
- will be renamed to ``rte_get_initial_lcore()``.
- For the 20.11 release, both names will be present and the
- old function will be marked with the deprecated tag.
- The old function will be removed in a future version.
-
- The iterator for worker lcores will also change:
- ``RTE_LCORE_FOREACH_SLAVE`` will be replaced with
- ``RTE_LCORE_FOREACH_WORKER``.
-
- The ``master-lcore`` argument to testpmd will be replaced
- with ``initial-lcore``. The old ``master-lcore`` argument
- will produce a runtime notification in 20.11 release, and
- be removed completely in a future release.
-
* eal: The terms blacklist and whitelist to describe devices used
by DPDK will be replaced in the 20.11 relase.
This will apply to command line arguments as well as macros.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index b7881f2e9d5a..8fa0605ad6cb 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -292,6 +292,17 @@ API Changes
* bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
+* eal: The function ``rte_get_master_lcore()`` has been
+ replaced by ``rte_get_main_lcore()``. The old function is deprecated.
+
+ The iterator for worker lcores has also changed:
+ ``RTE_LCORE_FOREACH_SLAVE`` is replaced with
+ ``RTE_LCORE_FOREACH_WORKER``.
+
+ The ``master-lcore`` argument to testpmd is replaced
+ with ``main-lcore``. The old ``master-lcore`` argument
+ produces a runtime notification in the 20.11 release, and
+ will be removed completely in a future release.
ABI Changes
-----------
diff --git a/lib/librte_eal/common/eal_common_dynmem.c b/lib/librte_eal/common/eal_common_dynmem.c
index 614648d8a4de..1cefe52443c4 100644
--- a/lib/librte_eal/common/eal_common_dynmem.c
+++ b/lib/librte_eal/common/eal_common_dynmem.c
@@ -427,19 +427,19 @@ eal_dynmem_calc_num_pages_per_socket(
total_size -= default_size;
}
#else
- /* in 32-bit mode, allocate all of the memory only on master
+ /* in 32-bit mode, allocate all of the memory only on main
* lcore socket
*/
total_size = internal_conf->memory;
for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
socket++) {
struct rte_config *cfg = rte_eal_get_configuration();
- unsigned int master_lcore_socket;
+ unsigned int main_lcore_socket;
- master_lcore_socket =
- rte_lcore_to_socket_id(cfg->master_lcore);
+ main_lcore_socket =
+ rte_lcore_to_socket_id(cfg->main_lcore);
- if (master_lcore_socket != socket)
+ if (main_lcore_socket != socket)
continue;
/* Update sizes */
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index cf52d717f68e..34f854ad80c8 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -21,55 +21,55 @@
* Wait until a lcore finished its job.
*/
int
-rte_eal_wait_lcore(unsigned slave_id)
+rte_eal_wait_lcore(unsigned worker_id)
{
- if (lcore_config[slave_id].state == WAIT)
+ if (lcore_config[worker_id].state == WAIT)
return 0;
- while (lcore_config[slave_id].state != WAIT &&
- lcore_config[slave_id].state != FINISHED)
+ while (lcore_config[worker_id].state != WAIT &&
+ lcore_config[worker_id].state != FINISHED)
rte_pause();
rte_rmb();
/* we are in finished state, go to wait state */
- lcore_config[slave_id].state = WAIT;
- return lcore_config[slave_id].ret;
+ lcore_config[worker_id].state = WAIT;
+ return lcore_config[worker_id].ret;
}
/*
- * Check that every SLAVE lcores are in WAIT state, then call
- * rte_eal_remote_launch() for all of them. If call_master is true
- * (set to CALL_MASTER), also call the function on the master lcore.
+ * Check that all WORKER lcores are in WAIT state, then call
+ * rte_eal_remote_launch() for all of them. If call_main is true
+ * (set to CALL_MAIN), also call the function on the main lcore.
*/
int
rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
- enum rte_rmt_call_master_t call_master)
+ enum rte_rmt_call_main_t call_main)
{
int lcore_id;
- int master = rte_get_master_lcore();
+ int main_lcore = rte_get_main_lcore();
/* check state of lcores */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (lcore_config[lcore_id].state != WAIT)
return -EBUSY;
}
/* send messages to cores */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
rte_eal_remote_launch(f, arg, lcore_id);
}
- if (call_master == CALL_MASTER) {
- lcore_config[master].ret = f(arg);
- lcore_config[master].state = FINISHED;
+ if (call_main == CALL_MAIN) {
+ lcore_config[main_lcore].ret = f(arg);
+ lcore_config[main_lcore].state = FINISHED;
}
return 0;
}
/*
- * Return the state of the lcore identified by slave_id.
+ * Return the state of the lcore identified by worker_id.
*/
enum rte_lcore_state_t
rte_eal_get_lcore_state(unsigned lcore_id)
@@ -86,7 +86,7 @@ rte_eal_mp_wait_lcore(void)
{
unsigned lcore_id;
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
rte_eal_wait_lcore(lcore_id);
}
}
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index d64569b3c758..66d6bad1a7d7 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -18,9 +18,9 @@
#include "eal_private.h"
#include "eal_thread.h"
-unsigned int rte_get_master_lcore(void)
+unsigned int rte_get_main_lcore(void)
{
- return rte_eal_get_configuration()->master_lcore;
+ return rte_eal_get_configuration()->main_lcore;
}
unsigned int rte_lcore_count(void)
@@ -93,7 +93,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
return cfg->lcore_role[lcore_id] == ROLE_RTE;
}
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
{
i++;
if (wrap)
@@ -101,7 +101,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
while (i < RTE_MAX_LCORE) {
if (!rte_lcore_is_enabled(i) ||
- (skip_master && (i == rte_get_master_lcore()))) {
+ (skip_main && (i == rte_get_main_lcore()))) {
i++;
if (wrap)
i %= RTE_MAX_LCORE;
diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index a5426e12346a..d221886eb22c 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -81,6 +81,7 @@ eal_long_options[] = {
{OPT_TRACE_BUF_SIZE, 1, NULL, OPT_TRACE_BUF_SIZE_NUM },
{OPT_TRACE_MODE, 1, NULL, OPT_TRACE_MODE_NUM },
{OPT_MASTER_LCORE, 1, NULL, OPT_MASTER_LCORE_NUM },
+ {OPT_MAIN_LCORE, 1, NULL, OPT_MAIN_LCORE_NUM },
{OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM},
{OPT_NO_HPET, 0, NULL, OPT_NO_HPET_NUM },
{OPT_NO_HUGE, 0, NULL, OPT_NO_HUGE_NUM },
@@ -144,7 +145,7 @@ struct device_option {
static struct device_option_list devopt_list =
TAILQ_HEAD_INITIALIZER(devopt_list);
-static int master_lcore_parsed;
+static int main_lcore_parsed;
static int mem_parsed;
static int core_parsed;
@@ -575,12 +576,12 @@ eal_parse_service_coremask(const char *coremask)
for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE;
j++, idx++) {
if ((1 << j) & val) {
- /* handle master lcore already parsed */
+ /* handle main lcore already parsed */
uint32_t lcore = idx;
- if (master_lcore_parsed &&
- cfg->master_lcore == lcore) {
+ if (main_lcore_parsed &&
+ cfg->main_lcore == lcore) {
RTE_LOG(ERR, EAL,
- "lcore %u is master lcore, cannot use as service core\n",
+ "lcore %u is main lcore, cannot use as service core\n",
idx);
return -1;
}
@@ -748,12 +749,12 @@ eal_parse_service_corelist(const char *corelist)
min = idx;
for (idx = min; idx <= max; idx++) {
if (cfg->lcore_role[idx] != ROLE_SERVICE) {
- /* handle master lcore already parsed */
+ /* handle main lcore already parsed */
uint32_t lcore = idx;
- if (cfg->master_lcore == lcore &&
- master_lcore_parsed) {
+ if (cfg->main_lcore == lcore &&
+ main_lcore_parsed) {
RTE_LOG(ERR, EAL,
- "Error: lcore %u is master lcore, cannot use as service core\n",
+ "Error: lcore %u is main lcore, cannot use as service core\n",
idx);
return -1;
}
@@ -836,25 +837,25 @@ eal_parse_corelist(const char *corelist, int *cores)
return 0;
}
-/* Changes the lcore id of the master thread */
+/* Changes the lcore id of the main thread */
static int
-eal_parse_master_lcore(const char *arg)
+eal_parse_main_lcore(const char *arg)
{
char *parsing_end;
struct rte_config *cfg = rte_eal_get_configuration();
errno = 0;
- cfg->master_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
+ cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
if (errno || parsing_end[0] != 0)
return -1;
- if (cfg->master_lcore >= RTE_MAX_LCORE)
+ if (cfg->main_lcore >= RTE_MAX_LCORE)
return -1;
- master_lcore_parsed = 1;
+ main_lcore_parsed = 1;
- /* ensure master core is not used as service core */
- if (lcore_config[cfg->master_lcore].core_role == ROLE_SERVICE) {
+ /* ensure main core is not used as service core */
+ if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) {
RTE_LOG(ERR, EAL,
- "Error: Master lcore is used as a service core\n");
+ "Error: Main lcore is used as a service core\n");
return -1;
}
@@ -1593,9 +1594,14 @@ eal_parse_common_option(int opt, const char *optarg,
break;
case OPT_MASTER_LCORE_NUM:
- if (eal_parse_master_lcore(optarg) < 0) {
+ fprintf(stderr,
+ "Option --" OPT_MASTER_LCORE
+ " is deprecated use " OPT_MAIN_LCORE "\n");
+ /* fallthrough */
+ case OPT_MAIN_LCORE_NUM:
+ if (eal_parse_main_lcore(optarg) < 0) {
RTE_LOG(ERR, EAL, "invalid parameter for --"
- OPT_MASTER_LCORE "\n");
+ OPT_MAIN_LCORE "\n");
return -1;
}
break;
@@ -1763,9 +1769,9 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
RTE_CPU_AND(cpuset, cpuset, &default_set);
- /* if no remaining cpu, use master lcore cpu affinity */
+ /* if no remaining cpu, use main lcore cpu affinity */
if (!CPU_COUNT(cpuset)) {
- memcpy(cpuset, &lcore_config[rte_get_master_lcore()].cpuset,
+ memcpy(cpuset, &lcore_config[rte_get_main_lcore()].cpuset,
sizeof(*cpuset));
}
}
@@ -1797,12 +1803,12 @@ eal_adjust_config(struct internal_config *internal_cfg)
if (internal_conf->process_type == RTE_PROC_AUTO)
internal_conf->process_type = eal_proc_type_detect();
- /* default master lcore is the first one */
- if (!master_lcore_parsed) {
- cfg->master_lcore = rte_get_next_lcore(-1, 0, 0);
- if (cfg->master_lcore >= RTE_MAX_LCORE)
+ /* default main lcore is the first one */
+ if (!main_lcore_parsed) {
+ cfg->main_lcore = rte_get_next_lcore(-1, 0, 0);
+ if (cfg->main_lcore >= RTE_MAX_LCORE)
return -1;
- lcore_config[cfg->master_lcore].core_role = ROLE_RTE;
+ lcore_config[cfg->main_lcore].core_role = ROLE_RTE;
}
compute_ctrl_threads_cpuset(internal_cfg);
@@ -1822,8 +1828,8 @@ eal_check_common_options(struct internal_config *internal_cfg)
const struct internal_config *internal_conf =
eal_get_internal_configuration();
- if (cfg->lcore_role[cfg->master_lcore] != ROLE_RTE) {
- RTE_LOG(ERR, EAL, "Master lcore is not enabled for DPDK\n");
+ if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) {
+ RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n");
return -1;
}
@@ -1921,7 +1927,7 @@ eal_common_usage(void)
" '( )' can be omitted for single element group,\n"
" '@' can be omitted if cpus and lcores have the same value\n"
" -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores\n"
- " --"OPT_MASTER_LCORE" ID Core ID that is used as master\n"
+ " --"OPT_MAIN_LCORE" ID Core ID that is used as main\n"
" --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n"
" -n CHANNELS Number of memory channels\n"
" -m MB Memory to allocate (see also --"OPT_SOCKET_MEM")\n"
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index 89769d48b487..d363228a7a25 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -43,6 +43,8 @@ enum {
OPT_TRACE_BUF_SIZE_NUM,
#define OPT_TRACE_MODE "trace-mode"
OPT_TRACE_MODE_NUM,
+#define OPT_MAIN_LCORE "main-lcore"
+ OPT_MAIN_LCORE_NUM,
#define OPT_MASTER_LCORE "master-lcore"
OPT_MASTER_LCORE_NUM,
#define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name"
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index a6a6381567f4..4684c4c7df19 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -20,8 +20,8 @@
*/
struct lcore_config {
pthread_t thread_id; /**< pthread identifier */
- int pipe_master2slave[2]; /**< communication pipe with master */
- int pipe_slave2master[2]; /**< communication pipe with master */
+ int pipe_main2worker[2]; /**< communication pipe with main */
+ int pipe_worker2main[2]; /**< communication pipe with main */
lcore_function_t * volatile f; /**< function to call */
void * volatile arg; /**< argument of function */
@@ -42,7 +42,7 @@ extern struct lcore_config lcore_config[RTE_MAX_LCORE];
* The global RTE configuration structure.
*/
struct rte_config {
- uint32_t master_lcore; /**< Id of the master lcore */
+ uint32_t main_lcore; /**< Id of the main lcore */
uint32_t lcore_count; /**< Number of available logical cores. */
uint32_t numa_node_count; /**< Number of detected NUMA nodes. */
uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
diff --git a/lib/librte_eal/common/rte_random.c b/lib/librte_eal/common/rte_random.c
index b2c5416b331d..ce21c2242a22 100644
--- a/lib/librte_eal/common/rte_random.c
+++ b/lib/librte_eal/common/rte_random.c
@@ -122,7 +122,7 @@ struct rte_rand_state *__rte_rand_get_state(void)
lcore_id = rte_lcore_id();
if (unlikely(lcore_id == LCORE_ID_ANY))
- lcore_id = rte_get_master_lcore();
+ lcore_id = rte_get_main_lcore();
return &rand_states[lcore_id];
}
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index 98565bbef340..6c955d319ad4 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -107,7 +107,7 @@ rte_service_init(void)
struct rte_config *cfg = rte_eal_get_configuration();
for (i = 0; i < RTE_MAX_LCORE; i++) {
if (lcore_config[i].core_role == ROLE_SERVICE) {
- if ((unsigned int)i == cfg->master_lcore)
+ if ((unsigned int)i == cfg->main_lcore)
continue;
rte_service_lcore_add(i);
count++;
diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c
index ccea60afe77b..d6ea02375025 100644
--- a/lib/librte_eal/freebsd/eal.c
+++ b/lib/librte_eal/freebsd/eal.c
@@ -625,10 +625,10 @@ eal_check_mem_on_local_socket(void)
int socket_id;
const struct rte_config *config = rte_eal_get_configuration();
- socket_id = rte_lcore_to_socket_id(config->master_lcore);
+ socket_id = rte_lcore_to_socket_id(config->main_lcore);
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
- RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+ RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
}
@@ -851,29 +851,29 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
- &lcore_config[config->master_lcore].cpuset) != 0) {
+ &lcore_config[config->main_lcore].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
return -1;
}
- __rte_thread_init(config->master_lcore,
- &lcore_config[config->master_lcore].cpuset);
+ __rte_thread_init(config->main_lcore,
+ &lcore_config[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
- config->master_lcore, thread_id, cpuset,
+ RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
+ config->main_lcore, thread_id, cpuset,
ret == 0 ? "" : "...");
- RTE_LCORE_FOREACH_SLAVE(i) {
+ RTE_LCORE_FOREACH_WORKER(i) {
/*
- * create communication pipes between master thread
+ * create communication pipes between main thread
* and children
*/
- if (pipe(lcore_config[i].pipe_master2slave) < 0)
+ if (pipe(lcore_config[i].pipe_main2worker) < 0)
rte_panic("Cannot create pipe\n");
- if (pipe(lcore_config[i].pipe_slave2master) < 0)
+ if (pipe(lcore_config[i].pipe_worker2main) < 0)
rte_panic("Cannot create pipe\n");
lcore_config[i].state = WAIT;
@@ -886,7 +886,7 @@ rte_eal_init(int argc, char **argv)
/* Set thread_name for aid in debugging. */
snprintf(thread_name, sizeof(thread_name),
- "lcore-slave-%d", i);
+ "lcore-worker-%d", i);
rte_thread_setname(lcore_config[i].thread_id, thread_name);
ret = pthread_setaffinity_np(lcore_config[i].thread_id,
@@ -896,10 +896,10 @@ rte_eal_init(int argc, char **argv)
}
/*
- * Launch a dummy function on all slave lcores, so that master lcore
+ * Launch a dummy function on all worker lcores, so that main lcore
* knows they are all ready when this function returns.
*/
- rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+ rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();
/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/freebsd/eal_thread.c b/lib/librte_eal/freebsd/eal_thread.c
index 99b5fefc4c5b..1dce9b04f24a 100644
--- a/lib/librte_eal/freebsd/eal_thread.c
+++ b/lib/librte_eal/freebsd/eal_thread.c
@@ -26,35 +26,35 @@
#include "eal_thread.h"
/*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
* function f with argument arg. Once the execution is done, the
* remote lcore switch in FINISHED state.
*/
int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned worker_id)
{
int n;
char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
+ int m2w = lcore_config[worker_id].pipe_main2worker[1];
+ int w2m = lcore_config[worker_id].pipe_worker2main[0];
int rc = -EBUSY;
- if (lcore_config[slave_id].state != WAIT)
+ if (lcore_config[worker_id].state != WAIT)
goto finish;
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
+ lcore_config[worker_id].f = f;
+ lcore_config[worker_id].arg = arg;
/* send message */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(m2s, &c, 1);
+ n = write(m2w, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
/* wait ack */
do {
- n = read(s2m, &c, 1);
+ n = read(w2m, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
rc = 0;
finish:
- rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+ rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
return rc;
}
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
int n, ret;
unsigned lcore_id;
pthread_t thread_id;
- int m2s, s2m;
+ int m2w, w2m;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
thread_id = pthread_self();
/* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (thread_id == lcore_config[lcore_id].thread_id)
break;
}
if (lcore_id == RTE_MAX_LCORE)
rte_panic("cannot retrieve lcore id\n");
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
+ m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ w2m = lcore_config[lcore_id].pipe_worker2main[1];
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
/* wait command */
do {
- n = read(m2s, &c, 1);
+ n = read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
/* send ack */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(s2m, &c, 1);
+ n = write(w2m, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
index e3c2ef185eed..0ae12cf4fbac 100644
--- a/lib/librte_eal/include/rte_eal.h
+++ b/lib/librte_eal/include/rte_eal.h
@@ -65,11 +65,11 @@ int rte_eal_iopl_init(void);
/**
* Initialize the Environment Abstraction Layer (EAL).
*
- * This function is to be executed on the MASTER lcore only, as soon
+ * This function is to be executed on the MAIN lcore only, as soon
* as possible in the application's main() function.
*
* The function finishes the initialization process before main() is called.
- * It puts the SLAVE lcores in the WAIT state.
+ * It puts the WORKER lcores in the WAIT state.
*
* When the multi-partition feature is supported, depending on the
* configuration (if CONFIG_RTE_EAL_MAIN_PARTITION is disabled), this
diff --git a/lib/librte_eal/include/rte_eal_trace.h b/lib/librte_eal/include/rte_eal_trace.h
index 19df549d29be..495ae1ee1d61 100644
--- a/lib/librte_eal/include/rte_eal_trace.h
+++ b/lib/librte_eal/include/rte_eal_trace.h
@@ -264,10 +264,10 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_eal_trace_thread_remote_launch,
RTE_TRACE_POINT_ARGS(int (*f)(void *), void *arg,
- unsigned int slave_id, int rc),
+ unsigned int worker_id, int rc),
rte_trace_point_emit_ptr(f);
rte_trace_point_emit_ptr(arg);
- rte_trace_point_emit_u32(slave_id);
+ rte_trace_point_emit_u32(worker_id);
rte_trace_point_emit_int(rc);
)
RTE_TRACE_POINT(
diff --git a/lib/librte_eal/include/rte_launch.h b/lib/librte_eal/include/rte_launch.h
index 06a671752ace..22a901ce62f6 100644
--- a/lib/librte_eal/include/rte_launch.h
+++ b/lib/librte_eal/include/rte_launch.h
@@ -32,12 +32,12 @@ typedef int (lcore_function_t)(void *);
/**
* Launch a function on another lcore.
*
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
*
- * Sends a message to a slave lcore (identified by the slave_id) that
+ * Sends a message to a worker lcore (identified by the worker_id) that
* is in the WAIT state (this is true after the first call to
* rte_eal_init()). This can be checked by first calling
- * rte_eal_wait_lcore(slave_id).
+ * rte_eal_wait_lcore(worker_id).
*
* When the remote lcore receives the message, it switches to
* the RUNNING state, then calls the function f with argument arg. Once the
@@ -45,7 +45,7 @@ typedef int (lcore_function_t)(void *);
* the return value of f is stored in a local variable to be read using
* rte_eal_wait_lcore().
*
- * The MASTER lcore returns as soon as the message is sent and knows
+ * The MAIN lcore returns as soon as the message is sent and knows
* nothing about the completion of f.
*
* Note: This function is not designed to offer optimum
@@ -56,37 +56,41 @@ typedef int (lcore_function_t)(void *);
* The function to be called.
* @param arg
* The argument for the function.
- * @param slave_id
+ * @param worker_id
* The identifier of the lcore on which the function should be executed.
* @return
* - 0: Success. Execution of function f started on the remote lcore.
* - (-EBUSY): The remote lcore is not in a WAIT state.
*/
-int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id);
+int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned worker_id);
/**
- * This enum indicates whether the master core must execute the handler
+ * This enum indicates whether the main core must execute the handler
* launched on all logical cores.
*/
-enum rte_rmt_call_master_t {
- SKIP_MASTER = 0, /**< lcore handler not executed by master core. */
- CALL_MASTER, /**< lcore handler executed by master core. */
+enum rte_rmt_call_main_t {
+ SKIP_MAIN = 0, /**< lcore handler not executed by main core. */
+ CALL_MAIN, /**< lcore handler executed by main core. */
};
+/* These legacy definitions will be removed in a future release */
+#define SKIP_MASTER RTE_DEPRECATED(SKIP_MASTER) SKIP_MAIN
+#define CALL_MASTER RTE_DEPRECATED(CALL_MASTER) CALL_MAIN
+
/**
* Launch a function on all lcores.
*
- * Check that each SLAVE lcore is in a WAIT state, then call
+ * Check that each WORKER lcore is in a WAIT state, then call
* rte_eal_remote_launch() for each lcore.
*
* @param f
* The function to be called.
* @param arg
* The argument for the function.
- * @param call_master
- * If call_master set to SKIP_MASTER, the MASTER lcore does not call
- * the function. If call_master is set to CALL_MASTER, the function
- * is also called on master before returning. In any case, the master
+ * @param call_main
+ * If call_main set to SKIP_MAIN, the MAIN lcore does not call
+ * the function. If call_main is set to CALL_MAIN, the function
+ * is also called on main before returning. In any case, the main
* lcore returns as soon as it finished its job and knows nothing
* about the completion of f on the other lcores.
* @return
@@ -95,49 +99,49 @@ enum rte_rmt_call_master_t {
* case, no message is sent to any of the lcores.
*/
int rte_eal_mp_remote_launch(lcore_function_t *f, void *arg,
- enum rte_rmt_call_master_t call_master);
+ enum rte_rmt_call_main_t call_main);
/**
- * Get the state of the lcore identified by slave_id.
+ * Get the state of the lcore identified by worker_id.
*
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
*
- * @param slave_id
+ * @param worker_id
* The identifier of the lcore.
* @return
* The state of the lcore.
*/
-enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned slave_id);
+enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned int worker_id);
/**
* Wait until an lcore finishes its job.
*
- * To be executed on the MASTER lcore only.
+ * To be executed on the MAIN lcore only.
*
- * If the slave lcore identified by the slave_id is in a FINISHED state,
+ * If the worker lcore identified by the worker_id is in a FINISHED state,
* switch to the WAIT state. If the lcore is in RUNNING state, wait until
* the lcore finishes its job and moves to the FINISHED state.
*
- * @param slave_id
+ * @param worker_id
* The identifier of the lcore.
* @return
- * - 0: If the lcore identified by the slave_id is in a WAIT state.
+ * - 0: If the lcore identified by the worker_id is in a WAIT state.
* - The value that was returned by the previous remote launch
- * function call if the lcore identified by the slave_id was in a
+ * function call if the lcore identified by the worker_id was in a
* FINISHED or RUNNING state. In this case, it changes the state
* of the lcore to WAIT.
*/
-int rte_eal_wait_lcore(unsigned slave_id);
+int rte_eal_wait_lcore(unsigned worker_id);
/**
* Wait until all lcores finish their jobs.
*
- * To be executed on the MASTER lcore only. Issue an
+ * To be executed on the MAIN lcore only. Issue an
* rte_eal_wait_lcore() for every lcore. The return values are
* ignored.
*
* After a call to rte_eal_mp_wait_lcore(), the caller can assume
- * that all slave lcores are in a WAIT state.
+ * that all worker lcores are in a WAIT state.
*/
void rte_eal_mp_wait_lcore(void);
diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h
index b8b64a625200..48b87e253afa 100644
--- a/lib/librte_eal/include/rte_lcore.h
+++ b/lib/librte_eal/include/rte_lcore.h
@@ -78,12 +78,24 @@ rte_lcore_id(void)
}
/**
- * Get the id of the master lcore
+ * Get the id of the main lcore
*
* @return
- * the id of the master lcore
+ * the id of the main lcore
*/
-unsigned int rte_get_master_lcore(void);
+unsigned int rte_get_main_lcore(void);
+
+/**
+ * Deprecated function returning the id of the main lcore
+ *
+ * @return
+ * the id of the main lcore
+ */
+__rte_deprecated
+static inline unsigned int rte_get_master_lcore(void)
+{
+ return rte_get_main_lcore();
+}
/**
* Return the number of execution units (lcores) on the system.
@@ -203,32 +215,35 @@ int rte_lcore_is_enabled(unsigned int lcore_id);
*
* @param i
* The current lcore (reference).
- * @param skip_master
- * If true, do not return the ID of the master lcore.
+ * @param skip_main
+ * If true, do not return the ID of the main lcore.
* @param wrap
* If true, go back to 0 when RTE_MAX_LCORE is reached; otherwise,
* return RTE_MAX_LCORE.
* @return
* The next lcore_id or RTE_MAX_LCORE if not found.
*/
-unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap);
+unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap);
/**
* Macro to browse all running lcores.
*/
#define RTE_LCORE_FOREACH(i) \
for (i = rte_get_next_lcore(-1, 0, 0); \
- i<RTE_MAX_LCORE; \
+ i < RTE_MAX_LCORE; \
i = rte_get_next_lcore(i, 0, 0))
/**
- * Macro to browse all running lcores except the master lcore.
+ * Macro to browse all running lcores except the main lcore.
*/
-#define RTE_LCORE_FOREACH_SLAVE(i) \
+#define RTE_LCORE_FOREACH_WORKER(i) \
for (i = rte_get_next_lcore(-1, 1, 0); \
- i<RTE_MAX_LCORE; \
+ i < RTE_MAX_LCORE; \
i = rte_get_next_lcore(i, 1, 0))
+#define RTE_LCORE_FOREACH_SLAVE(l) \
+ RTE_DEPRECATED(RTE_LCORE_FOREACH_SLAVE) RTE_LCORE_FOREACH_WORKER(l)
+
/**
* Callback prototype for initializing lcores.
*
diff --git a/lib/librte_eal/linux/eal.c b/lib/librte_eal/linux/eal.c
index 9cf0e2ec0137..1c9dd8db1e6a 100644
--- a/lib/librte_eal/linux/eal.c
+++ b/lib/librte_eal/linux/eal.c
@@ -883,10 +883,10 @@ eal_check_mem_on_local_socket(void)
int socket_id;
const struct rte_config *config = rte_eal_get_configuration();
- socket_id = rte_lcore_to_socket_id(config->master_lcore);
+ socket_id = rte_lcore_to_socket_id(config->main_lcore);
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
- RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n");
+ RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
}
static int
@@ -1215,28 +1215,28 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (pthread_setaffinity_np(pthread_self(), sizeof(rte_cpuset_t),
- &lcore_config[config->master_lcore].cpuset) != 0) {
+ &lcore_config[config->main_lcore].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
return -1;
}
- __rte_thread_init(config->master_lcore,
- &lcore_config[config->master_lcore].cpuset);
+ __rte_thread_init(config->main_lcore,
+ &lcore_config[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
- config->master_lcore, (uintptr_t)thread_id, cpuset,
+ RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+ config->main_lcore, (uintptr_t)thread_id, cpuset,
ret == 0 ? "" : "...");
- RTE_LCORE_FOREACH_SLAVE(i) {
+ RTE_LCORE_FOREACH_WORKER(i) {
/*
- * create communication pipes between master thread
+ * create communication pipes between main thread
* and children
*/
- if (pipe(lcore_config[i].pipe_master2slave) < 0)
+ if (pipe(lcore_config[i].pipe_main2worker) < 0)
rte_panic("Cannot create pipe\n");
- if (pipe(lcore_config[i].pipe_slave2master) < 0)
+ if (pipe(lcore_config[i].pipe_worker2main) < 0)
rte_panic("Cannot create pipe\n");
lcore_config[i].state = WAIT;
@@ -1249,7 +1249,7 @@ rte_eal_init(int argc, char **argv)
/* Set thread_name for aid in debugging. */
snprintf(thread_name, sizeof(thread_name),
- "lcore-slave-%d", i);
+ "lcore-worker-%d", i);
ret = rte_thread_setname(lcore_config[i].thread_id,
thread_name);
if (ret != 0)
@@ -1263,10 +1263,10 @@ rte_eal_init(int argc, char **argv)
}
/*
- * Launch a dummy function on all slave lcores, so that master lcore
+ * Launch a dummy function on all worker lcores, so that main lcore
* knows they are all ready when this function returns.
*/
- rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+ rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();
/* initialize services so vdevs register service during bus_probe. */
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 89725291b0ce..3e47efe58212 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -1737,7 +1737,7 @@ memseg_primary_init_32(void)
/* the allocation logic is a little bit convoluted, but here's how it
* works, in a nutshell:
* - if user hasn't specified on which sockets to allocate memory via
- * --socket-mem, we allocate all of our memory on master core socket.
+ * --socket-mem, we allocate all of our memory on main core socket.
* - if user has specified sockets to allocate memory on, there may be
* some "unused" memory left (e.g. if user has specified --socket-mem
* such that not all memory adds up to 2 gigabytes), so add it to all
@@ -1751,7 +1751,7 @@ memseg_primary_init_32(void)
for (i = 0; i < rte_socket_count(); i++) {
int hp_sizes = (int) internal_conf->num_hugepage_sizes;
uint64_t max_socket_mem, cur_socket_mem;
- unsigned int master_lcore_socket;
+ unsigned int main_lcore_socket;
struct rte_config *cfg = rte_eal_get_configuration();
bool skip;
@@ -1767,10 +1767,10 @@ memseg_primary_init_32(void)
skip = active_sockets != 0 &&
internal_conf->socket_mem[socket_id] == 0;
/* ...or if we didn't specifically request memory on *any*
- * socket, and this is not master lcore
+ * socket, and this is not main lcore
*/
- master_lcore_socket = rte_lcore_to_socket_id(cfg->master_lcore);
- skip |= active_sockets == 0 && socket_id != master_lcore_socket;
+ main_lcore_socket = rte_lcore_to_socket_id(cfg->main_lcore);
+ skip |= active_sockets == 0 && socket_id != main_lcore_socket;
if (skip) {
RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n",
diff --git a/lib/librte_eal/linux/eal_thread.c b/lib/librte_eal/linux/eal_thread.c
index 068de2559555..83c2034b93d5 100644
--- a/lib/librte_eal/linux/eal_thread.c
+++ b/lib/librte_eal/linux/eal_thread.c
@@ -26,35 +26,35 @@
#include "eal_thread.h"
/*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
* function f with argument arg. Once the execution is done, the
* remote lcore switch in FINISHED state.
*/
int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned int worker_id)
{
int n;
char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
+ int m2w = lcore_config[worker_id].pipe_main2worker[1];
+ int w2m = lcore_config[worker_id].pipe_worker2main[0];
int rc = -EBUSY;
- if (lcore_config[slave_id].state != WAIT)
+ if (lcore_config[worker_id].state != WAIT)
goto finish;
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
+ lcore_config[worker_id].f = f;
+ lcore_config[worker_id].arg = arg;
/* send message */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(m2s, &c, 1);
+ n = write(m2w, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
/* wait ack */
do {
- n = read(s2m, &c, 1);
+ n = read(w2m, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -62,7 +62,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
rc = 0;
finish:
- rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc);
+ rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc);
return rc;
}
@@ -74,21 +74,21 @@ eal_thread_loop(__rte_unused void *arg)
int n, ret;
unsigned lcore_id;
pthread_t thread_id;
- int m2s, s2m;
+ int m2w, w2m;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
thread_id = pthread_self();
/* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (thread_id == lcore_config[lcore_id].thread_id)
break;
}
if (lcore_id == RTE_MAX_LCORE)
rte_panic("cannot retrieve lcore id\n");
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
+ m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ w2m = lcore_config[lcore_id].pipe_worker2main[1];
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
@@ -104,7 +104,7 @@ eal_thread_loop(__rte_unused void *arg)
/* wait command */
do {
- n = read(m2s, &c, 1);
+ n = read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -115,7 +115,7 @@ eal_thread_loop(__rte_unused void *arg)
/* send ack */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = write(s2m, &c, 1);
+ n = write(w2m, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index a93dea9fe616..33ee2748ede0 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -74,7 +74,7 @@ DPDK_21 {
rte_free;
rte_get_hpet_cycles;
rte_get_hpet_hz;
- rte_get_master_lcore;
+ rte_get_main_lcore;
rte_get_next_lcore;
rte_get_tsc_hz;
rte_hexdump;
diff --git a/lib/librte_eal/windows/eal.c b/lib/librte_eal/windows/eal.c
index bc48f27ab39a..cbca20956210 100644
--- a/lib/librte_eal/windows/eal.c
+++ b/lib/librte_eal/windows/eal.c
@@ -350,8 +350,8 @@ rte_eal_init(int argc, char **argv)
return -1;
}
- __rte_thread_init(config->master_lcore,
- &lcore_config[config->master_lcore].cpuset);
+ __rte_thread_init(config->main_lcore,
+ &lcore_config[config->main_lcore].cpuset);
bscan = rte_bus_scan();
if (bscan < 0) {
@@ -360,16 +360,16 @@ rte_eal_init(int argc, char **argv)
return -1;
}
- RTE_LCORE_FOREACH_SLAVE(i) {
+ RTE_LCORE_FOREACH_WORKER(i) {
/*
- * create communication pipes between master thread
+ * create communication pipes between main thread
* and children
*/
- if (_pipe(lcore_config[i].pipe_master2slave,
+ if (_pipe(lcore_config[i].pipe_main2worker,
sizeof(char), _O_BINARY) < 0)
rte_panic("Cannot create pipe\n");
- if (_pipe(lcore_config[i].pipe_slave2master,
+ if (_pipe(lcore_config[i].pipe_worker2main,
sizeof(char), _O_BINARY) < 0)
rte_panic("Cannot create pipe\n");
@@ -394,10 +394,10 @@ rte_eal_init(int argc, char **argv)
}
/*
- * Launch a dummy function on all slave lcores, so that master lcore
+ * Launch a dummy function on all worker lcores, so that main lcore
* knows they are all ready when this function returns.
*/
- rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER);
+ rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
rte_eal_mp_wait_lcore();
return fctret;
}
diff --git a/lib/librte_eal/windows/eal_thread.c b/lib/librte_eal/windows/eal_thread.c
index 20889b6196c9..908e726d16cc 100644
--- a/lib/librte_eal/windows/eal_thread.c
+++ b/lib/librte_eal/windows/eal_thread.c
@@ -17,34 +17,34 @@
#include "eal_windows.h"
/*
- * Send a message to a slave lcore identified by slave_id to call a
+ * Send a message to a worker lcore identified by worker_id to call a
* function f with argument arg. Once the execution is done, the
* remote lcore switch in FINISHED state.
*/
int
-rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int slave_id)
+rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
{
int n;
char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
+ int m2w = lcore_config[worker_id].pipe_main2worker[1];
+ int w2m = lcore_config[worker_id].pipe_worker2main[0];
- if (lcore_config[slave_id].state != WAIT)
+ if (lcore_config[worker_id].state != WAIT)
return -EBUSY;
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
+ lcore_config[worker_id].f = f;
+ lcore_config[worker_id].arg = arg;
/* send message */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = _write(m2s, &c, 1);
+ n = _write(m2w, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
/* wait ack */
do {
- n = _read(s2m, &c, 1);
+ n = _read(w2m, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -61,21 +61,21 @@ eal_thread_loop(void *arg __rte_unused)
int n, ret;
unsigned int lcore_id;
pthread_t thread_id;
- int m2s, s2m;
+ int m2w, w2m;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
thread_id = pthread_self();
/* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (thread_id == lcore_config[lcore_id].thread_id)
break;
}
if (lcore_id == RTE_MAX_LCORE)
rte_panic("cannot retrieve lcore id\n");
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
+ m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ w2m = lcore_config[lcore_id].pipe_worker2main[1];
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
@@ -88,7 +88,7 @@ eal_thread_loop(void *arg __rte_unused)
/* wait command */
do {
- n = _read(m2s, &c, 1);
+ n = _read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
if (n <= 0)
@@ -99,7 +99,7 @@ eal_thread_loop(void *arg __rte_unused)
/* send ack */
n = 0;
while (n == 0 || (n < 0 && errno == EINTR))
- n = _write(s2m, &c, 1);
+ n = _write(w2m, &c, 1);
if (n < 0)
rte_panic("cannot write on configuration pipe\n");
--
2.27.0
^ permalink raw reply [relevance 1%]
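The single-worker variant follows the same WAIT/RUNNING/FINISHED contract
documented in the rte_launch.h changes above; a small sketch, with the
helper name and its use of the first available worker purely illustrative:

#include <rte_launch.h>
#include <rte_lcore.h>

/* Launch one job on a specific worker and collect its return value.
 * rte_eal_remote_launch() returns -EBUSY if the worker is not in the
 * WAIT state; rte_eal_wait_lcore() moves it back to WAIT afterwards. */
static int
run_on_worker(lcore_function_t *job, void *arg)
{
	unsigned int worker_id = rte_get_next_lcore(-1, 1, 0);

	if (worker_id >= RTE_MAX_LCORE)
		return -1;	/* no worker lcore available */
	if (rte_eal_remote_launch(job, arg, worker_id) != 0)
		return -1;	/* worker busy */
	return rte_eal_wait_lcore(worker_id);
}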
-- links below jump to the message on this page --
2019-12-04 10:05 [dpdk-dev] [PATCH] drivers/net: fix mlx* glue libraries ABI version David Marchand
2020-10-19 9:41 9% ` [dpdk-dev] [PATCH v2] drivers: remove mlx* glue libraries separate " David Marchand
2020-10-27 12:13 4% ` David Marchand
2020-11-01 14:48 4% ` Thomas Monjalon
2020-11-01 15:02 7% ` Slava Ovsiienko
2020-11-01 15:09 4% ` Raslan Darawsheh
2020-01-04 1:33 [dpdk-dev] [PATCH 00/14] cleanup resources on shutdown Stephen Hemminger
2020-04-28 23:58 ` [dpdk-dev] [PATCH v3 0/8] eal: " Stephen Hemminger
2020-05-03 17:21 ` David Marchand
2020-10-19 22:24 0% ` Thomas Monjalon
2020-06-12 21:24 [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites McDaniel, Timothy
2020-10-29 14:57 3% ` [dpdk-dev] [PATCH v7 00/23] Add DLB PMD Timothy McDaniel
2020-10-30 9:40 3% ` [dpdk-dev] [PATCH v8 " Timothy McDaniel
2020-10-30 12:41 3% ` [dpdk-dev] [PATCH v9 " Timothy McDaniel
2020-10-30 18:27 3% ` [dpdk-dev] [PATCH v10 " Timothy McDaniel
2020-10-30 23:41 3% ` [dpdk-dev] [PATCH v11 " Timothy McDaniel
2020-10-31 1:19 3% ` [dpdk-dev] [PATCH v12 " Timothy McDaniel
2020-10-31 2:12 3% ` [dpdk-dev] [PATCH v13 " Timothy McDaniel
2020-10-31 12:49 0% ` Jerin Jacob
2020-10-31 18:17 3% ` [dpdk-dev] [PATCH v14 " Timothy McDaniel
2020-11-01 19:26 3% ` [dpdk-dev] [PATCH v15 " Timothy McDaniel
2020-11-01 23:29 3% ` [dpdk-dev] [PATCH v16 " Timothy McDaniel
2020-11-02 14:07 0% ` Jerin Jacob
2020-06-25 16:03 [dpdk-dev] [PATCH 0/2] ethdev: tunnel offload model Gregory Etelson
2020-10-16 12:51 ` [dpdk-dev] [PATCH v8 0/3] Tunnel Offload API Gregory Etelson
2020-10-16 12:51 ` [dpdk-dev] [PATCH v8 2/3] ethdev: tunnel offload model Gregory Etelson
2020-10-16 15:41 3% ` Kinsella, Ray
2020-07-30 19:49 [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites McDaniel, Timothy
2020-10-17 19:03 3% ` [dpdk-dev] [PATCH v5 00/22] Add DLB PMD Timothy McDaniel
2020-10-23 18:32 3% ` [dpdk-dev] [PATCH v6 00/23] " Timothy McDaniel
2020-08-07 12:29 [dpdk-dev] [PATCH 20.11 00/19] remove make support in DPDK Ciara Power
2020-10-21 8:17 ` [dpdk-dev] [PATCH v7 00/14] " Ciara Power
2020-10-21 8:17 9% ` [dpdk-dev] [PATCH v7 12/14] doc: remove references to make from contributing guide Ciara Power
2020-10-21 8:17 2% ` [dpdk-dev] [PATCH v7 14/14] doc: update patch cheatsheet to use meson Ciara Power
2020-08-17 17:49 [dpdk-dev] [RFC] ethdev: introduce Rx buffer split Slava Ovsiienko
2020-10-14 18:11 ` [dpdk-dev] [PATCH v6 0/6] " Viacheslav Ovsiienko
2020-10-14 18:11 ` [dpdk-dev] [PATCH v6 1/6] " Viacheslav Ovsiienko
2020-10-14 18:57 ` Jerin Jacob
2020-10-15 7:43 ` Slava Ovsiienko
2020-10-15 9:27 3% ` Jerin Jacob
2020-10-15 10:27 3% ` Jerin Jacob
2020-10-15 10:51 3% ` Slava Ovsiienko
2020-10-15 11:26 0% ` Jerin Jacob
2020-10-15 11:36 0% ` Ferruh Yigit
2020-10-15 11:49 3% ` Slava Ovsiienko
2020-10-15 12:49 0% ` Thomas Monjalon
2020-10-15 13:07 0% ` Andrew Rybchenko
2020-10-15 13:57 0% ` Slava Ovsiienko
2020-10-15 20:22 0% ` Slava Ovsiienko
2020-10-15 9:49 ` Andrew Rybchenko
2020-10-15 10:34 3% ` Slava Ovsiienko
2020-10-15 11:09 0% ` Andrew Rybchenko
2020-10-15 14:39 0% ` Slava Ovsiienko
2020-10-16 10:22 ` [dpdk-dev] [PATCH v9 0/6] " Viacheslav Ovsiienko
2020-10-16 10:22 ` [dpdk-dev] [PATCH v9 1/6] " Viacheslav Ovsiienko
2020-10-16 11:21 4% ` Ferruh Yigit
2020-10-16 13:08 0% ` Slava Ovsiienko
2020-09-07 8:15 [dpdk-dev] [PATCH 0/2] LPM changes Ruifeng Wang
2020-09-07 8:15 ` [dpdk-dev] [PATCH 2/2] lpm: hide internal data Ruifeng Wang
2020-09-15 16:02 ` Bruce Richardson
2020-09-15 16:28 ` Medvedkin, Vladimir
2020-09-16 3:17 ` Ruifeng Wang
2020-09-30 8:45 ` Kevin Traynor
2020-10-09 6:54 ` Ruifeng Wang
2020-10-13 13:53 ` Kevin Traynor
2020-10-13 14:58 0% ` Michel Machado
2020-10-13 15:41 0% ` Medvedkin, Vladimir
2020-10-13 17:46 0% ` Michel Machado
2020-10-13 19:06 0% ` Medvedkin, Vladimir
2020-10-13 19:48 0% ` Michel Machado
2020-10-14 13:10 0% ` Medvedkin, Vladimir
2020-10-14 23:57 0% ` Honnappa Nagarahalli
2020-10-19 17:53 0% ` Honnappa Nagarahalli
2020-10-21 3:02 ` [dpdk-dev] [PATCH v2 0/2] LPM changes Ruifeng Wang
2020-10-21 3:02 5% ` [dpdk-dev] [PATCH v2 2/2] lpm: hide internal data Ruifeng Wang
2020-10-21 7:58 0% ` Thomas Monjalon
2020-10-21 8:15 0% ` Ruifeng Wang
2020-10-23 9:38 ` [dpdk-dev] [PATCH v3 0/2] LPM changes David Marchand
2020-10-23 9:38 3% ` [dpdk-dev] [PATCH v3 2/2] lpm: hide internal data David Marchand
2020-09-09 20:30 [dpdk-dev] [RFC 0/3] introduce Stateful Flow Table Andrey Vesnovaty
2020-11-04 12:59 ` [dpdk-dev] [PATCH v2 0/2] introduce stateful flow table Ori Kam
2020-11-04 12:59 1% ` [dpdk-dev] [PATCH v2 2/2] ethdev: introduce sft lib Ori Kam
2020-11-04 13:17 ` [dpdk-dev] [RFC v3 0/2] introduce stateful flow table Ori Kam
2020-11-04 13:17 1% ` [dpdk-dev] [RFC v3 2/2] ethdev: introduce sft lib Ori Kam
2020-09-11 16:58 [dpdk-dev] [PATCH 1/2] eventdev: implement ABI change Timothy McDaniel
2020-10-14 21:36 9% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-14 21:36 2% ` [dpdk-dev] [PATCH 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-14 21:36 6% ` [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
2020-10-15 14:26 7% ` [dpdk-dev] [PATCH 0/2] Eventdev ABI changes for DLB/DLB2 Jerin Jacob
2020-10-15 14:38 4% ` McDaniel, Timothy
2020-10-15 17:31 9% ` [dpdk-dev] [PATCH 0/3] " Timothy McDaniel
2020-10-15 17:31 1% ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-15 17:31 4% ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
2020-10-15 17:31 13% ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
2020-10-15 18:07 9% ` [dpdk-dev] [PATCH 0/3] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-15 18:07 1% ` [dpdk-dev] [PATCH 1/3] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-15 18:07 4% ` [dpdk-dev] [PATCH 2/3] doc: remove eventdev ABI change announcement Timothy McDaniel
2020-10-15 18:27 4% ` Jerin Jacob
2020-10-15 18:07 13% ` [dpdk-dev] [PATCH 3/3] doc: announce new eventdev ABI changes Timothy McDaniel
2020-09-11 16:58 [dpdk-dev] [PATCH 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
2020-10-14 17:33 6% ` [dpdk-dev] [PATCH v3] " Timothy McDaniel
2020-10-14 20:01 4% ` Jerin Jacob
2020-09-11 19:06 [dpdk-dev] [PATCH 00/15] Replace terms master/slave lcore with main/worker lcore Stephen Hemminger
2020-10-14 15:27 ` [dpdk-dev] [PATCH v6 00/18] Replace terms master/slave Stephen Hemminger
2020-10-14 15:27 1% ` [dpdk-dev] [PATCH v6 03/18] eal: rename lcore word choices Stephen Hemminger
2020-10-15 22:57 ` [dpdk-dev] [PATCH v7 00/20] Replace terms master/slave Stephen Hemminger
2020-10-15 22:57 1% ` [dpdk-dev] [PATCH v7 03/20] eal: rename lcore word choices Stephen Hemminger
2020-09-11 20:26 [dpdk-dev] [PATCH 01/22] event/dlb2: add meson build infrastructure Timothy McDaniel
2020-10-29 15:24 ` [dpdk-dev] [PATCH v4 00/23] Add DLB2 PMD Timothy McDaniel
2020-10-29 15:24 ` [dpdk-dev] [PATCH v4 03/23] event/dlb2: add private data structures and constants Timothy McDaniel
2020-10-29 15:29 3% ` Stephen Hemminger
2020-10-29 16:07 0% ` McDaniel, Timothy
2020-09-14 18:19 [dpdk-dev] [PATCH v2 00/17] Replace terms master/slave Stephen Hemminger
2020-10-13 15:25 ` [dpdk-dev] [PATCH v5 00/18] " Stephen Hemminger
2020-10-13 15:25 1% ` [dpdk-dev] [PATCH v5 03/18] eal: rename lcore word choices Stephen Hemminger
2020-09-15 16:50 [dpdk-dev] [PATCH v2 00/12] acl: introduce AVX512 classify method Konstantin Ananyev
2020-10-05 18:45 ` [dpdk-dev] [PATCH v3 00/14] acl: introduce AVX512 classify methods Konstantin Ananyev
2020-10-06 15:05 ` David Marchand
2020-10-06 16:07 ` Ananyev, Konstantin
2020-10-14 9:23 4% ` Kinsella, Ray
2020-09-16 10:40 [dpdk-dev] [PATCH v3] mbuf: minor cleanup Morten Brørup
2020-10-07 9:16 ` Olivier Matz
2020-10-20 11:55 0% ` Thomas Monjalon
2020-11-04 22:17 0% ` Morten Brørup
2020-09-16 16:44 [dpdk-dev] [RFC PATCH 0/5] rework feature enabling macros for compatibility Bruce Richardson
2020-10-14 14:12 ` [dpdk-dev] [PATCH v3 0/7] Rework build macros Bruce Richardson
2020-10-14 14:13 ` [dpdk-dev] [PATCH v3 6/7] build: standardize component names and defines Bruce Richardson
2020-10-15 10:30 ` Luca Boccassi
2020-10-15 11:18 ` Bruce Richardson
2020-10-15 13:05 3% ` Luca Boccassi
2020-10-15 14:03 3% ` Bruce Richardson
2020-10-15 15:32 0% ` Luca Boccassi
2020-10-15 15:34 0% ` Bruce Richardson
2020-09-22 14:31 [dpdk-dev] [PATCH 0/8] replace blacklist/whitelist with block/allow Stephen Hemminger
2020-10-20 16:20 ` [dpdk-dev] [PATCH v2 0/5] " Stephen Hemminger
2020-10-20 16:20 1% ` [dpdk-dev] [PATCH v2 5/5] doc: change references to blacklist and whitelist Stephen Hemminger
2020-10-22 14:39 ` [dpdk-dev] [PATCH v3 0/5] replace blacklist/whitelist with block/allow Stephen Hemminger
2020-10-22 14:39 1% ` [dpdk-dev] [PATCH v3 5/5] doc: change references to blacklist and whitelist Stephen Hemminger
2020-10-22 20:39 ` [dpdk-dev] [PATCH v4 0/5] replace blacklist/whitelist with block/allow Stephen Hemminger
2020-10-22 20:40 1% ` [dpdk-dev] [PATCH v4 5/5] doc: change references to blacklist and whitelist Stephen Hemminger
2020-10-24 1:01 ` [dpdk-dev] [PATCH v5 0/5] replace blacklist/whitelist with block/allow Stephen Hemminger
2020-10-24 1:01 1% ` [dpdk-dev] [PATCH v5 5/5] doc: change references to blacklist and whitelist Stephen Hemminger
2020-10-25 16:57 ` [dpdk-dev] [PATCH v6 0/5] replace blacklist/whitelist with allow/block Stephen Hemminger
2020-10-25 16:57 1% ` [dpdk-dev] [PATCH v6 5/5] doc: change references to blacklist and whitelist Stephen Hemminger
2020-10-25 20:57 ` [dpdk-dev] [PATCH v7 0/5] replace blacklist/whitelist with allow/block Stephen Hemminger
2020-10-25 20:57 1% ` [dpdk-dev] [PATCH v7 4/5] doc: change references to blacklist and whitelist Stephen Hemminger
2020-10-25 21:15 ` [dpdk-dev] [PATCH v8 0/5] replace blacklist/whitelist with allow/block Stephen Hemminger
2020-10-25 21:15 1% ` [dpdk-dev] [PATCH v8 4/5] doc: change references to blacklist and whitelist Stephen Hemminger
2020-11-05 22:35 ` [dpdk-dev] [PATCH v9 0/6] replace blacklist/whitelist with allow/block Stephen Hemminger
2020-11-05 22:36 4% ` [dpdk-dev] [PATCH v9 6/6] doc: update release notes now for block allow changes Stephen Hemminger
2020-11-10 22:55 ` [dpdk-dev] [PATCH v10 0/7] replace blacklist/whitelist with allow/block Stephen Hemminger
2020-11-10 22:55 1% ` [dpdk-dev] [PATCH v10 4/7] doc: update documentation to reflect new options Stephen Hemminger
2020-09-30 14:10 [dpdk-dev] [PATCH 00/10] support match on L3 fragmented packets Dekel Peled
2020-10-14 16:35 3% ` [dpdk-dev] [PATCH v7 0/5] " Dekel Peled
2020-10-14 16:35 4% ` [dpdk-dev] [PATCH v7 1/5] ethdev: add extensions attributes to IPv6 item Dekel Peled
2020-10-14 17:18 0% ` [dpdk-dev] [PATCH v7 0/5] support match on L3 fragmented packets Ferruh Yigit
2020-10-01 0:25 [dpdk-dev] [PATCH 0/4] introduce support for hairpin between two ports Bing Zhao
2020-10-15 5:35 ` [dpdk-dev] [PATCH v5 0/5] " Bing Zhao
2020-10-15 5:35 4% ` [dpdk-dev] [PATCH v5 2/5] ethdev: add new attributes to hairpin config Bing Zhao
2020-10-15 13:08 ` [dpdk-dev] [PATCH v6 0/5] introduce support for hairpin between two ports Bing Zhao
2020-10-15 13:08 4% ` [dpdk-dev] [PATCH v6 2/5] ethdev: add new attributes to hairpin config Bing Zhao
2020-10-05 20:27 [dpdk-dev] [PATCH v2 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-05 20:27 ` [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
2020-10-06 8:26 ` Van Haaren, Harry
2020-10-12 19:09 ` Pavan Nikhilesh Bhagavatula
2020-10-13 19:20 4% ` Jerin Jacob
2020-10-07 16:45 [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 " Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-10-15 0:55 3% ` Thomas Monjalon
2020-10-08 12:05 [dpdk-dev] [PATCH v3 0/6] introduce support for hairpin between two ports Bing Zhao
2020-10-13 16:19 ` [dpdk-dev] [PATCH v4 0/5] " Bing Zhao
2020-10-13 16:19 4% ` [dpdk-dev] [PATCH v4 2/5] ethdev: add new attributes to hairpin config Bing Zhao
2020-10-10 22:11 [dpdk-dev] [PATCH v2] security: update session create API Akhil Goyal
2020-10-14 18:56 2% ` [dpdk-dev] [PATCH v3] " Akhil Goyal
2020-10-15 1:11 0% ` Lukasz Wojciechowski
2020-10-12 8:08 [dpdk-dev] [PATCH v5 0/4] devtools: abi breakage checks Conor Walsh
2020-10-12 13:03 ` [dpdk-dev] [PATCH v6 " Conor Walsh
2020-10-12 13:03 ` [dpdk-dev] [PATCH v6 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
2020-10-14 9:38 4% ` Kinsella, Ray
2020-10-12 13:03 ` [dpdk-dev] [PATCH v6 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
2020-10-14 9:43 4% ` Kinsella, Ray
2020-10-12 13:03 ` [dpdk-dev] [PATCH v6 3/4] devtools: change dump file not found to warning in check-abi.sh Conor Walsh
2020-10-14 9:44 4% ` Kinsella, Ray
2020-10-12 13:03 ` [dpdk-dev] [PATCH v6 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
2020-10-14 9:46 0% ` Kinsella, Ray
2020-10-14 9:37 4% ` [dpdk-dev] [PATCH v6 0/4] devtools: abi breakage checks Kinsella, Ray
2020-10-14 10:33 4% ` Walsh, Conor
2020-10-14 10:41 9% ` [dpdk-dev] [PATCH v7 " Conor Walsh
2020-10-14 10:41 21% ` [dpdk-dev] [PATCH v7 1/4] devtools: add generation of compressed abi dump archives Conor Walsh
2020-10-15 10:15 4% ` Kinsella, Ray
2020-10-14 10:41 26% ` [dpdk-dev] [PATCH v7 2/4] devtools: abi and UX changes for test-meson-builds.sh Conor Walsh
2020-10-15 10:16 4% ` Kinsella, Ray
2020-10-14 10:41 15% ` [dpdk-dev] [PATCH v7 3/4] devtools: change not found to warning in check-abi.sh Conor Walsh
2020-10-14 10:41 18% ` [dpdk-dev] [PATCH v7 4/4] doc: test-meson-builds.sh doc updates Conor Walsh
[not found] ` <7206c209-ed4a-2aeb-12d8-ee162ef92596@ashroe.eu>
[not found] ` <CAJFAV8wmpft6XLRg1RAL+d4ibbJVrR9C0ghkE-kqyig_q_Meeg@mail.gmail.com>
2020-11-03 10:07 9% ` [dpdk-dev] [PATCH v7 0/4] devtools: abi breakage checks Kinsella, Ray
2020-11-10 12:53 8% ` David Marchand
2020-11-10 13:54 4% ` Kinsella, Ray
2020-11-10 13:57 4% ` David Marchand
2020-10-14 13:28 [dpdk-dev] [PATCH 00/11] ethdev: change device stop to return status Andrew Rybchenko
2020-10-15 13:30 ` [dpdk-dev] [PATCH v2 " Andrew Rybchenko
2020-10-15 13:30 4% ` [dpdk-dev] [PATCH v2 01/11] ethdev: change eth dev stop function to return int Andrew Rybchenko
2020-10-16 9:22 0% ` Ferruh Yigit
2020-10-16 11:20 3% ` Kinsella, Ray
2020-10-16 17:13 0% ` Andrew Rybchenko
2020-10-19 9:37 0% ` Kinsella, Ray
2020-10-15 9:56 11% [dpdk-dev] [PATCH] cryptodev: revert support for 20.0 node Ray Kinsella
2020-10-15 10:08 0% ` David Marchand
2020-10-15 10:10 3% ` Kinsella, Ray
2020-10-15 16:00 [dpdk-dev] performance degradation with fpic Ali Alnubani
2020-10-15 17:08 ` Bruce Richardson
2020-10-15 17:14 ` Thomas Monjalon
2020-10-15 21:44 ` Stephen Hemminger
2020-10-16 8:35 3% ` Bruce Richardson
2020-10-19 2:57 [dpdk-dev] [v3 0/2] support enqueue callbacks on cryptodev Abhinandan Gujjar
2020-10-19 2:57 ` [dpdk-dev] [v3 1/2] cryptodev: support enqueue callback functions Abhinandan Gujjar
2020-10-21 19:33 3% ` Ananyev, Konstantin
2020-10-23 12:36 0% ` Gujjar, Abhinandan S
2020-10-22 7:47 3% [dpdk-dev] [PATCH] build: fix version map file references in documentation David Marchand
2020-10-22 7:52 0% ` Kinsella, Ray
2020-10-22 12:11 3% ` David Marchand
2020-10-22 14:24 0% ` Kinsella, Ray
2020-10-23 16:07 33% [dpdk-dev] [PATCH v1] doc: update abi version references Ray Kinsella
2020-10-23 16:51 7% ` David Marchand
2020-10-26 18:27 4% ` Kinsella, Ray
2020-10-26 19:23 33% ` [dpdk-dev] [PATCH] " Ray Kinsella
2020-10-26 19:31 33% ` [dpdk-dev] [PATCH v3] " Ray Kinsella
2020-10-27 11:40 4% ` David Marchand
2020-10-25 9:44 [dpdk-dev] [v4 0/3] support enqueue callbacks on cryptodev Abhinandan Gujjar
2020-10-25 9:44 ` [dpdk-dev] [v4 1/3] cryptodev: support enqueue callback functions Abhinandan Gujjar
2020-10-27 18:19 4% ` Akhil Goyal
2020-10-27 19:16 0% ` Gujjar, Abhinandan S
2020-10-27 19:26 0% ` Akhil Goyal
2020-10-27 19:41 0% ` Gujjar, Abhinandan S
2020-10-27 18:28 4% ` Akhil Goyal
2020-10-28 8:20 0% ` Gujjar, Abhinandan S
2020-10-28 12:55 0% ` Ananyev, Konstantin
2020-10-28 14:28 0% ` Akhil Goyal
2020-10-28 14:52 0% ` Ananyev, Konstantin
2020-10-28 15:11 3% ` [dpdk-dev] [dpdk-techboard] " Bruce Richardson
2020-10-28 15:22 4% ` Honnappa Nagarahalli
2020-10-29 13:52 0% ` Gujjar, Abhinandan S
2020-10-29 14:00 0% ` Akhil Goyal
2020-10-30 4:24 0% ` Gujjar, Abhinandan S
2020-10-30 17:18 0% ` Gujjar, Abhinandan S
2020-10-29 14:26 3% ` Kinsella, Ray
2020-10-26 6:47 3% [dpdk-dev] [PATCH v3] gso: fix free issue of mbuf gso segments attach to yang_y_yi
2020-10-27 19:55 0% ` Ananyev, Konstantin
2020-10-28 0:51 0% ` Hu, Jiayu
2020-10-28 23:10 [dpdk-dev] [v6 0/2] support enqueue & dequeue callbacks on cryptodev Abhinandan Gujjar
2020-10-28 23:10 2% ` [dpdk-dev] [v6 1/2] cryptodev: support enqueue & dequeue callback functions Abhinandan Gujjar
2020-10-29 8:52 [dpdk-dev] [PATCH] ethdev: deprecate shared counters using action attribute Andrew Rybchenko
2020-10-29 16:11 ` Thomas Monjalon
2020-11-01 7:49 ` Ori Kam
2020-11-03 17:21 3% ` Thomas Monjalon
2020-11-03 17:26 0% ` Andrew Rybchenko
2020-10-29 9:27 [dpdk-dev] [PATCH 00/15] remove mbuf timestamp Thomas Monjalon
2020-10-29 9:27 ` [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotter first half Thomas Monjalon
2020-10-29 14:42 4% ` Kinsella, Ray
2020-10-31 20:40 ` [dpdk-dev] [PATCH 15/15] mbuf: move pool pointer in hotter first half Thomas Monjalon
2020-11-01 9:12 ` Morten Brørup
2020-11-01 16:38 ` Thomas Monjalon
2020-11-01 20:59 3% ` Morten Brørup
2020-11-02 15:58 0% ` Thomas Monjalon
2020-11-03 12:10 4% ` Morten Brørup
2020-11-03 12:25 0% ` Bruce Richardson
2020-11-03 13:46 0% ` Morten Brørup
2020-11-03 13:50 0% ` Bruce Richardson
2020-11-03 14:03 0% ` Morten Brørup
2020-11-03 14:02 0% ` Slava Ovsiienko
2020-11-03 15:03 0% ` Morten Brørup
2020-11-04 15:00 0% ` Olivier Matz
2020-11-05 0:25 0% ` Ananyev, Konstantin
2020-11-05 9:35 3% ` Morten Brørup
2020-11-05 10:29 0% ` Bruce Richardson
2020-10-29 12:22 3% [dpdk-dev] DPDK Release Status Meeting 29/10/2020 Ferruh Yigit
2020-11-02 16:17 3% [dpdk-dev] Ionic PMD - can we still get patches into a 20.02 stable? Andrew Boyer
2020-11-02 16:25 0% ` Burakov, Anatoly
2020-11-02 16:31 0% ` Ferruh Yigit
2020-11-03 14:25 4% [dpdk-dev] Minutes of Technical Board Meeting, 2020-10-21 Jerin Jacob Kollanukkaran
2020-11-04 17:00 [dpdk-dev] [PATCH] mbuf: fix reset on mbuf free Olivier Matz
2020-11-05 0:15 ` Ananyev, Konstantin
2020-11-05 7:46 ` Olivier Matz
2020-11-05 8:26 ` Andrew Rybchenko
2020-11-05 9:10 ` Olivier Matz
2020-11-05 11:34 ` Ananyev, Konstantin
2020-11-05 12:31 ` Olivier Matz
2020-11-05 13:14 ` Ananyev, Konstantin
2020-11-05 13:24 ` Olivier Matz
2020-11-05 13:55 ` Ananyev, Konstantin
2020-11-05 16:30 ` Morten Brørup
2020-11-05 23:55 ` Ananyev, Konstantin
2020-11-06 7:52 4% ` Morten Brørup
2020-11-06 8:20 0% ` Olivier Matz
2020-11-06 8:50 0% ` Morten Brørup
2020-11-06 10:04 0% ` Olivier Matz
2020-11-06 10:07 0% ` Morten Brørup
2020-11-06 11:53 0% ` Ananyev, Konstantin
2020-11-06 12:23 0% ` Morten Brørup
2020-11-08 14:16 0% ` Andrew Rybchenko
2020-11-08 14:19 0% ` Ananyev, Konstantin
2020-11-10 16:26 0% ` Olivier Matz
2020-11-05 18:09 [dpdk-dev] [RFC] app/testpmd: fix MTU after device configure Ferruh Yigit
2020-11-13 11:44 ` [dpdk-dev] [PATCH] " Ferruh Yigit
2020-11-16 18:50 3% ` Ferruh Yigit
2020-11-12 13:38 4% [dpdk-dev] [PATCH] devtools: fix x86-default env when installing David Marchand
2020-11-16 7:55 [dpdk-dev] [PATCH 0/5] fix protocol size calculation Xiaoyu Min
2020-11-16 7:55 ` [dpdk-dev] [PATCH 4/5] net/iavf: fix protocol size for virtchnl copy Xiaoyu Min
2020-11-16 16:23 3% ` Ferruh Yigit
2020-11-22 13:28 0% ` Jack Min
2020-11-19 3:52 1% [dpdk-dev] [RFC] remove unused functions Ferruh Yigit
2020-11-20 12:27 3% [dpdk-dev] [PATCH v1 1/1] build: alias default build as generic Juraj Linkeš
2020-11-22 13:40 [dpdk-dev] Minutes of Technical Board Meeting, 2020-11-18 Ananyev, Konstantin
2020-11-23 9:30 2% ` Morten Brørup
2020-11-23 10:00 0% ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
2020-11-23 11:16 0% ` Morten Brørup
2020-11-23 13:40 5% [dpdk-dev] [PATCH] doc: announce flow API matching pattern struct changes Ferruh Yigit
2020-11-23 13:50 0% ` Andrew Rybchenko
2020-11-23 14:17 0% ` Ferruh Yigit